A Memetic Algorithm Based Task Scheduling considering Communication Cost on Cluster of Workstations
S.Padmavathi¹*, S.Mercy Shalinie² and R.Abhilaash³
Department of Computer Science & Engineering,
Thiagarajar College of Engineering,
Madurai-625 015, Tamilnadu,
India
¹ spmcse@tce.edu
² shalinie@tce.edu
³ abhilaash.ravichandran@gmail.com
*Corresponding author
Abstract
Task scheduling is one of the most challenging problems in parallel and distributed computing. For static scheduling, the program to be parallelized is usually modeled as a Directed Acyclic Graph (DAG). In general, scheduling a DAG is a strongly NP-hard problem. The objective is to minimize the schedule length while taking communication costs into account. Genetic algorithm (GA)-based techniques have been proposed to search for optimal solutions across the entire solution space; their main shortcoming is that the search is exhaustive and therefore very time consuming. This paper proposes a Memetic Algorithm (MA) to overcome this shortcoming, applying a Hill Climbing algorithm as the local search within the proposed MA. Extensive simulation results demonstrate that the proposed method outperforms the existing GA-based method, producing optimal schedules.
Keywords: Directed Acyclic Graph (DAG), Task scheduling, Genetic algorithm (GA), Memetic Algorithm (MA), Hill Climbing algorithm, Local search, Schedule length.
1 Introduction
The performance of a parallel program critically depends on how the program is partitioned and how the resulting tasks are scheduled onto the physical processors. The classical scheduling model assumes that each task is processed on one processor at a time. The objective of the task scheduling problem is to minimize the makespan (schedule length), i.e., the overall computation time of an application represented as a Directed Acyclic Graph (DAG). Optimal scheduling of the tasks of a DAG onto a set of processors is NP-hard; it has been proven NP-complete, so optimal solutions can be found only by exhaustive search. Many scheduling heuristics have been proposed in the literature [1, 2]. Early scheduling algorithms did not take communication into account, but because of its increasing importance for parallel performance, communication is considered in the proposed approach. Accounting for communication cost [3] is essential to produce an accurate and efficient schedule.
List scheduling algorithms such as Heterogeneous Earliest Finish Time (HEFT), Critical Path On a Processor (CPOP) [4] and Performance Effective Task Scheduling (PETS) [5] are complex in nature and costly to implement. The HEFT algorithm uses a recursive procedure to compute the rank of a task by traversing the graph upwards from the exit task, and CPOP traverses the graph in the opposite direction. The rank of a task is the length of the critical path from the exit task to that task; its computation is recursive and complex in both algorithms. In [6, 20], a simple Genetic Algorithm (GA) for multiprocessor task scheduling is proposed.
Some GA parameters are used for mapping and scheduling a general task graph in [7], whereas [8] uses a bichromosomal representation for the task scheduling problem. GAs [9-12] are a class of random search techniques for the task scheduling problem. Although GAs provide good-quality schedules, their execution times are significantly higher than those of other alternatives, and extensive tests are required to find optimal values for the set of control parameters used in GA-based solutions [13].
A GA for static scheduling of \( m \) tasks onto \( n \) processors based on \( k \)-way partitioning was developed in [14]. Successive improvements to the initial schedule were made through reproduction, mutation and one-point crossover operators. Traditional methods such as Branch and Bound, Divide and Conquer and Dynamic Programming yield the global optimum but are time consuming [15]. The researchers in [16] derived optimal task assignments that minimize the sum of task execution and communication costs with the Branch and Bound method, and evaluated its computational complexity using simulation techniques. Modern heuristic techniques [17] are general-purpose optimization algorithms whose efficiency and applicability are not tied to any specific problem domain. To improve the efficiency of heuristic-based approaches, there exist guided random search techniques such as Simulated Annealing, Tabu Search, Particle Swarm Optimization and Genetic Algorithms.
The GA is not well suited for fine-tuning structures that are close to the optimal solution [18]. Memetic Algorithms (MAs) are evolutionary algorithms (EAs) that apply a separate local search process to refine individuals, i.e., improve their fitness by hill climbing [22, 23]. They are a special kind of GA with local search; the local search may be Hill Climbing, Tabu Search or Simulated Annealing [32]. Memetic algorithms [19] can be viewed as a marriage between a population-based global technique and a local search performed by each individual. Like GA, MA is a population-based approach, and it may use one or more local search techniques [23]. Here, the Hill Climbing algorithm is used as the local search. MAs have been shown to be orders of magnitude faster than traditional GAs for some problem domains [21], and MA yields faster convergence than GA because the search process balances exploration and exploitation [30, 31].
MA is the subject of intense scientific research and has been applied to a multitude of real-world problems [24]; it represents one of the recent emerging areas of research in evolutionary computation. The term MA is now widely used for the synergy of an evolutionary or any population-based approach with separate individual learning or local improvement procedures for problem search [24]. Quite often, MAs are also referred to in the literature as Baldwinian EAs, Lamarckian EAs, cultural algorithms, genetic local search or hybrid genetic algorithms [29]. For the hybrid flow shop scheduling problem, MA produces better-quality solutions and is more efficient than GA and a constraint-programming-based branch and bound algorithm [26].
To validate the performance of the proposed approach, a highly communicating task graph, Gaussian elimination, is generated; the approach is also tested with randomly generated DAGs.
The paper is organized as follows: the introduction is followed by the problem definition, presented in Section 2. Section 3 discusses the fundamentals of MA. Section 4 introduces the proposed algorithm and implementation aspects. Section 5 presents the experimental results and discussion. Finally, conclusions and future research directions are presented in Section 6.
2 Problem Definition and Background
An application program is represented by a directed acyclic graph \( G = (V, E, w, c) \) representing a program \( P \). Here, \( V \) is the set of task nodes and \( E \) is the set of communication edges, corresponding to the dependencies among tasks. An edge \( e_{ij} \in E \) represents the communication from node \( n_i \) to node \( n_j \). The positive weight \( w(n) \) associated with node \( n \in V \) represents its computation cost, and the nonnegative weight \( c(e_{ij}) \) associated with edge \( e_{ij} \in E \) represents its communication cost. The communication cost between two nodes assigned to the same processor is assumed to be zero. '\( < \)' represents a partial order on \( V \). For any two tasks \( n_i \), \( n_k \), the existence of the partial order \( n_i < n_k \) means that \( n_k \) cannot be scheduled until task \( n_i \) has been completed; hence \( n_i \) is a predecessor of \( n_k \) and \( n_k \) is a successor of \( n_i \). The task executions of a given application are assumed to be non-preemptive. In a given task graph, a task without any predecessor is called an entry task and a task without any child is called an exit task. Here, it is assumed that the DAG has one entry task and one exit task.
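The DAG model \( G = (V, E, w, c) \) above can be captured directly in code. The sketch below is an illustrative representation only; the three-task graph and all weights are invented for the example, not taken from Fig. 1:

```python
# A minimal DAG G = (V, E, w, c) as plain dictionaries (illustrative values).
w = {1: 3, 2: 4, 3: 4}              # w(n): computation cost of each task node
c = {(1, 2): 2, (1, 3): 5}          # c(e_ij): communication cost of each edge
succ = {1: [2, 3], 2: [], 3: []}    # adjacency list; edges encode precedence

def predecessors(task):
    """Tasks n_i with an edge e_ij into `task` (n_i < task in the partial order)."""
    return [i for i, outs in succ.items() if task in outs]

assert predecessors(2) == [1]       # task 2 cannot start before task 1 finishes
```

Task 1, having no predecessors, is the entry task; tasks 2 and 3, having no successors, are exit tasks in this toy graph.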
A node cannot begin execution until all its inputs have arrived, and no output is available until the computation has finished. This precedence constraint can be stated as follows:
$$t_s(n_j, P) \geq t_f(e_{ij})$$
(1)
where $t_s(n_j, P)$ denotes the start time of node $n_j$ on processor $P$ and $t_f(e_{ij})$ is the edge finish time of the communication associated with $e_{ij}$. The Data Ready Time (DRT) $t_{dr}$ of a node is the time at which the last of its incoming communications finishes:
$$t_{dr}(n_i, P) = \max_{e_{ji} \in E} \{t_f(e_{ji})\}$$
(2)
and hence, for a valid schedule,
$$t_s(n_i, P) \geq t_{dr}(n_i, P)$$
(3)
A task graph for Gaussian elimination of a $3 \times 3$ matrix is shown in Fig. 1, and its computation cost matrix is shown in Table 1. Let $EST(n_i, p_j)$ and $EFT(n_i, p_j)$ denote the Earliest Start Time and Earliest Finish Time of task $n_i$ on processor $p_j$, respectively. For the entry task $v_{entry}$, $EST(v_{entry}, p_j) = 0$; for the other tasks in the graph, the EST and EFT values are computed recursively, starting from the entry task, as shown in Eqns. (4) and (5). In order to compute the EFT of a task $n_i$, all immediate predecessor tasks of $n_i$ must have been scheduled.
$$EST(n_i, p_j) = \max\left\{\text{avail}[j],\; \max_{n_m \in \text{pred}(n_i)}\left(AFT(n_m) + c_{m,i}\right)\right\}$$
(4)
$$EFT (n_i, p_j) = w_{ij} + EST (n_i, p_j)$$
(5)
where $\text{pred}(n_i)$ is the set of immediate predecessor tasks of task $n_i$ and $\text{avail}[j]$ is the earliest time at which processor $p_j$ is ready for task execution. If $n_k$ is the last assigned task on processor $p_j$, then $\text{avail}[j]$ is the time that processor $p_j$ completed the execution of the task $n_k$ and it is ready to execute another task. The inner max block in the EST equation returns the ready time, i.e., the time when all the data
needed by \( n_i \) has arrived at processor \( p_j \). After a task \( n_k \) is scheduled on a processor \( p_j \), the earliest start time and the earliest finish time of \( n_k \) on \( p_j \) are equal to the Actual Start Time \( AST(n_k) \) and the Actual Finish Time \( AFT(n_k) \) of task \( n_k \), respectively. After all tasks in a graph are scheduled, the schedule length (i.e., the overall completion time) is the actual finish time of the exit task, \( n_{\text{exit}} \).
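As a concrete illustration of Eqns. (4) and (5), the sketch below schedules a tiny hypothetical three-task graph in topological order, assigning each task to the processor that gives the smallest EFT. The graph, its costs, and the greedy processor choice are assumptions made for the example, not the paper's exact procedure:

```python
# Illustrative EST/EFT scheduling on a toy 2-processor, 3-task graph.
W = {1: [3, 3], 2: [4, 5], 3: [4, 6]}   # w_ij: cost of task i on processor j
C = {(1, 2): 2, (1, 3): 5}              # c(e_ij): inter-processor comm cost
pred = {1: [], 2: [1], 3: [1]}

avail = [0, 0]                          # earliest ready time of each processor
AFT, proc = {}, {}                      # actual finish time / chosen processor

for n in [1, 2, 3]:                     # topological order
    best = None
    for p in range(2):
        # inner max of Eqn (4): data-ready time; comm cost is zero on the
        # same processor
        ready = max([AFT[m] + (0 if proc[m] == p else C[(m, n)])
                     for m in pred[n]], default=0)
        est = max(avail[p], ready)      # Eqn (4)
        eft = est + W[n][p]             # Eqn (5)
        if best is None or eft < best[0]:
            best = (eft, p)
    AFT[n], proc[n] = best
    avail[best[1]] = best[0]

schedule_length = max(AFT.values())     # AFT of the exit task
print(schedule_length)
```

With these toy costs, all three tasks end up on processor 1 because the communication penalty outweighs the idle time on processor 2.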
Fig. 1. Gaussian elimination task graph represented by DAG for 3 x 3 matrix
Table 1: Computational cost matrix (W) for Fig.1
<table>
<thead>
<tr>
<th>Task</th>
<th>P1</th>
<th>P2</th>
<th>P3</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>3</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>2</td>
<td>4</td>
<td>5</td>
<td>4</td>
</tr>
<tr>
<td>3</td>
<td>4</td>
<td>6</td>
<td>4</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>3</td>
<td>5</td>
</tr>
<tr>
<td>5</td>
<td>3</td>
<td>7</td>
<td>2</td>
</tr>
<tr>
<td>6</td>
<td>3</td>
<td>6</td>
<td>1</td>
</tr>
<tr>
<td>7</td>
<td>5</td>
<td>3</td>
<td>6</td>
</tr>
<tr>
<td>8</td>
<td>2</td>
<td>4</td>
<td>5</td>
</tr>
<tr>
<td>9</td>
<td>5</td>
<td>8</td>
<td>5</td>
</tr>
</tbody>
</table>
A priority is computed and assigned to each task based on the following attributes: Average Computation Cost (ACC), Data Transfer Cost (DTC) and Rank of the Predecessor Task (RPT). The ACC of a task is its average computation cost over all $m$ processors, computed using Eqn. (6).
$$ACC(v_i) = \frac{\sum_{j=1}^{m} w_{i,j}}{m}$$ \hspace{1cm} (6)
The DTC of a task $v_i$ is the amount of communication cost incurred to transfer data from task $v_i$ to all its immediate successor tasks; it is computed at each level $l$ using Eqn. (7):
$$DTC(v_i) = \sum_{j=1}^{n} C_{i,j} : i < j$$ \hspace{1cm} (7)
where $n$ is the number of nodes in the next level, and $DTC = 0$ for exit tasks.
The RPT of a task $v_i$ is the highest rank among all its immediate predecessor tasks, computed using Eqn. (8):
$$RPT(v_i) = \max\{rank(v_1), rank(v_2), ..., rank(v_h)\}$$ \hspace{1cm} (8)
where $v_1, v_2, ..., v_h$ are the immediate predecessors of $v_i$, and $RPT = 0$ for the entry task.
A rank is computed for each task $v_i$ from its ACC, DTC and RPT values. The maximum rank of the predecessor tasks of $v_i$ is used as one of the parameters in calculating the rank of $v_i$; the rank computation is given by Eqn. (9).
$$rank(v_i) = \text{round}\left\{ACC(v_i) + DTC(v_i) + RPT(v_i)\right\}$$ \hspace{1cm} (9)
A priority is assigned to all tasks at each level $l$ based on their rank values. At each level, the task with the highest rank value receives the highest priority, followed by the task with the next highest rank value, and so on. Ties, if any, are broken using the ACC value: the task with the minimum ACC value receives the higher priority.
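Eqns. (6)-(9) can be sketched directly in code. The three-task graph and its costs below are illustrative assumptions, not the graph of Fig. 1, so the resulting ranks differ from Table 2:

```python
# Illustrative rank computation per Eqns (6)-(9) on a toy 3-processor graph.
W = {1: [3, 3, 3], 2: [4, 5, 4], 3: [4, 6, 4]}   # w_ij per processor
C = {(1, 2): 2, (1, 3): 4}                       # comm cost to successors
succ = {1: [2, 3], 2: [], 3: []}
pred = {1: [], 2: [1], 3: [1]}

ACC = {v: sum(W[v]) / len(W[v]) for v in W}              # Eqn (6)
DTC = {v: sum(C[(v, s)] for s in succ[v]) for v in W}    # Eqn (7); 0 for exit tasks
rank = {}
for v in [1, 2, 3]:                                      # topological order
    RPT = max((rank[p] for p in pred[v]), default=0)     # Eqn (8); 0 for entry task
    rank[v] = round(ACC[v] + DTC[v] + RPT)               # Eqn (9)
print(rank)
```

Because RPT accumulates along paths, ranks grow monotonically from the entry task toward the exit tasks, which is what makes them usable as level-wise priorities.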
Finally the objective function $f(x)$ can be defined as
$$f(x) = \min(\text{schedule length})$$ \hspace{1cm} (10)
where the schedule length is defined as the actual finish time of the exit task, $AFT(n_{exit})$.
3 Memetic Algorithm
Memetic Algorithms (MAs) combine GA with local search. MAs are inspired by memes (Dawkins, 1976): pieces of mental content such as stories, ideas and gossip, which reproduce (propagate) themselves through a population of meme carriers. Corresponding to the selfish gene idea (Dawkins, 1976), in this mechanism each meme uses the host (the individual) to propagate itself further through the population, and in this way different memes compete for the population's limited resources.
MA starts with several alternative solutions to the optimization problem, which are treated as individuals in a population. These solutions are coded as binary strings called chromosomes, and a suitable encoding plays an important role in the performance of MA. The population is initialized at random or using a heuristic. To form a new population for the next generation, higher-quality individuals are selected; the selection phase is identical in form to the classical GA selection phase. Local search is performed to select the best chromosomes from the pool of available chromosomes. Once the best chromosomes have been selected, they are subjected to crossover and mutation to generate new individuals. Finally, one best chromosome is selected by applying a final local search. The role of local search in MA is to locate local optima more efficiently than the GA alone. Fig. 3 shows a generic implementation of the Memetic Algorithm.
```
1. Encode solution space
2. (a) set pop_size, max_gen, gen = 0
   (b) set cross_rate, mutate_rate
3. Initialize population
4. while (gen < max_gen)
       Apply generic GA
       Apply local search
   end while
5. Apply final local search to best chromosome
```
Fig.3. The Memetic Algorithm
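The pseudocode of Fig. 3 can be rendered as a compact Python sketch. Everything below is an illustrative assumption — the toy bit-string minimization problem, the parameter values, and the one-step best-neighbour refinement — not the paper's implementation:

```python
import random

def memetic(fitness, init, neighbors, pop_size=20, max_gen=50, mutate_rate=0.1):
    """Generic MA skeleton: GA generation followed by a local search pass."""
    pop = [init() for _ in range(pop_size)]
    for _ in range(max_gen):
        pop.sort(key=fitness)                     # selection: best first
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))     # single-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutate_rate:
                child = random.choice(neighbors(child))
            children.append(child)
        # local search: replace each child by its best neighbour if better
        pop = parents + [min(neighbors(c) + [c], key=fitness) for c in children]
    return min(pop, key=fitness)                  # final local selection

random.seed(1)                                    # reproducible demo
best = memetic(sum,                               # fitness: number of 1-bits
               lambda: [random.randint(0, 1) for _ in range(8)],
               lambda s: [s[:i] + [1 - s[i]] + s[i + 1:] for i in range(len(s))])
```

The `neighbors` function here doubles as both the mutation move and the local search neighbourhood, which keeps the sketch short; the paper's MA uses distinct crossover, mutation and hill-climbing components.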
3.1 Hill Climbing local search algorithm
Local search can be thought of as the process of an individual improving its idea of the solution. The Hill Climbing algorithm, shown in Fig. 4, is a local search algorithm; it is a nature-inspired stochastic computational technique [25]. It is used to perform a local search for better solutions in the neighborhood of the current solution produced by the GA in each iteration. When the termination condition is met, it returns the best solution found.
```
Best solution ← initial solution
While (termination condition is not satisfied) do
New solution ← neighbors (solution after cross over and mutation)
If New solution is better than best solution then
Best solution ← New solution
End if
End while
```
Fig. 4. The Hill Climbing local search algorithm
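A minimal executable version of the loop in Fig. 4, under the assumption of a best-improvement neighbourhood; the bit-flip move and the toy fitness (sum of bits, to be minimized) are illustrative stand-ins for the chromosome neighbourhood used in the paper:

```python
def hill_climb(solution, fitness, neighbors):
    """Move to the best neighbour until no neighbour improves the solution."""
    best = solution
    while True:
        candidate = min(neighbors(best), key=fitness)   # best neighbour
        if fitness(candidate) < fitness(best):
            best = candidate                            # accept improvement
        else:
            return best                                 # local optimum reached

# One-bit-flip neighbourhood over a bit list.
flip = lambda s: [s[:i] + [1 - s[i]] + s[i + 1:] for i in range(len(s))]
print(hill_climb([1, 0, 1, 1], sum, flip))              # descends to all zeros
```

Each iteration improves the fitness by at least one, so the loop always terminates at a local (here also global) optimum.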
4 The Proposed Method
4.1 Encoding
The generic formulation of a problem begins with the definition of an appropriate chromosome encoding. Each chromosome encodes a schedule solution. In order to achieve good performance, the chromosome should be simple, because this permits one to employ simple and fast operators.
For task-scheduling, a chromosome represents a solution to the scheduling problem; in other words a schedule. A schedule consists of the processor allocation and the start time of each node of the task graph. The representation of the chromosome holds the information that serves as an input for a heuristic search to create a schedule. There are three basic elements to choose among. The first is the list of tasks to be scheduled. The second is the order in which these tasks should be executed on a given processor and the third is the list of processors to which these tasks should be assigned.
Each chromosome is represented as a group of genes, i.e., task–processor pairs \((T_i, P_j)\) indicating that task \(T_i\) is assigned to processor \(P_j\), as shown in Fig. 5. The position of a gene in a chromosome represents the order in which the tasks are executed. For example, the chromosomal representation below shows that tasks 1 and 2 should be executed on processor 1 and task 3 on processor 2; it also indicates that task 2 is executed first, followed by task 3 and then task 1.
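The encoding can be illustrated with a few lines of code. The concrete pairs below mirror the worked example in the text (task 2 first on P1, then task 3 on P2, then task 1 on P1):

```python
# A chromosome is an ordered list of (task, processor) genes.
chromosome = [(2, 1), (3, 2), (1, 1)]

order = [task for task, _ in chromosome]          # execution order: [2, 3, 1]
allocation = {task: p for task, p in chromosome}  # processor of each task

assert order == [2, 3, 1]
assert allocation == {1: 1, 2: 1, 3: 2}
```

Keeping both the ordering and the allocation in a single flat list is what lets the simple crossover and mutation operators of Section 4.4 act on it directly.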
4.2 Initial population
Most scheduling heuristics generate the initial population randomly, taking the necessary care to produce feasible solutions: a predefined number of chromosomes are generated, and their collection forms the initial population. Here, the initial population is generated based on the priority calculation of the tasks at each level, as shown in Table 2.
Table 2: The DTC, ACC, RPT, Rank and Priority values for the tasks in Fig.1
<table>
<thead>
<tr>
<th>Level</th>
<th>Task</th>
<th>ACC</th>
<th>RPT</th>
<th>DTC</th>
<th>Rank</th>
<th>Priority</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
<td>0</td>
<td>6</td>
<td>9</td>
<td>1</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>4.33</td>
<td>9</td>
<td>4</td>
<td>17</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>5</td>
<td>4.0</td>
<td>17</td>
<td>17</td>
<td>38</td>
<td>1</td>
</tr>
<tr>
<td>4</td>
<td>6</td>
<td>3.33</td>
<td>38</td>
<td>10</td>
<td>51</td>
<td>2</td>
</tr>
<tr>
<td>5</td>
<td>8</td>
<td>3.67</td>
<td>51</td>
<td>12</td>
<td>66</td>
<td>1</td>
</tr>
<tr>
<td>6</td>
<td>9</td>
<td>6</td>
<td>66</td>
<td>0</td>
<td>72</td>
<td>1</td>
</tr>
</tbody>
</table>
Fitness function
As the objective of the task scheduling problem is to find the shortest possible schedule, the fitness of a chromosome is directly related to the length of the associated schedule. Here the fitness value is determined by the earliest finish time of the last task.
4.3 Selection
In this step, the chromosomes in the population are first ranked by fitness value from best to worst; since the objective is minimization, the chromosomes with the least fitness values are ranked best. This process of obtaining the best chromosomes is called selection, and it is carried out by a local search over the pool of available chromosomes.
4.4 Reproduction
The reproduction process forms a new population of chromosomes by selecting chromosomes from the old population based on their fitness values and applying crossover and mutation.
**Crossover**
The crossover operator is the most significant one, since it implements the principle of evolution. New chromosomes are created by combining two selected parent chromosomes and swapping the second part of each chromosome after a randomly selected point. This is equivalent to assigning a subset of tasks to different processors. Single-point and two-point crossovers are performed alternately, and the crossover probability is selected randomly.
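The single-point variant can be sketched as below. For simplicity this assumes both parents list the tasks in the same order, so only the processor assignments are exchanged; the parent chromosomes are illustrative, not taken from the paper:

```python
import random

def single_point_crossover(a, b, rng=random):
    """Swap the tails of two chromosomes after a random cut point."""
    cut = rng.randrange(1, len(a))        # cut strictly inside the chromosome
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

rng = random.Random(0)
p1 = [(2, 1), (3, 1), (1, 1)]             # all tasks on processor 1
p2 = [(2, 2), (3, 2), (1, 2)]             # all tasks on processor 2
c1, c2 = single_point_crossover(p1, p2, rng)
```

With arbitrary task orders, a naive tail swap could duplicate tasks; a repair step or an order-preserving crossover would then be needed.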
**Mutation**
This operator is applied with a lower probability (about 0.1 or less) than the crossover operator. Its main purpose is to serve as a safeguard against convergence of the search to a merely locally best solution. Here, partial-gene mutation is employed: it takes a chromosome from among the fittest ones and changes the processor of a randomly selected gene, i.e., $(T_i, P_j)$ becomes $(T_i, P_k)$ with $k \neq j$, which introduces diversity each time it is applied, so the population continues to improve slowly. The probabilities of crossover and partial-gene mutation are therefore not fixed in the proposed algorithm.
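A sketch of the partial-gene mutation: one randomly selected gene has its processor reassigned while the task order is left untouched. The chromosome and the 3-processor pool are illustrative assumptions:

```python
import random

def partial_gene_mutation(chrom, num_procs, rng=random):
    """Replace one gene (T_i, P_j) with (T_i, P_k), k != j."""
    i = rng.randrange(len(chrom))                         # pick a gene at random
    task, p = chrom[i]
    new_p = rng.choice([q for q in range(1, num_procs + 1) if q != p])
    return chrom[:i] + [(task, new_p)] + chrom[i + 1:]

parent = [(2, 1), (3, 2), (1, 1)]
mutant = partial_gene_mutation(parent, 3)
```

Because only the processor field changes, every mutant remains a feasible schedule, which is why this operator needs no repair step.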
4.5 Local Search
The Hill climbing search algorithm is a local search algorithm that iteratively performs a neighborhood search to pick best chromosome from a pool of available chromosomes. When the termination criterion is met, the search algorithm terminates and returns the best solution. It is explained in Fig. 4.
4.6 Termination Criteria
When no improved solution has been found over the last $n$ iterations, the algorithm terminates. Typically this value lies between 50 and 500, depending on the desired solution quality and the size of the problem, since for larger problems improving moves are found with lower frequency.
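The stopping rule can be sketched as a simple stagnation counter; the fitness stream fed in below is a toy stand-in for one MA generation per call, and the parameter names are assumptions:

```python
def run_until_stagnant(step, initial_best, patience=50, max_iter=10_000):
    """Stop once `patience` consecutive iterations bring no improvement."""
    best, stale = initial_best, 0
    for _ in range(max_iter):
        candidate = step()
        if candidate < best:
            best, stale = candidate, 0      # an improvement resets the counter
        else:
            stale += 1
            if stale >= patience:           # n iterations without improvement
                break
    return best

feed = iter([9, 8, 7] + [7] * 10)           # improves three times, then stagnates
best = run_until_stagnant(lambda: next(feed), 10, patience=3)
```

The hard `max_iter` cap guards against fitness streams that keep improving by tiny amounts and would otherwise never trip the stagnation counter.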
The proposed memetic algorithm is as follows:
1. Generate the initial population of size $M$ based on task priority at each level, and calculate the fitness value of each chromosome based on the earliest finish time of its last task.
2. Select the fittest chromosomes from the initial pool based on the least fitness value (schedule length) using local search.
3. Apply the crossover and mutation operators to the selected chromosomes to generate new chromosomes.
4. Evaluate the chromosomes obtained from step 3 and form a pool of the fittest chromosomes using local search.
5. Repeat steps 3 and 4 until the termination criterion is met.
Fig.6. The proposed memetic algorithm
5 Results and Discussions
In this section, a number of experiments are carried out to assess the effectiveness of the proposed algorithm. The purpose of these experiments is to compare the performance of the memetic algorithm with the genetic algorithm for the task scheduling problem. Although the memetic algorithm is a GA combined with Hill Climbing as a local search, the genetic parameters that work best for a GA are not necessarily ideal for a memetic algorithm. The experiments were run on a cluster of workstations consisting of a 32-node HP ProLiant cluster.
DAGs are generated randomly with different communication costs, with sizes varying from 10 to 50 tasks. A highly communication-intensive application, the Gaussian Elimination task graph, is also generated with matrix sizes varying from 3 to 15. The results are compared for population sizes ranging from 5 to 200. The tasks are selected for the initial pool according to the priority values shown in Table 2 for the Gaussian Elimination task graph of Fig. 1; thereafter they are selected according to their fitness values.
For the proposed approach, the effects of the different population size and different number of iterations are investigated and the results are depicted in Figs. 7 and 8. The performance of MA improves when the population size is increased.
MA converges much faster than GA, as shown in Fig. 9. The results of both MA and GA are compared by varying the number of iterations from 5 to 250 for the Gaussian Elimination task graph. Traditionally, GAs suffer from slow convergence to a sufficiently precise solution because they fail to exploit local information. MAs are hybrid GAs that combine global and local search: the GA performs exploration while the local search method performs exploitation.
Fig. 7. Schedule length Vs Population size for Gaussian Elimination Task Graph
Fig. 8. Schedule length Vs No. of Iteration for Gaussian Elimination task Graph
MA is compared with classical list scheduling algorithms, namely HEFT, CPOP and PETS, for the Gaussian Elimination task graph of matrix size 5 x 5. The performance of MA relative to the existing HEFT, CPOP and PETS algorithms is shown in Fig. 10. The results show that the proposed MA performs well compared to these list scheduling algorithms; although the three algorithms are all based on list scheduling, their methods for producing the scheduling list and their priority assignment rules differ.
The HEFT algorithm uses a recursive procedure to compute the rank of a task by traversing the graph upwards from the exit task; priority is assigned to each task based on this rank. The CPOP algorithm calculates rank in the reverse fashion, traversing the graph downwards from the entry task. The PETS algorithm calculates rank based on the ACC, DTC and RPT values [4, 5]. In the proposed MA, chromosomes are encoded as task–processor pairs, i.e., as schedule solutions; task priorities are calculated as in PETS, and processors are then assigned to tasks pseudo-randomly. The best chromosome is selected using local search, and its fitness value gives the schedule length.
6 Conclusion
The proposed MA is appropriate for scheduling DAG-structured applications onto homogeneous computing systems with different topologies. GAs and MAs are gaining popularity due to their effectiveness in solving optimization problems within a reasonable time. Experimental results showed that the proposed approach is better than GA in almost all cases; MA converges much faster than GA, and the proposed approach outperforms the existing heuristics considered for the task scheduling problem. A future enhancement of this work is to introduce contention awareness into MA-based task scheduling.
ACKNOWLEDGEMENTS
The authors owe their acknowledgement to the Management and Principal of Thiagarajar College of Engineering for their encouragement and support.
References
WebFlow - A Visual Programming Paradigm for Web/Java Based Coarse Grain Distributed Computing
Dimple Bhatia
*Syracuse University Northeast Parallel Architectures Center, dbhatia@npac.syr.edu*
Vanco Burzevski
*Syracuse University*
Maja Camuseva
*Syracuse University, Northeast Parallel Architectures Center, maja@top.syr.edu*
Geoffrey C. Fox
*Syracuse University, Northeast Parallel Architectures Center*
WebFlow—A Visual Programming Paradigm for Web/Java Based Coarse Grain Distributed Computing
Dimple Bhatia, Vanco Burzevski, Maja Camuseva
Geoffrey Fox, Wojtek Furmanski, and Girish Premchandran
Northeast Parallel Architectures Center
Syracuse University
111 College Place
Syracuse, New York 13244
dbhatia@npac.syr.edu, vanco@top.syr.edu, maja@top.syr.edu
gcf@npac.syr.edu, furm@npac.syr.edu, girishp@npac.syr.edu
Presented at Workshop on Java for Computational Science and Engineering Workshop, Syracuse University, December 1996.
Abstract
We present here the recent work at NPAC aimed at developing WebFlow—a general purpose Web based visual interactive programming environment for coarse grain distributed computing. We follow the 3-tier architecture with the central control and integration WebVM layer in tier-2, interacting with the visual graph editor applets in tier-1 (front-end) and the legacy systems in tier-3. WebVM is given by a mesh of Java Web servers such as Jeeves from JavaSoft or Jigsaw from MIT/W3C. All system control structures are implemented as URL-addressable servlets which enable Web browser-based authoring, monitoring, publication, documentation and software distribution tools for distributed computing. We view WebFlow/WebVM as a promising programming paradigm and coordination model for the exploding volume of Web/Java software, and we illustrate it in a set of ongoing application development activities.
1 Introduction
As anticipated in our WebWindows ansatz [WebHPCC96], current Web systems, fueled by Java, evolve rapidly towards a powerful open infrastructure that will enable world-wide distributed computing. In the current Web/Java expansion phase, we are witnessing a wide variety of new interesting tools and technologies, but the overall integration framework is still missing and software reuse remains difficult. We need a coarser grain encapsulation unit than a Java class to enable user-friendly distributed computing on the Web. In fact, several attempts at specifying such a framework are underway, for example JavaBeans from JavaSoft. However, the Web software industry is currently focused mainly on the front-end support for component based GUI integration, whereas the middleware and back-end layers are still an open research and prototyping area.
At NPAC, we are monitoring the emergent Web technologies pertaining to the domain of world wide scalable distributed computing and we are designing and prototyping a visual graph based dataflow environment, WebFlow, using the mesh of Java Web Servers as a control and coordination middleware, WebVM.
In this document, we briefly review our Web technology evaluation activities in Sections 2–4, followed by the presentation of our WebFlow/WebVM prototype (tier-2) in Sections 5 and 6, which forms the core of this paper. This is followed by the discussion of the WebFlow front-end (tier-1) in Section 7 and some initial back-end (tier-3) activities in Section 8. Finally, we summarize in Section 9 a set of planned or ongoing application development activities in the areas of command and control, telemedicine, distance education and Internet commerce that will build on top of the WebFlow/WebVM infrastructure.
Table 1: Comparative analysis of strategies and components for Web based distributed computing in selected systems investigated at NPAC.
<table>
<thead>
<tr>
<th>Module</th>
<th>Habanero</th>
<th>Jigsaw</th>
<th>Infospheres</th>
<th>JavaSoft</th>
<th>Netscape</th>
</tr>
</thead>
<tbody>
<tr>
<td>Port/Channel</td>
<td>Java socket</td>
<td>any HTTP carrier</td>
<td>portlet→mailbox</td>
<td>RMI</td>
<td>custom?</td>
</tr>
<tr>
<td>Message</td>
<td>Marshalled Event or Action</td>
<td>Pickled Resource</td>
<td>any object bytestream</td>
<td>Serialized Object</td>
<td>JavaScript</td>
</tr>
<tr>
<td>Runtime</td>
<td>Collaboratory server</td>
<td>Java HTTP server</td>
<td>dapplet/DJINN manager?</td>
<td>Jeeves (Java server)</td>
<td>community or enterprise system</td>
</tr>
<tr>
<td>User Interface</td>
<td>AWT</td>
<td>Forms</td>
<td>visual authoring?</td>
<td>HotJava</td>
<td>Navigator</td>
</tr>
<tr>
<td>Coordination</td>
<td>instantaneous broadcast</td>
<td>client-server</td>
<td>synchronous multi-server flat file?</td>
<td>CORBA</td>
<td>multi-server</td>
</tr>
<tr>
<td>Persistency</td>
<td>Resource Store</td>
<td>javadoc</td>
<td></td>
<td>JDBC</td>
<td>LiveWire→DB</td>
</tr>
</tbody>
</table>
2 Web/Java Expansion Phase
Expressive power of Java attracts developers and we observe an explosion of first generation Java systems on the Internet. Examples include: NCSA Habanero [Haba96] for synchronous collaboratory; dynamic HTTP servers such as Jigsaw [Jigs96] from MIT/W3C or Jeeves from JavaSoft; Marimba’s Castanet and Bongo trying to establish a new pure Java based Web-like framework; Caltech Infospheres [ChaRi96], IBM aglets for intelligent agents based computing; and many others.
At NPAC, where we are closely monitoring this ‘bleeding edge’ of the interactive Web, we observe that although these new systems offer attractive capabilities, the current generation Java software is still difficult to customize, repackage or reuse. The reason is that a Java class is too small, too fine grain an encapsulation unit, and hence reusing a package usually requires a detailed understanding of a large number of its tightly interwoven classes.
Figure 1: Overview of the WebFlow/WebVM architecture: WebVM is formed in tier-2 as a mesh of Java Web servers, managing WebFlow nets (or compute-webs) and interacting with the legacy systems in tier-3 (back-end) and with the visual graph editor applets in tier-1 (front-end).
3 WebFlow/WebVM Concepts
Our goal is to provide a coarser grain packaging model and the associated user-friendly authoring framework for distributed applications on the Web. We believe that we should build on top of the established standards such as HTTP, HTML and Java, and hence we adopt Java Web server as a base runtime and coordination node of any distributed Web system. Dataflow model, already proven effective by previous systems such as AVS, Khoros, CODE [Browne92], HeNCE [Dong94] and others, seems to be a natural coordination framework to extend the current 2-node model in which HTTP/MIME data flows between Web client and server towards multi-server systems.
Hence, we propose a runtime environment given by a mesh of Web Java servers to coordinate distributed computation represented as a set of channel-connected coarse grain Java modules. Modules are thin veneer Java interfaces so that any chunk of Java can be easily modularized and connected to other modules via suitable communication ports, acting as terminals of point-to-point dataflow channels. Modules run asynchronously, are mobile, i.e., can be instantiated on any WebVM server, and communicate by exchanging Java objects along their dataflow channels.
Aspects of such emergent architecture can be already found in current systems, analyzed in Table 1. For example: Jigsaw/Jeeves develop the concept or resources/servlets as control encapsulation units; Infospheres develops portlets/mailboxes as terminals for communication channels; Habanero is a multi-server system; and so on.
4 Early Experiments
We initiated the WebFlow/WebVM design process by experimenting with existing systems. Over the summer/fall '96, we evaluated a suite of new Java systems including Aglets, Habanero, Infospheres, Jeeves, Jigsaw, JSDA, Shaking Hands, and others. One of the early decisions we made was that, rather than developing custom Java servers from scratch as in Habanero or Infospheres, we prefer to add new services and maintain them within the Web Java server address space.
Such organization facilitates management and offers natural, Web-browser based monitoring, publication and distribution support for the Web software.
Figures 2 and 3 illustrate our early experiments with Jigsaw where we constructed a chat collaboratory as Jigsaw resource (Figure 2) and we formed a token ring by connecting a set of Jigsaw resources viewed as WebFlow modules using Infospheres portlets. Later on, we switched to the Jeeves model since the servlet API is likely to become a standard as given by a core Java package java.servlet. We intend to continue the exploration of Jigsaw and other promising public domain Java systems and we tentatively base the WebFlow/WebVM prototype development on the Jeeves server architecture.
5 Tier-2 WebFlow/WebVM Prototype
5.1 Overview
Our prototype WebVM is given by a mesh of Jeeves servers, running servlets that manage and coordinate distributed computation. Atomic encapsulation units of WebVM computation are
Figure 2: Internal dynamics of the Jigsaw Java Web server by MIT/W3C. All services are structured and managed as Resource objects (similar to Servlet objects in Jeeves). Resources are maintained in a persistent store, editable and downloadable on demand. The figure illustrates a set of standard Jigsaw resources such as File or Editors, and our own experiments with multi-user and/or multi-server extensions such as Chat session or WebFlow module Resources.
Figure 3: Early integration experiments: Portlet library extracted from Caltech Infospheres is used to form a token ring, connecting a set of Jigsaw Resource nodes. A message packet rotates along the ring with a user-adjustable speed and generates visual feedback in a monitor applet.
called *modules* and they communicate by sending objects along *channels* attached to module *ports*. Unlike management servlets which are usually persistent and application independent, modules are more transient and can be dynamically created, connected, scheduled, run, relocated and destroyed by servlets. WebFlow is a particular programming paradigm implemented over WebVM and given by a dataflow programming model (other models under experimentation include data parallel, collaboratory, and televirtual paradigms). A WebFlow application is given by a computational graph, visually edited by end-users using Java applets.
Modules are written by module developers, people who have only limited knowledge of the system on which the modules will run. They need not concern themselves with issues such as:
- allocating and running the modules on various machines
- creating connections among the modules
- sending and receiving data across these connections
- running several modules concurrently on one machine
The WebFlow system hides these management and coordination functions from the developers, allowing them to concentrate on the modules being developed.
WebFlow management is currently implemented in terms of the following three servlets: Session Manager, Module Manager, and Connection Manager. These servlets are URL addressable and can offer dynamic information about their services and current state. Each of them can also communicate with each other through sockets as discussed in the next section.
Figure 4 illustrates the three base servlets employed in setting up and managing WebFlow operation. Session Manager receives graph specification from the editor applet, creates an image of the whole compute-web using module proxy objects called ModuleRepresentation, decides on the compute-web decomposition strategy, and notifies Module Manager about local modules to be instantiated.
Module Manager starts and maintains ModuleWrapper threads that run Modules. Each module, when created, notifies the ConnectionManager about the connectivity required by this module's Ports, and waits for the connections to be established.
WebFlow channels connecting two module Ports are formed dynamically by the corresponding ConnectionManagers: Sockets returned by their 'accept' and 'connect' calls are passed to the appropriate ports. After all ports of a module receive their requested sockets, the module notifies the Module Manager and is ready to participate in the dataflow operations.
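The socket handoff described above can be sketched in plain Java as follows. This is a minimal self-contained illustration, not the actual WebFlow code; the class name `ChannelDemo` is ours:

```java
import java.io.*;
import java.net.*;

// Minimal sketch of WebFlow-style channel formation: one side accepts on a
// ServerSocket, the peer connects, and the two resulting sockets are handed
// to the ports, which then exchange data directly along the channel.
public class ChannelDemo {

    // Forms one channel on localhost and sends a single int across it.
    public static int roundTrip(int value) throws Exception {
        try (ServerSocket listener = new ServerSocket(0)) {   // 'accept' side
            int port = listener.getLocalPort();
            Thread peer = new Thread(() -> {
                try (Socket s = new Socket("localhost", port); // 'connect' side
                     DataOutputStream out =
                         new DataOutputStream(s.getOutputStream())) {
                    out.writeInt(value);                       // sending port
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
            peer.start();
            try (Socket s = listener.accept();                 // receiving port
                 DataInputStream in =
                     new DataInputStream(s.getInputStream())) {
                int received = in.readInt();
                peer.join();
                return received;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip(42));
    }
}
```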
5.2 WebFlow requirements
The requirements placed on WebFlow stem from the discussion above. Namely, WebFlow shall:
- allow modules to be run on demand
- support communication between the modules
- provide facilities for the user to create and destroy an application, where an application is a set of interconnected modules.
To support the requirements placed on the system, the following components have been created:
Figure 4: Initial design of the WebFlow management layer, implemented as a set of Jeeves servlets and including: Session Manager, Module Manager and Connection Manager.
- Module Manager, in charge of running modules on demand
- Connection Manager, in charge of creating connections between the modules
- Session Manager, in charge of executing all the actions the user performs on the front end.
In the following section, we describe each of these management entities in more detail.
5.3 WebFlow management
**Module Manager** The Module Manager is the simplest of the three system components. It is in charge of running modules on demand. A user/editor request to create a module is sent to the Module Manager residing on the particular machine on which the module should be run. The Module Manager creates a separate thread for the module (thus enabling concurrent execution of multiple modules), and loads the module code, making the module ready for execution.
A request for running (destroying) a module triggers a special method called run (destroy). These methods were written by the module developers.
An important observation is that the Module Manager has no notion of a session built into it. It can support any number of modules, and requests coming from any number of Session Managers.
Figure 6: Steps involved in making the connection between the two ports.
**Connection Manager** The Connection Manager is in charge of establishing connections between modules. To be precise, it establishes connections between individual ports, regardless of the module on which they reside, and regardless of the machine on which the module is run.
As each module is initialized, its ports register with the Connection Manager. This enables the Connection Manager to establish connections between registered ports as illustrated in Figure 6.
To connect port 1 and port 2 in Figure 6, a connect request is received by the first Connection Manager in step 1. In step 2, an establish request is sent to the second Connection Manager, which then, in step 3, sends an OK message back to the first Connection Manager to acknowledge the establish request. In step 4, the second Connection Manager proceeds to send a Connection back to the first Connection Manager, which receives the connection and passes it on to the port. Finally, in step 5, the first Connection Manager replies that the operation has succeeded.
If an error occurs in any stage of the protocol, then instead of OK messages, error messages will be sent back, thus aborting the protocol, and notifying the caller that the connection failed.
The figure shows the more general case in which the two ports reside in separate Connection Managers. Of course, the two ports may be registered at the same Connection Manager, in which case the whole connection procedure is simplified, and steps two and three are not needed. As with the Module Manager, the Connection Manager has no notion of a session built into it. It can support any number of Session Managers.
6 Session Manager
The Session Manager is the part of the system in charge of accepting user requests and forwarding them to the rest of the system. These requests include: creating a new module, connecting two ports, running the application, and destroying the application.
Both the Session Manager and the front end store a representation of the application that the user is building. The difference between the two is that the Session Manager needs to worry about the machines on which each of the modules has been started, while the front end worries about the position of the representation of the module on the screen.
In the WebFlow prototype, the Session Manager can only work with one user at a time. In other words, there is only one session active at any one point in time (we are currently exploring JSDA support for WebFlow to provide multi-user collaborative editing capabilities).
Figure 7: Servers in the WebFlow system are accessible through both URL and socket connections.
6.1 Internal communication in WebFlow
As illustrated in Figure 7, the WebFlow prototype supports two types of communication:
- via URL
- via socket connections
In the figure, the client can either be front end, or the Session Manager, while the servlet can be any of the three servlets that exist in the system.
All the URLs point to the web server. The web server analyzes the URL, and forwards the request to the servlet denoted in the URL. Socket connections are received directly by the servlet.
The former—via URL—is used when a component’s socket address is unknown. This feature allows the whole system to be accessed over the web. However, the current implementation of the URL addressing scheme does not provide a convenient way to send whole objects as parameters. On the other hand, the socket connection scheme provides for a very natural way of sending any object, provided it knows how to serialize itself, over the socket. This is very useful, as all the requests and replies can easily be expressed as objects whose internal state holds the type and parameters of the request or reply.
At the time being, each of the three servlets in the WebFlow system listens to both the URL and socket connections at all times. Internal requests and replies for creation, running, and destruction of modules, as well as connecting ports all go through the socket connection, whereas the URL communication is being used to provide the socket address of the server and to perform system-wide operations, such as give usage statistics, reset the system, add new resources to the system, etc.
It is conceivable that the HTTP protocol will evolve so that the whole WebFlow communication could be eventually handled uniformly in the URL addressing mode. For the time being, we will support both URL and socket based addressing modes and we will monitor, participate in and respond to W3C efforts aimed at dynamic and object-oriented extensions of the HTTP protocol.
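A minimal sketch of this object-over-socket scheme using standard Java serialization follows. The `Request` class and its fields are hypothetical illustrations, not part of the WebFlow prototype, and for brevity the sketch round-trips the object through a byte buffer rather than a live socket:

```java
import java.io.*;

// Sketch of the socket-based request scheme: a request is an ordinary object
// that knows how to serialize itself, so its type and parameters travel as
// its internal state. The Request class below is our own illustration.
public class RequestDemo {

    public static class Request implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String type;     // e.g. "createModule", "connectPorts"
        public final String[] params; // request parameters
        public Request(String type, String... params) {
            this.type = type;
            this.params = params;
        }
    }

    // Serialize a request to bytes and read it back, as it would travel
    // over a servlet's socket connection.
    public static Request echo(Request r) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(r);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            return (Request) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Request r = echo(new Request("createModule", "Adder", "host1"));
        System.out.println(r.type + " " + r.params.length);
    }
}
```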
6.2 Module and Port identification in WebFlow
As soon as a module is created, it is assigned a unique identifier. This identifier is present with all the requests associated with the module, i.e. the module’s running and destruction (recall that creation also creates the identifier). Module identifiers are necessary because of the following reasons:
- they provide an easy way of identifying the target of module operations
- they enable multiple instances of the same module to be run on the same machine, each of the instances having a separate identifier
Each port also has an identifier, but these are less general than the module identifiers. Since ports can never exist outside of a module, it suffices to assign unique identifiers to the ports of one module. The current implementation is a bit more general, however, since it assigns identifiers to ports per Module Manager/Connection Manager combination.
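The identification scheme can be sketched as follows; the class and method names are our own illustration, not the prototype's code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the identification scheme: modules get globally unique ids,
// while port ids only need to be unique within one Module Manager /
// Connection Manager combination (modeled here as one IdDemo instance).
public class IdDemo {
    private static final AtomicInteger nextModuleId = new AtomicInteger();
    private final AtomicInteger nextPortId = new AtomicInteger(); // per manager

    public static int newModuleId() { return nextModuleId.getAndIncrement(); }
    public int newPortId()          { return nextPortId.getAndIncrement(); }

    public static void main(String[] args) {
        IdDemo manager = new IdDemo();
        int m1 = newModuleId(), m2 = newModuleId(); // two instances of a module
        System.out.println(m1 != m2);               // ids distinguish instances
        System.out.println(manager.newPortId());    // first port id on manager
    }
}
```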
6.3 I/O modules in WebFlow
The previous discussion took the input and output modules in an application for granted. However, current web browser restrictions make implementing input and output modules a non-trivial task.
Since the front end can be invoked from an arbitrary machine connected to the web, the input and output modules should be able to receive their input and send their output to the same, arbitrarily chosen, machine. The only way of doing so in the current state of affairs is to provide applets that will be able to receive user inputs, and show the application’s outputs.
Therefore, the input/output modules are made of two parts: a WebFlow part—that works under the WebFlow model, and an applet part—that provides I/O capabilities, as illustrated in Figure 8. Upon initialization, the I/O modules inform the system that they require an applet to be spawned for them. That request is forwarded all the way to the system’s front end, which has the capability to open a new frame on the screen, and load an HTML page in it. That HTML page can contain an invocation of the I/O module’s applet.
The front end receives the HTML pages by making separate requests to the Session Manager. In the long run, the responsibility of creating and serving these HTML pages will be placed in a separate manager—the Viewer Manager, a topic further discussed in the following sections.
Figure 8: I/O Modules in WebFlow
6.4 WebFlow API
WebFlow offers a well-defined API for module developers that hides the communication details in terms of port and module abstractions. We include here for illustrative purposes a few samples of WebFlow programming at the module developer level.
**Ports** It is fairly easy to create and add new ports in the module implementation. Any new port type has to be derived from the abstract Port class. The new port type only has to override the send and receive methods of the Port class for data transfer. The Port class constructor automatically registers the port with the Connection Manager. When the module terminates, the port deregisters itself from the Connection Manager.
An example port is shown below. The port is an Integer port which sends and receives Integer objects.
```java
public class IntPort extends Port {
    DataInputStream is;
    DataOutputStream os;
    Integer data;

    // Write an Integer to the remote port over the channel socket.
    public void send(Object num) {
        data = (Integer) num;
        if (getSocket() != null) {
            try {
                os = new DataOutputStream(getSocket().getOutputStream());
                os.writeInt(data.intValue());
            } catch (IOException e) {}
        }
    }

    // Read an Integer from the channel socket, or null if not connected.
    public Object receive() {
        if (getSocket() != null) {
            try {
                is = new DataInputStream(getSocket().getInputStream());
                data = new Integer(is.readInt());
            } catch (IOException e) {}
            return data;
        }
        else return null;
    }
}
```
Ports can be either synchronous or asynchronous, depending upon the way they check for data. Asynchronous ports remain dormant and wake up whenever data is available for receiving or sending. Synchronous ports keep polling for data. The user is thus free to choose synchronous or asynchronous ports depending upon the application.
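The dormant-until-data behavior of an asynchronous port can be sketched with a blocking queue; this is our own illustration of the behavior described above, not WebFlow code:

```java
import java.util.concurrent.*;

// Sketch of an asynchronous port: receive() stays dormant on a blocking
// queue and wakes up only when data arrives, instead of polling.
public class AsyncPortDemo {
    private final BlockingQueue<Integer> inbox = new LinkedBlockingQueue<>();

    public void deliver(Integer data) { inbox.add(data); }   // from the channel
    public Integer receive() throws InterruptedException {
        return inbox.take();                                 // blocks until data
    }

    public static int demo() throws InterruptedException {
        AsyncPortDemo port = new AsyncPortDemo();
        Thread sender = new Thread(() -> port.deliver(7));   // remote module
        sender.start();
        int value = port.receive();                          // wakes on arrival
        sender.join();
        return value;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```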
Figure 9: Sample Adder Application
**Modules** Modules basically consist of three main methods:
- initialize
- run
- destroy
Code for a basic adder module is shown below. This module receives numbers from two other modules and sends the result to a third module, as shown in Figure 9.
The *initialize* method initializes the module by registering its ports with the Connection Manager, creating a MetaModule object containing the module id and port ids, and then passing the MetaModule object on to the Module Manager. Essentially, all ports are declared and instantiated in this method. For example,
```java
public MetaModule initialize() {
    // set the MetaModule
    MetaModule mm = new MetaModule("mm");

    // declare the ports
    port1 = new IntPort();              // input port 1
    mm.putPortID(port1.getPortID());
    port2 = new IntPort();              // input port 2
    mm.putPortID(port2.getPortID());
    port3 = new IntPort();              // output port
    mm.putPortID(port3.getPortID());
    return mm;
}
```
The Viewer module additionally specifies an HTML string to be passed to the front end and used there to fire a suitable viewing frame. In the initialize method, the MetaModule object holds this string and passes it on to the ModuleManager. The HTML syntax may contain code to display images, run other applets, etc. An example of the initialize method of the viewer module is given below.
```java
public MetaModule initialize() {
    // set the MetaModule
    MetaModule mm = new MetaModule("mm");

    // declare the port
    imgPort = new ImgPort();
    mm.putPortID(imgPort.getPortID());

    // data required for this particular module
    try {
        InetAddress local = java.net.InetAddress.getLocalHost();
        hostName = local.getHostName();
        listener = new ServerSocket(0); // open new server socket
        portNumber = listener.getLocalPort();
    } catch (UnknownHostException e) {
        System.out.println(e);
    } catch (IOException e) {
        System.out.println(e);
    }

    // create the HTML String object
    String htmlString = new String(...HTML code...);
    // store the object in the MetaModule
    mm.setHTML(htmlString);

    // return the MetaModule
    return mm;
}
```
The `run` method describes the behavior of the module. Upon receiving the run request from the ModuleManager, the module executes the run method in which the module may receive,
send or process data. It is here that the module can interact with various other modules by data transfer.
```java
public void run() {
    while (true) {
        // receive values from the two input ports
        num1 = ((Integer) port1.receive()).intValue();
        num2 = ((Integer) port2.receive()).intValue();
        num3 = num1 + num2;
        // send the result on the output port
        port3.send(new Integer(num3));
    }
}
```
The `destroy` method terminates a running module. All ports are deregistered and the module stops executing. All socket connections of the ports are closed.
```java
public void destroy() {    // terminate
    port1.destroy();       // destroy all ports
    port2.destroy();
    port3.destroy();
}
```
6.5 Next Steps
The WebFlow prototype served its role as a proof that such a system can be built but it also showed that several new servers are needed to provide the full functionality. One of them—the Viewer Manager—was already mentioned above. At least the following new servlets will be added to the WebFlow system:
- Viewer Manager
- WebVM Server Manager
- Resource Manager
- Communication Manager
The Viewer Manager will be in charge of providing HTML pages that include I/O module’s applet. One Viewer Manager will reside with every Module Manager, since any Module Manager may have I/O modules.
The WebVM Server Manager will be responsible for managing the servers in the system. It will be capable of adding and deleting servers from the system, as well as responding to queries about active servers. Unlike Module Managers and Connection Managers, Server Managers will be scarce in the system.
The Resource Manager will provide a list of resources, or modules, that can be found in the system. There are two possible ways of implementing resource management functions. One is by assigning a dedicated Resource Manager to each host, and the second is via a more collective Resource Manager, responsible for a group of hosts. In the first case, the Resource Manager
could be grouped with the Module Manager, and in the second, it could be grouped with the Server Manager.
The Communication Manager will multiplex all the communications between ports registered on a given WebVM node. In the WebFlow prototype, each port has its own socket through which it communicates with the remote port, thus not only wasting system resources (one extra socket per port), but also having to deal with the low level details of sending and receiving messages (although it has the distinct advantage of having the sockets themselves take care of message buffering).
The Communication Manager will provide facilities for sending, receiving, and buffering messages. Its natural place is together with the Connection Manager, since these two servlets actually represent only two stages in the overall communication process. Future WebFlow implementations will probably have just one Connection and Communication Manager, instead of two separate ones.
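The planned multiplexing can be sketched by tagging each frame with its destination port id; the names and framing format below are our own assumptions, not the Communication Manager's actual design:

```java
import java.io.*;
import java.util.*;

// Sketch of the planned Communication Manager: messages for many ports are
// multiplexed over a single stream by tagging each frame with its port id,
// instead of giving every port its own socket.
public class MuxDemo {

    // Write one frame: destination port id, then payload.
    public static void send(DataOutputStream out, int portId, int payload)
            throws IOException {
        out.writeInt(portId);
        out.writeInt(payload);
    }

    // Demultiplex frames from one stream into per-port message buffers.
    public static Map<Integer, List<Integer>> demux(byte[] stream)
            throws IOException {
        Map<Integer, List<Integer>> perPort = new HashMap<>();
        DataInputStream in =
            new DataInputStream(new ByteArrayInputStream(stream));
        while (in.available() > 0) {
            int portId = in.readInt();
            int payload = in.readInt();
            perPort.computeIfAbsent(portId, k -> new ArrayList<>()).add(payload);
        }
        return perPort;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        send(out, 1, 10);   // message for port 1
        send(out, 2, 20);   // message for port 2
        send(out, 1, 11);   // second message for port 1
        System.out.println(demux(buf.toByteArray()));
    }
}
```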
7 Tier-1 Visual Graph Editor
Since the idea of WebFlow is to create and maintain a domain of world-wide reusable computational modules, the natural place for accessing and maintaining such a domain is the Web itself. Therefore we turn to the existing browsers such as Netscape or Internet Explorer as a basis for the WebFlow Graphical User Interface. The security restrictions imposed by these browsers, implementation differences due to the ongoing corporate competition, as well as the recent developments in the Network Computer domain all point towards a design solution with a lightweight front end, accessible through any browser (including new consumer electronic front-ends), and a solid back end given by a personal Java Web server, hooked to a WebVM network, which will implement most of the functionality of the system.
The front end is designed as a tool for visual authoring of computational dataflow graphs that integrate the existing public domain software modules. It is based on highly intuitive visual icons and click-and-drag design metaphors which hide the inherent complexity of the WebFlow system.
In the current implementation we used UCI's Graph Editing Framework (GEF) [Robb96] as a basis to develop the front end of the WebFlow system. GEF supports the basic graph editing mechanisms and is naturally extensible. This framework is well structured with cleanly decoupled layers, which makes it possible to concentrate on the application-specific details that concern the WebFlow front end. Figure 10 shows a snapshot of the current editor in action.
The front end is implemented as an applet, it resides in the top level layer of the system, and it creates and maintains a connection with the Session Manager in the back end of the system.
The user creates a computational graph from modules as building blocks, by selecting the corresponding icons from a list of available modules in the system and inserting them into the graph. Multiple instances of a specific module can be created and their internal state and their connections are completely independent.
After the modules are inserted as nodes in the graph, the applet requests its initialization from the back end. After the initialization is done the back end replies to the applet, bringing information about the interface of the selected module. The applet builds and stores the representation of the graph, keeping information just about its visual representation. The
Figure 10: Initial WebFlow front-end, based on extended GEF (Graph Editing Framework) from UCI. Modules are selected from the palette in the click-and-drag style. Compute-webs are constructed interactively in the click-click-to-connect model. Individual modules can be given user-programmable visual appearance. In the next step, vector graphics drawing tools will be provided for interactive authoring of module icons.
information about the actual modules and their mapping on real machines are stored by the back end.
In the same fashion as the modules, the connections between modules are created. Connecting two modules means connecting a port from one module to a port of the other, by means of simple clicking and dragging.
Individual modules and/or connections can be removed from the graph, which results in deleting them from the structure maintained by the applet itself and in killing the initialized instances of the corresponding modules in the real system as well as breaking the real connections between initialized modules.
After the computational graph is created it can be executed as well. The results are monitored through the input/output modules that are inserted in the graph. The execution of a computational graph can generate a variety of feedback patterns, ranging from just producing final results from a complex computation, to periodic performance visualization and system monitoring modes, to real-time interactive display modes. The current WebFlow editor is restricted to single-user 2D graphics operations, but we are also initiating activities on bringing the front-end to the next level of interactivity. This includes integrating WebFlow with JSDA to support collaboratory editing and with VRML2 to support the televirtual authoring paradigm.
Figure 12: WebVM as a reusable middleware, tested in a set of research projects at Syracuse University such as WebSpace, VDCE and Televirtuality, focused on various front-end metaphors in tier-1 and/or various computational paradigms in tier-3.
8 Tier-3 Legacy Layer
In parallel with the core WebFlow development work described so far, we are also starting activities on building domain-specific tier-3 module libraries, including WebFlow wrappers to existing codes and legacy systems. ModuleWrapper discussed in Section 5 can wrap any computation, including pure Java, native libraries or external UNIX or NT processes.
In the pure Java sector, we are developing control, monitoring and coordination support for the base WebVM/WebFlow operations. Native libraries with C-coded optimized primitives offer a natural extension for media processing and high performance computing. In the external processing sector, we are experimenting with JDBC drivers for Oracle and mSQL, with JDBC/ODBC drivers for PC databases such as Access or SQL Server and we are developing a WebVM based distributed database layer (see Figure 11), with intelligent agent (such as IBM aglets) based connectivity and visual WebFlow support for designing high level information retrieval and data mining strategies.
9 WebFlow/WebVM Applications
We view WebVM as a reusable middleware and we intend to test it in a set of Web based distributed applications under development. These efforts, partially supported by Department of Energy, Rome Laboratory and IBM Watson, allow us to test various aspects of the WebVM architecture as illustrated in Figure 12. In two 'depth' projects, WebSpace and VDCE, we are probing selected tier-1 and tier-3 aspects of WebVM, respectively. In the 'breadth' area, focused on system scalability we are initiating collaboration with IBM Watson in the area of Televirtuality and we are seeking federal funds to address World-Wide Virtual Machine architecture [HPDC96] [SC96].
9.1 Command and Control
In the VDCE project [VDCE96], we are analyzing C3I functions recently published by the RL C3I Parallel Benchmarking Project, and we are developing a library of C3I modules that would support the interactive composition of Battle Management C3I systems such as the one shown in Figure 13 using the visual graph editing tools. More generally, VDCE addresses complementary aspects of Web based distributed computing and offers a natural connectivity between the pure Java based WebFlow model and the ATM based HPDC environments.
Figure 14: WebFlow based telemedicine bridge authoring toolkit
9.2 Telemedicine
In the CareWeb project [CareWeb96], conducted jointly with Syracuse University College of Nursing, SUNY Health Science Center and Syracuse City School District, we are developing a collaborative telemedicine system for school nursing, based on the ‘bridge’ topology [Bridge96]. Figure 14 illustrates a CareWeb bridge under development, connecting ‘points of need’ (parents, nurses) with ‘points of care’ (nurse practitioners, pediatricians) via an intelligent Web based switchboard. Individual bridge services are managed as WebVM nodes and connected, integrated and customized for individual healthcare provider needs using the WebFlow visual authoring tools.
9.3 Televirtuality
In a joint project with IBM Watson [TVR96], we are analyzing scalability issues of WebVM architecture in the context of televirtual, i.e., 3D multi-user collaborative environments on the Internet.
Figure 15: Example of a Televirtuality application with non trivial compute-web topology: Virtual Shopping Mall
We use Java based Liquid Reality VRML2 browsers for the interactive front-ends, and we are starting to build 3D worlds that will provide an experimentation platform for the scalability research. We selected the urban architectural domain for world building due to its natural modularity, and we are developing two specific worlds: a Virtual SU Campus based on CAD data from the SU Department of Architecture, and a Virtual Shopping Mall jointly with IBM Watson.
Our initial Mall architecture is drafted in Figure 15. Individual stores, floors and towers are powered by WebVM servers, managed by the Mall tenants and offering interactive shopping services. WebFlow authoring tools will be used for specifying connectivity between architectural, commerce and human/avatar components of such a complex world.
We are also working on building system-level WebFlow tools for performance visualization and interactive debugging, based on the TVR metaphor. In this world, modules are represented as rooms, ports as doors, channels as halls connecting rooms, etc. A new object arriving at an input port results in the corresponding door opening and an avatar-messenger entering the room with a new chunk of data, to be taken over by other avatar-managers for further handling.
References
[Bridge96] http://www.telemed.med.ecu.edu/bridge/bridge_1.htm
University of Texas at Austin, 1992, http://www.cs.utexas.edu/users/code
[CareWeb96] SU College of Nursing, NPAC, Syracuse School District, SUNY HSC, "CareWeb—a Web based community oriented healthcare communications system," http://www.npac.syr.edu/projects/careweb
[ChaRi96] Mani Chandy, Adam Rifkin, "The Caltech Infospheres Project," http://www.cs.caltech.edu/~adam/CALTECH/infospheres.html
http://www.cs.utk.edu/netsolve/
[Haba96] NCSA's Habanero Collaborative Tools Library, http://www.ncsa.uiuc.edu/SDG/Software/Habanero/ToolsLibrary.html
[HPDC96] http://www.npac.syr.edu/projects/webspace/doc/hpdc5/talk
[Jigs96] Jigsaw HTTP Server, World-Wide-Web Consortium, http://www.w3.org/pub/WWW/Jigsaw/
[Robb96] Jason Robbins, "GEF: Graph Editing Framework," http://www.ics.uci.edu/~jrobbins/GraphEditingFramework.html
[SC96] http://www.npac.syr.edu/projects/webspace/doc/sc96/talk
http://www.npac.syr.edu/projects/webspace/webbasedhpcc.html
An experience of young software engineers' employability in the Moroccan offshore industry

Philippe Saliou, Vincent Ribaud
Université de Brest, UEB
LabSTICC
Brest, France
{psaliou, ribaud}@univ-brest.fr

HAL Id: hal-00769821, https://hal.univ-brest.fr/hal-00769821, submitted on 3 Jan 2013. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Abstract—In recent years, French customers' demand for the relocation of part of their IT projects has allowed the emergence of an offshore software development industry in Morocco. A network of eight Moroccan universities and the University of Brest has set up a mobility scheme for Moroccan Master students. The OTI programme "Offshoring des Technologies de l'Information" is governed by the strong principle of directing skilled young engineers towards the benefit of Morocco's economic development. The education delivered in Morocco and France specializes in software development. The mobility scheme should help to reduce the socio-cultural distance that is inherent to offshore software projects. Without neglecting the need for good-quality academic education, the programme's success relies also on complementary actions: students' selection in Morocco; welcome and support in France; search, placement and follow-up of internships in France with pre-employment in Morocco. This article presents the original points of the programme and an evaluation using criteria related to students' needs, business demand and institutional requirements.
Keywords—students' mobility, employability, global software development
I. INTRODUCTION
“Global software development seems to have become a business necessity for various reasons, including cost, scarcity of resources, and the need to locate development closer to the customers [1].” The information technology industry in Morocco is growing rapidly thanks to offshore practices within global companies, where mixed teams of employees in France and Morocco produce or maintain information systems for customers who are mainly French. Although this young offshore industry benefits from the feedback of a comprehensive development model, it faces the well-known problems of Global Software Development (GSD). Ten years ago, [2] introduced a special issue of IEEE Software with the dimensions of the GSD problem: strategic issues, cultural differences, communication among stakeholders, knowledge management, topics related to project management and processes, and of course the technical difficulties. Most of these dimensions have to be addressed by the management and organization of the extended company, but good preparation of young engineers for these new challenges is an important issue, for which the Moroccan-French experience presented in this article was designed and implemented.
This article describes two aspects of a mobility programme called OTI - Offshoring of Information Technology (Offshoring des Technologies de l'Information), intended for final-year students of Moroccan universities' Masters of Computing. The role of the University of Brest, in addition to hosting students for a semester, is to coordinate the round trip among the French and Moroccan components of firms performing GSD between France and Morocco. Without neglecting the challenges and difficulties of providing a high-level academic education, this article focuses on extra-academic aspects, especially the employability of the beneficiaries of the OTI programme. The individual dimension is taken into account through the cultural and communication issues discussed in Sections II and IV. Elements related to the societal dimension - especially institutional constraints - are exposed in § II.C and used in Section IV to evaluate the OTI programme. The OTI programme itself is described in Section III. We end with a brief conclusion.
II. ISSUES ANALYSIS
A. Cultural issues
In the literature, the notion of distance is considered a major factor impacting GSD [3, 4]. GSD teams are usually composed of members from different countries, speaking different languages and with different managerial traditions. For example, [5] reported a case study where employees of an offshore component in the Far East preferred to resign rather than refuse the additional work requested by the Western management. The socio-cultural distance is a complex dimension that includes cultural, linguistic and political aspects, as well as individual issues. Several models of cultural dimensions are used in research; one of the most widely used is Hofstede's [6], although it is not without detractors. For example, [7] conclude that Hofstede's model is not relevant for studies on software development. Thus we hypothesize that cultural dimension models are less relevant if we can integrate the cultural differences into the training process. The major players in the Moroccan offshore software industry (Logica, AtoS, Capgemini, HP-CDG) asked us to provide facilities enabling Moroccan and French team members to share periods of several months of "friction". We made a pragmatic response that offers prospective young Moroccan employees a stay in a French company long enough to understand the professional behavior of French teams. This immersion also helps reduce the cultural distance, both for the young Moroccan collaborators and for the French teams that host them.
B. Communication issues
The difficulties of communication in a GSD project are emphasized by many authors: “rich communication is most important [8]”, “the key challenge of global software engineering is to establish appropriate communication and coordination habits in a global project environment [9]”, “The unique characteristics of Global Virtual Teams (GVTs) make the communication among team members a critical indicator of GVTs performance [10]”. Most authors propose solutions based either on media technology [3], on management practices and information sharing [9], or on techniques reducing the temporal or geographical distance [4]. To the extent that these practices are within the purview of software companies, our academic approach is intended to improve oral and written communication skills in French and to develop documentation practices.
C. Institutional issues
The reform of the higher education system implemented by Morocco aimed to give the 14 Moroccan public universities greater autonomy, to adapt courses and training to socioeconomic needs, and to renovate university curricula based on the model of the European LMD (Bachelor - Licence, Master, Doctorate). To support this reform, a Priority Solidarity Fund called Fonds de Solidarité Prioritaire d'Appui à la Réforme de l'Enseignement Supérieur Marocain (FSP-ARESM) was established in 2004. It covered the areas of governance, teaching (instructional design) and research.
Under the instructional design cooperation on computer science, the French and Moroccan partners quickly converged on a proposal for a final internship in France for students of a Moroccan Master called Master Offshoring. It was stated that the return to the country of origin was a crucial point that only strong institutional governance can guarantee.
To promote youth mobility in the Mediterranean area, a cooperation of 16 countries in 2010 allowed the establishment of the Mediterranean Office for Youth (MOY) (http://www.officemediterraneeendelajeunesse.org). This office aims to promote student mobility in a similar way to the Erasmus programme. The MOY was established in recognition of the fact that circular migration for educational purposes is a decisive factor in the development of wealth, intercultural exchange, and mutual understanding in the Mediterranean region. Since its inception, the MOY has issued an annual call for proposals for labeling Master and PhD programmes. The objective is to facilitate the mobility experience of young people in priority programmes for the development of the Mediterranean area and to facilitate youth employment in their country of origin. The MOY label provides significant scholarships that motivate high-level students to apply for labeled programmes. The OTI programme, a candidate in the first call for proposals, was labeled by the MOY in March 2011.
D. GSD education programme
Some universities in Europe and North America offer training courses on Global Software Development / Global Software Engineering. In essence, this type of course is usually held in cooperation between several universities distant in space and/or time. Most of the time, there are group projects that aim to make students or groups of students from different universities collaborate [11, 12, 13]. By their very nature, these courses teach and use many means of communication and collaboration: instant messaging, asynchronous video conferencing and Internet telephony, collaboration tools, shared resource management systems. Several articles report positive feedback, because the inconveniences inherent in distributed academic projects are representative of GSD projects: “Beside their core tasks the students were also challenged by a lot of problems referring to communication issues and organizational coordination. Thus, they could experience how social aspects affect communication [11]”. “The difference of nine hours presented a significant difficulty in arranging meetings. Cultural differences also played some role (attending meetings in a timely manner was one of them) [12]”. “This would appear to underline the importance of directly addressing known cultural problems in multicultural teaming. The general trend of the students' comments for this item was along the lines of ‘we had a lot of problems in the project but we dealt with them and that was very satisfying’ [13]”.
Some universities offer comprehensive programmes to prepare for software engineering in a multicultural environment. For over 20 years, Detroit Mercy University has offered the International Studies in Software Engineering programme (ISSE). This programme aims to make students understand that there are models of practice not originating from, or differing from, their culture of origin and, consequently, aims to equip students with the rules and practices of other cultures [13]. The ISSE programme's principal means of action is immersion in one or two cultures, which is also our main development tool. The difference lies in the fact that we offer Moroccan students an experience in a foreign university (ours) and a foreign company (the French component associated with the possible Moroccan employer).
In Europe, we are aware of two European Master programmes in Software Engineering. The European Master on Software Engineering programme (EMSE, http://emse.fi.upm.es/) is a joint initiative of four universities in Blekinge (Sweden), Bolzano (Italy), Kaiserslautern (Germany) and Madrid (Spain). The principle of mobility is to choose two universities and spend a year in each, obtaining both Master's degrees on success. The last semester is devoted to the Master's thesis, supervised by a master instructor from each university. This case of foreign immersion leading to a double Master's degree is similar to our programme, but as the purpose of the EMSE Master is research, we cannot speak of employability in the Global Software Development market.
The Global Software Engineering European Master programme (GSEEM, http://www.gseem.eu/) is offered by three universities in Västerås (Sweden), L'Aquila (Italy) and Amsterdam (Netherlands). Unlike the EMSE programme, the objective is to prepare future software engineers to work in distributed professional environments. The mobility scheme is the same: a year in the home university, a year in the host university, and obtaining two Master's degrees on success. The Master is research-oriented with three specialization profiles: software architecting, real-time embedded systems engineering, and web systems and services engineering. Students are well prepared for GSD; GSEEM students won the first Student Contest on Software Engineering (SCORE) at the ICSE 2009 conference.
Compared to existing programmes, ours differs in its strong career orientation, designed to provide a first professional experience in France leading to pre-employment in Morocco.
III. DESCRIPTION OF THE PROGRAMME OFFSHORING DES TECHNOLOGIES DE L’INFORMATION
A. Fundamental Principles
The issues of mobility and employability have been central to the programme since its inception in 2007. Particular attention was paid to the problem of mobile students' evaporation in the host country (the "brain drain") in order to achieve an ethical and balanced cooperation model. Neither French government bodies nor the teachers of the University of Brest wanted to bring students from Morocco in order to keep them in France. All the Moroccan universities want their students to return to work in Morocco after a successful stay in France. The industrial players involved in the project seek to improve both sides of their extended enterprise: the French side (generally called the front office), in contact with the client, and the Moroccan side (generally called the back office), to which a portion of the offshore development is assigned.
The exchange of students and academics between countries is a proven system that usually takes place within intergovernmental arrangements such as Erasmus Mundus. Qualifying for work in a foreign country - a graduation internship, for example - obeys precise laws and complicated regulations that involve far more actors than an academic exchange.1
The partners have therefore agreed:
A founding principle - Acquire a first experience in France and then mobilize the skills for the benefit of economic development of Morocco.
Centralized coordination of mobility and employability - This coordination is supported by the University of Brest; it acts as a hub that connects the Moroccan universities, Moroccan students, future Moroccan employers and the French companies working in offshore software development; it also coordinates the different academic, administrative and legal procedures.
B. Terms of mobility
The OTI programme includes two measures having a different pattern of mobility.
Since 2007-2008, the measure called "Stage en France avec une pré-embauche au Maroc" (SFM) - Internship in France with pre-employment in Morocco - provides a mobility of one semester. This pattern allows 2nd-year Master students to perform their graduation internship in France with the prospect of employment in Morocco. The University of Brest centralizes and controls the assignment of internship proposals issued by the departments of French companies (almost all located elsewhere than Brest) performing offshore development with their Moroccan counterparts. On behalf of the Moroccan universities, the University of Brest also conducts the internship follow-up and assessment required to obtain the Moroccan Master's degree.
In 2010, we introduced a second measure, based on a mobility of one year. This is a joint Master's degree from the University of Brest and 8 Moroccan universities. The first year of study takes place in Morocco, the second year in France: 6 months of study at Brest followed by a 6-month internship with pre-employment in Morocco. The Master "Offshoring of Information Technology" was one of the 33 MOY-labelled Master programmes in March 2011, and the only one in the field of Information Technology.
C. Content of the double Master’s degree
The programme curriculum has been designed to train engineers in the software development (design, production and maintenance) of offshore projects. The curriculum does not focus only - as other GSD courses or programmes do - on aspects specific to offshoring. The programme's objective is to acquire a foundation of skills and knowledge in the new technologies and industrialization tools used in large software development companies. It is assumed that the processes, methods, techniques and tools of offshore development vary from company to company and are taught and mastered during the training internship, which should also be a formative period.
While historically the first Moroccan partners followed a common curriculum framework (so-called Offshoring Masters), the first year of Master in Morocco can be performed in four quite different specialties:
- Software development and quality: universities Hassan II Mohammedia (UH2M - Casablanca), Chouaib Doukkali (UCD - El Jadida) and Ibn Tofail (UIT - Kenitra);
- Networking and Systems: universities Ibn Zohr (UIZ - Agadir), Hassan II Mohammedia (UH2M - Casablanca), Hassan 1st (UH1 - Settat) and Abdelmalek Essaâdi (UAE - Tanger);
- Information System Engineering: university Cadi Ayyad (UCAM - Marrakech);
Therefore, the knowledge base acquired after the first year may vary from student to student, and an individual harmonization process is implemented after a prognostic evaluation in three ground subjects: operating systems and networks, databases and information systems, and object-oriented programming in Java. Depending on his/her evaluation results, a student attends courses on one, two or three ground subjects over a period of 6 weeks.
From November to March, all students attend 6 technical courses of 60 hours each: object-oriented design, distributed systems, Web technologies, software engineering, information systems, J2EE development. They attend also English and French courses. The 6-months internship lasts from April to September.
1 This explains why universities might be reluctant.
D. Practical Aspects
Depending on the mobility measure, students' selection is subject to different rules.
For the internship in France, the selection is made in two stages. First, a screening of second-year Master students is performed by each Moroccan university, based on academic performance and the presumed ability of students to comply with the programme's ethics and requirements. The internship aims to foster skills for a possible employment in Morocco, so the Human Resources departments of Moroccan employers perform their recruitment procedures, select the candidates that suit them and assign them to an internship in France. In collaboration with each candidate, the University of Brest coordinates the logistics: internship formalities, visa, travel, accommodation...
For the double Master's degree, the number of students is limited to 40, despite many requests. Each of the nine Masters establishes a shortlist with at most 10 candidates. The screened students take a knowledge test in Information Technology, the French language test Test de Connaissance du Français (TCF) and a personal interview with the University of Brest programme manager and his counterpart in each Moroccan Master. Up to 40 students are finally selected. Although the workload is very heavy, this selection process is transparent, understood and agreed by all partners, and contributes to the success of the collaboration.
The MOY label provides us with 4 excellence grants of 6500 Euros (for 6 months) and, thanks to Logica - our main industrial sponsor - we are able to offer 12 grants of 2000 Euros, assigned on financial criteria. During the 6-month internship, companies provide interns with an allowance of between 900 and 1000 Euros per month. Academic fees are cheap in French public universities: around 450 Euros per year, including good social insurance. The overall cost of the mobility patterns is affordable for many Moroccan students, a key factor in their decision to apply for the programme.
Welcome conditions and stay facilities are a factor in students' success. The OTI programme has forged a close partnership with the CLOUS of Brest (the local branch of the national organization whose mission is to facilitate students' lives in many areas: food, housing, scholarships, social and cultural activities, etc.). The CLOUS provides 50 campus rooms to the students in France under the responsibility of the University of Brest.
The first steps in France are supported by teachers from the University of Brest: welcome on arrival, administrative facilities, and help settling in. Throughout the year, support is provided to students on demand. A successful integration into French society is fundamental to the programme's requirements, and university staff have to deal with extra-academic issues.
E. Historical data
1) Intern counts: Table I shows the number of mobile students in France under the responsibility of the University of Brest between 2007 and 2012. This is a cumulative count over both mobility patterns described in § III.B.
<table>
<thead>
<tr>
<th>University</th>
<th>07-08</th>
<th>08-09</th>
<th>09-10</th>
<th>10-11</th>
<th>11-12</th>
<th>Σ</th>
</tr>
</thead>
<tbody>
<tr>
<td>UIZ-Agadir</td>
<td>2</td>
<td>5</td>
<td>4</td>
<td>7</td>
<td>5</td>
<td>23</td>
</tr>
<tr>
<td>UH2M-Casablanca</td>
<td>2</td>
<td>6</td>
<td>11</td>
<td>8</td>
<td></td>
<td>27</td>
</tr>
<tr>
<td>UCD-El Jadida</td>
<td>1</td>
<td>3</td>
<td></td>
<td></td>
<td></td>
<td>4</td>
</tr>
<tr>
<td>UIT-Kenitra</td>
<td>2</td>
<td>5</td>
<td>13</td>
<td>5</td>
<td></td>
<td>30</td>
</tr>
<tr>
<td>UCAM-Marrakech</td>
<td>2</td>
<td>5</td>
<td>2</td>
<td>4</td>
<td></td>
<td>13</td>
</tr>
<tr>
<td>UH1-Rabat</td>
<td>2</td>
<td>3</td>
<td>7</td>
<td>10</td>
<td>6</td>
<td>28</td>
</tr>
<tr>
<td>UH1-Settat</td>
<td>2</td>
<td>3</td>
<td>2</td>
<td>2</td>
<td></td>
<td>9</td>
</tr>
<tr>
<td>UAE-Tanger</td>
<td>2</td>
<td>2</td>
<td>3</td>
<td>2</td>
<td></td>
<td>9</td>
</tr>
<tr>
<td><strong>Total</strong></td>
<td><strong>14</strong></td>
<td><strong>27</strong></td>
<td><strong>20</strong></td>
<td><strong>47</strong></td>
<td><strong>35</strong></td>
<td><strong>143</strong></td>
</tr>
</tbody>
</table>
2) Distribution: Table II shows the number of graduating interns by city from 2007 to 2011. Each city hosts one of the French establishments of a company offshoring software development to Morocco. This is a cumulative count over both mobility patterns described in § III.B.
<table>
<thead>
<tr>
<th>City</th>
<th>2008</th>
<th>2009</th>
<th>2010</th>
<th>2011</th>
<th>Σ</th>
</tr>
</thead>
<tbody>
<tr>
<td>Amiens</td>
<td>2</td>
<td>2</td>
<td></td>
<td></td>
<td>4</td>
</tr>
<tr>
<td>Bordeaux</td>
<td>3</td>
<td>1</td>
<td>3</td>
<td>15</td>
<td>22</td>
</tr>
<tr>
<td>Brest</td>
<td>2</td>
<td>8</td>
<td>5</td>
<td>15</td>
<td></td>
</tr>
<tr>
<td>Lille</td>
<td>3</td>
<td>3</td>
<td></td>
<td></td>
<td>6</td>
</tr>
<tr>
<td>Lyon</td>
<td>8</td>
<td>8</td>
<td></td>
<td></td>
<td>16</td>
</tr>
<tr>
<td>Montpellier</td>
<td>3</td>
<td>3</td>
<td></td>
<td></td>
<td>6</td>
</tr>
<tr>
<td>Nantes</td>
<td>7</td>
<td>17</td>
<td>3</td>
<td>5</td>
<td></td>
</tr>
<tr>
<td>Paris</td>
<td>7</td>
<td>8</td>
<td>15</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Rennes</td>
<td>1</td>
<td>3</td>
<td>4</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Toulouse</td>
<td>2</td>
<td>2</td>
<td></td>
<td>4</td>
<td></td>
</tr>
<tr>
<td><strong>Total</strong></td>
<td><strong>14</strong></td>
<td><strong>27</strong></td>
<td><strong>20</strong></td>
<td><strong>47</strong></td>
<td><strong>108</strong></td>
</tr>
</tbody>
</table>
IV. Evaluation
This section presents an evaluation of the programme regarding the issues presented in Section II.
A. Cultural issues
Almost all Moroccan students are Muslims and respect Islamic rules that can be unknown and misunderstood in France. Conversely, secularism (for instance, the absence of distinctive religious signs) at school and at work is an important aspect of French society and can be difficult to grasp for Moroccan people. This cultural difference – related to religion – is hardly taken into account in GSD research.
Nevertheless, the knowledge of some fundamental principles and rules helps the understanding of many relational situations. It is quite logical to believe that a semester or a year in France allows Moroccan students to better understand French customs. It is also reasonable to think that the integration of young Moroccan men and women in French companies educates French teams about cultural differences. The University of Brest sometimes has to accompany French teams to help them understand that we cannot force Moroccan students to fully adopt a Western model and that the workplace must also accept to adapt some cultural principles, such as allowing the wearing of a headscarf at work.
B. Communication issues
As indicated in section II.B, the concerns associated with methods and tools of communication used in a GSD environment are not specifically addressed in the programme.
In addition to the scientific and technical aspects, the programme offers numerous actions to prepare for employability: a course preparing for professional integration, lectures by industry practitioners, workshops simulating job interviews, and coaching sessions for job applicants. These activities allow students to better understand the codes of conduct and communication in France. They also help them become aware that they might have to adapt some attitudes to comply with French culture, for example their dress or a gregarious way of life.
[15] states that in addition to the official communication, it is necessary to have an informal communication within the team. “We will argue that most of the existing coordination support tools have used formal communication procedures and that there is a need for nurturing informal communication procedures as well [15]”. The difficulty of communicating in the same language is a barrier to the use of these two types of communication, and indirectly to team performance [2]. We assume that a long stay in France – including a course in written and oral communication taught by French teachers – develops both forms of communication.
C. Institutional criteria
This programme was designed in 2009 as a result of the cooperation within the intergovernmental framework FSP-ARESM. The Mediterranean Office for Youth (MOY) is promoting higher education training programmes that correspond to fields of Mediterranean interest. Programmes shall fulfill several criteria which are:
- **Degree of internationalization of programme** – in its design, implementation, and the origin of students.
- **Academic excellence of the programme** – how the proposed programme will contribute to the excellence, innovativeness, and competitiveness of the Mediterranean region.
- **Quality of the partnership and of the mobility scheme** – what are the complementarities among the partners and how the partnership ensures excellence; and the justification and the relevance of the compulsory mobility requirements.
- **Career placement** – how the programme responds to the skill needs of the Mediterranean region and how it makes graduates more employable.
- **Facilities and services proposed to the students** – what are the measures that will be taken to provide information and support to mobile students.
In the remainder of this section, we discuss some aspects of the programme using the MOY selection criteria.
1) **Degree of internationalization**: The FSP-ARESM computing cooperation included, in its infancy, seven Moroccan universities and three French universities. Mobility patterns between a Moroccan university and a French university were also considered. However, all Moroccan partners agreed to participate in a centralized model where the University of Brest coordinates and arbitrates relations between Moroccan universities and businesses. This hub model is fundamentally different from other MOY mobility programmes or other international joint programmes [13, 14].
2) **Excellence of the programme**: We are not aware of any international French-speaking Master’s degree specialized in offshore software development. The programme was jointly established with an objective of employability as a software engineer in the context of extended teams. The programme is only a year old and we do not yet have assessment indicators. Without claiming excellence, we believe that, in the field of computing studies, this programme is the best that the partnership between Moroccan universities and the University of Brest can achieve.
3) **Quality of the partnership and of the mobility scheme**: The partnership is built on a successful cooperation (FSP-ARESM). Mutual trust and a strong concern to protect the interests of all stakeholders provided us with a sound basis. The cooperation worked well because it was built on the feedback from three years of experimentation and adaptation: we gather course assessments, internship follow-ups, and interns’ evaluations.
4) **Career placement**: This programme is rooted in the emergence of the offshore software development industry in Morocco. From the beginning, the goal has been that students acquire a first experience in France and then mobilize those skills for the benefit of the economic development of Morocco. This aim of employability in Morocco drove most organizational choices related to internship seeking and monitoring. For example, seeking an internship is not a student’s responsibility but is performed by the University of Brest in close partnership with the major IT companies that support this programme (Logica, AtoS, Capgemini, HP-CDG). We are also supported by the APEBI, the federation of information technology,
telecommunications and offshoring in Morocco (http://www.apebi.org.ma). An overview of employability figures is presented in Section IV.E.
5) **Facilities and services provided to the students**: The support of mobile students starts when they are selected. Each of them has a Moroccan referee and a French referee who answer any questions, help during the formalities process, and assist with logistics until her/his arrival in France. Upon arrival, students are supported until they reach a relative autonomy. During their mobility in France, students have the support of a single referee for all their academic and extra-academic concerns: residence permit, registration, housing, banking, insurance, social security, etc.
D. Students’ feedback
As the programme has been running for only one year, we have not yet established a formal evaluation of the Master’s degree. The classroom is exclusively composed of Moroccan students, and students asked for co-education with French students. We were only able to offer a few societal and technical conferences given by industrial stakeholders. Students reported a couple of drawbacks, and we set up a few improvements that have to be monitored. The only measured satisfaction index is the increase in the number of applicants at the entrance of the programme: 50 in 2010-2011 and 90 in 2011-2012.
E. Employers’ feedback
While it is possible to carry out satisfaction surveys in major Moroccan companies, it is difficult to correlate the results. The hiring rate after graduation provides an indication of the programme’s success, although it should also take into account the market and the need for graduates – which decreased considerably in 2011.
<table>
<thead>
<tr>
<th>Company</th>
<th>2008 interns</th>
<th>2008 hired</th>
<th>2009 interns</th>
<th>2009 hired</th>
<th>2010 interns</th>
<th>2010 hired</th>
<th>2011 interns</th>
<th>2011 hired</th>
<th>Hiring rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>AtoS</td>
<td>3</td>
<td>3</td>
<td>2</td>
<td>2</td>
<td></td>
<td></td>
<td>6</td>
<td>3</td>
<td>73%</td>
</tr>
<tr>
<td>Capgemini</td>
<td>6</td>
<td>6</td>
<td>13</td>
<td>10</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>84%</td>
</tr>
<tr>
<td>HP-CDG</td>
<td>2</td>
<td>2</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>100%</td>
</tr>
<tr>
<td>Logica</td>
<td>3</td>
<td>3</td>
<td>12</td>
<td>6</td>
<td>20</td>
<td>19</td>
<td>41</td>
<td>30</td>
<td></td>
</tr>
<tr>
<td>Total</td>
<td>14</td>
<td>14</td>
<td>27</td>
<td>18</td>
<td>20</td>
<td>19</td>
<td>47</td>
<td>33</td>
<td>78%</td>
</tr>
</tbody>
</table>
Table III shows the number of internships and the number of hires, grouped by year and by company. The cumulated counts of both mobility patterns give the hiring rate at the end of the internship. Cumulating all enterprises together, 84 of 108 interns were hired by their company, i.e. 78%.
PERSPECTIVES AND CONCLUSION
The development of the SFM measure is expected, but it depends on the market, which tightened in 2011. France issued new laws related to internships, and we have to change the SFM measure drastically to comply with the law.
This article presented a mobility programme for Moroccan students coming to France, governed by a strong principle of directing skills to the benefit of the economic development of Morocco. The Master’s education in Morocco and France specializes in software development. However, part of the success of this programme is not related to academic education, but to additional actions external to the academic courses: students’ selection, help and support in France, and the seeking, assigning and monitoring of internships in France with pre-employment in Morocco. Moreover, unlike many studies where the view is (relatively) one-way – that of the companies outsourcing their development – the OTI programme also takes into account the expectations of Moroccan companies that are insourcing their software development activity.
REFERENCES
LCP Array Construction
The LCP array is easy to compute in linear time using the suffix array $SA$ and its inverse $SA^{-1}$. The idea is to compute the lcp values by comparing the suffixes, but skip a prefix based on a known lower bound for the lcp value obtained using the following result.
**Lemma 4.9:** For any $i \in [0..n)$, $LCP[SA^{-1}[i]] \geq LCP[SA^{-1}[i - 1]] - 1$
**Proof.** For each $j \in [0..n)$, let $\Phi(j) = SA[SA^{-1}[j] - 1]$. Then $T_{\Phi(j)}$ is the immediate lexicographical predecessor of $T_j$ and $LCP[SA^{-1}[j]] = lcp(T_j, T_{\Phi(j)})$.
- Let $\ell = LCP[SA^{-1}[i - 1]]$ and $\ell' = LCP[SA^{-1}[i]]$. We want to show that $\ell' \geq \ell - 1$. If $\ell = 0$, the claim is trivially true.
- If $\ell > 0$, then for some symbol $c$, $T_{i-1} = cT_i$ and $T_{\Phi(i-1)} = cT_{\Phi(i-1)+1}$. Thus $T_{\Phi(i-1)+1} < T_i$ and $lcp(T_i, T_{\Phi(i-1)+1}) = lcp(T_{i-1}, T_{\Phi(i-1)}) - 1 = \ell - 1$.
- If $\Phi(i) = \Phi(i - 1) + 1$, then $\ell' = lcp(T_i, T_{\Phi(i)}) = lcp(T_i, T_{\Phi(i-1)+1}) = \ell - 1$.
- If $\Phi(i) \neq \Phi(i - 1) + 1$, then $T_{\Phi(i-1)+1} < T_{\Phi(i)} < T_i$ and $\ell' = lcp(T_i, T_{\Phi(i)}) \geq lcp(T_i, T_{\Phi(i-1)+1}) = \ell - 1$. $\square$
The algorithm computes the lcp values in the order that makes it easy to use the above lower bound.
**Algorithm 4.10:** LCP array construction
Input: text $T[0..n]$, suffix array $SA[0..n]$, inverse suffix array $SA^{-1}[0..n]$
Output: LCP array $LCP[1..n]$
1. $\ell \leftarrow 0$
2. for $i \leftarrow 0$ to $n - 1$ do
3. $k \leftarrow SA^{-1}[i]$
4. $j \leftarrow SA[k - 1]$ // $j = \Phi(i)$
5. while $T[i + \ell] = T[j + \ell]$ do $\ell \leftarrow \ell + 1$
6. $LCP[k] \leftarrow \ell$
7. if $\ell > 0$ then $\ell \leftarrow \ell - 1$
8. return $LCP$
The time complexity is $O(n)$:
- Everything except the while loop on line (5) clearly takes linear time.
- Each round in the loop increments $\ell$. Since $\ell$ is decremented at most $n$ times on line (7) and cannot grow larger than $n$, the loop is executed $O(n)$ times in total.
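Algorithm 4.10 translates almost line by line into Python. The sketch below is illustrative (the function and variable names are ours, not from the notes); it adds explicit bounds checks so that it also works when the text carries no sentinel.

```python
def lcp_array(T, SA):
    """Algorithm 4.10: LCP[k] = length of the lcp of suffixes SA[k] and SA[k-1]."""
    n = len(T)
    inv = [0] * n                    # inverse suffix array SA^{-1}
    for k, i in enumerate(SA):
        inv[i] = k
    LCP = [0] * n                    # LCP[0] is unused
    l = 0
    for i in range(n):               # process suffixes in text order
        k = inv[i]
        if k == 0:                   # T_i is the smallest suffix: no predecessor
            l = 0
            continue
        j = SA[k - 1]                # j = Phi(i)
        while i + l < n and j + l < n and T[i + l] == T[j + l]:
            l += 1
        LCP[k] = l
        if l > 0:                    # Lemma 4.9: the next lcp is at least l - 1
            l -= 1
    return LCP
```

For example, for $T = \text{banana\$}$ with $SA = [6,5,3,1,0,4,2]$, this returns $LCP = [0,0,1,3,0,0,2]$.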
**RMQ Preprocessing**
The range minimum query (RMQ) asks for the smallest value in a given range in an array. Any array can be preprocessed in linear time so that RMQ for any range can be answered in constant time.
In the LCP array, RMQ can be used for computing the lcp of any two suffixes.
**Lemma 4.11:** The length of the longest common prefix of two suffixes $T_i < T_j$ is $lcp(T_i, T_j) = \min\{LCP[k] \mid k \in [SA^{-1}[i] + 1..SA^{-1}[j]]\}$.
The lemma can be seen as a generalization of Lemma 1.25 and holds for any sorted array of strings. The proof is left as an exercise.
- The RMQ preprocessing of the LCP array supports the same kind of applications as the LCA preprocessing of the suffix tree, but RMQ preprocessing is simpler than LCA preprocessing.
- The RMQ preprocessed LCP array can also replace the LLCP and RLCP arrays.
We will next describe the RMQ data structure for an arbitrary array $L[1..n]$ of integers.
- We precompute and store the minimum values for the following collection of ranges:
- Divide $L[1..n]$ into blocks of size $\log n$.
- For all $0 \leq \ell \leq \log(n/\log n)$, include all ranges that consist of $2^\ell$ blocks. There are $O(\log n \cdot \frac{n}{\log n}) = O(n)$ such ranges.
- Include all prefixes and suffixes of blocks. There are a total of $O(n)$ of them.
- Now any range $L[i..j]$ that overlaps or touches a block boundary can be exactly covered by at most four ranges in the collection.
The minimum value in $L[i..j]$ is the minimum of the minimums of the covering ranges and can be computed in constant time.
Ranges $L[i..j]$ that are completely inside one block are handled differently.
- Let $NSV(i) = \min\{k > i \mid L[k] < L[i]\}$ (NSV=Next Smaller Value). Then the position of the minimum value in the range $L[i..j]$ is the last position in the sequence $i, NSV(i), NSV(NSV(i)), \ldots$ that is in the range. We call these the NSV positions for $i$.
- For each $i$, store the NSV positions for $i$ up to the end of the block containing $i$ as a bit vector $B(i)$. Each bit corresponds to a position within the block and is one if it is an NSV position. The size of $B(i)$ is $\log n$ bits and we can assume that it fits in a single machine word. Thus we need $O(n)$ words to store $B(i)$ for all $i$.
- The position of the minimum in $L[i..j]$ is found as follows:
- Turn all bits in $B(i)$ after position $j$ into zeros. This can be done in constant time using bitwise shift operations.
- The right-most 1-bit indicates the position of the minimum. It can be found in constant time using a lookup table of size $O(n)$.
All the data structures can be constructed in $O(n)$ time (exercise).
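The $O(n)$-space structure above is somewhat involved. As a simpler illustration of RMQ preprocessing, here is a Python sketch of the classic sparse table, which uses $O(n \log n)$ preprocessing time and space instead of $O(n)$ but still answers queries in constant time (the function names are ours):

```python
def rmq_table(L):
    """Sparse table: table[k][i] = min of L[i .. i + 2^k - 1]."""
    n = len(L)
    table = [list(L)]
    k = 1
    while (1 << k) <= n:
        prev = table[-1]
        half = 1 << (k - 1)
        table.append([min(prev[i], prev[i + half])
                      for i in range(n - (1 << k) + 1)])
        k += 1
    return table

def rmq(table, i, j):
    """Minimum of L[i..j] (inclusive) in O(1) time: two overlapping power-of-two ranges."""
    k = (j - i + 1).bit_length() - 1
    return min(table[k][i], table[k][j - (1 << k) + 1])
```

Combined with Lemma 4.11, this computes the lcp of any two suffixes: for the LCP array of banana\$, a query over the range $[SA^{-1}[3]+1..SA^{-1}[1]] = [3..3]$ returns $3 = lcp(\text{ana\$}, \text{anana\$})$.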
Enhanced Suffix Array
The enhanced suffix array adds two more arrays to the suffix and LCP arrays to make the data structure fully equivalent to suffix tree.
- The idea is to represent a suffix tree node \( v \) representing a factor \( S_v \) by the suffix array interval of the suffixes that begin with \( S_v \). That interval contains exactly the suffixes that are in the subtree rooted at \( v \).
- The additional arrays support navigation in the suffix tree using this representation: one array along the regular edges, the other along suffix links.
With all the additional arrays, the suffix array is no longer a very space-efficient data structure. Nowadays suffix arrays and trees are often replaced with compressed text indexes that provide the same functionality in much smaller space.
Burrows–Wheeler Transform
The Burrows–Wheeler transform (BWT) is an important technique for text compression, text indexing, and their combination compressed text indexing.
Let $T[0..n]$ be the text with $T[n] = \$$. For any $i \in [0..n]$, $T[i..n]T[0..i)$ is a rotation of $T$. Let $M$ be the matrix, where the rows are all the rotations of $T$ in lexicographical order. All columns of $M$ are permutations of $T$. In particular:
- The first column $F$ contains the text characters in order.
- The last column $L$ is the BWT of $T$.
**Example 4.12:** The BWT of $T = \text{banana\$}$ is $L = \text{annb\$aa}$.
Here are some of the key properties of the BWT.
- The BWT is easy to compute using the suffix array:
\[
L[i] = \begin{cases}
\$, & \text{if } SA[i] = 0 \\
T[SA[i] - 1], & \text{otherwise}
\end{cases}
\]
- The BWT is invertible, i.e., \( T \) can be reconstructed from the BWT \( L \) alone. The inverse BWT can be computed in the same time it takes to sort the characters.
- The BWT \( L \) is typically easier to compress than the text \( T \). Many text compression algorithms are based on compressing the BWT.
- The BWT supports backward searching, a different technique for indexed exact string matching. This is used in many compressed text indexes.
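The suffix-array formula for $L$ above is a one-liner in Python. A minimal sketch, assuming $T$ ends with the sentinel \$ (the function name is ours):

```python
def bwt(T, SA):
    """BWT via the suffix array: L[i] = T[SA[i]-1], or $ when SA[i] = 0."""
    return ''.join(T[i - 1] if i > 0 else '$' for i in SA)
```

For $T = \text{banana\$}$ and $SA = [6,5,3,1,0,4,2]$ this yields $\text{annb\$aa}$, matching Example 4.12.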
**Inverse BWT**
Let $\mathcal{M}'$ be the matrix obtained by rotating $\mathcal{M}$ one step to the right.
**Example 4.13:**
\[
\begin{array}{ccc}
\mathcal{M} & & \mathcal{M}' \\
\texttt{\$banana} & & \texttt{a\$banan} \\
\texttt{a\$banan} & & \texttt{na\$bana} \\
\texttt{ana\$ban} & & \texttt{nana\$ba} \\
\texttt{anana\$b} & \xrightarrow{\text{rotate}} & \texttt{banana\$} \\
\texttt{banana\$} & & \texttt{\$banana} \\
\texttt{na\$bana} & & \texttt{ana\$ban} \\
\texttt{nana\$ba} & & \texttt{anana\$b} \\
\end{array}
\]
- The rows of $\mathcal{M}'$ are the rotations of $T$ in a different order.
- In $\mathcal{M}'$ without the first column, the rows are sorted lexicographically. If we sort the rows of $\mathcal{M}'$ stably by the first column, we obtain $\mathcal{M}$.
This cycle $\mathcal{M} \xrightarrow{\text{rotate}} \mathcal{M}' \xrightarrow{\text{sort}} \mathcal{M}$ is the key to inverse BWT.
• In the cycle, each column moves one step to the right and is then permuted. The permutation is fully determined by the last column of \( M \), i.e., the BWT.
• Thus if we know column \( j \), we can obtain column \( j + 1 \) by permuting column \( j \). By repeating this, we can reconstruct \( M \).
• To reconstruct \( T \), we do not need to compute the whole matrix, only one row.
**Example 4.14:** For $L = \texttt{annb\$aa}$, each rotate/sort cycle reveals one more column of $\mathcal{M}$: after the first cycle the first column is known, after the second the first two columns, and so on (unknown symbols shown as dashes):

\[
\begin{array}{cccc}
\texttt{\$-----a} & \texttt{\$b----a} & & \texttt{\$banana} \\
\texttt{a-----n} & \texttt{a\$----n} & & \texttt{a\$banan} \\
\texttt{a-----n} & \texttt{an----n} & & \texttt{ana\$ban} \\
\texttt{a-----b} & \texttt{an----b} & \cdots & \texttt{anana\$b} \\
\texttt{b-----\$} & \texttt{ba----\$} & & \texttt{banana\$} \\
\texttt{n-----a} & \texttt{na----a} & & \texttt{na\$bana} \\
\texttt{n-----a} & \texttt{na----a} & & \texttt{nana\$ba} \\
\end{array}
\]
The permutation that transforms $M'$ into $M$ is called the LF-mapping.
- LF-mapping is the permutation that stably sorts the BWT $L$, i.e., $F[LF[i]] = L[i]$. Thus it is easy to compute from $L$.
- Given the LF-mapping, we can easily follow a row through the permutations.
**Algorithm 4.15:** Inverse BWT
Input: BWT $L[0..n]$
Output: text $T[0..n]$
Compute LF-mapping:
1. for $i \leftarrow 0$ to $n$ do $R[i] = (L[i], i)$
2. sort $R$ (stably by first element)
3. for $i \leftarrow 0$ to $n$ do
4. $(\cdot, j) \leftarrow R[i]; LF[j] \leftarrow i$
Reconstruct text:
5. $j \leftarrow$ position of $\$$ in $L$
6. for $i \leftarrow n$ downto 0 do
7. $T[i] \leftarrow L[j]$
8. $j \leftarrow LF[j]$
9. return $T$
The time complexity is dominated by the stable sorting.
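Algorithm 4.15 can be sketched in Python, relying on the stability of Python's built-in sort to obtain the LF-mapping (the function name is ours):

```python
def inverse_bwt(L):
    """Reconstruct T from its BWT L (Algorithm 4.15)."""
    n = len(L)
    # LF-mapping: LF[j] = position of L[j] after stably sorting L
    order = sorted(range(n), key=lambda i: L[i])   # Python's sort is stable
    LF = [0] * n
    for rank, j in enumerate(order):
        LF[j] = rank
    # Follow the cycle backwards, starting from the row ending with '$'
    T = [''] * n
    j = L.index('$')
    for i in range(n - 1, -1, -1):
        T[i] = L[j]
        j = LF[j]
    return ''.join(T)
```

For $L = \text{annb\$aa}$ this recovers $T = \text{banana\$}$, inverting Example 4.12.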
On Burrows-Wheeler Compression
The basic principle of text compression is that the more frequently a factor occurs, the shorter its encoding should be.
Let $c$ be a symbol and $w$ a string such that the factor $cw$ occurs frequently in the text.
- The occurrences of $cw$ may be distributed all over the text, so recognizing $cw$ as a frequently occurring factor is not easy. It requires some large, global data structures.
- In the BWT, the high frequency of $cw$ means that $c$ is frequent in that part of the BWT that corresponds to the rows of the matrix $M$ beginning with $w$. This is easy to recognize using local data structures.
This localizing effect makes compressing the BWT much easier than compressing the original text.
We will not go deeper into text compression on this course.
Example 4.16: A part of the BWT of a reversed English text corresponding to rows beginning with **ht**:
```
oreeereeeieeeaoeeeeeeaeereereereereereereereereereereereereereereereereereereereereereereee
```
and some of those symbols in context:
```
t raise themselves, and the hunter, thankful and very night it flew round the glass mountain keeping agon, but as soon as he threw an apple at it the big animals, were resting themselves. "Halloa, comrades below to life. All those who have perished on that the czar gave him the beautiful Princess Mil
ng of guns was heard in the distance. The czar ancked magician put me in this jar, sealed it with too acted as messenger in the golden castle flew past u have only to say, 'Go there, I know not where; b
```
Backward Search
Let $P[0..m)$ be a pattern and let $[b..e)$ be the suffix array range corresponding to suffixes that begin with $P$, i.e., $SA[b..e)$ contains the starting positions of $P$ in the text $T$. Earlier we noted that $[b..e)$ can be found by binary search on the suffix array.
Backward search is a different technique for finding this range. It is based on the observation that $[b..e)$ is also the range of rows in the matrix $M$ beginning with $P$.
Let $[b_i, e_i)$ be the range for the pattern suffix $P_i = P[i..m)$. The backward search will first compute $[b_{m-1}, e_{m-1})$, then $[b_{m-2}, e_{m-2})$, etc. until it obtains $[b_0, e_0) = [b, e)$. Hence the name backward search.
Backward search uses the following data structures:
- An array $C[0..\sigma)$, where $C[c] = \left| \{i \in [0..n] \mid L[i] < c\} \right|$. In other words, $C[c]$ is the number of occurrences of symbols that are smaller than $c$.
- The function $rank_L : \Sigma \times [0..n + 1] \rightarrow [0..n]$:
$$rank_L(c, j) = \left| \{i \mid i < j \text{ and } L[i] = c\} \right| .$$
In other words, $rank_L(c, j)$ is the number of occurrences of $c$ in $L$ before position $j$.
Given $b_{i+1}$, we can now compute $b_i$ as follows. Computing $e_i$ from $e_{i+1}$ is similar.
- $C[P[i]]$ is the number of rows beginning with a symbol smaller than $P[i]$. Thus $b_i \geq C[P[i]]$.
- $rank_L(P[i], b_{i+1})$ is the number of rows that are lexicographically smaller than $P_{i+1}$ and contain $P[i]$ at the last column. Rotating these rows one step to the right, we obtain the rotations of $T$ that begin with $P[i]$ and are lexicographically smaller than $P_i = P[i]P_{i+1}$.
- Thus $b_i = C[P[i]] + rank_L(P[i], b_{i+1})$.
**Algorithm 4.17:** Backward Search
**Input:** array $C$, function $\text{rank}_L$, pattern $P$
**Output:** suffix array range $[b..e)$ containing starting positions of $P$
(1) $b \leftarrow 0$; $e \leftarrow n + 1$
(2) for $i \leftarrow m - 1$ downto 0 do
(3) $c \leftarrow P[i]$
(4) $b \leftarrow C[c] + \text{rank}_L(c,b)$
(5) $e \leftarrow C[c] + \text{rank}_L(c,e)$
(6) return $[b..e)$
- The array $C$ requires an integer alphabet that is not too large.
- The trivial implementation of the function $\text{rank}_L$ as an array requires $\Theta(\sigma n)$ space, which is often too much. There are much more space efficient (but slower) implementations. There are even implementations with a size that is close to the size of the compressed text. Such an implementation is the key component in many compressed text indexes.
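For illustration, here is a self-contained Python sketch of backward search. It computes $C$ directly from $L$ and uses a naive $O(n)$-time rank by scanning; a real index would use a space-efficient rank structure instead. The names are ours:

```python
def backward_search(L, P):
    """Return the suffix array range [b..e) of rows of M beginning with P."""
    n = len(L)
    # C[c] = number of symbols in L that are smaller than c
    C = {c: sum(1 for x in L if x < c) for c in set(L)}
    rank = lambda c, j: L[:j].count(c)   # naive O(n) rank, for clarity only
    b, e = 0, n
    for c in reversed(P):                # process the pattern back to front
        if c not in C:
            return (0, 0)                # c does not occur in the text
        b = C[c] + rank(c, b)
        e = C[c] + rank(c, e)
        if b >= e:
            return (b, b)                # empty range: P does not occur
    return (b, e)
```

For $L = \text{annb\$aa}$ (the BWT of banana\$) and $P = \text{ana}$, the result is the range $[2..4)$, and indeed $SA[2] = 3$ and $SA[3] = 1$ are the occurrences of ana.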
Suffix Array Construction
Suffix array construction means simply sorting the set of all suffixes.
- Using standard sorting or string sorting, the time complexity is $\Omega(\Sigma LCP(T_{[0..n]}))$.
- Another possibility is to first construct the suffix tree and then traverse it from left to right to collect the suffixes in lexicographical order. The time complexity is \( O(n) \) on a constant alphabet.
Specialized suffix array construction algorithms are a better option, though.
Prefix Doubling
Our first specialized suffix array construction algorithm is a conceptually simple algorithm achieving $O(n \log n)$ time.
Let $T^\ell_i$ denote the text factor $T[i..\min\{i + \ell, n + 1\})$ and call it an $\ell$-factor. In other words:
- $T^\ell_i$ is the factor starting at $i$ and of length $\ell$ except when the factor is cut short by the end of the text.
- $T^\ell_i$ is the prefix of the suffix $T_i$ of length $\ell$, or $T_i$ when $|T_i| < \ell$.
The idea is to sort the sets $T^\ell_{[0..n]}$ for ever increasing values of $\ell$.
- First sort $T^1_{[0..n]}$, which is equivalent to sorting individual characters. This can be done in $O(n \log n)$ time.
- Then, for $\ell = 1, 2, 4, 8, \ldots$, use the sorted set $T^\ell_{[0..n]}$ to sort the set $T^{2\ell}_{[0..n]}$ in $O(n)$ time.
- After $O(\log n)$ rounds, $\ell > n$ and $T^\ell_{[0..n]} = T_{[0..n]}$, so we have sorted the set of all suffixes.
We still need to specify, how to use the order for the set $T_{[0..n]}^\ell$ to sort the set $T_{[0..n]}^{2\ell}$. The key idea is assigning order preserving names (lexicographical names) for the factors in $T_{[0..n]}^\ell$. For $i \in [0..n]$, let $N_{i}^\ell$ be an integer in the range $[0..n]$ such that, for all $i, j \in [0..n]$:
$$N_{i}^\ell \leq N_{j}^\ell \text{ if and only if } T_{i}^\ell \leq T_{j}^\ell$$
Then, for $\ell > n$, $N_{i}^\ell = SA^{-1}[i]$.
For smaller values of $\ell$, there can be many ways of satisfying the conditions and any one of them will do. A simple choice is
$$N_{i}^\ell = |\{j \in [0, n] \mid T_{j}^\ell < T_{i}^\ell\}|$$
**Example 4.18:** Prefix doubling for $T = \text{banana\$}$.
<table>
<thead>
<tr>
<th>$T$</th>
<th>$N^1$</th>
<th>$N^2$</th>
<th>$N^4$</th>
<th>$N^8 = SA^{-1}$</th>
</tr>
</thead>
<tbody>
<tr>
<td>b</td>
<td>4</td>
<td>4</td>
<td>4</td>
<td>4</td>
</tr>
<tr>
<td>a</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>3</td>
</tr>
<tr>
<td>n</td>
<td>5</td>
<td>5</td>
<td>6</td>
<td>6</td>
</tr>
<tr>
<td>a</td>
<td>1</td>
<td>2</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>n</td>
<td>5</td>
<td>5</td>
<td>5</td>
<td>5</td>
</tr>
<tr>
<td>a</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>$</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
Now, given $N^\ell$, for the purpose of sorting, we can use
- $N^\ell_i$ to represent $T^\ell_i$
- the pair $(N^\ell_i, N^\ell_{i+\ell})$ to represent $T^{2\ell}_{i} = T^\ell_i T^\ell_{i+\ell}$.
Thus we can sort $T^{2\ell}_{[0..n]}$ by sorting pairs of integers, which can be done in $O(n)$ time using LSD radix sort.
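The whole prefix-doubling algorithm fits in a few lines of Python. This sketch (names ours) uses comparison sorting instead of LSD radix sort, so each round costs $O(n \log n)$ rather than $O(n)$, but the doubling structure is the same:

```python
def suffix_array(T):
    """Prefix doubling: sort l-factors via pairs of names, double l until distinct."""
    n = len(T)
    N = [ord(c) for c in T]          # names of 1-factors: character codes
    l = 1
    while True:
        def key(i, N=N, l=l):        # pair of names represents the 2l-factor at i
            return (N[i], N[i + l] if i + l < n else -1)
        sa = sorted(range(n), key=key)
        newN = [0] * n               # order-preserving names of 2l-factors
        for k in range(1, n):
            newN[sa[k]] = newN[sa[k - 1]] + (key(sa[k]) != key(sa[k - 1]))
        if newN[sa[-1]] == n - 1:    # all names distinct: the order is final
            return sa
        N, l = newN, 2 * l
```

For $T = \text{banana\$}$ this returns $SA = [6,5,3,1,0,4,2]$, agreeing with Example 4.18.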
**Theorem 4.19**: The suffix array of a string $T[0..n]$ can be constructed in $O(n \log n)$ time using prefix doubling.
- The technique of assigning order preserving names to factors whose lengths are powers of two is called the **Karp–Miller–Rosenberg naming technique**. It was developed for other purposes in the early seventies when suffix arrays did not exist yet.
- The best practical variant is the **Larsson–Sadakane algorithm**, which uses ternary quicksort instead of LSD radix sort for sorting the pairs, but still achieves $O(n \log n)$ total time.
Let us return to the first phase of the prefix doubling algorithm: assigning names $N^1_i$ to individual characters. This is done by sorting the characters, which is easily within the time bound $O(n \log n)$, but sometimes we can do it faster:
- On an ordered alphabet, we can use ternary quicksort for time complexity $O(n \log \sigma_T)$ where $\sigma_T$ is the number of distinct symbols in $T$.
- On an integer alphabet of size $n^c$ for any constant $c$, we can use LSD radix sort with radix $n$ for time complexity $O(n)$.
After this, we can replace each character $T[i]$ with $N^1_i$ to obtain a new string $T'$:
- The characters of $T'$ are integers in the range $[0..n]$.
- The character $T'[n] = 0$ is the unique smallest symbol, i.e., it plays the role of \$.
- The suffix arrays of $T$ and $T'$ are exactly the same.
Thus we can construct the suffix array using $T'$ as the text instead of $T$.
As we will see next, the suffix array of $T'$ can be constructed in linear time. Then sorting the characters of $T$ to obtain $T'$ is the asymptotically most expensive operation in the suffix array construction of $T$ for any alphabet.
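The renaming step can be sketched in a few lines of Python (illustrative; function name ours). It uses dense ranks as names, which differ from the counting definition of $N^1_i$ given earlier but are equally order preserving; the comparison sort stands in for the ternary quicksort or radix sort discussed above.

```python
def to_integer_text(T):
    """Replace each character T[i] by an order preserving integer name N^1_i.
    Assumes T ends with a unique smallest sentinel such as '$', which maps
    to 0. The suffix arrays of T and of the returned T' coincide."""
    # Sorting the distinct characters costs O(n log sigma) on an ordered
    # alphabet; on a small integer alphabet radix sort would give O(n).
    rank = {c: r for r, c in enumerate(sorted(set(T)))}
    return [rank[c] for c in T]

print(to_integer_text("banana$"))  # → [2, 1, 3, 1, 3, 1, 0]
```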
Certified Lightweight Contextual Policies for Android
Digital Object Identifier (DOI): 10.1109/SecDev.2016.032
Document Version: Peer reviewed version (Edinburgh Research Explorer)
Mohamed Nassim Seghir
University of Edinburgh
David Aspinall
University of Edinburgh
Lenka Marekova
University of Edinburgh
Abstract—Security in Android applications is enforced with access control policies implemented via permissions giving access to different resources on the phone. These permissions are often too coarse, and their attribution is based on an all-or-nothing decision on most Android distributions. How can we grant permissions and be sure they will not be misused? We propose a policy-based lightweight approach for the verification and certification of Android applications with respect to a given policy. It consists of a verifier running on a conventional computer and a checker residing on an Android mobile device. The verifier applies static analysis to show the conformance between an application and a given policy. It also generates a certificate asserting the validity of the analysis result. The checker, on a mobile device, can then check the validity of the certificate to confirm or refute the fulfillment of the policy by the application before installing it. This scheme represents a potential future model for app stores where apps are equipped with policies and checkable evidence. We have implemented our approach and report preliminary results obtained for a set of popular real-world applications.
I. INTRODUCTION
Android’s openness and ubiquity make it an ideal target for malware. Security in Android applications is enhanced with access control policies implemented via permissions giving access to different resources on the phone. But the permission model depends on the good judgment of the user, who needs to have some knowledge about the reasonable behavior of the application. For example, Brightest Flashlight Free is an app which was downloaded 50 million times; its purpose is to turn on all the lights on a phone to their maximum level. However, it turned out that this app requested many inappropriate permissions, stealing the user’s location and unique ID, and sending them to advertisers [1]. Most users would probably be unaware or surprised by this behavior.
A straightforward solution to the previous case is to refuse to grant permissions (refuse installation) to an app if its natural functionality does not match the requested permissions. But what if an app asks for permissions for some extra functional tasks which are not harmful? Conversely, if the required permissions match the logical functionality of the app, can we grant them and be sure they will not be misused? For example, an application for SMS management needs the permission SEND_SMS for sending, but should not use it to send out private data or contact premium rate numbers. Another example concerns a sound recording app. While the RECORD_AUDIO permission is a legitimate requirement for the natural functionality of the app, using it for recording without the user's consent is an unwanted and suspicious behaviour.
We propose fine-grained yet lightweight policies to prescribe the reasonable behavior of applications. They refine the raw permissions model by making permissions bound to specific contexts, similar to the idea used in Pegasus [8]. For example, sound can only be recorded as a response to a user interaction, i.e., responding to a GUI event. We use static analysis to show the conformance between policies and application behavior. Two questions then arise: can we trust the soundness (result) of the analysis, and how do we know that the analysis was indeed carried out? To address these questions, we propose a policy-based scheme, illustrated in Figure 1, which consists of the following ingredients:
- **Policy**: specifies a set of rules to which the application must adhere. It can be provided either by a client of the application as a requirement or by the application provider as an advertisement to promote the safety/security features of its application.
- **Verifier**: a static analysis that runs on the application provider's side. It checks the conformance between the application and a policy, and generates a certificate.
- **Certificate**: an audit for the accountability of the static analysis (verifier). It attests to the correctness of the verifier's outcome.
- **Checker**: a static analysis that runs on the client (mobile device) side. It checks the validity of the certificate with respect to the application and the related policy. The checker is much lighter than the verifier.
The certificate provides independently verifiable guarantees in concert with cryptographic signatures. It broadens the idea of Proof-Carrying Code by Necula [22] by encompassing lightweight forms of evidence specific to particular properties, e.g., program annotations tracking permissions or resource usage. It also goes beyond cryptographic signatures as it allows us to certify properties inherent to the functionality of the application, such as the absence of information leakage or...
is this permission used? In the normal case, the user would expect the recording to begin when the button Start is pushed. A possible malicious behaviour is to trigger the recording without the intervention or knowledge of the user. To rule out such a behaviour, we provide a policy expressing that the RECORD_AUDIO permission will only be used in the context of the function onClick. An app can have multiple entry points. Hence, in terms of method invocations, we do not want a sequence of calls in which the API method associated with RECORD_AUDIO is reachable from an entry point of the app other than onClick. We express this via the following rule:
\[
\text{ENTRY\_POINT}, \neg\text{CLICK\_HANDLER} : \neg\text{RECORD\_AUDIO}
\]
The context variable ENTRY_POINT ranges over the set of entry points and CLICK_HANDLER ranges over click event handlers. The notation \(\neg\text{CLICK\_HANDLER}\) means that click event handlers are discarded and \(\neg\text{RECORD\_AUDIO}\) means that the permission for audio recording must not be used. So the rule says: “in all entry points, apart from click event handlers, the permission RECORD_AUDIO must not be used”. This means setAudioSource should only be reachable from a click event handler. This rule lacks some precision in describing the functionality of the app, as the click event handler could be associated with the Start button as well as the Stop button. We can be more precise in our specification, if needed, by directly providing the method identifier instead of using context variables.
To check the validity of the previous rule, we use a simple reachability analysis which computes the transitive closure of the call graph with respect to permission usage. The result of the analysis is a map associating with each method the set of permissions corresponding to API methods which are potentially reachable from it. Starting with the initial map
\[
\text{setAudioSource} : \text{RECORD\_AUDIO}
\]
the analysis returns the new map
\[
\begin{align*}
\text{setAudioSource} & : \text{RECORD\_AUDIO} \\
\text{startRecording} & : \text{RECORD\_AUDIO} \\
\underline{\text{onClick}} & : \text{RECORD\_AUDIO} \\
\underline{\text{onCreate}} & : \emptyset
\end{align*}
\]
Entry points are underlined. We can see that RECORD_AUDIO is only associated with onClick, thus our policy is valid. If we uncomment line 8 (Figure 2), the policy is violated as RECORD_AUDIO becomes reachable from the entry point onCreate as well.
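The backward tag propagation underlying this map can be sketched as a simple fixpoint computation over the call graph. This is an illustrative Python sketch, not EviCheck's actual code; the call-graph dictionary and the API tag map below are hypothetical stand-ins for the paper's data structures.

```python
def compute_reach(call_graph, api_tags):
    """Propagate permission tags backwards from callees to callers
    until a fixpoint is reached (the map 'reach' of the paper)."""
    reach = {f: set(tags) for f, tags in api_tags.items()}
    for f, callees in call_graph.items():
        reach.setdefault(f, set())
        for g in callees:
            reach.setdefault(g, set())
    changed = True
    while changed:
        changed = False
        for caller, callees in call_graph.items():
            for callee in callees:
                new = reach[callee] - reach[caller]
                if new:  # caller inherits the callee's tags
                    reach[caller] |= new
                    changed = True
    return reach

# Hypothetical fragment of the Recorder app's call graph:
cg = {"onClick": ["startRecording"],
      "startRecording": ["setAudioSource"],
      "onCreate": []}
reach = compute_reach(cg, {"setAudioSource": {"RECORD_AUDIO"}})
print(reach["onClick"])   # → {'RECORD_AUDIO'}
print(reach["onCreate"])  # → set()
```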
Certificate. Now the question is how can a client of the analysis trust its claim? The analysis might contain errors or, even worse, an attacker can provide such a result without applying the analysis at all. For this, the computed map will serve as a certificate. To test its validity, we just need to check that for each pair of (caller, callee) methods, the set of permissions associated with the caller includes the ones associated with the callee. An auxiliary implicit condition is that all methods must have entries in the map. Let us try to tamper with the certificate generated for the previous example by omitting RECORD_AUDIO from the entry corresponding to onClick. This will be detected as RECORD_AUDIO is included in startRecording which is called by onClick, so it must be included in the caller as well. Let us have a more extreme scenario where we remove RECORD_AUDIO from all entries. This
Fig. 2: Code snippets and graphical interface of the Recorder app
III. POLICY AND DIGITAL EVIDENCE
In this section, we describe the semantics of our policy language and provide an algorithm for checking the satisfiability of a given policy. We also show how to use the result as a certificate.
A. Policy language
Our policy language has the following grammar:
\[
\begin{align*}
R & := H : T \\
H & := \text{mid} \mid (\neg^{?}\,CV)^+ \\
CV & := \text{ENTRY\_POINT} \mid \text{ACTIVITY} \mid \text{SERVICE} \mid \text{RECEIVER} \\
& \mid \text{ONCLICK\_HANDLER} \mid \text{ONTOUCH\_HANDLER} \mid LC \\
LC & := \text{ONCREATE} \mid \text{ONSTART} \mid \text{ONRESUME} \mid \ldots \\
T & := (\neg id)^+
\end{align*}
\]
In the grammar, \text{mid} represents a method identifier which consists of the method name, its signature and the class it belongs to. Also we have \text{CV} for context variables, which can be \text{ENTRY\_POINT} referring to all entry points of the app, \text{ACTIVITY} representing methods belonging to activities, \text{SERVICE} for methods belonging to service components, \text{RECEIVER} for methods belonging to receiver components, in addition to \text{ONCLICK\_HANDLER} and \text{ONTOUCH\_HANDLER} respectively referring to the click and touch event handlers. Moreover, \text{CV} can also be an activity life cycle callback such as \text{ONCREATE}, \text{ONSTART}, \text{ONRESUME}, etc. Activity callbacks as well as the touch and click event handlers are considered to be entry points. For a context variable \text{CV}, we write \( S_A(CV) \) to denote the set of methods of the application \( A \) represented by \text{CV}, e.g., \( S_A(\text{ENTRY\_POINT}) \) is the set of all entry points of the application and \( S_A(*) \) represents the set of all methods of the program. Finally, \( id \) simply represents an identifier or a tag such as a permission.
B. Semantics
A policy is given as a set of rules. It is satisfied if all the rules it contains are satisfied. In what follows we show when a rule is satisfied by an application. First, for a rule \( R \) we call \( H \) the head of the rule and \( T \) its tail. A rule can have either an or-semantics (\( \lor \)) or an and-semantics (\( \land \)). We define the function \text{Interpret} which gives an interpretation for the rule’s head within an application; it simply returns a set of method identifiers. If \( H \) consists of just one method identifier \text{mid}, then \( \text{Interpret}(H, A) = \{\text{mid}\} \). If \( H \) is a list of (negated) context variables \( CV_1, \ldots, CV_m, \neg CV_{m+1}, \ldots, \neg CV_n \) then
\[
\text{Interpret}(H, A) = \bigcap_{i=1}^{m} S_A(CV_i) \cap \bigcap_{i=m+1}^{n} (S_A(*) \setminus S_A(CV_i))
\]
Example 1: Let us assume that \( H \) is of the form
\( \neg \text{ONTOUCH\_HANDLER} \text{ ENTRY\_POINT ACTIVITY} \)
In this case \( \text{Interpret}(H, A) \) represents the set of entry point methods belonging to activity components of the application \( A \), which are not touch event handlers.
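Under this semantics, Interpret is just intersection and complement over the method sets $S_A(CV)$. The following Python sketch illustrates the computation (the method names and context-variable sets are hypothetical, not from the paper):

```python
def interpret(head, sets, all_methods):
    """head: list of (context_variable, negated) pairs.
    Returns the set of methods the rule head denotes:
    the intersection of S_A(CV) for positive variables and of
    S_A(*) \\ S_A(CV) for negated ones."""
    result = set(all_methods)
    for cv, negated in head:
        s = sets[cv]
        result &= (all_methods - s) if negated else s
    return result

# Hypothetical application: four methods, three context variables.
all_m = frozenset({"onCreate", "onClick", "onTouch", "helper"})
sets = {"ENTRY_POINT": {"onCreate", "onClick", "onTouch"},
        "ACTIVITY": {"onCreate", "onClick", "onTouch", "helper"},
        "ONTOUCH_HANDLER": {"onTouch"}}
# Example 1's head: ¬ONTOUCH_HANDLER ENTRY_POINT ACTIVITY
h = [("ONTOUCH_HANDLER", True), ("ENTRY_POINT", False), ("ACTIVITY", False)]
print(sorted(interpret(h, sets, all_m)))  # → ['onClick', 'onCreate']
```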
Given a rule \( R \) of the form \( H : T \), we write \( Id(T) \) to denote the set of identifiers appearing in the rule’s tail \( T \). The semantics of an and-rule \( R \) of the form \( H : T \) is given by:
\[
A \models R \text{ if } \forall x \in \text{Interpret}(H, A).\; \text{Id}(T) \cap \text{reach}_A(x) = \emptyset \tag{1}
\]
Here \( \text{reach}_A(x) \) is the set of tags (permissions) reachable from the method \( x \) in the call graph of \( A \). A policy \( P \) is a mixture of and- and or-rules.

C. Call Graph

The call graph is the key representation on which our analysis relies. It is therefore essential that the generated call graph is as complete as possible, i.e., any pair of (caller, callee) in real executions of the application is present in the graph.
D. Policy verification
As mentioned previously, our verification technique generates a certificate as an audit for its outcome. This is implemented via Algorithm 1, which takes as input an application and a policy (set of rules) and returns a pair (Boolean, tag map) if the policy is satisfied. The returned map is a certificate for the validity of the analysis. If the policy is violated, no certificate is returned. We have previously seen that rule interpretation with respect to an application \( A \) (formulae (1) and (2)) depends on the set of tags associated with the different methods in \( \text{reach}_A \); hence our algorithm proceeds in two phases. First, the tag map \( \text{reach}_A \) is computed via a simple worklist procedure (lines 5-12). Tags are propagated backwards from callees to callers until a fixpoint is reached. In the second phase, we iterate over the rules composing the current policy and check their validity (line 14) with respect to the application. This amounts to checking the (non-)violation of formulae (1) and (2) for and-rules and or-rules respectively. If no rule is violated, a map (certificate) accompanying the validity answer is returned (line 17); otherwise the verification process is terminated without providing a certificate (line 16).
E. Certification
To check the validity of the generated certificate (tag map) computed by Algorithm 1, we do not need to re-apply a reachability analysis. In fact, the checking process, which is implemented via Algorithm 2, is lighter than the generation one. It takes an app, a tag map and a policy as parameters and returns true if the certificate is valid and the policy is satisfied or false otherwise. First, we check that all methods belonging to the platform API are present in the certificate together with their predefined tags (lines 1-5). In the next step, it suffices to go through the different methods and locally check if their associated set of tags is equal to the union of all the sets of tags associated with the functions they call (line 6-10). As illustrated by the tests at lines 3 and 8, it suffices to find one inconsistency to invalidate the certificate. If no inconsistency is found then the final step consists of assigning the certificate to \( \text{reach}_A \) (line 11) and then checking the satisfiability of the policy by the application (lines 12-16), similar to Algorithm 1.
The procedure \text{CheckCertificate} has a linear complexity in the number of methods of the program. It also has a constant space complexity as we are just performing checks without generating any information which needs to be stored. Moreover, we do not require the complete call graph to be present in memory. As we are performing a single (linear) pass, we can get rid of the current entry as soon as we move to the next one.
As the call graph is not part of the certificate, it is computed in the same way via the function \( \text{CG} \) in both the verifier (Algorithm 1) and the checker (Algorithm 2).
F. Discussion
As mentioned previously, generating the call graph by itself is not a trivial task due to the dynamic resolution of virtual methods. Reflection is also a known issue for static analysis. A simple and conservative solution for this problem is to associate a tag \( t_{ref} \) with methods of the class \texttt{java/lang/reflect/Method}. We then use the tag \( t_{ref} \) to make the policy reflection-aware, e.g.,
Algorithm 2: CheckCertificate
Input: application $A$, policy $P$, map $M$
Output: Boolean
1. Let $M_0$ be the permission map for API functions;
2. foreach $f \in A$ do
3.   if $M[f] \neq M_0[f]$ then
4.     print “certificate invalid”;
5.     return false;
6. foreach $(f, \_) \in CG(A)$ do
7.   Let $S = \bigcup \{M[f'] \mid (f, f') \in CG(A)\}$;
8.   if $M[f] \neq S$ then
9.     print “certificate invalid”;
10.    return false;
11. $\text{reach}_A := M$;
12. foreach rule $r$ in $P$ do
13.   if $A \not\models r$ then
14.     print “policy violated”;
15.     return false;
16. return true;
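The certificate check amounts to one local consistency test per call edge, with no reachability analysis re-run. The following Python sketch illustrates the core of Algorithm 2 under simplifying assumptions (dictionary-based call graph and maps, hypothetical method names; not EviCheck's actual code, and the policy-checking phase of lines 12-16 is omitted):

```python
def check_certificate(call_graph, api_tags, cert):
    """Accept cert iff (i) API methods carry their predefined tags and
    (ii) each caller's tag set equals its own API tags plus the union
    of its callees' certified tags. A single linear pass."""
    # Lines 1-5: API methods must appear with their predefined tags.
    for f, tags in api_tags.items():
        if cert.get(f) != tags:
            return False          # certificate invalid
    # Lines 6-10: local consistency of every (caller, callees) entry.
    for caller, callees in call_graph.items():
        expected = set(api_tags.get(caller, set()))
        for callee in callees:
            expected |= cert.get(callee, set())
        if cert.get(caller, set()) != expected:
            return False          # inconsistent map entry
    return True

# Hypothetical Recorder-app fragment: tampering is detected.
cg = {"onClick": ["startRecording"], "startRecording": ["setAudioSource"]}
api = {"setAudioSource": {"RECORD_AUDIO"}}
good = {"setAudioSource": {"RECORD_AUDIO"},
        "startRecording": {"RECORD_AUDIO"},
        "onClick": {"RECORD_AUDIO"}}
bad = dict(good, onClick=set())   # RECORD_AUDIO dropped from onClick
print(check_certificate(cg, api, good))  # → True
print(check_certificate(cg, api, bad))   # → False
```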
Another key point is related to the nature of our analysis which is a may-analysis. It can show that an application may use a given permission but cannot show that the permission is actually used. This makes it more appropriate for disproving permission usage rather than proving it and explains the occurrence of identifiers in negated form in our policy language.
Finally, a question that needs to be addressed is: who provides policies? Although our tool gives the user the possibility of specifying policies, we do not expect an average user to do it by himself. Security experts could instead prescribe such policies.
IV. IMPLEMENTATION AND EXPERIMENTS
a) Implementation: We have implemented the checker, which runs on mobile devices, as part of our tool EviCheck [24]. EviCheck accepts apps directly in bytecode (APK) format and uses AndroGuard [12] as a back-end for parsing them. As EviCheck is written in Python, we use Kivy to facilitate the deployment of the checker module on Android mobile devices. The verifier module takes an app together with a policy as input, answers whether the policy is satisfied by the app, and, if so, outputs a certificate. The checker takes as input an app, a certificate and a policy, and answers whether the certificate is valid. Both the verifier and the checker return diagnostic information pointing to the first violated rule in case of policy violation, or to the first inconsistent map entry when checking the certificate. They also generate a chain of method calls as a witness.
b) Experiments: We have performed experiments on 13 real-world popular applications from the Google Play store, ranging over different domains: banking, multimedia, games, social, etc. We use a typical Linux desktop to host the verifier and a Motorola G3 mobile phone (Qualcomm Snapdragon 1.4GHz processor) running Android to host the checker. In our study, we have specified a policy consisting of 6 rules which can potentially match undesirable behaviour, for example, reading contacts and using the Internet in the background, which might indicate that private contacts are sent over the Internet. First, we call the verifier to verify the validity of the policy and to generate a certificate. In a second step, the checker is invoked to check the generated certificate. The results are illustrated in Table I. Column #M shows the number of methods per application as an indicator of the application size. Columns V(d) and C(d) respectively represent the verification and checking times on a desktop computer. Column C(m) contains the checking times on the mobile device. We have deliberately included checking times on the desktop to illustrate how checking is more efficient than verification on a similar architecture. This motivated us to carry out the checking directly on the mobile device. While the performance of the checker on mobile is not as good as on desktop, it still runs in less than 10 minutes, and in one case in less than one minute. This is encouraging given the size of the considered applications and the limitations of mobile devices. To give an idea of the complexity of these apps for static analysis tools, Flowdroid [2] is unable to analyze the Hsbc app within a bound of 30 minutes on a desktop computer.
The remaining part of the table concerns the rules forming the policy. The presence of the symbol ✓ indicates that the concerned rule is violated. A description of each rule is given in Figure 4. Policy violation does not necessarily mean malicious behaviour, but it can serve as an alarm to trigger closer scrutiny. For example, rule 6 is violated by the Hsbc app. This was surprising, knowing that it is a banking app: why would it use the camera at all? Our investigation revealed that this app offers a mobile check deposit service which uses the camera to take a picture of the check. Further to this, we wanted to know if the app is taking pictures without user consent, as rule 6 indicates that the camera is used in a method which is not a click handler. By analysing the bytecode of the application, we found that there are camera-related methods which are reachable from an onResume callback of an activity. However, they are used for configuration purposes.
Fig. 4: Rules composing the policy used in the study
\begin{verbatim}
1. ENTRY_POINT SERVICE "OR" : ¬ACCESS_FINE_LOCATION ¬SEND_SMS
2. ENTRY_POINT SERVICE "OR" : ¬ACCESS_FINE_LOCATION ¬INTERNET
3. ENTRY_POINT SERVICE "OR" : ¬READ_CONTACTS ¬SEND_SMS
4. ENTRY_POINT SERVICE "OR" : ¬READ_CONTACTS ¬INTERNET
5. ACTIVITY ENTRY_POINT ¬ONCLICK_HANDLER : ¬RECORD_AUDIO
6. ACTIVITY ENTRY_POINT ¬ONCLICK_HANDLER : ¬CAMERA
\end{verbatim}
\footnote{https://play.google.com/store/apps}
\footnote{https://www.us.hsbc.com/1/2/home/personal-banking/pib/mobile/mobile-deposit}
<table>
<thead>
<tr>
<th>App</th>
<th>#M</th>
<th>V(d)</th>
<th>C(d)</th>
<th>C(m)</th>
<th>Policy rules</th>
</tr>
</thead>
<tbody>
<tr>
<td>Angrybirds</td>
<td>128</td>
<td>123</td>
<td>50.11</td>
<td>436.2</td>
<td>✓</td>
</tr>
<tr>
<td>CandyCrushSaga</td>
<td>12877</td>
<td>25.38</td>
<td>19.12</td>
<td>182.75</td>
<td></td>
</tr>
<tr>
<td>Facebook</td>
<td>7969</td>
<td>12.16</td>
<td>11.36</td>
<td>67.56</td>
<td>✓</td>
</tr>
<tr>
<td>FacebookMessenger</td>
<td>4201</td>
<td>7.02</td>
<td>6.78</td>
<td>34.34</td>
<td></td>
</tr>
<tr>
<td>FirefoxBrowser</td>
<td>28442</td>
<td>50.52</td>
<td>30.07</td>
<td>287.07</td>
<td></td>
</tr>
<tr>
<td>Hike</td>
<td>18365</td>
<td>18.32</td>
<td>13.83</td>
<td>122.56</td>
<td></td>
</tr>
<tr>
<td>Instagram</td>
<td>39062</td>
<td>94.55</td>
<td>50.63</td>
<td>427.32</td>
<td></td>
</tr>
<tr>
<td>LinkedIn</td>
<td>50743</td>
<td>191.97</td>
<td>56.88</td>
<td>550.89</td>
<td></td>
</tr>
<tr>
<td>OperaBrowser</td>
<td>28137</td>
<td>34.68</td>
<td>23.61</td>
<td>218.71</td>
<td></td>
</tr>
<tr>
<td>SkyScanner</td>
<td>44374</td>
<td>121.78</td>
<td>55.57</td>
<td>449.75</td>
<td></td>
</tr>
<tr>
<td>Twitter</td>
<td>45700</td>
<td>151.03</td>
<td>47.95</td>
<td>462.65</td>
<td></td>
</tr>
<tr>
<td>Uber</td>
<td>48600</td>
<td>113.84</td>
<td>46.15</td>
<td>426.05</td>
<td></td>
</tr>
<tr>
<td>Viber</td>
<td>50876</td>
<td>153.6</td>
<td>53.69</td>
<td>479.64</td>
<td></td>
</tr>
</tbody>
</table>
TABLE I: Results of checking a policy composed of 6 rules against 13 popular apps from the Google Play store. The symbol ✓ indicates that a rule (policy) is violated.
V. RELATED WORK
Recently, many tools for analysing different security aspects of Android have emerged. Some of them rely on dynamic analysis [5], [14], [23], [26], [27]. Others are based on static analysis [2], [4], [9], [16], [18]. The latter family of tools performs an exhaustive exploration of the application behaviour. This is made possible thanks to abstraction (over-approximation), which also leads to some imprecision. We are interested in this category (static analysis) of tools, as our aim is to certify the absence of bad behaviours. Our work is a complement to these tools: in addition to analysing applications, we also return verifiable evidence attesting the validity of the analysis. The tool Kirin [15] uses lightweight rules which conservatively match undesirable behaviour. Its policy language can refer to permissions but does not refer to their usage context. Our language does not have this limitation. Moreover, our analysis is more faithful to the application behaviour by operating on its code, as opposed to Kirin’s analysis, which is restricted to the manifest file. Chen et al. use temporal logic and model checking to specify and verify API and permission sequences in a given context [8]. While their approach can capture more interesting properties than ours, it does not generate a certificate for the result. A combination of their approach with ours is an interesting track for investigation.
Jeon et al. proposed an approach for inferring and enforcing fine-grained permissions for Android by making them bound to a set of arguments [17]. Our policy language also defines a kind of fine-grained permission, but bound to the usage context. Enforcing fine-grained access control policies was also investigated in the context of runtime monitoring [7], [10], [13], [20], where the policy is checked at runtime. In our case, the policy is statically checked, and we do not have further checks when the application is executed.
The idea of associating proofs with code was initially proposed by Necula under the moniker Proof-Carrying Code (PCC) [21], [22]. It was then used to support resource policies for mobile code [3], [6]. Furthermore, Desmet et al. presented an implementation of PCC for the .NET platform [11]. To the best of our knowledge, Cassandra is the only work in the literature about applying PCC to Android [19]. Their approach proposes a type system to precisely track information flows.
While precision is an advantage, it is hard to assess the practicability of their approach as no experiments involving real-world applications are reported⁶. Our approach is applicable to real-world large applications.
VI. CONCLUSION AND FURTHER WORK
We have presented a policy-based lightweight approach for the verification and certification of Android applications with respect to a given policy. It consists of a simple policy language, a verifier running on a conventional computer and a checker residing on an Android mobile device. We described an implementation of this technique and reported on experimental results obtained on real-world applications. This policy-based scheme represents a potential future model for app stores, where apps are equipped with policies and checkable evidence. Our next step is to increase the efficiency of the checking process on device and to integrate more sophisticated analyses, such as information flow tracking.
⁶ We have contacted the author regarding the applicability of Cassandra to real-world apps, but so far we have not received a response.
TUTORIAL OVERVIEW
This tutorial provides the SAS® user who needs to access data stored in DL/I data bases with the fundamental concepts and techniques necessary to write a SAS/IMS-DL/I® DATA step. The session includes an introduction to DL/I programming concepts and an examination of SAS/IMS-DL/I® statement extensions. Sample DL/I data bases and SAS/IMS-DL/I® programs illustrate these concepts and programming techniques. The material focuses on retrieving data from DL/I data bases. Due to time restrictions, data base update considerations remain outside the scope of this presentation.
Since the tutorial is geared to experienced SAS users who know little or nothing about DL/I, much of the discussion deals with basic data base concepts and the components of DL/I. While it is impossible to teach you everything you need to know about DL/I in such a short session, we aim to provide you with a basic DL/I vocabulary and a foundation of concepts on which you can build. After this session you will not have all the answers, but you should know what questions to ask.
DL/I CONCEPTS
What Is DL/I?
DL/I stands for Data Language/1 and is IBM’s data base manager that forms the core of two IBM products: IMS/VS and DL/I DOS/VS. IMS/VS is IBM’s data base management system for OS environments and consists of two separate products: IMS/VS DB and IMS/VS DC. IMS/VS DB represents the data base manager DL/I and can be used alone for managing DL/I data bases in batch mode. IMS/VS DC adds data communications facilities for on-line access and control of DL/I data bases, and is only run in conjunction with IMS/VS DB. While IMS/VS DC is commonly used with IMS/VS DB, CICS/VS also provides on-line facilities for accessing DL/I data bases under OS.
DL/I DOS/VS represents the data base manager for DOS environments. IBM does not provide a facility for DOS parallel to IMS/VS DC. Under DOS, CICS/VS is used for on-line access to DL/I data bases.
What Is a Data Base?
DL/I is IBM's data base manager, but what is a data base? This term is much abused. 'Data base' is sometimes used to refer to any file or set of files in order to project a state-of-the-art image. We can, however, define the term more precisely.
A data base is a collection of interrelated data stored in one place for access by multiple users. The data are collected together and are in one place to provide control and eliminate duplication. In addition, the physical structure of a data base is transparent to the users. Only the data base system must know the physical organization of the data, since users and applications access this data through the data base manager. Then, if the data base changes physically, the applications are not necessarily affected. Often just the definition of the data base to the data base manager must change. Finally, a data base is organized according to a particular data model, which is used to characterize the data base system. Data base models include relational, network, and hierarchical types. DL/I uses a hierarchical data model.
SAS Data Sets As Data Bases
According to our definition, SAS data sets are data bases. A SAS data set can contain interrelated data for easy processing by various SAS procedures. The physical organization of a SAS data set is totally transparent to the users. You select the data needed simply by naming the variables. And while users do not need to know the details of the data set structure, SAS data sets use a rectangular, self-describing data model.
A Sample Data Base
A sample data base illustrates the data base concepts discussed. Let's consider a sample application environment and examine how the data might be collected into a data base. This environment provides the context for all the examples in this tutorial as well as in the SAS/IMS-DL/I User's Guide, 1984 Edition.
Information regarding customer accounts in a banking environment includes three main groups of data: customer data, account data and transaction data. The bank has customers; the customers hold accounts and initiate transactions against the accounts. While the bank has...
different application programs to maintain and report on the data, the data are interrelated.
Consider how you might store this interrelated data in a SAS data set. Assuming one observation per customer, how many accounts would you allow per customer, and how many transactions per account? It would be necessary to set an arbitrary limit for these occurrences. This limit might not always be adequate, and yet in many cases would exceed a given customer's requirements, resulting in much wasted space. How might you organize the data more efficiently?
In order not to waste space, you could set up three different SAS data sets -- one for customers, one for accounts and one for transactions. However, you would then need to tie the data in the different data sets together by duplicating identifier variables such as CUST_SSN and ACCT_NUM. In addition, the organization of the data is no longer completely transparent, because you need to know which data sets contain the information needed. Accessing the data also becomes more complex, because you need to perform data set merges. Thus, while space is used more effectively, other disadvantages arise.
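The trade-off can be sketched with plain in-memory tables. This is illustrative Python, not SAS; the row values and the merge helper are invented, while CUST_SSN and ACCT_NUM are the identifier variables named above.

```python
# Three separate "data sets", tied together by duplicated identifier keys.
customers = [{"CUST_SSN": "123-45-6789", "NAME": "Smith, Jane"}]
accounts = [
    {"CUST_SSN": "123-45-6789", "ACCT_NUM": "A1", "TYPE": "CHECKING"},
    {"CUST_SSN": "123-45-6789", "ACCT_NUM": "A2", "TYPE": "CHECKING"},
]
transactions = [
    {"ACCT_NUM": "A1", "AMOUNT": -25.00},
    {"ACCT_NUM": "A2", "AMOUNT": 100.00},
]

def merge(left, right, key):
    """Equi-join two row lists on a shared identifier variable."""
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

# To answer "all transactions for Jane Smith" you must merge twice:
rows = merge(merge(customers, accounts, "CUST_SSN"), transactions, "ACCT_NUM")
print(len(rows))   # 2 -- one row per transaction
```

No space is wasted on empty slots, but every query that crosses the three tables pays the cost of the merges and of the duplicated keys.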
**DL/I Data Bases and Terminology**
With DL/I you can easily store all customer accounts data in one data base. Figure 12 represents the hierarchy of the ACCOUNT data base. A hierarchy is a pyramid-like structure of SEGMENTS. Data items that tend to be used together are stored in the same segment. A segment contains one or more fields of data and represents a unit of organization between a field and a record, a variable and an observation. A segment also represents the basic unit of transfer between DL/I and an application program.
Figure 12 shows the seven different SEGMENT TYPES in the ACCOUNT data base: at the top, the CUSTOMER segment with the customer data; on the middle level, the CHECKACCT and SAVEACCT segments with the account data; on the bottom level, the CHECKCRDT, CHECKDEBT, SAVECRDT and SAVEDEBT segments with the transaction data. It is this segmentation that allows DL/I to handle variable occurrences of data without wasting space. If a particular customer, for example, has no savings account at the bank, no space is allocated for possible savings account data for that customer. The SAVEACCT and associated segments simply would not exist.
Relationships among the segments define the hierarchy of a DL/I data base. The primary relationship is that of PARENT and CHILD segments. In Figure 12, CUSTOMER is the parent of CHECKACCT and SAVEACCT; CHECKACCT is in turn the parent of CHECKCRDT and CHECKDEBT. Any given segment can be both the parent of one segment and the child of another -- except the ROOT segment. The one segment type at the top level of the hierarchy represents the root segment, and it is never a child. In fact, all the other segments are DEPENDENTS of the root segment, just as any child segment is a dependent of its parent. In the ACCOUNT data base, the CHECKACCT and SAVEACCT segments are also SIBLINGS, segments of different types under a single parent.
A DL/I data base is a series of DATA BASE RECORDS, each composed of a root segment and its dependents. The order of the data base records depends on the DL/I access method used. The sequential access methods store the data base records in key sequence; the direct access methods store them randomly. No matter what the DL/I access method, however, segments within a data base record are stored, and retrieved when requested one after the other, in HIERARCHICAL PROCESSING SEQUENCE.
Figure 15 illustrates a particular data base record from the ACCOUNT data base and the hierarchical processing sequence. Customer Jane Smith holds two checking accounts at the bank. The two CHECKACCT segments that contain information for these accounts represent two different SEGMENT OCCURRENCES of the CHECKACCT segment type and since they have the same parent, they are called TWINS. The numbers in the lower right corners of the segments indicate the hierarchical processing sequence. Starting with the root, you move first from top-to-bottom along a segment path. Then at the bottom level, you move from front-to-back processing twins. After twins, process siblings, moving from left-to-right. Movement through a data base proceeds in this fashion: always first to a lower level to process a segment's children before moving back through a twin chain, or to the right to process siblings and their children.
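The traversal rule just described is a depth-first, top-to-bottom then left-to-right walk over the ordered tree of segment occurrences. A small Python sketch (not SAS; the record layout and occurrence counts are invented, loosely echoing Figure 15) makes the ordering concrete:

```python
# Illustrative model: a data base record is an ordered tree of segment
# occurrences; hierarchical processing sequence is a depth-first,
# left-to-right (preorder) walk of that tree.
def hierarchical_sequence(segment):
    """Yield segment names in DL/I hierarchical processing sequence:
    top-to-bottom first, then front-to-back through twins, then
    left-to-right across siblings."""
    name, children = segment
    yield name
    for child in children:          # twins and siblings, in stored order
        yield from hierarchical_sequence(child)

# A sketch of a record: root, two CHECKACCT twins with their transaction
# children, then a SAVEACCT sibling.
record = ("CUSTOMER", [
    ("CHECKACCT", [("CHECKCRDT", []), ("CHECKDEBT", [])]),   # twin #1
    ("CHECKACCT", [("CHECKDEBT", [])]),                      # twin #2
    ("SAVEACCT",  [("SAVECRDT", [])]),
])
print(list(hierarchical_sequence(record)))
```

Note how each CHECKACCT twin is processed with all of its children before movement continues to the next twin, and SAVEACCT only follows once the whole CHECKACCT subtree is exhausted.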
While DL/I processes all segments of a data base record in hierarchical sequence, the segment SENSITIVITY of certain programs may prevent DL/I from returning segments below a specific level. Segment sensitivity refers to the relationship of an application program to a data base. Application programs must be defined to DL/I in terms of the data bases they access. While a program may access a particular data base, it may only be sensitive to certain segment types in that data base. A given program, for example, may only need to report on what accounts customers have. The program is then defined as sensitive only to the CUSTOMER, CHECKACCT and SAVEACCT segments. DL/I provides sensitivity as a security feature.
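Sensitivity can be pictured as a filter over the hierarchical sequence: in this hypothetical Python sketch, only the segment types in the program's sensitive set are returned, in unchanged order.

```python
def sensitive_view(sequence, senseg):
    """Return only the segment types the program is defined as
    sensitive to, preserving hierarchical processing order."""
    return [s for s in sequence if s in senseg]

full = ["CUSTOMER", "CHECKACCT", "CHECKCRDT", "CHECKDEBT", "SAVEACCT"]
# A report program sensitive only to customer and account segments:
print(sensitive_view(full, {"CUSTOMER", "CHECKACCT", "SAVEACCT"}))
```

The transaction segments are simply never delivered to such a program, which is why DL/I can offer sensitivity as a security feature.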
**DL/I Segment Retrieval**
While the SAS/IMS-DL/I product supports DL/I data base update calls, as well as message queue and system service calls, this tutorial focuses on DL/I segment retrieval. Retrieval calls are perhaps the most complex, because so many factors are involved. Stricter rules govern the
other call types. In order to retrieve data from DL/I data bases, it is necessary to understand the DL/I hierarchical processing sequence and the concept of sensitivity. However, other factors also affect the results of a DL/I retrieval request. The particular call function and the search criteria specified, as well as the position in the hierarchy after the prior call, determine the results of a DL/I retrieval call.
DL/I provides three different call functions for retrieval calls: Get-Unique ('GU'), Get-Next ('GN') and Get-Next-within-Parent ('GNP'). Get-Unique can move forward or backward through the data base. Get-Next can only move forward in hierarchical processing sequence, and Get-Next-within-Parent only moves forward through the dependents of the established parent in hierarchical processing sequence.
Any of these calls may be qualified by supplying specific search criteria in Segment Search Arguments (SSAs). When no SSAs are specified, the calls are unqualified. An unqualified Get-Unique call always returns the first segment in the data base. If you issue eleven unqualified Get-Unique calls against the data base segments in Figure 15, DL/I returns segment #1 each time. If you issue eleven unqualified Get-Next calls one after the other, and your program is sensitive to all segments, DL/I returns the eleven segments in hierarchical processing sequence as shown. After retrieving segment #1 to establish parentage, ten unqualified Get-Next-within-Parent calls return segments #2 through #11, again assuming your program is sensitive to all segments. An eleventh unqualified Get-Next-within-Parent call results in a segment-not-found condition, because no more segments exist under that parent.
Segment search arguments can specify just a segment type, or a segment type with a particular search field value or range of values. After you retrieve segment #1, a Get-Next for a CHCKCRDT segment returns segment #5; a Get-Next for a CHCKCRDT segment with CRDTDATE= 02/15/84 returns segment #5. Refer to Figure 16 for examples of segment search arguments. After you retrieve segment #5, a Get-Next that specifies the SSA shown in Figure 16 for the customer with social security number 123-45-6789 results in a segment-not-found condition, because the Get-Next call only moves forward in the data base from the position after the prior call. Issue a Get-Unique instead to move backward to retrieve segment #1.
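The interaction of call function, SSAs and position can be simulated with a toy model. This Python sketch flattens a record into its hierarchical sequence and implements only single-SSA 'GU'/'GN' semantics; the segment layout and field values are illustrative, not Figure 15 itself.

```python
# Toy simulation (not real DL/I): segments stored in hierarchical
# processing sequence as (number, type, search fields).
SEGMENTS = [
    (1, "CUSTOMER", {"SSNUMBER": "123-45-6789"}),
    (2, "CHCKACCT", {}),
    (3, "CHCKDEBT", {}),
    (4, "CHCKACCT", {}),
    (5, "CHCKCRDT", {"CRDTDATE": "02/15/84"}),
]

class Database:
    def __init__(self, segments):
        self.segments = segments
        self.pos = 0                      # index of the next GN candidate

    def _matches(self, seg, ssa):
        if ssa is None:                   # unqualified call
            return True
        seg_type, required = ssa
        return seg[1] == seg_type and all(
            seg[2].get(f) == v for f, v in required.items())

    def _scan(self, start, ssa):
        for i in range(start, len(self.segments)):
            if self._matches(self.segments[i], ssa):
                self.pos = i + 1          # position follows the call
                return self.segments[i][0]
        return "GE"                       # segment-not-found status

    def get_unique(self, ssa=None):       # 'GU': searches from the top,
        return self._scan(0, ssa)         # so it can move backward

    def get_next(self, ssa=None):         # 'GN': forward only, from the
        return self._scan(self.pos, ssa)  # position after the prior call

db = Database(SEGMENTS)
assert db.get_next() == 1                                  # first segment
assert db.get_next(("CHCKCRDT", {"CRDTDATE": "02/15/84"})) == 5
assert db.get_next(("CUSTOMER", {})) == "GE"   # GN cannot move backward
assert db.get_unique(("CUSTOMER", {})) == 1    # GU moves back to segment #1
```

The final two calls reproduce the behaviour described above: once position has passed the CUSTOMER segment, a qualified Get-Next fails, while a Get-Unique succeeds.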
Figure 17 summarizes these various factors that affect the results of a DL/I retrieval call.
DL/I Control Blocks
With DL/I you cannot just start using a data base by allocating space for it and assigning it a name, as you can with standard files and SAS data sets. You must first define the data base and your programs to DL/I by setting up DL/I control blocks. The DBD (Data Base Description) defines the physical characteristics of a data base to DL/I. You code the appropriate Assembler macros and process them by running the DL/I DBDGEN utility to create a DBDLIB member that DL/I will use when you access the data base. The PSB (Program Specification Block) defines the application program to DL/I. As for the Data Base Description, you code Assembler macros and create a PSBLIB member by running the PSBGEN utility. A PSB is composed of PCBs (Program Communication Blocks), each of which identifies a DL/I resource for program use. A data base PCB describes the program view of the data base. In other words, it specifies which segments of the data base are sensitive for the application program that the PSB defines.
Figure 19 illustrates the relationship between physical data bases, DL/I control blocks, and application programs. A DBD defines one data base, and a PSB defines one application. However, a PSB can contain multiple PCBs for the various data bases the program might access. Figure 20 represents the PSB for the customer accounts ACCUPDT application. This PSB contains two data base PCBs -- one for the ACCOUNT data base and one for the WIRETRN transaction data base. The SENSEG statements in each PCB specify the segments in the data base to which the program is sensitive. Since the PSBGEN statement specifies 'CMPAT=YES', another PCB, the IO PCB, is generated as the first PCB in this PSB. Generation of the IO PCB ensures compatibility between batch and on-line environments. While this PCB is not needed for DL/I data base retrieval functions, you need to recognize that it exists as the first PCB in the PSB when 'CMPAT=YES' is specified.
It is important to understand the program specification block, because it provides the link between the DL/I program and the data base descriptions that describe the data bases the program accesses. You must always specify a PSB when executing a program to access DL/I data bases. Thus, in order to access DL/I data bases using the SAS/IMS-DL/I interface, you must also specify a PSB. While you do not need to generate special SAS/IMS-DL/I PSBs, you must know which PSBs provide access to the DL/I resources you need and which PCBs within the PSBs to reference.
Data Bases and Data Base Management Systems
Before proceeding to our examination of the SAS/IMS-DL/I interface and program examples, let's review our discussion of DL/I and data base concepts. A data base is a collection of interrelated data for access by multiple users. The users do not need to know the physical organization of the data because they only access it through the data base system. The data is organized according to a particular data model. Both SAS data sets and DL/I data bases fit this definition. DL/I, however, provides other features that the SAS System does not, such as keyed access to data, concurrent access to data, logging, backup and recovery facilities. While
the SAS System provides the advantages of a data base approach, it is not a full-function data base management system like DL/I. With the SAS/IMS-DL/I interface you can take advantage of the desirable features of both systems.
A SIMPLE SAS/IMS-DL/I PROGRAM
Figure 22 represents a simple SAS/IMS-DL/I program, a streamlined version of the CUSTLIST sample program from the SAS/IMS-DL/I User's Guide, 1984 Edition. This CUSTLIST program reads customer information from the ACCOUNT data base and prints a sorted list of customers with phone numbers. Note that the only difference between this SAS/IMS-DL/I program and a standard SAS program is the DLI parameter on the INFILE statement. This positional parameter indicates to the SAS supervisor that the external file to access is a DL/I data base.
The program is simple, though not quite as simple as it appears. Several options may be specified on the DL/I INFILE statement. In this case they are omitted, because the defaults are appropriate. A later section examines these defaults.
As mentioned earlier, a program specification block must be referenced when executing a program to access DL/I data bases. This example refers to the CUSREAD PSB; the PSB name precedes the DLI parameter on the DL/I INFILE statement. Figure 23 contains the CUSREAD PSB. Note that this PSB contains only one program communication block (PCB), the one for the ACCOUNT data base. Since no 'CMPAT=YES' appears on the PSBGEN statement, no IO PCB will be generated as the first PCB. The ACCOUNT data base PCB is the first and only PCB in the PSB and the one that DL/I will use when issuing the calls requested by the program. Since this PCB specifies only the CUSTOMER segment as sensitive, DL/I will return only CUSTOMER segments to the program. Although the program retrieves the segments one after the other in sequential fashion, they are not returned in key sequence, because the ACCOUNT data base uses a direct rather than a sequential DL/I access method. Therefore, PROC SORT is executed to sort the data before printing.
Figures 24 and 25 show the output from this sample program. The SAS log in Figure 24 contains two NOTE messages that do not appear in standard SAS program logs. The first NOTE after the DATA step statements indicates that the program involves DL/I access and specifies the PSB referenced. The following NOTE, issued only when a DL/I data base is processed in sequential fashion, verifies that the data base was processed to the end when DL/I returned its end-of-data-base condition, a 'GB' status code. Figure 25 shows the expected PROC PRINT output.
THE DL/I INFILE EXTENSIONS
In the CUSTLIST sample program above, the basic form of the DL/I INFILE statement was used, and default values were assumed for the other parameters. Use of the basic form implies certain conditions. Figure 27 outlines these conditions. The program can only retrieve segments one after the other in sequential fashion, because the DL/I call function defaults to Get-Next with no SSAs. In addition, for the INPUT statements to specify correct segment formats when segments are simply returned one after the other, either the segment formats must always be the same or the program must be able to determine the segment format before reading the entire segment. Also, since the default size of the buffer used for segments returned from DL/I is 1000 bytes, retrieved segments must not exceed this length. Finally, the interface specifies the first PCB in the PSB as the default for DL/I to use in communicating call information.
When these conditions do not reflect your programming requirements, use the appropriate DL/I INFILE extensions. Figure 28 lists the most commonly used extension options; they are all keyword parameters. On the INFILE statement, assign variable names for the options needed. (Exceptions are the LRECL= and PCBNO= options, for which you simply specify numeric values.) Then, before issuing an INPUT statement, assign appropriate values to these variables. For clarity, you can categorize the various options by function. The first six represent information you can pass to DL/I; the last two represent information DL/I can return to your program.
Of the six options that pass data to DL/I, three of them specify which PCB in the PSB to use. PCBNO= allows you to bypass PCBs not relevant to your program. Then, if a non-blank value is assigned to the DBNAME= variable, the appropriate PCB is selected by matching on DBD name. Otherwise, the PCB= variable value is used as an index to the appropriate PCB. If none of these are specified, the default is the first PCB in the PSB.
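One plausible reading of that selection precedence, sketched as Python (the function and PSB layout are hypothetical, not the interface's internals):

```python
def select_pcb(psb, pcbno=None, dbname=" ", pcb_index=None):
    """Pick a PCB following the precedence described above:
    PCBNO= skips leading PCBs, a non-blank DBNAME= matches on DBD
    name, otherwise PCB= is used as an index, else the first PCB."""
    pcbs = psb[pcbno - 1:] if pcbno else psb
    if dbname.strip():                    # non-blank DBNAME=: match DBD name
        for pcb in pcbs:
            if pcb["dbd"] == dbname:
                return pcb
        raise LookupError("no PCB for DBD " + dbname)
    if pcb_index:                         # PCB= used as an index
        return pcbs[pcb_index - 1]
    return pcbs[0]                        # default: first PCB in the PSB

# The ACCUPDT PSB of Figure 20: IO PCB first (CMPAT=YES), then two
# data base PCBs.
ACCUPDT = [{"dbd": "IO"}, {"dbd": "ACCOUNT"}, {"dbd": "WIRETRN"}]
print(select_pcb(ACCUPDT)["dbd"])                      # IO (the default)
print(select_pcb(ACCUPDT, dbname="WIRETRN")["dbd"])    # WIRETRN
print(select_pcb(ACCUPDT, pcbno=2)["dbd"])             # ACCOUNT
```

In the CUSTLIST example earlier, none of these options are specified, so the first (and only) PCB in the CUSREAD PSB is used.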
To issue calls other than Get-Next, assign the appropriate call functions to the CALL= variable. To qualify calls, assign correctly formatted segment search argument values to the SSA= variable or variables. Since you can specify multiple SSAs on a DL/I call, multiple SSA variables are allowed. If you retrieve any segments or segment paths that exceed 1000 bytes in length, assign the appropriate value to the LRECL= parameter; otherwise, results are unpredictable.
You might also want to check information returned from DL/I before continuing processing. Specify STATUS= to receive the DL/I status code returned by each call, and SEGMENT= to request that the interface return to your program the name of the segment retrieved. When various segment types can be returned, issue the INPUT statement with a trailing @ so that you can check the segment name before reading the rest of the record.
After each call, the DL/I status code is checked. A 'GB' status signals the end of the data base, and the program terminates. A 'GE' indicates that no segment meeting the search criteria was found. In this case, the DELETE statement causes an immediate return to the top of the DATA step and thus a repetition of the call. On this next call, DL/I returns the 'GB' status, and the program terminates. Since this program prints the report as part of the DATA step, the program is complete when the DATA step reaches the end of the data base.
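The status-handling logic of that DATA step can be mimicked in a small Python loop (illustrative only; 'GB' and 'GE' are the DL/I status codes described above, and the segment names are invented):

```python
# Toy read loop: keep issuing calls; 'GE' means "no segment found this
# pass" and the step simply starts over, while 'GB' signals the end of
# the data base and terminates the loop.
def read_all(calls):
    """`calls` yields ('  ', segment) on success or a status code."""
    out = []
    for status, segment in calls:
        if status == "GB":          # end of data base: terminate
            break
        if status == "GE":          # segment not found: skip, call again
            continue
        out.append(segment)         # blank status: a segment was returned
    return out

stream = [("  ", "CHCKDEBT#1"), ("GE", None), ("  ", "CHCKDEBT#2"), ("GB", None)]
print(read_all(stream))   # ['CHCKDEBT#1', 'CHCKDEBT#2']
```

Any other status code would correspond to the error branch of the DATA step, which dumps the variables with PUT _ALL_ and abends.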
TUTORIAL SUMMARY
These sample programs appear in full, along with a range of other examples, in the SAS/IMS-DL/I User’s Guide, 1984 Edition. You may want to review these programs in greater detail before attempting to write SAS/IMS-DL/I programs of your own. Also, you will need to gather information pertaining to your data bases and how you may access them. Do you know what questions to ask? Figure 32 summarizes key points covered in the tutorial and provides the beginnings of a checklist of information you need to collect.
SAS and SAS/IMS-DL/I are registered trademarks of SAS Institute, Inc., Cary, N.C., USA.
A SAS/IMS-DL/I SAMPLE APPLICATION
ACCOUNT DATA BASE
Figure 12
CALL FUNCTIONS
GU
GN
GNP
SEGMENT SEARCH ARGUMENTS
CUSTOMER
CUSTOMER(SSNUMBER = 123-45-6789)
RESULTS DEPEND ON
• Call function specified
• SSAs specified
• Segment sensitivity of the program
• Position in the data base after the prior call
Figure 16
Figure 17
A SAS/IMS-DL/I SAMPLE APPLICATION
ACCUPDT DATA BASE UPDATE
PROGRAM SPECIFICATION BLOCK
PCB TYPE=DB, NAME=ACCOUNT, PROCOPT=A
SENSEG NAME=CUSTOMER
SENSEG NAME=CHCKACCT
SENSEG NAME=CHCKCRDT
SENSEG NAME=CHCKDEBT
SENSEG NAME=SAVEACCT
SENSEG NAME=SAVEDEBT
PCB TYPE=DB, NAME=WIRETRN, PROCOPT=A
SENSEG NAME=WIRETRAN
PSBGEN NAME=ACCUPDT, CMPAT=YES
END
A SAS/IMS-DL/I SAMPLE APPLICATION
CUSTLIST
DATA CUSTLIST;
INFILE CUSREAD DLI;
INPUT
@1 CUST_SSN $CHAR11.
@12 NAME $CHAR40.
@172 H_PHONE $CHAR12.
@184 O_PHONE $CHAR12.;
IF _ERROR_ THEN ABORT;
PROC SORT DATA=CUSTLIST; BY NAME;
PROC PRINT;
VAR H_PHONE O_PHONE;
ID NAME;
TITLE 'CUSTOMER PHONE LIST';
Figure 19
Figure 20
Figure 22
Figure 23
**SAS DL/I INFILE STATEMENT**
**ASSUMPTIONS AND DEFAULTS**
- Sequential processing
- Read-only
- Segment types known, or format always the same
- Segments retrieved not longer than 1000 bytes
- PCB needed is first in the PSB
**SAS DL/I INFILE STATEMENTS**
**Figure 24**
**CUSTOMER PHONE LIST**
<table>
<thead>
<tr>
<th>NAME</th>
<th>H_PHONE</th>
<th>O_PHONE</th>
</tr>
</thead>
<tbody>
<tr>
<td>Barnhardt, Pamela</td>
<td>803-345-0346</td>
<td>803-355-2543</td>
</tr>
<tr>
<td>Booker, April</td>
<td>803-657-1346</td>
<td>803-657-1346</td>
</tr>
<tr>
<td>Booker, Ralph</td>
<td>803-657-1346</td>
<td>803-657-1346</td>
</tr>
<tr>
<td>Jones, J.M.</td>
<td>803-657-7636</td>
<td>803-657-7636</td>
</tr>
<tr>
<td>Jones, Roger</td>
<td>803-657-5656</td>
<td>803-657-5656</td>
</tr>
<tr>
<td>Little, Nancy</td>
<td>803-657-2566</td>
<td>803-657-2566</td>
</tr>
<tr>
<td>Smith, James</td>
<td>803-657-7437</td>
<td>803-657-7437</td>
</tr>
<tr>
<td>Stoppers, Mary</td>
<td>803-657-1687</td>
<td>803-657-1687</td>
</tr>
<tr>
<td>Walls, Hooper</td>
<td>803-657-2098</td>
<td>803-657-2098</td>
</tr>
<tr>
<td>Windsor, Jonathan</td>
<td>803-657-7330</td>
<td>803-657-7330</td>
</tr>
</tbody>
</table>
**Figure 25**
**SAS DL/I INFILE STATEMENTS**
**Figure 27**
**SAS DL/I INFILE EXTENSIONS**
**Figure 28**
INFILE psbname DLI
- [CALL = variable]
- SSA = (variable1, ...)
- LRECL = number
- PCBNO = number
- DBNAME = variable
- PCB = numeric variable
- STATUS = variable
- SEGMENT = variable
CODING TECHNIQUES
MACRO FACILITY FOR SEGMENT DESCRIPTIONS
allocate MACRO data set in job control
%INCLUDE MACRO(CUSTOMER);
DATA CUSTLIST;
INFILE CUSREAD DLI;
INPUT %CUSTOMER;
Figure 29
A SAS/IMS-DL/I SAMPLE PROGRAM
TRANREAD
%INCLUDE SOURCE(ACCTTRAN);
DATA _NULL_;
RETAIN SSA1 'CHCKACCT'
       SSA2 'CHCKDEBT(DATE= 03/28/83)'
       DB 'ACCOUNT';
INFILE TRANREAD DLI SSA=(SSA1,SSA2) STATUS=ST
       DBNAME=DB;
INPUT %ACCTTRAN;
IF _ERROR_ THEN DO;
   _ERROR_=0;
   IF ST='GB' THEN STOP;
   IF ST='GE' THEN DELETE;
   PUT _ALL_;
   ABORT 888 ABEND;
END;
FILE TRANREPT HEADER=NEWPAGE NOTITLES;
PUT @10 ACCTNUM @30 AMOUNT DOLLAR13.2
    @45 TRAN_TIME TIME8. @55 DESCRIPT;
RETURN;
NEWPAGE:
PUT / 'CHECKING ACCOUNT DEBITS FOR 03/28/83' /
    'ACCOUNT NUMBER' @33 'AMOUNT' @47 'TIME'
    @65 'DESCRIPTION';
RETURN;
Figure 31
TUTORIAL SUMMARY
DL/I INFORMATION CHECKLIST
◊ What PSBs to use; PSB names?
◊ What data bases to access; DBD names?
◊ Which PCBs in the PSBs refer to data bases needed?
◊ Need to allow for IO PCB as 1st PCB?
◊ Appropriate segment sensitivity in the PCB/PSB for data needed?
◊ Names and layouts of segments to retrieve?
◊ Any segments/segment paths to retrieve longer than 1000 bytes?
◊ Sequential processing: Is the data base sequentially organized?
◊ Random processing: Search field names and lengths defined in the DBD?
Figure 32
20926, null], [20926, 24147, null], [24147, 27724, null], [27724, 28272, null], [28272, 31790, null], [31790, 34889, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34889, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34889, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34889, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34889, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34889, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34889, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34889, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34889, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34889, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34889, null]], "pdf_page_numbers": [[0, 4515, 1], [4515, 9291, 2], [9291, 12131, 3], [12131, 17516, 4], [17516, 20926, 5], [20926, 24147, 6], [24147, 27724, 7], [27724, 28272, 8], [28272, 31790, 9], [31790, 34889, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34889, 0.0]]}
|
olmocr_science_pdfs
|
2024-11-29
|
2024-11-29
|
f219b92ee58b0f2f411d715778fcb5a1f11e3273
|
Are Your Lights Off? Using Problem Frames to Diagnose System Failures*

© 2009 IEEE
Thein Than Tun¹,² Michael Jackson² Robin Laney² Bashar Nuseibeh² Yijun Yu²
¹PReCISE Research Centre, Faculty of Computer Science, University of Namur, Belgium
²Department of Computing, The Open University, UK
ttu@info.fundp.ac.be {m.jackson, r.c.laney, b.nuseibeh, y.yu}@open.ac.uk
Abstract
This paper reports on our experience of investigating the role of software systems in the power blackout that affected parts of the United States and Canada on 14 August 2003. Based on a detailed study of the official report on the blackout, our investigation has aimed to bring out requirements engineering lessons that can inform development practices for dependable software systems. Since the causes of failures are typically rooted in the complex structures of software systems and their world contexts, we have deployed and evaluated a framework that looks beyond the scope of software and into its physical context, directing attention to places in the system structures where failures are likely to occur. We report that (i) Problem Frames were effective in diagnosing the causes of failures and documenting the causes in a schematic and accessible way, and (ii) errors in addressing the concerns of biddable domains, model building problems, and monitoring problems had contributed to the blackout.
1 Introduction
In mature branches of engineering, failures and “the role played by reaction to and anticipation of failure” are regarded as essential for achieving design success [11]. Identification of the causes of past system failures, organisation and documentation of them in a way accessible by engineers within an engineering community, and application of knowledge of failures when designing future systems, all play a central role in establishing “normal design” practices [15]. Although there have been several excellent reports on high-profile system failures involving software systems [5, 7, 9], development practices for dependable systems have not exploited input from incident or accident investigations in a systematic way [2]. This work is a small step toward addressing that gap.
Requirements Engineering (RE) is concerned with defining the behaviour of required systems, and errors introduced or prevented early in development significantly affect system dependability. In this respect, RE has a valuable role to play in systematising and documenting causes of past failures, and utilising this systematised knowledge in the development of future systems. In the same way that system failures can be attributed to programming, design, and human/operational errors, it is possible to attribute certain failures to RE errors. RE errors may be due to missing requirements, incorrect assumptions about the problem context, weak formulation of requirements, and unexpected interactions between requirements.
Although the broader context—such as the organisational settings, regulatory regimes and market forces—often plays an important role in failures, we deliberately focus on the role of the software system in its physical context in order to bring out clear lessons for requirements engineers. Therefore, a framework is needed for investigating failures, which looks beyond the scope of software and into its physical context, and directs attention to places in the system structures where failures are likely to occur.
In this paper, we report on our experience of using Problem Frames [4] to identify, organise and document knowledge about the causes of past system failures. In the Problem Frames framework, potential causes of failures—known as “concerns”—are named and associated with a specific pattern of problem structure, a style of problem composition, a type of problem world domain, the requirement and the specification. An instantiation of a pattern, for instance, will immediately raise the need to address certain concerns in the system structures. This is, in a sense, similar to generating “verification conditions” for a program in order to prove its correctness with respect to the specification [1]. In this case, concerns raised will have to be discharged by requirements engineers, perhaps in collaboration with other stakeholders.

*The title is inspired by [3]. An extended version of this paper can be found in [13]. This research is supported by the EPSRC, UK and the CERUNA programme of the University of Namur. Helpful comments and suggestions by the anonymous reviewers are gratefully acknowledged.
The rest of the paper is organised as follows. Section 2 gives an overview of the power blackout case study, the methodology used in the investigation, and some of the key principles of Problem Frames. The role of the software systems in the blackout is described and analysed in Section 3. Related work is discussed in Section 4. Section 5 summarises the findings.
2 Preliminaries
This section provides an overview of our case study, the research methodology used to investigate the failures, the conceptual framework of Problem Frames, and the expected outcome of our study.
2.1 2003 US-Canada Electricity Blackout
The electricity blackout that occurred on 14 August, 2003 in large parts of the Midwest and Northeast United States and Ontario, Canada, affected around 50 million people, according to the official report by the U.S.–Canada Power System Outage Task Force [14]. The outage began around 16:00 EDT (Eastern Daylight Time), and power was not fully restored for several days in some parts of the United States. The effect of the outage could be seen in satellite images of North America, whilst financial losses reportedly ran into billions of US dollars. The official report concluded that “this blackout could have been prevented”, and software failures that led to the operators’ reliance on outdated information were identified as one of the two “most important causes” of the blackout [14, p. 46].
2.2 Methodology
Investigating real-life system failures is difficult not least because of the size and complexity of these systems and limited availability of verifiable information about the failures and the systems involved [5]. Even when it is possible to master these difficulties, it is still a challenge to locate exactly when in the development an error was introduced [10]. The official report makes clear that factors such as the sagging of power lines, overgrown trees, poor communication, and lack of personnel training all contributed to the blackout.
Since our interest was to learn RE lessons, our methodology for investigating failures examined the chain of events leading up to the failure, and isolated the role of software systems in the failure. We ascertained what the components of the system did, what they should have done, and how it would have been possible to identify the causes at the RE stage. Therefore, a framework was needed that allowed us to structure the potential causes of failures in a schematic way.
2.3 Problem Frames
The Problem Frames framework [4] is based on certain principles, four of which are relevant to the discussion. First, the framework encourages a systematic separation of descriptions into requirements, problem world context and specifications. For example, Figure 1 shows a high-level description of a type of software problem known as Commanded Behaviour Frame. In this problem, a software system, Control Machine, is required to apply control on a domain in the physical world, the Controlled Domain, according to the commands of a human agent, the Operator. Exactly how the Controlled Domain should behave, or what property it must have, when the Operator issues commands is described by the Commanded Behaviour Requirement. Therefore the requirement states the relationship between the operator command OCommand at the interface a_O, and the behaviour and property of the controlled domain CDBehaviour and CDProperty at the interface a_CD.
Description of the operator behaviour is concerned with the relationship between OInput at the interface b_O and OCommand at the interface a_O, namely what input the operator produces when a command is issued. Similarly, description of the Controlled Domain is concerned with the relationship between CMAction at the interface a_CM and CDBehaviour and CDProperty at the interface a_CD, namely what behaviour or property the controlled domain produces when machine actions are performed. The Operator and the Controlled Domain constitute the problem world context of the Control Machine. The specification, the description of the Control Machine, is concerned with the relationship between OInput at the interface b_O and CMAction at the interface a_CM, namely what actions the machine must perform when operator input is observed.
The operator may be a lift user and the controlled domain, a lift. The requirement will state how the lift should behave when the lift user issues commands. The specification will state what operations the lift controller will perform when the operator input is received.
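The separation of descriptions can be made concrete with a small sketch of the lift example (Python here, purely for illustration; the interface and phenomenon names follow Figure 1, while the concrete commands and behaviours are hypothetical):

```python
# Illustrative sketch of the Commanded Behaviour frame (Figure 1).
# The three descriptions are kept separate, as the framework requires:
# the specification (Control Machine), the problem world description
# (Controlled Domain and Operator), and the requirement relating
# operator commands to controlled-domain behaviour.

def specification(o_input: str) -> str:
    """Control Machine: OInput at interface b_O -> CMAction at a_CM."""
    return {"press_up": "motor_up", "press_down": "motor_down"}.get(o_input, "idle")

def controlled_domain(cm_action: str) -> str:
    """Controlled Domain: CMAction at a_CM -> CDBehaviour at a_CD."""
    return {"motor_up": "moving_up", "motor_down": "moving_down"}.get(cm_action, "stationary")

def requirement(o_command: str, cd_behaviour: str) -> bool:
    """Commanded Behaviour Requirement: relates OCommand (a_O) to CDBehaviour (a_CD)."""
    return cd_behaviour == {"up": "moving_up", "down": "moving_down"}.get(o_command, "stationary")

# Operator description: which OInput (b_O) accompanies which OCommand (a_O).
operator = {"up": "press_up", "down": "press_down"}

# Composing the specification with the domain descriptions discharges
# the requirement for every operator command.
for o_command, o_input in operator.items():
    cd_behaviour = controlled_domain(specification(o_input))
    assert requirement(o_command, cd_behaviour), o_command
```

Note that the requirement is stated only over phenomena the operator and the controlled domain share, never over the machine's internal actions; the check above is the composition argument made executable.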
Second, this framework emphasises the need to understand the physical structure of the problem world context, and the behaviour of the domains involved. Third, the framework is based on recurring patterns of software problems, called frames. Each frame captures “concerns” of a certain type of software problems. For instance, the main concern of the “Commanded Behaviour” frame is to ensure that the system obeys the operator commands in imposing control on the behaviour of the system. An instantiation of a frame implies generation of certain conditions that need to be discharged.
Fourth, the framework provides a rich scheme for categorising and recording causes of failures. For instance, there are concerns specific to problem world domains, such as reliability, identity and breakage; there are frame concerns such as that of the required behaviour frame; and there are composition concerns such as conflict, consistency and synchronisation.
Therefore, we hypothesised that the Problem Frames framework provides an appropriate foundation for diagnosing failures involving software systems.
2.4 Expected Outcomes
There were two expected outcomes of this study. First, to establish whether Problem Frames are appropriate for investigating systems failures in terms of (i) locating causes of failure in the system structures, and (ii) recording them in a schematic way accessible by engineers within a community. Second, to identify causes of the blackout and either confirm them as known concerns or expand the repertoire of existing concerns by recording them schematically.
3 The Case Study
We now discuss two software-related failures that contributed significantly to the blackout. We briefly recount the chain of events leading to the blackout before discussing how Problem Frames were applied to diagnose the causes of failures and record the causes of failures.
3.1 Problem #1: State Estimator and Real Time Contingency Analysis
The infrastructure of the electric system is large and complex, comprising many power generation stations, transformers, transmission lines, and individual and industrial customers. Providing reliable electricity through “real-time assessment, control and coordination of electricity production at thousands of generators, moving electricity across an interconnected network of transmission lines, and ultimately delivering the electricity to millions of customers” is a major technical challenge [14].
Reliability coordinators and control operators use complex monitoring systems to collect data about the status of the power network. In addition, they use a system called State Estimator (SE) to improve the accuracy of the collected data against the mathematical model of the power production and usage. When the divergence between the actual and predicted model of power production and usage is large, State Estimator will “produce a solution with a high mismatch”. Information from the improved model is then used by various software tools, including Real Time Contingency Analysis (RTCA), to evaluate the reliability of the power system, and alert operators when necessary, for instance when the power production is critically low. This evaluation can be done periodically or on demand of the operator.
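The SE's model-building role can be sketched in a few lines (an illustration only; the reconciliation rule and the tolerance below are assumptions, not MISO's actual algorithm):

```python
def state_estimator(status_data, estimates, tolerance=0.05):
    """Illustrative sketch of the SE's model-building problem: reconcile
    collected StatusData with Estimates from the mathematical model, and
    flag a 'solution with a high mismatch' when the divergence between
    actual and predicted values is too large. The simple averaging and
    the tolerance value are hypothetical."""
    mismatch = max(abs(s - e) for s, e in zip(status_data, estimates))
    revised_data = [(s + e) / 2 for s, e in zip(status_data, estimates)]
    return revised_data, mismatch > tolerance

# Small divergence: RevisedData is produced without a high-mismatch flag.
revised, high_mismatch = state_estimator([1.00, 0.98], [1.01, 0.97])
assert high_mismatch == False
```

Downstream tools such as RTCA would consume `revised_data` only when the mismatch flag is clear, which is exactly why a stopped SE starves them of usable input.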
“On August 14 at about 12:15 EDT, MISO’s [Midwest Independent System Operator] state estimator produced a solution with a high mismatch […] To troubleshoot this problem the analyst had turned off the automatic trigger that runs the state estimator every five minutes. After fixing the problem he forgot to re-enable it […] Thinking the system had been successfully restored, the analyst went to lunch. The fact that the state estimator was not running automatically on its regular 5-minute schedule was discovered about 14:40 EDT.”
When the automatic trigger was subsequently re-enabled, the state estimator produced a solution with a high mismatch due to further developments on the network. The official report assesses the situation as follows.
“In summary, the MISO state estimator and real time contingency analysis tools were effectively out of service between 12:15 EDT and 16:04 EDT. This prevented MISO from promptly performing precontingency “early warning” assessments of power system reliability over the afternoon of August 14.”
3.1.1 Problem Analysis
Based on this information, we constructed several problem diagrams to analyse relationships between the problem world domains mentioned in the description. Figure 2 shows a composite of two problem diagrams.
The problem of State Estimator is to produce Revised-Data for the Improved Electrical System Model of the grid, based on StatusData, and Estimates produced by the Mathematical Model. In Problem Frames, this type of problem is known as a “model building problem”. The problem of RTCA System is to examine Revised-Data and raise appropriate alerts on the Display Screen used by the Operator. This type of problem is known as an “information display problem”.
3.1.2 A Requirements Engineering Error?
On August 14, when the SE could not produce a consistent model, the operator turned off the automatic trigger of the SE in order to carry out maintenance work. Figure 3 shows the problem diagram, where the Maintenance Engineer uses the machine SE Trigger to turn on or turn off the State Estimator. This problem fits the Commanded Behaviour Frame shown in Figure 1. Part of the requirement here is to ensure that when the engineer issues the command OffNow, the SE should cease running.
When the maintenance work was done, the engineer forgot to re-enable the SE, leaving the electrical system model which the operators rely on, outdated. The resulting reliance by the operator on the outdated information was a significant contributing factor.
Clearly, had the maintenance engineer not forgotten to re-engage the monitoring systems, the problem would not have arisen. However, there is more to the problem than this being a “human error”. Perhaps the fallibility of human operators should have been better recognised in the system’s model of the world context.
3.1.3 Naming and Categorising Concerns
A key part of the problem is the requirement that says that the operator commands always have precedence over the system actions. This requirement relies on the world assumption that the biddable domain—i.e., a human agent such as the maintenance engineer—always gives the correct commands. However, the Commanded Behaviour frame recognises that the operator is a biddable domain, whose behaviour is non-causal and may not be reliable. Therefore, the operator always giving the correct command may be too strong a condition to discharge. This gives rise to two concerns: one related to the biddable domain and the other, related to the Commanded Behaviour frame.
We will call the concern related to the biddable domain the reminder concern, which raises the following conditions to discharge: (i) Whenever the biddable domain overrides the system operations, which system domain(s) should be reminded about the override? (ii) How long should the override last? (iii) What happens when the length of time expires? In the case of the blackout, this may be translated into a requirement that says (i) whenever the SE has stopped, the system should remind the operator of the SE status and how long it has had that status, and (ii) at the end of a maintenance procedure, the system should remind the engineer of the SE status. Such a reminder could make the engineer’s behaviour more reliable and perhaps could have helped prevent the failure.
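The three conditions of the reminder concern can be sketched as a small mechanism (Python, illustrative only; the class, its API, and the timing policy are hypothetical and not taken from the official report):

```python
class OverrideReminder:
    """Sketch of the 'reminder concern' for a biddable-domain override:
    (i) who should be reminded while the override holds, (ii) how long
    the override has lasted, and (iii) what happens when the allowed
    time expires. Times are minutes since midnight, for simplicity."""

    def __init__(self, max_override_minutes: float):
        self.max_override = max_override_minutes
        self.disabled_at = None

    def disable_se_trigger(self, now: float):
        self.disabled_at = now

    def enable_se_trigger(self):
        self.disabled_at = None

    def reminders(self, now: float):
        if self.disabled_at is None:
            return []
        elapsed = now - self.disabled_at
        msgs = [f"Remind operator and engineer: SE trigger off for {elapsed:.0f} min"]
        if elapsed > self.max_override:
            msgs.append("Override expired: re-enable the SE automatic trigger")
        return msgs

# On August 14 the trigger was off from about 12:15 to 14:40 EDT, far
# longer than any plausible maintenance window.
reminder = OverrideReminder(max_override_minutes=30)
reminder.disable_se_trigger(now=12 * 60 + 15)
assert len(reminder.reminders(now=14 * 60 + 40)) == 2
```

Such a mechanism does not prevent the override, which the engineer legitimately needed; it only makes the engineer's behaviour more reliable by keeping the override visible.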
A concern related to the Commanded Behaviour frame is whether the system should ignore the operator commands and take control of the system under certain circumstances. We will call this the system precedence concern. This may mean that the system should monitor the actions by the biddable domain, and intervene when the domain does not seem to be reliable. In that case, the requirement should be formulated as follows: Whenever maintenance work is thought to have been completed, the automatic trigger should be enabled.
Another key part of the problem is related to the issue of fault-tolerance in information display: What happens when the input the system receives from the analogous model is unexpected? This may be due to an incorrect data type or an untimely input from the analogous model. We will call this the outdated information concern. Pertinent questions in this case are: (1) Can RTCA know that the Improved Electrical System Model is outdated? (2) What should it do about it? Had requirements engineers asked such questions, it could have led to requirements such as “The Improved Electrical System Model must have a timestamp of when it was last updated successfully” and “If the Improved Electrical System Model is older than 30 minutes, the RTCA system should alert the operator that the electrical system model is now outdated”. This would at least warn the operator not to rely on the information provided by the improved electrical system model.
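The staleness requirement above is simple to make precise (a sketch; the function and message names are illustrative, and the 30-minute threshold is the one suggested in the text):

```python
from datetime import datetime, timedelta

STALENESS_LIMIT = timedelta(minutes=30)  # threshold suggested in the text

def check_model_age(last_updated: datetime, now: datetime) -> str:
    """Sketch of the 'outdated information concern': RTCA inspects the
    timestamp of the Improved Electrical System Model before using it,
    and alerts the operator when the model is stale."""
    if now - last_updated > STALENESS_LIMIT:
        return "ALERT: electrical system model is outdated"
    return "model is current"

# The SE stopped updating the model at about 12:15 EDT; by 14:40 EDT
# such a check would have warned the operators.
assert check_model_age(datetime(2003, 8, 14, 12, 15),
                       datetime(2003, 8, 14, 14, 40)).startswith("ALERT")
```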
3.2 Problem #2: Alarm and Event Processing Routine (AEPR) System
Another significant cause of the blackout was due, in part, to the Alarm and Event Processing Routine (AEPR) system, “a key software program that gives grid operators visual and audible indications of events occurring on their portion of the grid” [14].
“Alarms are a critical function of an EMS [Energy Management System], and EMS-generated alarms are the fundamental means by which system operators identify events on the power system that need their attention. If an EMS’s alarms are absent, but operators are aware of the situation and the remainder of the EMS’s functions are intact, the operators can potentially continue to use the EMS to monitor and exercise control of their power system. In the same way that an alarm system can inform operators about the failure of key grid facilities, it can also be set up to warn them if the alarm system itself fails to perform properly. FE’s EMS did not have such a notification system.”
The problem of alerting the Grid Operator of the grid status, ascertained from the Grid & Sensors is shown in Figure 4. This problem fits a type of problem known as the Information Display Frame. The requirement is to raise a separate alarm to the operator (GOAlertedGrid) if and only if there are events on the grid that threaten the system reliability (GridOK): ¬GridOK ↔ GOAlertedGrid. The specification of AEPR could be to raise an alert (RaiseAlert) if and only if danger is detected on the grid (DangerDetected): DangerDetected ↔ RaiseAlert. In the case study, the AEPR system failed silently, leading the operators to continue to rely on outdated information, and was one of “the most important causes” of the blackout.
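Under a stated domain assumption about the Grid & Sensors, the discharge of the requirement by the specification can be checked mechanically (a Python sketch; the sensing assumption below is an idealisation introduced here, not a claim from the report):

```python
def grid_and_sensors(grid_ok: bool) -> bool:
    """Domain assumption (an idealisation): danger is detected exactly
    when the grid is not OK."""
    return not grid_ok

def aepr(danger_detected: bool) -> bool:
    """AEPR specification: DangerDetected <-> RaiseAlert."""
    return danger_detected

# Requirement: not GridOK <-> GOAlertedGrid. Under the domain
# assumption, the specification discharges it in both grid states.
for grid_ok in (True, False):
    go_alerted_grid = aepr(grid_and_sensors(grid_ok))
    assert (not grid_ok) == go_alerted_grid
```

The failure analysed next is precisely a violation of the `aepr` description: the machine stopped satisfying its specification, and nothing in the problem structure detected that.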
3.2.1 A Requirements Engineering Error?
The official report is very clear about the fact that there was a missing requirement “to monitor the status of EMS and report it to the system operators.” The British Standard 5839 on fire detection and fire alarm systems [12] is also concerned with monitoring systems, and anticipates such a requirement. Since fire alarms may fail when electricity is disconnected, the standard requires that alarms are fitted with a secondary independent source of power. In addition, when the source of power is switched from the primary to secondary source, the system should raise an alarm.
3.2.2 Naming and Categorising Concerns
The cause of this failure can be called a silent failure of alarm systems. Addressing this concern could raise questions such as: What happens if AEPR fails silently? Is it possible to detect such failures? What should be done when such failures are detected? This could have led the designers to the requirement that the system should monitor the behaviour of AEPR and raise an additional alarm when AEPR is thought to have failed. Figure 5 shows a problem diagram in which a wrapper intercepts the input to and output from the AEPR; when AEPR fails to respond as expected, a separate alarm is raised (GOAlertedAEPR). The wrapper AEPR Monitor can pass on danger detection from the grid to AEPR (DangerDetected@b_GS → DangerDetected@b'_AM) and pass on the alert trigger from AEPR to the grid operator (RaiseAlert@a_AM → RaiseAlert@a'_AM). Then the requirement to alert on silent failure of AEPR is
\[ \neg \text{GridOK} \land \neg \text{GOAlertedGrid} \equiv \text{DangerDetected@b\_GS} \land \neg \text{GOAlertedAEPR}. \]
The specification for AEPR Monitor is
\[ \text{DangerDetected@b\_GS} \land \neg \text{RaiseAlert@a'\_AM} \equiv \text{RaiseSecondaryAlert@a'\_AM}. \]
An implementation of such a specification could have prevented the failure.
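One way to realise the wrapper is as a pass-through that also evaluates the silent-failure condition (a sketch; AEPR is modelled as a hypothetical callable, and the interface details of Figure 5 are reduced to booleans):

```python
class AEPRMonitor:
    """Sketch of the wrapper in Figure 5: it intercepts DangerDetected on
    the way in and RaiseAlert on the way out, and raises a secondary
    alert when AEPR fails silently."""

    def __init__(self, aepr):
        self.aepr = aepr  # callable: DangerDetected -> RaiseAlert

    def process(self, danger_detected: bool):
        raise_alert = self.aepr(danger_detected)  # forwarded to the grid operator
        # Specification: DangerDetected and not RaiseAlert => RaiseSecondaryAlert
        raise_secondary_alert = danger_detected and not raise_alert
        return raise_alert, raise_secondary_alert

healthy = AEPRMonitor(lambda danger: danger)  # AEPR working to specification
failed = AEPRMonitor(lambda danger: False)    # AEPR has failed silently

assert healthy.process(True) == (True, False)   # primary alert only
assert failed.process(True) == (False, True)    # silent failure caught
assert failed.process(False) == (False, False)  # no danger, no alert
```

The design choice here mirrors the fire-alarm standard cited above: the monitor is independent of the thing it monitors, so a single failure cannot silence both alarms.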
4 Related Work
There are many studies of software-related failures. Leveson, for instance, carried out several studies of software-related accidents, including those involving Therac-25 [7]. Johnson also has contributed an extensive literature on system accidents and incidents [5, 6, 2]. However, those studies of system failure of which we are aware have not been based on a clear conceptual structure for identifying, classifying, and recording the lessons learned at the level of detail appropriate for use by software engineers. For instance, the software engineering lessons Leveson and Turner [7] draw from the Therac-25 accidents include: “Documentation should not be an afterthought”, and “Designs should be kept simple”. Johnson investigated this power blackout in order to “sketch arguments for and against deregulation as a cause of the black-out” [6]. In this paper, we have applied a systematic approach to learning software engineering lessons, structured and described in ways that software engineers can relate to specifically.
Several variants of the Failure Modes and Effect Analysis (FMEA) method have been developed and applied in the development of dependable systems. Lutz and Woodhouse [8], for instance, applied a FMEA-based method to identify critical errors in requirements documents of two spacecraft systems. Our work is complementary to such methods, in the sense that we are concerned with identifying, structuring and documenting past software failures, which can then be used to narrow the search space in failure analysis.
5 Summary
Our experience of using Problem Frames to investigate system failures involving software systems showed that the framework of Problem Frames was appropriate for identifying causes of system failures and documenting the causes in a schematic and accessible way. The suggestion by the framework that requirements engineers should “look out” into the physical world, rather than “look into” the software was useful in directing and focusing the attention, because many of the causes of failures originated in the physical world context.
The separation of descriptions into requirements, problem world context and the specification enabled us to locate sources of failures in specific descriptions. Some failures were related to the requirements (such as missing requirements) and others to the problem world context (such as mismatch between the assumed and actual behaviour of the problem world domains). Furthermore, associating concerns to the requirement, problem world context, frame, domain type, style of composition, and the specifications provides a good basis for recording concerns in a schematic way.
In summary, specific lessons learnt from the blackout case study are: (i) a further specialisation of the reliability of the biddable domain, called the reminder concern, (ii) a further specialisation of the concern of the Commanded Behaviour frame where the system may have to take precedence over the operator action, called the system precedence concern, (iii) a further specialisation of the Information Display frame called the outdated information concern, and (iv) the silent failure concern related to the monitoring systems.
References
Express Yourself! Regular Expressions vs SAS Text String Functions
Spencer Childress, Rho®, Inc., Chapel Hill, NC
ABSTRACT
SAS® and Perl regular expression functions offer a powerful alternative and complement to typical SAS text string functions. By harnessing the power of regular expressions, SAS functions such as PRXMATCH and PRXCHANGE not only overlap functionality with functions such as INDEX and TRANWRD, they also eclipse them. With the addition of the modifier argument to such functions as COMPRESS, SCAN, and FINDC, some of the regular expression syntax already exists for programmers familiar with SAS 9.2 and later versions. We look at different methods that solve the same problem, with detailed explanations of how each method works. Problems range from simple searches to complex search and replaces. Programmers should expect an improved grasp of the regular expression and how it can complement their portfolio of code. The techniques presented herein offer a good overview of basic data step text string manipulation appropriate for all levels of SAS capability. While this article targets a clinical computing audience, the techniques apply to a broad range of computing scenarios.
INTRODUCTION
This article focuses on the added capability of Perl regular expressions to a SAS programmer’s skillset. A regular expression (regex) forms a search pattern, which SAS uses to scan through a text string to detect matches. An extensive library of metacharacters, characters with special meanings within the regex, allows extremely robust searches.
Before jumping in, the reader would do well to read over ‘An Introduction to Perl Regular Expressions in SAS 9’, referencing page 3 in particular (Cody, 2004). Cody provides an excellent overview of the regex and a convenient table of the more common metacharacters, with explanations. Specifically, knowledge of the basic metacharacters ($, ^, *, +, ?, and parentheses) goes a long way. Additionally, he covers the basics of the PRX suite of functions.
SAS character functions and regexes have many parallels. They both perform searches, search and replaces, and modifications. A clear breakdown and understanding of their similarities and differences allow a programmer to choose the most powerful method for dealing with text fields.
SAS MODIFIERS AND REGEX EQUIVALENTS
The SAS modifier, introduced in SAS 9, significantly enhances such functions as COMPRESS, SCAN, and FINDC. SAS modifiers are to regex character classes what Vitamin C is to L-ascorbic acid: an easily remembered simplification. A programmer with an understanding of these modifiers can jump right into regex programming.
Table 1 illustrates the relationship between SAS modifiers and regex character class equivalents:
<table>
<thead>
<tr>
<th>SAS Modifier</th>
<th>SAS Definition</th>
<th>POSIX Character Class</th>
<th>Regex Option</th>
<th>Regex Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>a or A</td>
<td>adds alphabetic characters to the list of characters.</td>
<td><code>[:alpha:]</code></td>
<td><code>/[a-zA-Z]/</code></td>
<td></td>
</tr>
<tr>
<td>c or C</td>
<td>adds control characters to the list of characters.</td>
<td><code>[:cntrl:]</code></td>
<td></td>
<td></td>
</tr>
<tr>
<td>d or D</td>
<td>adds digits to the list of characters.</td>
<td><code>[:digit:]</code></td>
<td><code>/\d/</code></td>
<td>\d is the metacharacter for digits.</td>
</tr>
<tr>
<td>f or F</td>
<td>adds an underscore and English letters (that is, valid first characters in a SAS variable name using VALIDVARNAME=V7) to the list of characters.</td>
<td></td>
<td><code>/[a-zA-Z_]/</code></td>
<td></td>
</tr>
<tr>
<td>g or G</td>
<td>adds graphic characters to the list of characters. Graphic characters are characters that, when printed, produce an image on paper.</td>
<td><code>[:graph:]</code></td>
<td></td>
<td></td>
</tr>
<tr>
<td>h or H</td>
<td>adds a horizontal tab to the list of characters.</td>
<td></td>
<td><code>/\t/</code></td>
<td>\t is the metacharacter for tab.</td>
</tr>
<tr>
<td>i or I</td>
<td>ignores the case of the characters.</td>
<td></td>
<td><code>/expression/i</code></td>
<td>The 'i' after the second delimiter of the regex tells the regex to ignore case in 'expression'.</td>
</tr>
<tr>
<td>k or K</td>
<td>causes all characters that are not in the list of characters to be treated as delimiters. That is, if K is specified, then characters that are in the list of characters are kept in the returned value rather than being omitted because they are delimiters. If K is not specified, then all characters that are in the list of characters are treated as delimiters.</td>
<td></td>
<td><code>/[^expression]/</code></td>
<td>The '^' as the first character of a character class enclosed in square brackets negates 'expression'. That is, this character class matches everything not included in 'expression'.</td>
</tr>
<tr>
<td>l or L</td>
<td>adds lowercase letters to the list of characters.</td>
<td><code>[:lower:]</code></td>
<td></td>
<td></td>
</tr>
<tr>
<td>n or N</td>
<td>adds digits, an underscore, and English letters (that is, the characters that can appear in a SAS variable name using VALIDVARNAME=V7) to the list of characters.</td>
<td></td>
<td><code>/[a-zA-Z_0-9]/</code></td>
<td>Similar to SAS modifier ‘f’, ‘n’ adds digits. To match, a character class needs only the range 0-9 added to the character class equivalent of ‘f’.</td>
</tr>
<tr>
<td>o or O</td>
<td>processes the charlist and modifier arguments only once, rather than every time the function is called.</td>
<td></td>
<td></td>
<td>Equivalent to initializing and retaining the regex ID with PRXPARSE at the top of the data step, rather than initializing it at each data step iteration.</td>
</tr>
<tr>
<td>p or P</td>
<td>adds punctuation marks to the list of characters.</td>
<td><code>[:punct:]</code></td>
<td></td>
<td></td>
</tr>
<tr>
<td>s or S</td>
<td>adds space characters to the list of characters (blank, horizontal tab, vertical tab, carriage return, line feed, and form feed).</td>
<td><code>[:space:]</code></td>
<td><code>/\s/</code></td>
<td>\s is the metacharacter for invisible space, including blank, tab, and line feed.</td>
</tr>
<tr>
<td>t or T</td>
<td>trims trailing blanks from the string and charlist arguments.</td>
<td></td>
<td><code>/\b/</code></td>
<td>The word boundary metacharacter \b, positioned after a space, prevents a regex from matching trailing blanks.</td>
</tr>
<tr>
<td>u or U</td>
<td>adds uppercase letters to the list of characters.</td>
<td><code>[:upper:]</code></td>
<td></td>
<td></td>
</tr>
<tr>
<td>w or W</td>
<td>adds printable (writable) characters to the list of characters.</td>
<td><code>[:print:]</code></td>
<td></td>
<td></td>
</tr>
<tr>
<td>x or X</td>
<td>adds hexadecimal characters to the list of characters.</td>
<td><code>[:xdigit:]</code></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Table 1. SAS Modifiers and Equivalent POSIX Character Classes and/or Regexes
POSIX character classes are collections of common characters and map not only to a subset of SAS modifiers, but to the ANY and NOT collection of functions such as ANYALPHA or NOTPUNCT. However, other modifiers do not map directly, such as 'n', which can be used to identify appropriately named SAS variables. Note that character classes within square brackets can be customized extensively to identify any set of characters.
BYTE offers a simple method to check which characters a character class identifies. The code snippet below makes for an excellent SAS abbreviation to test regexes:
```sas
data test;
file print;
do i = 0 to 255;
char = byte(i);
regex = prxmatch('/expression/', char);
put i char regex;
output;
end;
run;
```
This data step creates a dataset called ‘test’ and prints to the Output window. By feeding the function BYTE values ranging from 0 to 255, SAS illustrates the ASCII or EBCDIC collating sequence. In Windows, Unix, and OpenVMS operating system environments, 0 through 127 comprise the standard set of ASCII characters while 128-255 vary between OS environments.
Within the regex, ‘expression’ represents the character class of interest. PRXMATCH matches the regex in its first argument against each character captured in variable ‘char’. If the character matches, PRXMATCH returns a 1.
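For readers who want to try the same probe outside SAS, the loop translates directly to any regex engine. A small Python sketch (an illustration only; Python's `re` engine is close to, but not identical to, Perl's) collects the ASCII characters a class matches:

```python
import re

def matching_chars(pattern, limit=128):
    """Return the ASCII characters that a regex class matches,
    mirroring the BYTE-based SAS probe above."""
    regex = re.compile(pattern)
    return [chr(i) for i in range(limit) if regex.match(chr(i))]

# \d matches exactly the ten ASCII digits.
print(matching_chars(r'\d'))
# A custom class matches exactly the characters it lists.
print(matching_chars(r'[a-f]'))
```

Limiting the probe to the first 128 code points keeps the result identical across the ASCII-based environments the paper describes.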
SEARCHING TEXT
One of a SAS programmer’s most common tasks involves text searches. Below, examples range from simple to complex.
SEARCHING TEXT – INDEX
INDEX might very well be the first function a programmer uses. It returns the position of the first occurrence of a substring within a string:
```sas
data Search_INDEX;
indexed = INDEX('asdf', 'sd');
put indexed;
run;
```
INDEX searches source string ‘asdf’ for substring ‘sd’. As one would expect, the variable INDEXED returns a 2, corresponding to the second character in ‘asdf’.
The same outcome can be accomplished with PRXMATCH:
```sas
data Search_PRXMATCH;
prxmatched = PRXMATCH('/sd/', 'asdf');
put prxmatched;
run;
```
PRXMATCH takes as its first argument either the regex itself or a regex ID and the source string as its second. The forward slashes in the first argument are called delimiters, which open and close the regex. Everything in between them is the search pattern.
SAS generates a regular expression ID which defines the regex at each invocation of a PRX function. Thus, to reduce processing time, a regex could be defined at the top of a dataset and retained like so:
```sas
data Retain_PRXPARSE;
retain regexid;
if _n_ = 1 then regexid = prxparse('/sd/');
prxmatched = prxmatch(regexid, 'asdf');
put prxmatched;
run;
```
PRXPARSE only processes a regex; it does not match it against anything. For simplicity’s sake, PRXPARSE will not appear in code examples.
**SEARCHING TEXT – HANDLING CHARACTER CASE**
INDEX cannot inherently account for letter case. Suppose the letter case of the source string is unknown. In this situation INDEX would require the services of UPCASE or LOWCASE:
```
data Search_INDEX;
indexed = INDEX('ASDF', UPCASE('sd'));
put indexed;
run;
```
As one might expect, the substring needs to be hardcoded to ‘SD’ or nested within UPCASE; otherwise, INDEX might come back empty-handed. A regex handles letter case a little more easily:
```
data Search_PRXMATCH;
prxmatched = PRXMATCH('/sd/i', 'ASDF');
put prxmatched;
run;
```
Notice the regex now contains an ‘i’ after the closing forward slash. This modifier simply ignores case in the source string ‘ASDF’.
**SEARCHING TEXT – DIGITS**
FINDC trumps INDEX when dealing with character classes because of its modifier argument. Suppose one is interested in identifying any digit:
```
data Search_FINDC;
found = FINDC('2357', , 'd');
put found;
run;
```
The modifier ‘d’ in the third argument identifies any digit in a string. Similarly, with a regex, the character class ‘\d’ applies:
```
data Search_PRXMATCH;
prxmatched = PRXMATCH('/\d/', '2357');
put prxmatched;
run;
```
Notably, PRXMATCH handles the functionality of both INDEX and FINDC.
**SEARCHING TEXT – DATES**
Dates are wonderful bits of data. They come in all shapes and sizes, at times in the same variable. This variability can raise significant programming hurdles which the regex mitigates. With a few regexes, any date can be identified. ‘DATEw.’ and ‘YYMMDDw.’ provide excellent examples:
```
data Search_DATEw;
    *Four 'DATEw.' examples, the last of which is not a valid date.;
    dates = '05jan1986 5jan1986 05jan86 05jau1986';
    do i = 1 to countw(dates, ' ');
        *Matching simply two digits followed by a valid three-character month followed by four digits (DDMONYYYY).;
        datew1 = prxmatch('/\d\d(jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)\d{4}/i', scan(dates, i, ' '));
        *Matching as above except with an optional leading digit on the day.;
        datew2 = prxmatch('/[0123]?\d(jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)\d{4}/i', scan(dates, i, ' '));
        *Matching as above except with an optional two-digit century, '19' or '20', on the year.;
        datew3 = prxmatch('/[0123]?\d(jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)(19|20)?\d\d/i', scan(dates, i, ' '));
    end;
run;

data Search_YYMMDDw;
    *Three 'YYMMDDw.' examples with different (or missing) delimiters.;
    dates = '1986-01-05 1986/01/05 19860105';
    do i = 1 to countw(dates, ' ');
        *Matching four digits, any punctuation mark, two digits, another punctuation mark, and two digits.;
        yymmdd1 = prxmatch('/\d{4}[[:punct:]]\d\d[[:punct:]]\d\d/', scan(dates, i, ' '));
        *Matching as above except with optional delimiters.;
        yymmdd2 = prxmatch('/\d{4}[[:punct:]]?\d\d[[:punct:]]?\d\d/', scan(dates, i, ' '));
    end;
run;
```
In the second and third examples, the metacharacter '?' tells the regex to look for the preceding character or group of characters 0 or 1 time, essentially classifying it as optional. In the third example the parentheses surround two two-digit numbers, '19' and '20'. The '|' is the alternation operator, equivalent to 'or', and tells the regex to match the pattern to the left or the pattern to the right.
In the fourth example, use of the POSIX character class PUNCT allows the regex to accept any punctuation mark as a delimiter, including '-' and '/'. In the final example, applying '?' to the punctuation character class makes the delimiter optional.
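The day/month/year patterns described above can be sanity-checked in any Perl-compatible engine. A quick Python sketch (an illustration, not the paper's SAS code) applies them to the four sample dates:

```python
import re

MONTHS = 'jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec'
# Two digits, a month abbreviation, four digits (DDMONYYYY).
datew1 = re.compile(r'\d\d(%s)\d{4}' % MONTHS, re.I)
# As above, with an optional leading day digit.
datew2 = re.compile(r'[0123]?\d(%s)\d{4}' % MONTHS, re.I)
# As above, with an optional '19' or '20' century on the year.
datew3 = re.compile(r'[0123]?\d(%s)(19|20)?\d\d' % MONTHS, re.I)

for d in '05jan1986 5jan1986 05jan86 05jau1986'.split():
    print(d, bool(datew1.search(d)), bool(datew2.search(d)), bool(datew3.search(d)))
```

As in the SAS version, only the misspelled '05jau1986' fails every pattern, while each relaxation ('?' on the day digit, '?' on the century) admits one more of the valid forms.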
PRXMATCH harnesses the power of the regex to match a wide range of text patterns, all within the bounds of a single function.
SEARCH AND REPLACE
Modifying a text string logically follows searching for a text string. A number of SAS functions modify text strings: COMPRESS eliminates specified characters; COMPBL reduces multiple blanks to a single blank; LEFT, TRIM, and STRIP remove leading, trailing, and both leading and trailing blanks; UPCASE and LOWCASE modify letter case; TRANWRD replaces one substring with another substring. That is a long list of functions, a list which PRXCHANGE duplicates almost entirely.
A regex which searches and replaces has two parts: the search pattern between the first and second delimiters, just like in PRXMATCH, and the replacement pattern between the second and third delimiters. A regex search and replace has the basic form 's/<search pattern>/<replacement pattern>/'. Note the leading 's'; the function will cause an error and stop the data step without this signifier.
SEARCH AND REPLACE – COMPRESS
COMPRESS provides a good example of one of PRXCHANGE's parallels:
```
data Replace_COMPRESS;
    compressed = compress('abacadabra', 'a');
    put compressed;
run;
```
The second argument of COMPRESS tells it which character, in this case ‘a’, to dispense with in the first argument, ‘abacadabra’. Variable ‘compressed’ returns ‘bcdbr’.
PRXCHANGE works a bit differently:
```
data Replace_PRXCHANGE;
    prxchanged = prxchange('s/a//', -1, 'abacadabra');
    put prxchanged;
run;
```
This regex looks a little more complicated than one which simply searches. The leading ‘s’ dictates that this regex is a search and replace. The ‘a’ between the first two delimiters specifies what to search for. Notice the second and third delimiters have no intermediate characters. Effectively, this regex replaces ‘a’s in ‘abacadabra’ with nothing. However, to replace all ‘a’s, PRXCHANGE’s second argument must be ‘-1’; were it ‘1’, only the first ‘a’ would be removed.
Keep in mind that the power of the regex is that it can identify and modify a number of different characters with a character class enclosed in square brackets. The inverse subset of characters, i.e. everything except ‘a’, can be identified with a slight modification to the search:
```
data Replace_COMPRESS;
    compressed = compress('abacadabra', 'a', 'k');
    put compressed;
run;

data Replace_PRXCHANGE;
    prxchanged = prxchange('s/[^a]//', -1, 'abacadabra');
    put prxchanged;
run;
```
Both methods return ‘aaaaa’. Setting the third parameter of COMPRESS to ‘k’ compresses everything except ‘a’. Similarly, the character class ‘[^a]’ specifies everything but ‘a’.
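For readers outside SAS, both replacements behave identically in a PCRE-style engine; a Python sketch (not part of the paper's code) shows the pair:

```python
import re

# Remove every 'a' (the 's/a//' example above).
print(re.sub('a', '', 'abacadabra'))
# The inverse: remove everything that is not 'a', via the negated class.
print(re.sub('[^a]', '', 'abacadabra'))
```

The first call prints 'bcdbr' and the second 'aaaaa', matching the COMPRESS results.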
SEARCH AND REPLACE – COMPBL
COMPBL reduces multiple blanks to a single blank:
```
data Replace_COMPBL;
    compbled = compbl(' abc  def   ghi');
    put compbled;
run;
```
PRXCHANGE, in conjunction with a quantifier of the form ‘{n,m}’ where ‘n’ is the lower bound of the quantifier and ‘m’ the upper, behaves similarly:
```
data Replace_PRXCHANGE;
    prxchanged = prxchange('s/ {2,}/ /', -1, ' abc  def   ghi');
    put prxchanged;
run;
```
Between the first two delimiters is the expression ‘ {2,}’, which identifies any sequence of two or more spaces. Notice the upper bound of the quantifier is missing; no upper bound instructs the search pattern to find any number of the previous character class or group of characters.
SEARCH AND REPLACE – LEFT, TRIM, STRIP
PRXCHANGE can also emulate LEFT, TRIM, and STRIP. The metacharacters ‘^’ and ‘$’ come in handy here:
```
data Replace_PRXCHANGE;
    *Leading blanks;
    lefted = count(prxchange('s/^ +//', 1, '  abc def ghi'), ' ');
    *Trailing blanks;
    trimmed = count(prxchange('s/ +$//', 1, 'abc def ghi  '), ' ');
    *Both leading and trailing blanks;
    stripped = count(prxchange('s/(^ +| +$)//', 2, '  abc xyz  '), ' ');
    put lefted trimmed stripped;
run;
```
In the first example, '^' instructs the search pattern to search only the beginning of the text string. In the second, '$'
searches only the end of the text string. And in the third, enclosing the first and second search patterns in
parentheses and concatenating them with the alternation operator '|' captures both conditions. Notice that the
second argument changed to '2'. With two conditions PRXCHANGE needs to match both patterns to alter both the
leading and trailing blanks.
SEARCH AND REPLACE – UPCASE AND LOWCASE
PRXCHANGE can change letter case as well. This task is simple enough with UPCASE and LOWCASE:
```
data Replace_UPCASE;
    upcased = upcase('asdf');
    put upcased;
run;

data Replace_LOWCASE;
    lowcased = lowcase('ASDF');
    put lowcased;
run;
```
The regex requires a capture buffer and the application of a case-conversion sequence between the second and third delimiters:
```
data Replace_PRXCHANGE;
    upcased = prxchange('s/(.*)/\U$1/', 1, 'asdf');
    lowcased = prxchange('s/(.*)/\L$1/', 1, 'ASDF');
    put upcased lowcased;
run;
```
Everything between the parentheses goes into what is known as a capture buffer. A search pattern can have any number of capture buffers, which may then be referred to between the second and third delimiters by '$n', where 'n' refers to where the capture buffer falls in the entire sequence of capture buffers. In the above example, '$1' refers to the single capture buffer between the first two delimiters, '(.*)'. This search pattern effectively recognizes the entire source string: '.' is the wildcard metacharacter and '*' matches 0 or more instances of the previous character or group of characters.
To alter character case, apply either the case-conversion sequence \U or \L, to up-case or low-case the subsequent capture buffer reference.
SEARCH AND REPLACE – TRANWRD
TRANWRD replaces all occurrences of a substring in a character string:
```
data Replace_TRANWRD;
    tranwrded = tranwrd('As easy as 1-two-three!', '1', 'one');
    put tranwrded;
run;
```
TRANWRD's arguments are three-fold: source string, target string, and replacement string. In this example,
TRANWRD replaces all occurrences of '1' with 'one' to return 'As easy as one-two-three!'. PRXCHANGE easily
reproduces the same functionality:
```
data Replace_PRXCHANGE;
    prxchanged = prxchange('s/1/one/', -1, 'As easy as 1-two-three!');
    put prxchanged;
run;
```
The search pattern in the above regex equates to the second argument of TRANWRD, and the replacement pattern to the third. The two functions diverge in that PRXCHANGE can replace a specified number of occurrences of '1', whereas TRANWRD always replaces every occurrence.
PRXCHANGE can reproduce most SAS text string modification functions. In some cases it adds unnecessary complexity, but in most cases it provides a superior, more versatile approach.
APPLICATION
Regexes have many practical applications, as the two examples below demonstrate.
APPLICATION – SPACE-DELIMITED TO COMMA-DELIMITED
A common macro parameter is a list of variables, space-delimited. Space-delimited values work great for keep lists, but not so well in PROC SQL or CALL MISSING. Thus, conversion to a comma-delimited list allows a list to be used in both cases:
```
%let VarList = Var1 Var2  Var3   Var4 Var5;
%let TRANWRDCommas = %sysfunc(tranwrd(&VarList, %str( ), %str(, )));
%let PRXCHANGECommas = %sysfunc(prxchange(s/%str( +)/%str(, )/, -1, &VarList));
```
```
%put &TRANWRDCommas;
Var1, Var2, , Var3, , , Var4, Var5
%put &PRXCHANGECommas;
Var1, Var2, Var3, Var4, Var5
```
Input list 'VarList' is space-delimited, but it has a variable number of spaces between each item. TRANWRD by itself behaves predictably, matching each space with a comma, but this new list will cause errors. Of course, %SYSFUNC(COMPBL()) could be nested around 'VarList', but PRXCHANGE handles this issue by itself. With the quantifier '+' which signifies one or more, following the search pattern, any number of spaces can be replaced with a single ','.
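The same collapse-and-delimit step translates directly to other regex engines; a Python sketch under the same assumption (ragged spacing in the input list):

```python
import re

var_list = 'Var1 Var2  Var3   Var4 Var5'
# One or more spaces collapse to a single ', ', however ragged the input.
comma_list = re.sub(' +', ', ', var_list)
print(comma_list)
```

The result is the clean comma-delimited list 'Var1, Var2, Var3, Var4, Var5', with no stray empty items.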
APPLICATION – INTEGERS, NON-INTEGERS, AND OTHER STATISTICS
The clinical programmer produces a LOT of tables. Without the regex, distinguishing a statistic in the form 'Median (Min, Max)' from one in the form 'Count (Percentage)' and their derivations is a tall task. They could be identified, but with no small amount of IF-THEN-ELSE clauses. Regexes can identify almost all statistics with just four (the final ELSE illustrates what would not be captured with the first four clauses):
```
%let width = 20;
*Number of treatment columns; assumed to be defined elsewhere in the program.;
%let ntrt = 3;

data fin;
    set stats;
    array cols (0:&ntrt) $25 col1-col4;
    *Pad column variables with spaces, for table alignment, assuming monospaced font.;
    do i = 0 to &ntrt;
        *Strip column variables to remove leading blanks.;
        cols(i) = prxchange('s/^ +//', 1, cols(i));
        *Match integers first by searching for an optional dash and any number of digits, with trailing spaces only.;
        if prxmatch('/^(-?\d+) *$/', cols(i)) gt 0
            then cols(i) = repeat(' ', floor(&width/2) - length(cols(i)) - 1) || cols(i);
        *Match statistics with commas.;
        else if prxmatch('/,/', cols(i)) gt 0
            then cols(i) = repeat(' ', floor(&width/2) - prxmatch('/,/', cols(i))) || cols(i);
        *Match statistics with parentheses which do not contain commas.;
        else if prxmatch('/ ?\(/', cols(i)) gt 0
            then cols(i) = repeat(' ', floor(&width/2) - prxmatch('/ ?\(/', cols(i))) || cols(i);
        *Match statistics with any other punctuation mark.;
        else if prxmatch('/[[:punct:]]/', cols(i)) gt 0
            then cols(i) = repeat(' ', floor(&width/2) - prxmatch('/[[:punct:]]/', cols(i))) || cols(i);
        *Match anything else, but hopefully not.;
        else if prxmatch('/[^ ]/', cols(i)) gt 0
            then cols(i) = repeat(' ', floor(&width/2) - length(cols(i)) - 1) || cols(i);
    end;
run;
```
By identifying a search pattern for each type of statistic, the position of that pattern can be used to align all statistics along a common column.
For example, aligned summary statistics might look like so:
<table>
<thead>
<tr>
<th>Characteristics</th>
<th>Boring</th>
<th>Drugs!</th>
</tr>
</thead>
<tbody>
<tr>
<td>Age at screening (years)</td>
<td></td>
<td></td>
</tr>
<tr>
<td>n</td>
<td>9</td>
<td>10</td>
</tr>
<tr>
<td>Mean</td>
<td>36.3</td>
<td>36.7</td>
</tr>
<tr>
<td>SD</td>
<td>7.16</td>
<td>6.53</td>
</tr>
<tr>
<td>Median</td>
<td>35.0</td>
<td>38.5</td>
</tr>
<tr>
<td>Range (Min, Max)</td>
<td>(23, 44)</td>
<td>(27, 47)</td>
</tr>
<tr>
<td>Ethnicity - n (%)</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Hispanic or Latino</td>
<td>1 (11.1)</td>
<td>0</td>
</tr>
<tr>
<td>Not Hispanic or Latino</td>
<td>8 (88.9)</td>
<td>10 (100.0)</td>
</tr>
<tr>
<td>Not Reported</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Unknown</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
Table 2. Aligned Statistics
The common column is the punctuation mark, except for the integer and the percentage, which align on the column following the first ones digit. Thus, subtracting the position of the common column from the midpoint of the table cell yields the number of leading blanks necessary to align each statistic.
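The alignment arithmetic (cell midpoint minus the position of the anchor column) is easy to sketch outside SAS. A hypothetical Python version, anchoring on the first '(', '.', or ',' and falling back to the position just past an integer's ones digit:

```python
import re

WIDTH = 20  # assumed cell width, as in the SAS macro variable above

def align(cell):
    """Pad a statistic so its anchor column sits at the cell midpoint,
    mirroring the floor(&width/2) - position logic above."""
    cell = cell.strip()
    m = re.search(r'[(.,]', cell)            # anchor on first '(', '.' or ','
    # Integers have no punctuation: anchor just past the last digit.
    pos = m.start() + 1 if m else len(cell) + 1
    return ' ' * (WIDTH // 2 - pos) + cell

for stat in ['9', '36.3', '(23, 44)', '1 (11.1)']:
    print(repr(align(stat)))
```

Every statistic ends up with its anchor character in the same column, which is all the padding logic in the data step above is doing.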
CONCLUSION
The regex is a powerful complement to a SAS programmer’s repertoire of code. It can reduce multiple IF-THEN-ELSE statements to a single line of code. Not only does the regex allow a SAS programmer to improve his or her efficiency, he or she can perform searches which were previously impossible. And most importantly, crafting a regex is fun, so go out and express yourself!
REFERENCES
Cody, Ron. 2004. “An Introduction to Perl Regular Expressions in SAS 9.” Proceedings of the Twenty-Ninth Annual SAS Users Group International Conference (SUGI 29).
RECOMMENDED READING
http://www.regular-expressions.info/
CONTACT INFORMATION
Your comments and questions are valued and encouraged. Contact the author at:
- Name: Spencer Childress
- Enterprise: Rho, Inc.
- Address: 6330 Quadrangle Dr
- City, State ZIP: Chapel Hill, NC 27514
- Work Phone: 919 408 8000
- Fax: 919 408 0999
- E-mail: spencer_childress@rhoworld.com
SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration.
Other brand and product names are trademarks of their respective companies.
Filtering Network Traffic Based on Protocol Encapsulation Rules
Publisher: IEEE
DOI: 10.1109/ICCNC.2013.6504238
This version is available at: 11583/2503367
Terms of use: openAccess. This article is made available under terms and conditions as specified in the corresponding bibliographic description in the repository.
Ivano Cerrato
Politecnico di Torino
Corso Duca degli Abruzzi 24,
Torino, Italy
Email: ivano.cerrato@polito.it
Marco Leogrande
Politecnico di Torino
Corso Duca degli Abruzzi 24,
Torino, Italy
Email: marco.leogrande@polito.it
Fulvio Risso
Politecnico di Torino
Corso Duca degli Abruzzi 24,
Torino, Italy
Email: fulvio.risso@polito.it
Abstract—Packet filtering is a technology at the foundation of many traffic analysis tasks. While languages and tools for packet filtering have been available for many years, none of them supports filters operating on the encapsulation relationships found in each packet. This represents a problem as the number of possible encapsulations used to transport traffic is steadily increasing and we cannot define exactly which packets have to be captured.
This paper presents our early work on an algorithm that models protocol filtering patterns (including encapsulation constraints) as Finite State Automata and supports the composition of multiple expressions within the same filter. The resulting, optimized filter is then translated into executable code. The above filtering algorithms are available in the NetBee open source library, which provides some basic tools for handling network packets (e.g., a `tcpdump-like` program) and APIs to build more advanced tools.
I. INTRODUCTION
In recent years we have observed a reduction in the number of layer-7 protocols in use. In fact, while in the past each application defined its own protocol, nowadays most of the traffic is conveyed through the web. As a consequence, HTTP has become the de-facto protocol for many different applications. Surprisingly, the opposite phenomenon was observed at the bottom of the protocol stack. While protocol encapsulations were definitely simple in the past (IP in Ethernet was by far the most common encapsulation), new necessities, arising in particular from network virtualization, are transforming the lower layers of the protocol stack into a mess. Figure 1 presents one possible example of this complexity growing over the years, which translates, e.g., into frames that need several more fields to transport a simple IP packet, compared to what was defined in the original Ethernet DIX specification in the early '80s.
In particular, when operating at the upper layers (e.g., filtering based on TCP ports) it is important to be able to capture all the traffic we are interested in, independently of the actual encapsulations used at the lower layers, be it plain Ethernet, VLAN in WiFi, MPLS, IPv6 in IPv4, GRE-tunneled, or anything else. While this looks simple in principle (essentially, we need to support more encapsulations when generating the actual filtering executable), it may not be easy to modify a filtering tool to handle more complex protocol encapsulations.
The NetPDL language [1] aims at solving this problem by enabling the creation of tools in which protocol formats and encapsulations are no longer hardwired into the filtering program, but are kept in a separate XML file. Additional encapsulations can be added by simply editing the XML file, without any modification to the tool itself. This solution allows users to define high-level packet filtering rules (e.g., `tcp.port == 80`), while the NetPDL-based tool will take care of selecting the desired traffic, whatever encapsulation is being used. However, the capability to follow any possible encapsulation may result in slower processing (as the filtering code is forced to check for any possible encapsulation path), and this additional cost may not always be acceptable. A possible solution to this issue is provided by the NetPFL language [2], which allows packet filtering rules to specify which encapsulation paths have to be followed (e.g., `ip in vlan in ethernet`), without modifying the NetPDL protocol definitions.
While the NetPFL language is very flexible, so far only a partial implementation is available [2], which does not support explicit filtering on protocol chains; consequently, the possible optimizations were not taken into consideration at all. This paper extends the initial work by presenting an algorithm that generates efficient filtering code based on NetPFL header chains, selecting traffic according to one or more encapsulation rules specified at run-time. The NetPFL filtering expression is transformed into a formalism based on Finite State Automata (FSA), where states and transitions are derived from the NetPDL database in use. The FSA algebra offers the possibility to compose multiple filters, with a result that can be translated into a deterministic FSA (DFA) that guarantees the fastest matching path for that filter. Finally, that DFA is translated into the executable code that actually analyzes the network packets.
This paper is structured as follows. Section II presents the main concepts of the NetPFL language. The DFA building algorithm is presented in Section III, while an overview of the implementations is given in Section IV. A preliminary evaluation of the algorithm is shown in Section V, while Section VI concludes the paper.
II. THE NETPFL LANGUAGE
The Network Packet Filtering Language (NetPFL) [2] is a declarative high-level language that can be used to define packet filtering rules. The NetPFL syntax does not define the list of protocols and fields supported; instead, they are dynamically bound to those defined in an external data set that, in our implementation, is based on the NetPDL language.
NetPFL is more complex than other existing packet filtering languages, as it allows users to specify not only the conditions that a packet must satisfy in order to be accepted, but also the actions to be executed when a packet is accepted and the stream the packet belongs to, in order to support multiple filters at the same time. The filtering syntax is very similar to the one used by classical packet filters, and it supports multiple conditions joined with the and and or logical operators. Moreover, a condition can be negated through the not operator.
In addition to the constructs mentioned above, which are fairly common across different filtering languages, NetPFL supports other primitives that operate on protocol encapsulations. Among those, there is the header chains feature, which is the focus of this paper.
A header chain defines a filtering condition based on protocol encapsulation rules that have to be satisfied when capturing the traffic. Its core elements are the keywords in and not in, which require respectively that, within a packet: (i) the left-hand element is directly encapsulated into the right-hand one, and (ii) the left-hand element is encapsulated in any protocol other than the right-hand one. For instance, tcp in ip accepts a packet defined as WiFi-IP-TCP, while it rejects WiFi-IP-IPv6-TCP; ip not in vlan matches a packet such as Ethernet-IP, while it discards Ethernet-VLAN-IP. An element of the header chain can be a header set, which specifies a set of protocols that can be (or must not be, in case of the not in keyword) in a given position of the encapsulation stack; the single protocols in the previous examples can be seen as header sets with cardinality equal to one. A header set is expressed by a comma-separated list of protocol identifiers, enclosed in curly braces; e.g., ip in \{vlan, llc\} selects all the packets having IP directly encapsulated in VLAN or LLC. The any placeholder can be used to define a single encapsulation in which any protocol is valid. For instance, the header chain tcp in any in wifi accepts packets such as WiFi-IP-TCP and WiFi-IPv6-TCP, while WiFi-IPv6-IP-TCP is rejected because any matches a single protocol only. The last components of a header chain are the repeat operators, i.e. '+', '*' and '?', which mean respectively (i) one or more, (ii) zero or more and (iii) zero or one consecutive occurrences of one or more protocols. E.g., tcp in ip+ in ppp accepts any packet having TCP encapsulated in a sequence of one or more IP headers, finally encapsulated in PPP. Instead, tcp in any+ in ppp allows, between TCP and PPP, any protocol to be repeated any number of times, such as in the packet PPP-IP-IPv6-IP-TCP.
It is worth noting that a header chain specifies a sequence of protocols that could be anywhere in the packet, and therefore could be preceded and followed by any protocol repeated an unspecified number of times. For instance, ipv6 in ip does not mandate the use of a specific encapsulation at the link layer, hence all the supported ones are allowed (e.g., plain Ethernet, Ethernet with VLANs, etc.). An exception is given by the sequences having, in the right-most position, the starting protocol of the database in use; e.g., ip in ethernet in startproto matches the packets having IP encapsulated in Ethernet, which in turn is not encapsulated in any other protocol.
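As a purely illustrative approximation (not the paper's symbol-based FSA construction), the matching semantics of in, any, and the repeat operators described above can be modeled as a regular expression over a packet's protocol stack, with the implicit "anything before and after" behavior at both ends. All protocol names and the chain encoding below are assumptions made for this sketch.

```python
import re

# Illustrative only: approximate header-chain semantics as a regex over
# the packet's protocol stack, written as a space-separated string.

def chain_to_regex(elements):
    """elements: (protocol, repeat) pairs, OUTERMOST protocol first.
    repeat is '', '?', '*' or '+'; 'any' matches a single protocol."""
    parts = [r"(?:\S+ )*"]            # implicit leading "eatall"
    for proto, rep in elements:
        atom = r"\S+ " if proto == "any" else re.escape(proto) + " "
        parts.append(f"(?:{atom}){rep}")
    parts.append(r"(?:\S+ )*$")       # implicit trailing "eatall"
    return re.compile("".join(parts))

def matches(chain, stack):
    return bool(chain_to_regex(chain).match(" ".join(stack) + " "))

# "tcp in any in wifi", written outermost-first: wifi, any, tcp
chain = [("wifi", ""), ("any", ""), ("tcp", "")]
```

With this encoding, `matches(chain, ["wifi", "ip", "tcp"])` accepts while `matches(chain, ["wifi", "ipv6", "ip", "tcp"])` rejects, mirroring the examples in the text (any consumes exactly one protocol).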
III. BUILDING THE ENCAPSULATION DFA
This section presents the algorithm used to create the encapsulation DFA, i.e., a deterministic FSA that describes the traffic to be filtered according to a NetPFL header chain and the encapsulations defined in a NetPDL database. Those encapsulations can be represented with a Protocol Encapsulation Graph (PEG), a directed, potentially cyclic graph modeling the encapsulation relationships among protocols. Each node of the PEG corresponds to a different protocol, while an edge from X to Y means that, within a packet, the protocol Y could be directly encapsulated into X. In this section we refer to the PEG shown in Figure 2, excluding dashed lines and protocols.
Although the PEG looks similar to an automaton, the encapsulation DFA cannot be simply obtained by removing edges from the PEG itself. An example is provided by the filter tcp in ip in ip in ipv6, which requires exactly two IP headers between IPv6 and TCP, which cannot be modeled by a naive transformation of the PEG in Figure 2 into an automaton. As a consequence, a more complex algorithm for the creation of the encapsulation DFA that models arbitrary header chains is needed.
\footnote{Since IPv4 traffic is nowadays much more common than IPv6, in this paper we use the ip token to refer to IPv4.}
\footnote{In NetPDL, startproto is a dummy protocol that identifies the beginning of the packet; all link-layer protocols defined in the NetPDL database are encapsulated directly into it.}
The first step of the algorithm builds an FSA derived from (i) the NetPFL filtering string and (ii) the encapsulation rules defined in the given NetPDL database.
To perform the translation into a FSA, the NetPFL filtering string is split into a target protocol followed by an arbitrary number of tokens (potentially zero), where target is the left-most protocol of the header chain. Instead, a token is defined as:
\[(\text{in} \mid \text{not in})\ \text{pSet}\ [\text{repOp}]\]
where pSet can be a header set or the any placeholder, and repOp determines how many instances of that pSet can be present (at most one, from zero to \(N\), from one to \(N\)). Obviously, if the repOp is not specified, the pSet must appear exactly once. The above mentioned elements of the NetPFL string are converted, from right to left, one at a time, into automaton basic building blocks, depending on their repeat operator. The translation rules are depicted in Figure 3 and derive from the standard mapping rules defined by FSA theory [3]. Furthermore, since the header chain allows the sequence of protocols to be anywhere in the packet, optionally preceded and followed by any protocol repeated an arbitrary number of times, the resulting automaton begins and ends with an “eatall” state, equivalent to the \(\ast\) element of the regular expressions\(^3\). Then, the last “eatall” state of the automaton is replaced with an equivalent self loop, firing with any symbol of the alphabet, over the last state of the automaton. All states are then connected in order and the right-most one represents the accepting state of the automaton.
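The exact building blocks are those depicted in Figure 3 (not reproduced here); as a purely illustrative approximation, Thompson-style fragments for the repeat operators could look like the following, where 'TOKEN' stands for the transitions that will later carry the token's encapsulation symbols and 'eps' is an epsilon move. The state-naming scheme is an assumption of this sketch.

```python
# Illustrative Thompson-style fragments, one per repeat operator.

_counter = iter(range(10**6))

def new_state():
    return f"q{next(_counter)}"

def fragment(rep, new_state):
    """Return (entry, exit, transitions) for one token with operator rep."""
    a, b = new_state(), new_state()
    if rep == "":                 # exactly one occurrence
        return a, b, [(a, "TOKEN", b)]
    if rep == "?":                # zero or one
        return a, b, [(a, "TOKEN", b), (a, "eps", b)]
    if rep == "+":                # one or more
        return a, b, [(a, "TOKEN", b), (b, "TOKEN", b)]
    if rep == "*":                # zero or more
        return a, b, [(a, "TOKEN", b), (a, "eps", b), (b, "TOKEN", b)]
    raise ValueError(f"unknown repeat operator: {rep!r}")

# e.g. ip* yields an epsilon bypass (zero occurrences) plus a self loop
entry, exit_, trans = fragment("*", new_state)
```

Note how the epsilon bypass generated for '*' matches the \(\epsilon\) transition discussed for the ip* state in Figure 4.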
Further down in the algorithm we will need to remember which protocol originated each state: for this reason, each state is associated with the protocols specified by the token (or target) from which it derives\(^4\). If a state has an incoming \(\epsilon\) transition, the state itself is also associated with the protocols related to the origin state of the transition itself. We perform this operation because, when the control of the automaton reaches the origin state, the \(\epsilon\) transition causes the control to spontaneously reach the target state; hence, when parsing the packet, any protocol reached in the origin state will be reached also in the target one.
As an example, the FSA of Figure 4 represents the automaton built from the header chain tcp in ip* in ipv6 in ethernet. As highlighted with the double circle, the rightmost state represents the accepting state. Figure 4 shows also the element of the NetPFL string from which each state of the FSA derives (at the top), and the protocols associated with each state (in the grey boxes at the bottom). The IPv6 entry in the box related to state \(Q_3\) is enclosed in square brackets to emphasize that this association is a consequence of the \(\epsilon\) transition.
So far, FSA transitions are not associated with any symbol, except for the \(\epsilon\) transitions that derive directly from the building blocks of Figure 3 and the self loop on the last state. In order to label each transition properly, we must define the alphabet of the FSA, which consists of the set of protocol encapsulation rules that derive from the PEG created from the NetPDL database in use. Each symbol of the alphabet is named after the two protocols involved in that encapsulation rule, the (abbreviated) name of the originating protocol first, the target last. For instance, the transition from Ethernet to IPv6 originates the symbol eth-ipv6, which is received by the FSA if IPv6 is directly encapsulated into Ethernet.
In our FSA building algorithm, a transition is labeled with all the symbols having the name satisfying the following constraints: (i) the first part, representing the origin protocol, is equal to one of the protocols associated with the source state of the transition itself; (ii) the second part, i.e. the target protocol, is equal to one of the protocols specified by the NetPFL token/target from which the destination state derives, hence excluding the associations derived from the possible presence of an \(\epsilon\) transition.
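For illustration, the alphabet derivation and the two labeling constraints can be sketched as follows. The PEG below is a made-up subset of the one in Figure 2, given as an adjacency map, and the function name is an assumption of this sketch.

```python
# Hypothetical PEG subset: protocol -> protocols directly encapsulated in it.
peg = {
    "startproto": ["eth"],
    "eth": ["ip", "ipv6", "vlan"],
    "vlan": ["ip", "ipv6"],
    "ip": ["ip", "ipv6", "tcp"],
    "ipv6": ["ip", "tcp"],
}

# Each PEG edge X -> Y becomes the alphabet symbol "X-Y".
alphabet = {f"{src}-{dst}" for src, dsts in peg.items() for dst in dsts}

def label_transition(src_state_protos, dst_token_protos):
    """Keep the symbols whose origin protocol is associated with the
    source state and whose target protocol is named by the token from
    which the destination state derives."""
    return {s for s in alphabet
            if s.split("-")[0] in src_state_protos
            and s.split("-")[1] in dst_token_protos}
```

For example, a transition from a state associated with \{eth\} to a state whose token names \{ipv6\} would be labeled with the single symbol eth-ipv6.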
Figure 5 shows the result of the previous labeling rules when applied to the transitions of the FSA in Figure 4. In this case, the * symbol is a compact form to indicate that the transition
\(^3\)Filters having startproto in the right-most position are an exception to this rule, since by definition this fictitious protocol represents the beginning of the packet. In those cases, the leading “eatall” state is omitted.
\(^4\)In case the pSet is preceded by the not in keyword, the state is associated with all the protocols in the PEG, excluding those listed explicitly in the token.
fires for every symbol of the alphabet. Since the FSA created so far may be non-deterministic, it must be converted into a deterministic form using the well-known algorithms defined in the literature [3]. Figure 6 comes from the determinization of the automaton of Figure 5; the notation ‘* – {…}’ indicates every symbol of the alphabet except those in the curly braces.
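The determinization step is the textbook subset construction; a compact sketch follows, with epsilon transitions assumed to have been eliminated beforehand for brevity. The NFA example and its state names are hypothetical, not taken from the paper's figures.

```python
from collections import deque

# Textbook subset construction (illustrative; epsilon-free input assumed).

def determinize(nfa, start, accepting):
    """nfa: dict mapping (state, symbol) -> set of successor states."""
    symbols = {sym for (_, sym) in nfa}
    start_set = frozenset([start])
    dfa, seen, work = {}, {start_set}, deque([start_set])
    while work:
        S = work.popleft()
        for sym in symbols:
            T = frozenset(t for s in S for t in nfa.get((s, sym), ()))
            if not T:
                continue                    # implicit dead state
            dfa[(S, sym)] = T
            if T not in seen:
                seen.add(T)
                work.append(T)
    return dfa, start_set, {S for S in seen if S & accepting}

# Hypothetical NFA: nondeterministic choice on the symbol eth-ip.
nfa = {("q0", "eth-ip"): {"q0", "q1"}, ("q1", "ip-tcp"): {"q2"}}
dfa, start, acc = determinize(nfa, "q0", {"q2"})
```

Each DFA state is a set of NFA states; this is why, as noted below, a state associated with a single protocol can lose that property when states are joined during determinization.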
B. Assigning a single protocol to each state
While the DFA obtained so far looks nice from a theoretical point of view, it still cannot be used to generate the executable code that implements the given NetPFL filter.
In order to carry out this final lowering step, we need to associate each state of the DFA with a single protocol, so that reaching a certain state of the DFA corresponds to reaching a specific protocol within the packet under analysis. Unfortunately, automata obtained by our building process may not satisfy this condition. For instance, the original FSA in Figure 5 shows states associated with multiple protocols (e.g., states $Q_0$ and $Q_1$); the situation may become even worse in the next steps, as states originally associated with a single protocol can lose this property in the determinization process, when states are manipulated (e.g., joined or split) by FSA algorithms.
We designed an algorithm to label, whenever possible, each state with its corresponding protocol.
First, each state is inspected and, if all of its incoming transitions share the second part of their name (i.e., the target protocol of the encapsulation rule), then that state is labeled with that protocol. There are two exceptions: (i) the initial state of the automaton, which is labeled with a single protocol (i.e., startproto) only if it does not have any incoming transitions\(^5\), and (ii) the accepting state, whose self loop is not considered.
\(^5\)Note that, since a symbol leading to startproto does not exist, an incoming transition would associate the initial state with multiple protocols.
Second, unnecessary symbols are pruned from transitions, as we know (from the PEG) that, while control is in a given state, the FSA can receive only symbols whose origin protocol is the one associated with that state. Symbol pruning is done by inspecting the outgoing transitions of each state: if a transition is associated with a symbol whose first part does not match the protocol associated with the state itself, that symbol is removed from the transition. Obviously, we remove all the transitions that remain without symbols and all the states that become disconnected from the rest of the FSA. These operations are repeated until there are no more changes in the DFA; the final result of this step in our example is shown in Figure 7.
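The pruning fixpoint can be sketched as follows. This is an illustrative approximation in which each transition is stored as a (source, destination) pair carrying a set of 'origin-target' symbols; the state names and protocol assignments in the example are hypothetical.

```python
def prune(transitions, state_proto, start):
    """transitions: dict (src, dst) -> set of 'origin-target' symbols.
    state_proto: protocol assigned to each state (None when unknown)."""
    changed = True
    while changed:
        changed = False
        # 1) drop symbols whose origin contradicts the source state's protocol
        for (src, dst), syms in list(transitions.items()):
            if state_proto.get(src) is None:
                continue                       # unlabeled state: keep everything
            keep = {s for s in syms if s.split("-")[0] == state_proto[src]}
            if keep != syms:
                changed = True
                if keep:
                    transitions[(src, dst)] = keep
                else:
                    del transitions[(src, dst)]    # transition left empty
        # 2) drop states no longer reachable from the start state
        reachable, work = {start}, [start]
        while work:
            s = work.pop()
            for (a, b) in transitions:
                if a == s and b not in reachable:
                    reachable.add(b)
                    work.append(b)
        for (a, b) in list(transitions):
            if a not in reachable:
                del transitions[(a, b)]
                changed = True
    return transitions

# Hypothetical fragment: q0 is labeled 'eth', so 'vlan-ip' must go.
trans = {("q0", "q1"): {"eth-ip", "vlan-ip"}, ("q1", "q2"): {"ip-tcp"}}
pruned = prune(trans, {"q0": "eth", "q1": "ip", "q2": "tcp"}, "q0")
```

The outer loop mirrors the "repeat until no more changes" wording: removing a symbol can empty a transition, which can disconnect a state, which can in turn make further symbols removable.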
Unfortunately, when the previous algorithm terminates, some states may still not be associated with a single protocol. The solution consists in transforming the obtained DFA into an equivalent form in order to reach our objective. Each unlabeled state is split into multiple states, one for each protocol identified by the target protocol of its incoming transitions. For example, the dashed state in the left of Figure 8 originates two states, one associated with IP and the other with IPv6, as shown in the right part of the figure. A transition originating in an expanded state is replaced with new transitions, based on the origin protocols of the symbols labeling the transition itself. Each of those new transitions starts in the new state representing the source protocol of its symbols, and ends in the same state of the original transition. For example, the transition exiting from the dashed state in the left of Figure 8 originates two transitions: one labeled with $ip$-$ipv6$ and exiting from the new state representing IP, the other firing with $ipv6$-$tcp$ and coming from the new state associated with IPv6. Similarly, each transition ending in an expanded state is managed according to the target protocols of its symbols. In our example, the transition leading to the dashed state is replaced with two transitions, the former firing with $eth$-$ip$ and terminated on the new state representing IP, and the latter labeled with $eth$-$ipv6$ and entering into the new state associated with IPv6. Figure 8 shows also how the self loop on an expanded state, for each one of its symbols, originates a new transition starting and ending in the proper new states.
After the expansion some states could be useless, because their transitions originate circular paths that no longer bring the control to an accepting state. We identify those “traps” with a reverse post-order visit of the DFA, starting from the accepting state; states that cannot be visited are removed. Figure 9 shows the encapsulation DFA recognizing
\(^6\)The initial state is also associated with startproto.
Fig. 5. FSA with labeled transitions.
Fig. 6. DFA representing a header chain.
packets that match the filter \texttt{tcp in ip* in ipv6 in ethernet}.
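The trap-elimination step just described amounts to a backward reachability visit from the accepting state; a minimal sketch follows. The transition table and state names are hypothetical, and the real implementation operates on the full DFA structure.

```python
def remove_traps(transitions, accepting):
    """transitions: dict (src, symbol) -> dst (a DFA transition table).
    Keep only the states from which the accepting state is reachable."""
    preds = {}                       # invert the transition relation
    for (src, _), dst in transitions.items():
        preds.setdefault(dst, set()).add(src)
    alive, work = {accepting}, [accepting]
    while work:                      # backward visit from the accepting state
        s = work.pop()
        for p in preds.get(s, ()):
            if p not in alive:
                alive.add(p)
                work.append(p)
    return {(src, sym): dst for (src, sym), dst in transitions.items()
            if src in alive and dst in alive}

# Hypothetical DFA with a circular "trap" that never reaches 'acc'.
trans = {("q0", "eth-ip"): "q1", ("q1", "ip-tcp"): "acc",
         ("q1", "ip-ip"): "trap", ("trap", "ip-ip"): "trap"}
live = remove_traps(trans, "acc")
```

After the call, the self-looping trap state and every transition touching it are gone, while the path q0 → q1 → acc survives.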
Finally, the encapsulation DFA is transformed into a piece of executable code that implements the filter, which is then used to analyze network packets.
It is worth remembering that a filter could include multiple header chains, composed through the Boolean operators \texttt{and} and \texttt{or}. In this case, our algorithm is executed for each header chain, and the resulting DFAs are combined using traditional algorithms defined in the literature.
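The paper does not detail which composition algorithms are used; a standard textbook option is the product construction, sketched below on two toy DFAs (the states, symbols, and the run helper are illustrative assumptions, not NetBee code).

```python
# Textbook product construction: 'and' accepts where both DFAs accept,
# 'or' where either does.

def product(d1, d2, op):
    """Each DFA is (transitions: dict (state, sym) -> state, start, accepting)."""
    (t1, s1, a1), (t2, s2, a2) = d1, d2
    syms = {sym for (_, sym) in t1} | {sym for (_, sym) in t2}
    start = (s1, s2)
    trans, seen, work = {}, {start}, [start]
    while work:
        p, q = work.pop()
        for sym in syms:
            nxt = (t1.get((p, sym)), t2.get((q, sym)))  # None = dead state
            trans[((p, q), sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    accepts = (lambda p, q: p in a1 and q in a2) if op == "and" else \
              (lambda p, q: p in a1 or q in a2)
    return trans, start, {(p, q) for (p, q) in seen if accepts(p, q)}

def run(trans, start, word):
    state = start
    for sym in word:
        state = trans.get((state, sym))
    return state

# Toy DFAs: d1 accepts words containing 'x', d2 words containing 'y'.
d1 = ({("s", "x"): "t", ("s", "y"): "s", ("t", "x"): "t", ("t", "y"): "t"},
      "s", {"t"})
d2 = ({("u", "x"): "u", ("u", "y"): "v", ("v", "x"): "v", ("v", "y"): "v"},
      "u", {"v"})
```

The combined automaton stays deterministic, which preserves the single-pass matching property of the individual encapsulation DFAs.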
IV. IMPLEMENTATION
The proposed algorithm was implemented in the NetBee library [4]. This library includes a tcpdump-compatible tool named nbeedump for packet filtering, which exploits the NetVM [5] virtual machine for executing the packet filtering code. The overall system architecture is shown in Figure 10: nbeedump receives as input both the NetPDL database (containing the format of the supported protocols and the encapsulation rules) and the NetPFL filtering expression. This information is consumed by a compiler that, after applying several optimization algorithms, emits the final filtering code in the form of NetIL instructions (i.e., the NetVM assembly language). The NetIL code can be interpreted by the NetVM itself, or transformed into native code for a number of target architectures. Within the above compiler, the algorithm presented in this paper is implemented in the DFA builder module, which takes the PEG (dynamically extracted from the NetPDL database) and the NetPFL filter, and builds the DFA representing the packet filter. Subsequently, the DFA lowering module generates the corresponding NetIL code. Input symbols of the FSA are generated by the protocol scanner; although it is a logically separate module, its operations are in fact performed by the same assembly program implementing the DFA. This means that, when generating the NetIL code for a state, we link together both the code that implements the automaton and the code that handles the encapsulations.
V. EXPERIMENTAL RESULTS
The algorithm presented in this paper has been validated with two types of tests: the former evaluates the time required to create the FSA and generate the machine-dependent code that recognizes packets matching the filter, while the latter addresses the performance of the generated code at run-time. Tests have been performed on a workstation with 4 GiB of memory, an Intel E8400 CPU @ 3.00 GHz and Ubuntu 10.04 (kernel 2.6.32-38-generic, 64 bit). All tests were executed with the nbeedump tool, which used the NetPFL filters shown in Table I and the NetPDL protocol database shown in Figure 2 (including dashed lines and protocols\(^7\)).
It is worth noting that, to the best of the authors’ knowledge, no other filtering languages exist that allow users to specify filters based on protocol encapsulations. For instance, neither libpcap [6], which represents the foundation of many packet filtering tools (e.g., tcpdump, Wireshark), nor the display filters [7] implemented in Wireshark (which replace the basic filtering capabilities of libpcap when packets have to be shown on screen) support filtering based on protocol encapsulation rules. As a consequence, we cannot compare the performance of our implementation with other competitors. However, we took care of obtaining our results by using a framework that has already been proved to be at least equivalent to the state of the art in this field [8].
A. Compilation time
This test evaluates the filter compilation time, i.e. the time required for generating the actual x64 assembly code
\(^7\)Actually, the original NetPDL database available on the nbee.org web site accounts for more than one hundred protocols; we used a reduced database for the sake of clarity.
implementing a specific NetPFL filter. This process includes an initial step represented by our algorithm, followed by a rather complex compilation and optimization process (part of the NetVM framework) before the final code emission. Our test measures the time spent by our algorithm with respect to the total code generation time. Each filter has been compiled one thousand times and the results averaged, obtaining the numbers depicted in Figure 11. Results show that the time required for creating the encapsulation DFA is negligible compared to the total generation time, for all the considered filters. Furthermore, the time needed to build the FSA decreases when the number of protocols that could match the initial eatall state in the FSA representing the filter is reduced. The reason is that our algorithm expands the initial state into most of the protocols defined in the database in use, and then prunes the unnecessary ones. As a consequence, NetPFL filters that explicitly mention startproto generate very compact filtering DFAs and represent the fastest generation case for our algorithm.
B. Filtering time
This test aims at evaluating the quality of the resulting filtering code (i.e., the x64 assembly program) executed on a set of real packets. The measurement covers only the time needed to check whether a packet satisfies the filter, without taking into account additional overhead such as reading packets from disk (or from the NIC). Each filter has been repeated one thousand times on each test packet and the results have been averaged. Since a single filter execution lasts a few nanoseconds, the measurement has been performed with the RDTSC instruction available in the Intel x64 instruction set.
Figure 12 shows the number of CPU ticks needed for the filters of Table I, applied to a first, simple packet and to a second one containing a tunnel involving the IP protocol; the precise encapsulations are written on the figure itself. As evident, the cost of an accepting filter (shown with the (a) on top of the bar in Figure 12) decreases when the filter is more specific, i.e., when it leaves less freedom to the protocols that may appear in a certain position of the packet. Furthermore, accepting filters require more time to analyze the complex packet because the encapsulation sequence of interest is matched later within the packet itself. Finally, filter #5 is so fast because it represents a condition that, according to the PEG in use, does not match any of our packets, which are then discarded almost immediately after a few checks.
VI. CONCLUSIONS
This paper presents an algorithm that enables the creation of efficient packet filters operating on protocol encapsulations, based on the header chains defined in the NetPFL filtering string. This capability is important for the many network tools based on packet filtering that will be deployed in the near future, which will need to support multiple encapsulations; it also enables the efficient selection of exactly the traffic we want, given the growing amount of packets that use very complex encapsulations.
Although we cannot compare our performance with other systems (as we are not aware of other software supporting filtering on protocol encapsulations), our preliminary results seem to confirm that the generated filters can be extremely efficient, suggesting that our NetBee library could be used as a foundation for more sophisticated packet capturing tools that require fast and flexible traffic filtering.
Future work includes the extension of our algorithm to support other features defined in the NetPFL language, such as the capability to specify rules based on a specific instance of a protocol. The filter tcp in ip%, which matches when the ip-tcp encapsulation refers to the second instance of the IP protocol, represents a possible example.
REFERENCES
PDSim: Planning Domain Simulation and Animation with the Unity Game Engine
Emanuele De Pellegrin, Ronald P. A. Petrick
Edinburgh Centre for Robotics
Heriot-Watt University
Edinburgh, Scotland, United Kingdom
ed50@hw.ac.uk, R.Petrick@hw.ac.uk
Abstract
Modelling planning domains that are correct and robust can be a challenging problem, especially in real-world domains. This paper presents an overview of the current state of the Planning Domain Simulation (PDSim) project, an asset for the Unity game engine to simulate plans in a 2D or 3D environment with custom animations and graphics effects. PDSim aims to provide an intuitive tool for users to define animations without the need to learn a new scripting language, using the Unity game engine's internal and industry-standard visual scripting language in order to quickly evaluate the validity of planning models. PDSim fills an important gap in the area of planning simulation and validation: simulating a planning problem using 3D or 2D graphics and animation techniques can help the user to quickly evaluate the quality and correctness of a plan, and improve the design of a planning domain and problem. This paper presents the current state of PDSim development and future plans for the project.
Introduction
The task of modelling planning domains that are both correct and robust can be a challenging problem, especially in real-world domains. For instance, consider the following robot planning task: a set of robots are deployed in a factory to help with the warehouse logistics. The robots can navigate on a predefined grid map with simple 4-way movements, pick up and drop boxes, and deliver objects to a van parked in the warehouse. The problem also imposes a few limitations: the robots cannot cross each other, and the vans can only accept a specific box.
The above problem could be viewed as a slightly modified version of the sequential Floor Tile domain from the 2011 International Planning Competition (IPC):\(^1\) a real-world inspired problem that can be modelled using a representation language such as PDDL (McDermott et al. 1998) and solved with classical automated planning techniques. The grid could be modelled as a set of interconnected nodes representing locations in the warehouse for objects and agents (e.g., vans, boxes, and robots), as illustrated in Figure 1. A trivial example of a goal might be to ensure that particular objects are in specific locations, e.g., box1 is in van1. Using this domain model, we can quickly find a valid solution to the problem. For instance, Figure 2 (left) shows a plan generated by the FastDownward planner (Helmert 2006) for the problem in Figure 1, where a robot moves to grid cell (1,0) to pick up the box before delivering it to the van at (0,0).
Figure 2 (right) shows an alternative action sequence, generated using an incorrect version of the domain. Although the plan is similar to the one on the left, it is incorrect: the robot executes the pickup action when in grid cell (0,0) before loading the van. (This plan is the result of a missing precondition on the pickup action which normally ensures that the robot and object are in the same cell.) While this kind of error can be trivial to debug and correct by an expert knowledge engineer, this isn’t always the case for students and newcomers to languages such as PDDL. Catching modelling errors (i.e., incorrect logic in action preconditions and effects, missed predicates in an init block, etc.) can still be difficult due to the complexity of the knowledge that needs to be specified and the level of abstraction that is often required for ensuring the generation of tractable solutions.
In this paper, we present the current state of the Planning Domain Simulation (PDSim) (De Pellegrin 2020; De Pellegrin and Petrick 2021, 2022) system, a framework for visualising and simulating classical and temporal planning problems. PDDL is used to define the domain knowledge and the problem formulation (e.g., planner requirements, language models used in the domain such as types and objects, plus standard definitions of the domain and problem). A planner then uses this information to check that a solution exists and to generate a plan that satisfies the goal. Using the generated plan, PDSim interprets the action effects as 3D animations and graphics effects to deliver a visual representation of the world and its actions during plan execution and aid the user in assessing the validity of the plan during execution.
While several tools do exist for validating planning models (e.g., plan validation tools like VAL (Howey and Long 2003) and formal plan verification methods such as (Bensalem, Havelund, and Orlandini 2014; Cimatti, Micheli, and Roveri 2017; Hill, Komendantskaya, and Petrick 2020)), approaches based on visual simulation and visual feedback can also play an important role in addressing the problem of correctly modelling planning domains. Visual tools can serve as powerful environments for displaying, inspecting, and simulating the planning process, which can aid in plan explainability for human users (Fox, Long, and Magazzeni 2017).
In this paper, we describe the aims, structure, and core components of PDSim that are responsible for providing visualisations, and illustrate how PDSim can be used to simulate planning problems. PDSim is built by extending the Unity game engine editor (Unity Technologies 2022) and uses the components offered by the engine such as a path planner, lighting system, and scene management, among others. The system uses a backend server that is responsible for parsing PDDL files and managing plan generation, providing support for a wide range of PDDL language features (such as typing, temporal actions, action cost, etc.). This paper provides a comprehensive description of PDSim, extending earlier versions of the system.
The rest of the paper is organised as follows. First, we review work related to planning problem visualisation and verification. We then describe the structure and main components of PDSim, providing examples of their use in practice by illustrating a number of planning domains. Finally, we conclude with future work and planned additions to PDSim.
### Related Work
PDSim (De Pellegrin and Petrick 2022) is part of the small ecosystem of simulators for automated planning which use visual cues and animations to translate the output of a plan into a 3D environment. The closest approach to ours is Planimation (Chen et al. 2020), which uses Unity as the front-end engine to display objects and animate their position while following a given plan.
The Logic Planning Simulator (LPS) (Tapia, San Segundo, and Artieda 2015) also provides a planning simulation system that represents PDDL objects with 3D models in a user-customisable environment. The approach is integrated with a SAT-based planner and a user interface that enables plan execution to be simulated while visualising updates to the world state and individual PDDL properties in the 3D environment. LPS is not based on Unity but provides the user with a simple interface for plan visualisation. Several user-specified files are also required to define 3D object meshes, the relationship between PDDL elements and 3D objects, and the specific animation effects.
vPlanSim (Roberts et al. 2021) is a similar application that also aims to provide a 3D visualization of a plan, but with a number of important differences. While vPlanSim offers a simple and fast custom graphical environment for creating plan simulations with few dependencies, PDSim uses the Unity game engine to offer the user industry-standard tools for creating realistic scenarios. PDSim also provides a language-agnostic tool to set up simulations which is key for users who are not familiar with PDDL and Unity.
Several systems also exist to help users formalise planning domains and problems through user-friendly interfaces. For instance, GIPO (Simpson, Kitchin, and McCluskey 2007), ItSimple (Vaquero et al. 2007) and VIZ (Vodrážka and Chrpa 2010) use graphical illustrations of the domain and problem elements, removing the requirement of PDDL language knowledge, to help new users approach planning domain modelling for the first time. Other tools such as Web Planner (Magnaguagno et al. 2017) and Planning.Domains (Muise 2016) use Gantt charts or tree-like visualisation methods to illustrate generated plans and the state spaces searched by a particular planning algorithm. PlanCurves (Le Bras et al. 2020) uses a novel interface based on time curves (Bach et al. 2015) to display timeline-based multiagent temporal plans distorted to illustrate the similarity between states. All of these tools attempt to assist users in understanding how a plan is generated and to help detect potential errors in the modelling process.
Simulators are also prevalent in robotics applications, and multiple systems make use of game engines to provide virtual environments, such as MORSE (Echeverria et al. 2011) or Drone Sim Lab (Ganoni and Mukundan 2017). Game engines also offer several benefits such as multiple rendering cameras, physics engines, realistic post-processing effects, and audio engines, without the need to implement these features from scratch (Ganoni and Mukundan 2017), making them desirable tools for simulation. For example, Unity has been used as a tool for data visualisation, architectural prototypes, robotics simulation (Green et al. 2020), and synthetic data generation for computer vision (James Fort and Davis 2021) and machine learning applications (Haas 2014; Craighead, Burke, and Murphy 2008). There are also interesting use cases of Unity related to AI and planning, including the Unity AI Planner, an integrated planner being created by Unity as a component for developing AI solutions for videogames, and Unity’s machine learning agents, a solution for training and displaying agents whose behaviour is driven by an external machine learning component.
Automated Planning Background
Automated planning involves reasoning about a set of actions in order to construct a plan (usually a sequence of actions) that achieves a goal from an initial state (Ghallab, Nau, and Traverso 2004; Haslum et al. 2019). Planning is often thought of as a search through a state space where actions provide a transition system between states.
PDDL (McDermott et al. 1998) provides a standard language for modelling planning problems, by specifying a representation for properties, actions, initial states, and goals (among other features). PDDL splits the planning problem into two parts: the domain, which defines the state properties and the actions; and the problem, which defines the initial state and the goal. A state is a set of all the properties in the planning problem, describing the conditions of the objects or agents at some point in time (Ghallab, Nau, and Traverso 2004). PDDL uses predicates to represent fluents that can be true or false in a state. Predicates are typically parametrized with a set of variable arguments that could be replaced by objects in the problem. For example,
\[ \text{at} (\text{robot}, \text{location}) \]
might be used to describe the current location of a robot.
PDDL actions are formalised following a defined schema that specifies the parameters, preconditions, and the effects of each action, as in Figure 3. The preconditions specify the conditions required to perform the action, while the effects describe the changes to the state after an action is performed. Together, the actions capture the state transitions that are possible in the problem. For example, Figure 4 represents the PDDL action for picking up a box in the custom warehouse domain. The action has three parameters: \( ?r \), the robot, \( ?b \), the box, and \( ?c \), the cell. The precondition specifies the robot must be located at the cell \( ?c \), the robot must be empty (i.e., not carrying any boxes), and the box \( ?b \) must be located at the same cell. The effect of the action states that \( ?r \) will no longer be empty, the box will no longer be at the cell \( ?c \), and the robot will have the box \( ?b \).
```
(:action pick-up-box
  :parameters (?r - robot ?b - box ?c - cell)
  :precondition (and (robot-at ?r ?c)
                     (robot-empty ?r)
                     (box-at ?b ?c))
  :effect (and (not (robot-empty ?r))
               (not (box-at ?b ?c))
               (robot-has ?r ?b)))
```
Figure 4: PDDL action for the custom warehouse domain.
Figure 3: PDDL action representation.
Figure 5: Plan example for Robot warehouse.
Figure 6: High-level PDSim system architecture.
PDSim helps visualise these textual representations with animations.
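The semantics of this action can be sketched in plain Python (an illustrative STRIPS-style model with hypothetical names, not PDSim or planner code): an action is applicable when its preconditions hold in the current state, and applying it deletes and adds fluents.

```python
# Illustrative STRIPS-style model of the grounded pick-up-box action.
# Fluents are tuples; a state is a set of fluents.

def applicable(state, pre):
    """An action is applicable when every precondition fluent holds."""
    return pre <= state

def apply_effects(state, add, delete):
    """Effects first delete fluents, then add new ones."""
    return (state - delete) | add

# Grounded instance: robot r1 and box b1 are both at cell c10.
state = {("robot-at", "r1", "c10"), ("robot-empty", "r1"),
         ("box-at", "b1", "c10")}

pre    = {("robot-at", "r1", "c10"), ("robot-empty", "r1"),
          ("box-at", "b1", "c10")}
delete = {("robot-empty", "r1"), ("box-at", "b1", "c10")}
add    = {("robot-has", "r1", "b1")}

assert applicable(state, pre)
state = apply_effects(state, add, delete)
# state now contains robot-has(r1, b1) and no longer box-at(b1, c10)
```

Note how dropping the `(box-at ?b ?c)` precondition, as in the incorrect warehouse domain from the introduction, would make the action applicable even when the robot and box are in different cells.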
An automated planner can use the PDDL problem representation to generate a plan by choosing actions to sequence together so that the plan achieves the goal from the initial state. For instance, Figure 5 shows a plan for a robot to achieve a goal in a warehouse problem. PDSim uses the PDDL problem representation in order to define and animate several aspects of its visualisation, as described below.
\(^1\)Unity AI Planner: https://docs.unity3d.com/Packages/com.Unity.ai.planner@0.0/manual/index.html
\(^2\)Machine Learning Agents: https://github.com/Unity-Technologies/ml-agents
PDSim System Architecture
The high-level structure of the PDSim system is shown in Figure 6. The PDSim system can be imported into Unity3D as a common asset, where the Unity editor interface is used to interact with PDSim components, such as setting the simulation scene, creating animations, or importing 3D or 2D models. PDSim also relies on a Python backend implementation, which is used to parse PDDL files and generate plans. A PDSim simulation is initialised and handled by the backend server running the Unified Planning Framework (UPF; see below), which is responsible for parsing and building a JSON representation of the planning model and running a user-defined planner (defaulting to FastDownward) to generate a plan. UPF is a planner-agnostic framework for Python, which increases PDSim’s modularity and lets users select their preferred planner implementation, separating it from the simulation stage itself which comes later in the process. We describe the major components of PDSim below.
PDSim Components
Several PDDL components are key to simulating a planning problem, including: predicates, actions, types (not mandatory), and initial values. A PDDL domain file is used to build the core elements and the animations for the simulation. The types and objects define SimulationObjects, the visual aspect of the simulation in Unity: 3D models or 2D sprites. PDDL’s predicates are used to define the 2D/3D animations using the visual scripting option that Unity offers. This visual scripting language is used to define common transformation operations, path planning, audio emission, particle effects, etc.
For example, Figure 7 shows an animation definition for the earlier Warehouse planning problem, for a predicate that captures the movement of the robot position from the current grid to an adjacent cell. Action effects are the animated components, where every predicate in the effects list that has an associated animation graph will execute an animation at simulation time. Finally, the problem’s initial values are used during simulation time to set up the scene. Similar to the animation effects, all the grounded values are animated if the predicates are associated with an animation.
Backend System
PDSim’s backend system is a Python server that communicates with the Unity editor and supports communication between the planning and animation components of the system. In particular, Figure 9 shows the workflow executed by the system when the user wants to create a new simulation. The user interacts with PDSim using the Unity editor to specify the planning domain and problem files. Unity tries to connect to the backend server by submitting a request using these files. The planner interface can use a local planner, such as FastDownward (Helmert 2006) (the default planner), or the planning web service offered by Planning.Domains (Muise 2016). If either the parsing or planning actions fail, the interface will warn the user of the error.
PDDL domain and problem elements are converted to a JSON representation and sent back to Unity, which will create objects and animations that can be customised by the user. Domain entities are used to set up the core Unity simulation. For instance, Figure 7 shows the JSON code used to establish the internal definitions of actions, types, and predicates for the logistics domain. Problem entities are used to set up a Unity-level scene, as in Figure 8. Once these components are defined, the user can customise them using the Unity editor, for instance configuring multiple problems for the same domain, or multiple simulations for different plans.
At the technical level, communication between PDSim’s backend server and Unity is provided by the ZeroMQ networking library,\(^5\) in particular the Python implementation package pyzmq\(^6\) on the server side and the C# implementation netMQ\(^7\) on the Unity side.
Unified Planning Framework (UPF)
PDSim’s backend system wraps the functionality of the Unified Planning Framework (UPF) as the main tool for manipulating and solving planning problems in PDSim. UPF is a Python library provided by the AIPlan4EU project\(^8\) that aims to simplify the use of automated planning tools for AI application development. UPF attempts to standardize aspects of the planning process, making it accessible to users of any level of expertise. In particular, it offers a well-developed PDDL parser and a standard interface for communicating with external planners. Integration with UPF enables the PDSim system to take advantage of these features and any future updates that UPF may provide.

---
\(^5\)https://zeromq.org/
\(^6\)https://pypi.org/project/pyzmq/
\(^7\)https://github.com/zeromq/netmq/
\(^8\)https://www.aiplan4eu-project.eu/
Planning Domain Simulation in Unity
Unity (Unity Technologies 2022) is a popular state-of-the-art game engine used for building 3D projects across a range of diverse applications. In PDSim, Unity provides the frontend interface and is responsible for handling all of the 2D/3D graphics and animations related to the simulation.
One of the fundamental design concepts used by Unity is the idea of composition, which means that an object can be composed of different types of objects. In particular, Unity’s component system provides the capability for every object in a Unity scene to be assigned custom scripts or modules, such as a rigid body for the physics simulation, a collision volume, an audio source, etc. Every object in Unity can also be scripted using the C# language, meaning that an object can have a user-defined behaviour in the scene. For example, an object can respond to user inputs from a mouse or keyboard, or can be translated, rotated and scaled, or have its colour changed, based on conditional events. Object scripting in Unity is key to the modularity of the simulation, especially for the custom representation of PDDL elements.
Scripting can also be applied to the editor window, where users interact with the engine and where it is possible to set the properties of the objects in the scene by using Unity’s user interface. PDSim makes heavy use of all the features provided by Unity, such as the Visual Scripting Language used to create animations and events. As a result, users do not need to learn a new language for developing animations, and animation graphs can be modified on-the-fly without waiting for scripts to be recompiled.
PDDL to PDSim
To use PDDL in the Unity game engine, the PDDL specification must be translated into a format usable within the Unity environment. This involves creating custom C# classes and objects that represent the knowledge specified in the PDDL domain. In the warehouse example, the PDDL specification includes a set of objects such as robots, boxes, and vans, so corresponding C# classes must be created to represent them. These classes include properties for the various attributes of the objects, such as their transform (position, rotation, and scale), colour, and other relevant visualisation features, as well as methods to operate on the objects and update their attributes. Similarly, PDDL predicates and actions are mapped, using C# dictionaries, to animations that visualise a change in the world state.
Simulation Objects
A PDDL type in PDSim is represented by a simulation object, a structure that shares similar information for all the objects defined in a planning problem. A simulation object is defined by two main components: models and control points. Models are used to visually represent the object type in the virtual world (e.g., block, airport, player, robot, etc.). These can be 3D meshes or 2D textured sprites that can be imported in the Unity editor. A user can add as many models as they like. A collision box that wraps all the models is automatically calculated to be used later in the simulation to detect the interaction with the user inputs and the collisions calculated by the physics engine. Control points are 3D vectors that represent particular points of interest in the object type representation (e.g., the cardinal points of an object, a point that represents the arm position of an agent, etc.).
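The data carried by a simulation object can be sketched as follows (class and field names are our own illustration, not PDSim's actual C# types); the combined collision box is the axis-aligned union of the per-model bounds:

```python
# Illustrative sketch of a simulation object: visual models plus named
# control points, with an automatically computed wrapping collision box.
from dataclasses import dataclass, field

@dataclass
class Model:
    name: str
    bounds: tuple  # axis-aligned bounding box as (min_xyz, max_xyz)

@dataclass
class SimulationObject:
    pddl_type: str
    models: list = field(default_factory=list)
    control_points: dict = field(default_factory=dict)  # name -> (x, y, z)

    def collision_box(self):
        """Combined box wrapping all of the object's models."""
        mins = tuple(min(m.bounds[0][i] for m in self.models) for i in range(3))
        maxs = tuple(max(m.bounds[1][i] for m in self.models) for i in range(3))
        return mins, maxs

robot = SimulationObject(
    "robot",
    models=[Model("body", ((0, 0, 0), (1, 2, 1))),
            Model("arm",  ((1, 1, 0), (2, 2, 1)))],
    control_points={"grip": (1.5, 2.0, 0.5)})
# Combined collision box: ((0, 0, 0), (2, 2, 1))
```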
Animations
One of the most important aspects of PDSim is the visual script animation system. As shown in Figure 10, users can create their own particular behaviour in the virtual scene for every predicate they want to animate. The example shows a simple translation animation from an object position to a target position. In particular, the example shows one of the custom animation nodes developed in PDSim to help simplify the creation of animations for new users. Every predicate in an action’s effect can have one of these graphs linked to it, and every graph comes with an EffectEvent that is invoked during plan simulation with the corresponding objects from the Unity scene (i.e., the objects in the plan’s action).
To simplify the development of new animations, and to help new users with visual scripting, a set of predefined animation nodes has been created which cover a number of useful simulation cases that frequently arise, such as:
1. **TranslateTo**: An animation for moving a particular object in the scene to a specific point in the world or to another object position.
2. **RotateTo**: An animation to rotate a particular object in the scene to an angle or to look at another object in the world.
3. **PathTo**: An animation for moving an object using Unity’s path planning system.
4. **Spawn**: A node to instantiate an object (i.e., a 3D mesh) in the scene.
5. **GetObjectFromScene**: A node mainly used to access the components added by the user that aren’t part of the PDDL definition.
These predefined animation nodes aim to reduce the size and complexity of the scripting graph, and simplify the animation process for new Unity users. From a technical point of view, these animations use Unity Coroutines that allow the user to write functions that can run concurrently in the main Unity thread and be suspended or resumed either by user choice or if a condition is met.
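The coroutine pattern can be mimicked with Python generators (an illustrative analogue only; PDSim's animations are C# Unity Coroutines): the animation yields one step per frame, and the driver can suspend or resume it simply by withholding or resuming calls to `next()`.

```python
# Generator-based analogue of a TranslateTo coroutine (illustrative only).
def translate_to(start, target, frames):
    """Yield one linearly interpolated position per frame, handing control
    back to the driving loop at each yield, like a Unity coroutine."""
    for f in range(1, frames + 1):
        t = f / frames
        yield tuple(s + (e - s) * t for s, e in zip(start, target))

# The driver (the engine loop) advances the coroutine one step per frame.
anim = translate_to((0.0, 0.0, 0.0), (4.0, 0.0, 2.0), frames=4)
positions = list(anim)
# The final yielded position equals the target: (4.0, 0.0, 2.0)
```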
Simulation Manager
Figure 11 shows a diagram of the simulation manager, a component that handles simulations on the front-end side.
```json
{ "status": "OK",
  "plan": [
    { "action": "pick-up",
      "attributes": [ "b" ]
    },
    { "action": "stack",
      "attributes": [ "b", "a" ]
    },
    { "action": "pick-up",
      "attributes": [ "c" ]
    }, ...
  ]
}
```
Figure 12: A parsed plan in JSON format.
The main responsibilities of the simulation manager include:
- Starting or pausing the simulation (during plan time),
- Holding references of all the PDDL descriptors in the scene (predicates, actions, objects),
- Holding references of all animation graphs defined by the user,
- Keeping track of the existing types (if defined) in the domain file, and
- Sending requests to the backend server to update or initialize the PDDL representation.
If types are specified in the PDDL domain file, then the simulation manager creates simulation object blueprints for all the leaf types of the type tree that is built when the domain is parsed for the first time. These types are replicated for each object in the PDDL problem file that matches the particular type, using the user configuration of simulation objects, as described above.
The simulation manager is initialised using the JSON data from the backend server containing the PDDL elements and the representation of the plan, as shown in Figure 12, using a plan for a Blocks World domain problem. Every action effect will have an associated list of animation graphs representing the effect of the PDDL action. The simulation manager will execute the animations using the attributes in the plan representing the simulation objects involved in the simulation of that action. As the first step in every simulation, PDDL’s init block is animated. Init represents the starting state of a planning problem and is defined by a list of fluents describing the current state of the world. These fluents are represented in the form of fluent_name(arguments) where the arguments are the objects that are present in the environment. The simulation manager will publish events with the corresponding fluent name and objects from the simulation scene that will be used by the visual scripting language to map which animation to execute and the graphical objects to use. The same process is then repeated for every action effect from the plan.
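The dispatch described above can be sketched with the standard library (all names here are illustrative, not PDSim's implementation): parse a plan in the shape of Figure 12 and route each action to a registered animation handler.

```python
import json

# Registry mapping action names to animation callbacks (illustrative).
handlers = {}

def on(action):
    """Decorator that registers an animation handler for an action name."""
    def register(fn):
        handlers[action] = fn
        return fn
    return register

@on("pick-up")
def animate_pick_up(attrs):
    return f"lift {attrs[0]}"

@on("stack")
def animate_stack(attrs):
    return f"place {attrs[0]} on {attrs[1]}"

# A plan payload in the shape of Figure 12.
payload = json.loads("""
{ "status": "OK",
  "plan": [
    { "action": "pick-up", "attributes": ["b"] },
    { "action": "stack",   "attributes": ["b", "a"] }
  ] }
""")

# Execute one animation event per plan step, in order.
events = [handlers[step["action"]](step["attributes"])
          for step in payload["plan"]]
# events == ["lift b", "place b on a"]
```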
Planning Domain Examples
PDSim has been tested using the published benchmark domains from the International Planning Competition (IPC).\(^9\)
We illustrate the capabilities of PDSim to simulate planning problems using the Blocks World, Logistics, and Sokoban domains, plus the custom Warehouse domain introduced earlier in the paper.
**Blocks World:** The Blocks World domain (IPC 2000) is one of the simplest planning domains: blocks can be stacked on top of each other and only one block can be picked, moved, and dropped at a time. The goal is achieved when the specified stack sequence is reproduced.
Figure 13 shows an example of a Blocks World action sequence being simulated in PDSim. The three snapshots represent different steps in a plan and demonstrate the interaction with objects in the scene. The user interface shows the transition of fluents that describe an object after the action effects are applied during plan execution. In the example, the object $d$ starts with the fluent *on table* describing the initial condition of the problem for that object. The third image in Figure 13 shows how the fluents change after the plan has finished executing.
**Logistics:** The Logistics domain (IPC 2000) describes a problem involving packages that need to be transported between cities using an aeroplane, and within cities using trucks. This domain steps up the complexity of the simulation environment while keeping simple definitions of predicates and actions (e.g., the predicates InCity, At, In are used to respectively describe if a location is inside a city, if an object is in a location, and if a package is in a vehicle). Figure 15 shows a snapshot of the Logistics simulation, highlighting the animation of boxes and aeroplanes.
**Sokoban:** The Sokoban domain (IPC 2008) describes the Sokoban game problem,\(^10\) where a player needs to move an object to a predefined goal on a grid. Figure 16 illustrates a typical problem level for Sokoban with a player and stone that need to be moved. This domain adds to the complexity of the previous example, illustrating the functionality of this simulation in Unity and its ability to rapidly provide an in-game agent.
**Custom Warehouse Domain:** To demonstrate the use of user-defined domain models, the warehouse planning problem presented in the introduction has been simulated in PDSim as illustrated in Figure 14. Both correct and incorrect domain models have been tested with PDSim, showing the robot picking up the box in the correct cell position and executing the non-intuitive action of picking up the box from the wrong cell (as represented in the images).
**Real World Robot Application:** PDSim has been used as part of a robotics project involving robots acting in a human environment. Figure 17 shows the simulation for a custom planning problem involving a robot operating in a flat equipped with different types of sensors. PDSim can simulate the change in the environment state (as shown in the bottom image).

**Discussion**
In general, PDSim offers a powerful and flexible framework for visualising planning problems using a state-of-the-art graphical engine. More specifically, PDSim aims to fill a gap in current systems that provide plan simulations, by offering users a simplified environment to develop 3D or 2D simulations, compared with current approaches that come with the overhead of learning and using an ad hoc scripting language to interact with a custom simulator (Tapia, San Segundo, and Artieda 2015; Chen et al. 2020; Roberts et al. 2021). PDSim is designed as a support system for automated planning by providing intuitive tools to interface with a plan solution. One main limitation of PDSim is the lack of an automated and formal validation tool for planning, which could be integrated in the future. However, our approach offers a practical and human-centred way to check if a plan is valid and can be executed (Howey and Long 2003), using a graphical engine to gamify the process of validating a planning solution. Approaches like (Le Bras et al. 2020; Fox, Long, and Magazzeni 2017) also suggest that answering the question of why an action has been successfully executed or has failed further increases the explainability of a plan. In this context, PDSim provides intuitive hints about possible errors using visual cues, by displaying an interface with the transitions of each action and how they modify the state of a particular object or agent. It is important to reiterate, however, that PDSim is primarily aimed at planning-agnostic users like students. Within this group, as (Chen et al. 2020) indicates, there is a difference between the mental model the user has of the planning problem and the actual implementation. In fact, PDDL is often approached by beginners as a traditional programming language, rather than a knowledge definition language. With this in mind, PDSim aims to simplify the learning curve of PDDL by assisting with components that provide information about the state of planning entities in real time.
---
\(^9\)https://github.com/potassco/pddl-instances
\(^10\)https://en.wikipedia.org/wiki/Sokoban
Conclusion and Future Work
This paper presented the structure and operation of PDSim, a simulation system for animating PDDL-based planning domains and plans. PDSim uses the Unity game engine to animate PDDL predicates through an action’s effects, by linking a visual scripting graph to each of them. The user can modify and customise the behaviour by effectively programming an animation from scratch or by quickly using the high-level animations provided. Created simulations can be exported as an executable and, given the cross-platform nature of Unity, all major operating systems can be targeted (e.g., Windows, Mac, Linux).
As future work, we plan to introduce a more intuitive way to create and modify the knowledge model, using the same visual scripting paradigm, thus completely removing the need to know the PDDL language syntax. This will be used internally together with an in-engine planner that the user can interact with at planning time to change object properties and replan on the fly. Given the close relationship between PDSim and Unity, it will also be possible to use applications such as extended reality (XR) to interact with the plan. Another planned direction for PDSim is to include extensions for visualising the current state of an agent's knowledge and beliefs to support epistemic planning, allowing visualisations to be generated from different agent perspectives. Finally, we plan to evaluate PDSim in an educational setting with a group of students in an introductory AI course, gathering feedback about its overall helpfulness and ease of use as a development aid for learning automated planning, compared with the standard approach to planning with PDDL.
## Table of Contents
- Introduction
  - What is this article about?
  - Who should read it?
- Generator Basics
  - What is a generator?
  - What is a sequence?
  - Where are generators stored?
  - What is the maximum value of a generator?
  - How many generators are available in one database?
- Generators and transactions
- SQL statements for generators
  - Statement overview
  - Use of generator statements
- Using generators to create unique row IDs
  - Why row IDs at all?
  - One for all or one for each?
  - Can you re-use generator values?
  - Generators for IDs or auto-increment fields
- What else to do with generators
  - Using generators to give e.g. transfer files unique numbers
  - Generators as “usage counters” for SPs to provide basic statistics
  - Generators to simulate “Select count(*) from...”
  - Generators to monitor and/or control long-running Stored Procedures
- Appendix A: Document history
- Appendix B: License notice
Introduction
What is this article about?
This article explains what Firebird generators are, and how and why you should use them. It is an attempt to collect all relevant information about generators in a single document.
Who should read it?
Read this article if you...
• are not familiar with the concept of generators;
• have questions on using them;
• want to make an Integer column behave like an “AutoInc” field as found in other RDBMSs;
• are looking for examples on how to use generators for IDs or other tasks;
• want to know the Firebird word for a “sequence” in Oracle.
Generator Basics
What is a generator?
Think of a generator as a “thread-safe” integer counter that lives inside a Firebird database. You can create one by giving it a name:
```sql
CREATE GENERATOR GenTest;
```
Then you can get its current value and increase or decrease it just like a “var i:integer” in Delphi, but it is not always easy to “predictably” set it directly to a certain value and then obtain that same value – it’s inside the database, but outside of transaction control.
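For instance, using the GenTest generator created above (the GEN_ID function shown here is covered in detail in the section on generator statements):

```sql
-- Read the current value without changing it (increment 0):
SELECT GEN_ID(GenTest, 0) FROM RDB$DATABASE;

-- Increment by 1 and return the new value:
SELECT GEN_ID(GenTest, 1) FROM RDB$DATABASE;
```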
What is a sequence?
“Sequence” is the official SQL term for what Firebird calls a generator. Because Firebird is constantly striving for better SQL compliance, the term SEQUENCE can be used as a synonym for GENERATOR in Firebird 2 and up. In fact it is recommended that you use the SEQUENCE syntax in new code.
Although the word “sequence” puts the emphasis on the series of values generated whereas “generator” seems to refer primarily to the factory that produces these values, there is no difference at all between a Firebird generator
and a sequence. They are just two words for the same database object. You can create a generator and access it using the sequence syntax, and vice versa.
This is the preferred syntax for creating a generator/sequence in Firebird 2:
```sql
CREATE SEQUENCE SeqTest;
```
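Because a generator and a sequence are one and the same object, the two syntaxes can be mixed freely. A quick sketch, using a hypothetical SeqDemo sequence:

```sql
CREATE SEQUENCE SeqDemo;

-- Either syntax can access the same object:
SELECT GEN_ID(SeqDemo, 0) FROM RDB$DATABASE;        -- reads the current value
SELECT NEXT VALUE FOR SeqDemo FROM RDB$DATABASE;    -- increments by 1 and returns the new value
```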
**Where are generators stored?**
Generator declarations are stored in the RDB$GENERATORS system table. Their values however are stored in special reserved pages inside the database. You never touch those values directly; you access them by means of built-in functions and statements which will be discussed later on in this guide.
**Warning**
The information provided in this section is for educational purposes only. As a general rule, you should leave system tables alone. Don’t attempt to create or alter generators by writing to RDB$GENERATORS. (A SELECT won’t hurt though.)
The structure of the RDB$GENERATORS system table is as follows:
- RDB$GENERATOR_NAME CHAR(31)
- RDB$GENERATOR_ID SMALLINT
- RDB$SYSTEM_FLAG SMALLINT
And, as from Firebird 2.0:
- RDB$DESCRIPTION BLOB subtype TEXT
Note that the GENERATOR_ID is – as the name says – an IDentifier for each generator, not its value. Also, don’t let your applications store the ID for later use as a handle to the generator. Apart from this making no sense (the name is the handle), the ID may be changed after a backup-restore cycle. The SYSTEM_FLAG is 1 for generators used internally by the engine, and NULL or 0 for all those you created.
Now let’s have a look at the RDB$GENERATORS table, here with a single self-defined generator:
<table>
<thead>
<tr>
<th>RDB$GENERATOR_NAME</th>
<th>RDB$GENERATOR_ID</th>
<th>RDB$SYSTEM_FLAG</th>
</tr>
</thead>
<tbody>
<tr>
<td>RDB$SECURITY_CLASS</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>SQL$DEFAULT</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>RDB$PROCEDURES</td>
<td>3</td>
<td>1</td>
</tr>
<tr>
<td>RDB$EXCEPTIONS</td>
<td>4</td>
<td>1</td>
</tr>
<tr>
<td>RDB$CONSTRAINT_NAME</td>
<td>5</td>
<td>1</td>
</tr>
<tr>
<td>RDB$FIELD_NAME</td>
<td>6</td>
<td>1</td>
</tr>
<tr>
<td>RDB$INDEX_NAME</td>
<td>7</td>
<td>1</td>
</tr>
<tr>
<td>RDB$TRIGGER_NAME</td>
<td>8</td>
<td>1</td>
</tr>
<tr>
<td>MY_OWN_GENERATOR</td>
<td>9</td>
<td>NULL</td>
</tr>
</tbody>
</table>
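Since the SYSTEM_FLAG is NULL or 0 for user-created generators, a listing like the one above can be filtered down to only your own generators with an ordinary SELECT (which, as noted, is safe):

```sql
SELECT RDB$GENERATOR_NAME, RDB$GENERATOR_ID
FROM RDB$GENERATORS
WHERE COALESCE(RDB$SYSTEM_FLAG, 0) = 0;
```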
Firebird 2 notes
- Firebird 2 saw the introduction of an additional system generator, called RDB$BACKUP_HISTORY. It is used for the new NBackup facility.
- Even though the SEQUENCE syntax is preferred, the RDB$GENERATORS system table and its columns have not been renamed in Firebird 2.
What is the maximum value of a generator?
Generators store and return 64-bit values in all versions of Firebird. This gives us a value range of:
\[-2^{63} \ldots 2^{63}-1\]
So if you use a generator with starting value 0 to feed a NUMERIC(18) or BIGINT column (both types represent 64-bit integers), and you would insert 1000 rows per second, it would take around 300 million years (!) before it rolls over. As it is pretty unlikely mankind will still walk on this planet by then (and still use Firebird databases), that's nothing to be really worried about.
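The 300-million-year figure can be checked directly: at 1000 values per second,

\[ \frac{2^{63}}{1000 \times 86400 \times 365.25} \approx 2.9 \times 10^{8}\ \text{years} \]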
A word of warning though. Firebird speaks two SQL “dialects”: dialect 1 and dialect 3. New databases should always be created with dialect 3, which is more powerful in a number of respects. Dialect 1 is a compatibility dialect, to be used only for legacy databases that were first created under InterBase 5.6 or earlier.
One of the differences between the two is that dialect 1 has no native 64-bit integer type available. NUMERIC(18) columns for instance are stored internally as DOUBLE PRECISION, which is a floating point type. The biggest integer type in dialect 1 is the 32-bit INTEGER.
In dialect 1 as in dialect 3, generators are 64-bit. But if you assign the generated values to an INTEGER column in a dialect 1 database, they are truncated to the lower 32 bits, giving an effective range of:
\[-2^{31} \ldots 2^{31}-1\]
Although the generator itself would go on from 2,147,483,647 to 2,147,483,648 and beyond, the truncated value would wrap around at this point, giving the impression of a 32-bit generator.
In the situation described above, with 1000 inserts per second, the generator-fed column would now roll over after 25 days (!!!) and that is indeed something to have an eye on. \(2^{31}\) is a lot, but then again not that much depending on the situation.
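The 25-day figure can be checked in the same way: at 1000 inserts per second,

\[ \frac{2^{31}}{1000 \times 86400} \approx 24.9\ \text{days} \]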
Note
In dialect 3, if you assign generator values to an INTEGER field, all goes well as long as the values lie within the 32-bit range. But as soon as that range is exceeded, you get a numeric overflow error: dialect 3 is much stricter on range checking than dialect 1!
Client dialects and generator values
Clients talking to a Firebird server can set their dialect to 1 or 3, regardless of the database they are connected to. It is the client dialect, not the database dialect, that determines how Firebird passes generator values to the client:
• If the client dialect is 1, the server returns generator values as truncated 32-bit integers to the client. But inside the database they remain 64-bit values and they do not wrap after reaching \(2^{31}-1\) (even though it may look that way to the client). This is true both for dialect 1 and dialect 3 databases.
• If the client dialect is 3, the server passes the full 64-bit value to the client. Again, this holds whether the database dialect is 1 or 3.
**How many generators are available in one database?**
Since Firebird version 1.0, the number of generators you can have in a single database is limited only by the maximum assignable ID in the `RDB$GENERATORS` system table. Being a SMALLINT, this maximum is \(2^{15}-1\) or 32767. The first ID is always 1, so the total number of generators cannot exceed 32767. As discussed before, there are 8 or 9 system generators in the database, leaving room for at least 32758 of your own. This should be amply enough for any practical application. And since the number of generators you declare has no effect on performance, you can feel free to use as many generators as you like.
**Older InterBase and Firebird versions**
In the earliest pre-1.0 Firebird versions, as well as in InterBase, only one database page was used to store the generator values. Therefore, the number of available generators was limited by the page size of the database. The following table lists how many generators – including system generators – you can have in various InterBase and Firebird versions (thanks to Paul Reeves for providing the initial information):
<table>
<thead>
<tr>
<th>Version</th>
<th>Max. generators (1K page size)</th>
</tr>
</thead>
<tbody>
<tr>
<td>InterBase < v.6</td>
<td>247</td>
</tr>
<tr>
<td>IB 6 and early pre-1.0 Firebird</td>
<td>123</td>
</tr>
<tr>
<td>All later Firebird versions</td>
<td>32767, regardless of page size</td>
</tr>
</tbody>
</table>
In InterBase versions prior to 6, generators were only 32 bits wide. This explains why these older versions could store roughly twice the number of generators on the same page size.
**Warning**
InterBase, at least up to and including version 6.01, would happily let you “create” generators until the total number reached 32767. What happened if you accessed generators with an ID higher than the number given in the table above depended on the version:
• InterBase 6 would generate an “invalid block type” error because the calculated location lay outside the one page that was allocated to generators.
• In earlier versions, if the calculated location lay outside the database, an error would be returned. Otherwise, if the generator was only read (without increment), the value that just “happened to be” on the calculated spot was returned. If it was written to, it would overwrite data. This could sometimes lead to an immediate error, but most of the time it would just silently corrupt your database.
Generators and transactions
As said, generators live outside of transaction control. This simply means you cannot safely “rollback” generators inside a transaction. There may be other transactions executing at the same time that change the value while your transaction runs. So once you have requested a generator value, consider it as “gone forever”.
When you start a transaction and then call a generator and get a value of – let's say – 5, it will remain at that value even if you roll back the transaction (!). Don't even think of something like “OK, when I rollback, I can just do GEN_ID(mygen,-1) afterwards to set it back to 4”. This may work most of the time, but is unsafe because other concurrent transactions may have changed the value in between. For the same reason it doesn't make sense to get the current value with GEN_ID(mygen,0) and then increment the value on the client side.
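In other words, the only safe pattern is to request each value exactly once and accept the gaps that rollbacks leave behind:

```sql
-- Safe: take a fresh value and use it; if the transaction rolls back,
-- the value is simply lost (a gap), which is harmless.
SELECT GEN_ID(mygen, 1) FROM RDB$DATABASE;

-- Unsafe: "undoing" the increment can clobber a value that a concurrent
-- transaction obtained in the meantime.
-- SELECT GEN_ID(mygen, -1) FROM RDB$DATABASE;
```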
SQL statements for generators
Statement overview
The name of a generator must be a usual DB meta-identifier: 31 chars maximum, no special characters except the underscore “_” (unless you use quoted identifiers). The SQL commands and statements that apply to generators are listed below. Their use will be discussed in some detail in the section Use of generator statements.
DDL (Data Definition Language) statements:
CREATE GENERATOR <name>;
SET GENERATOR <name> TO <value>;
DROP GENERATOR <name>;
DML (Data Manipulation Language) statements in client SQL:
SELECT GEN_ID( <GeneratorName>, <increment> ) FROM RDB$DATABASE;
DML statements in PSQL (Procedural SQL, available in stored procedures and triggers):
<intvar> = GEN_ID( <GeneratorName>, <increment> );
Firebird 2 recommended syntax
Although the traditional syntax is still fully supported in Firebird 2, these are the recommended DDL equivalents:
CREATE SEQUENCE <name>;
ALTER SEQUENCE <name> RESTART WITH <value>;
DROP SEQUENCE <name>;
And for the DML statements:
SELECT NEXT VALUE FOR <SequenceName> FROM RDB$DATABASE;
<intvar> = NEXT VALUE FOR <SequenceName>;
Currently the new syntax does not support an increment other than 1. This limitation will be lifted in a future version. In the meantime, use GEN_ID if you need to apply another increment value.
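So, under Firebird 2, stepping a sequence by anything other than 1 still requires GEN_ID. For example, to advance the SeqTest sequence created earlier by 10:

```sql
SELECT GEN_ID(SeqTest, 10) FROM RDB$DATABASE;  -- advances the sequence by 10
```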
**Use of generator statements**
The availability of statements and functions depends on whether you use them in:
- **Client SQL** – The language you use when you, as a client, talk to a Firebird server.
- **PSQL** – The server-side programming language used in Firebird stored procedures and triggers.
**Creating a generator (“Insert”)**
**Client SQL**
```sql
CREATE GENERATOR <GeneratorName>;
```
Preferred for Firebird 2 and up:
```sql
CREATE SEQUENCE <SequenceName>;
```
**PSQL**
Not possible. Since you cannot change database metadata inside SPs or triggers, you cannot create generators there either.
**Note**
In FB 1.5 and up, you can circumvent this limitation with the EXECUTE STATEMENT feature.
**Getting the current value (“Select”)**
**Client SQL**
```sql
SELECT GEN_ID( <GeneratorName>, 0 ) FROM RDB$DATABASE;
```
This syntax is still the only option in Firebird 2.
**Note**
In Firebird's command-line tool *isql* there are two additional commands for retrieving current generator values:
```
SHOW GENERATOR <GeneratorName>;
SHOW GENERATORS;
```
The first reports the current value of the named generator. The second does the same for all non-system generators in the database.
The preferred Firebird 2 equivalents are, as you could guess:
```
SHOW SEQUENCE <SequenceName>;
SHOW SEQUENCES;
```
Please notice again that these *SHOW...* commands are only available in the Firebird *isql* tool. Unlike *GEN_ID*, you can't use them from within other clients (unless these clients are *isql* frontends).
**PSQL**
```
<intvar> = GEN_ID( <GeneratorName>, 0 );
```
Firebird 2: same syntax.
**Generating the next value (“Update” + “Select”)**
Just like getting the current value, this is done with *GEN_ID*, but now you use an increment value of 1. Firebird will:
1. get the current generator value;
2. increment it by 1;
3. return the incremented value.
**Client SQL**
```
SELECT GEN_ID( <GeneratorName>, 1 ) FROM RDB$DATABASE;
```
The new syntax, which is preferred for Firebird 2, is entirely different:
```
SELECT NEXT VALUE FOR <SequenceName> FROM RDB$DATABASE;
```
**PSQL**
```
<intvar> = GEN_ID( <GeneratorName>, 1 );
```
Preferred for Firebird 2 and up:
```
<intvar> = NEXT VALUE FOR <SequenceName>;
```
**Setting a generator directly to a certain value (“Update”)**
**Client SQL**
```
SET GENERATOR <GeneratorName> TO <NewValue>;
```
This is useful to preset generators to a value other than 0 (which is the default value after you created it) in e.g. a script to create the database. Just like CREATE GENERATOR, this is a DDL (not DML) statement.
Preferred syntax for Firebird 2 and up:
```
ALTER SEQUENCE <SequenceName> RESTART WITH <NewValue>;
```
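A typical database creation script combines creation and presetting, for example (GIDTEST is the example generator name also used later in this guide):

```sql
CREATE GENERATOR GIDTEST;
SET GENERATOR GIDTEST TO 1000;
-- The next call to GEN_ID(GIDTEST, 1) now returns 1001.
```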
**PSQL**
```
<intvar> = GEN_ID( <GeneratorName>, <NewValue> - GEN_ID( <GeneratorName>, 0 ) );
```
**Warning**
This is more of a dirty little trick to do what you normally cannot and should not do in SPs and triggers: *setting* generators. They are for *getting*, not *setting* values.
---
**Dropping a generator ("Delete")**
**Client SQL**
```
DROP GENERATOR <GeneratorName>;
```
Preferred for Firebird 2 and up:
```
DROP SEQUENCE <SequenceName>;
```
**PSQL**
Not possible, unless... (Same explanation as with Create: you can't – or rather, shouldn't – change metadata in PSQL.)
Dropping a generator does not free the space it occupied for use by a new generator. In practice this rarely hurts, because most databases don't have the tens of thousands of generators that Firebird allows, so there's bound to be room for more anyway. But if your database does risk to hit the 32767 ceiling, you can free up dead generator space by performing a backup-restore cycle. This will neatly pack the RDB$GENERATORS table, re-assigning a contiguous series of IDs. Depending on the situation, the restored database may also need less pages for the generator values.
**Dropping generators in old IB and Firebird versions**
InterBase 6 and earlier, as well as early pre-1.0 Firebird versions, do not have a DROP GENERATOR command. The only way to drop a generator in these versions is:
```
DELETE FROM RDB$GENERATORS WHERE RDB$GENERATOR_NAME = '<GeneratorName>';
```
...followed by a backup and restore.
In these versions, with the maximum number of generators typically a couple of hundred, it is much more likely that the need will arise to reuse space from deleted generators.
Using generators to create unique row IDs
Why row IDs at all?
The answer to this question would go far beyond the scope of this article. If you see no need to have a generic, unique “handle” for every row inside a table, or don't like the idea of “meaningless” or “surrogate” keys in general, you should probably skip this section...
One for all or one for each?
OK, so you want row IDs. (Author's note: congratulations! :-))
A major, basic decision to take is whether we'll use one single generator for all the tables, or one generator for each table. This is up to you – but take the following considerations into account.
With the “one for all” approach, you:
• + need only a single generator for all your IDs;
• + have one integer number that does not only identify your row within its table, but within the entire database;
• - have less possible ID values per table (this shouldn't really be a problem with 64bit generators...);
• - will soon have to deal with large ID values even in e.g. lookup tables with only a handful of records;
• - will likely see gaps in a per-table ID sequence, since generator values are spread throughout all tables.
With the “one for each” approach you:
• - have to create a generator for every single “ID'd” table in your database;
• - always need the combination of ID and table name to uniquely identify any row in any table;
• + have a simple and robust “insert counter” per table;
• + have a chronological sequence per table: if you find a gap in the ID sequence of a table, then it's caused either by a DELETE or by a failed INSERT.
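In DDL terms, the two approaches differ only in how many generators you create. A minimal sketch (all table and generator names here are hypothetical):

```sql
/* "one for each": a dedicated generator per ID'd table */
CREATE GENERATOR gen_CUSTOMERS_id;
CREATE GENERATOR gen_ORDERS_id;
CREATE GENERATOR gen_INVOICES_id;

/* "one for all": a single generator feeding every table's IDs */
CREATE GENERATOR gen_global_id;
```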
**Can you re-use generator values?**
Well – yes, technically you can. But – NO, you shouldn't. Never. Never ever. Not only would this destroy the nice chronological sequence (you can't judge a row's "age" by just looking at the ID any more), but the more you think about it, the more headaches it'll give you. Moreover, it is an absolute contradiction to the entire concept of unique row identifiers.
So unless you have good reasons to re-use generator values, and a well-thought-out mechanism to make this work safely in multi-user/multi-transaction environments, JUST DON'T DO IT!
**Generators for IDs or auto-increment fields**
Giving a newly inserted record an ID (in the sense of a unique “serial number”) is easily done with generators and Before Insert triggers, as we’ll see in the following subsections. We start with the assumption that we have a table called TTEST with a column ID declared as Integer. Our generator’s name is GIDTEST.
Before Insert trigger, version 1
```sql
CREATE TRIGGER trgTTEST_BI_V1 for TTEST
active before insert position 0
as
begin
  new.id = gen_id( gidTest, 1 );
end
```
Problems with trigger version 1:
This one does the job all right – but it also "wastes" a generator value whenever an ID was already supplied in the INSERT statement. So it would be more efficient to only assign a value when none came in with the INSERT:
Before Insert trigger, version 2
```sql
CREATE TRIGGER trgTTEST_BI_V2 for TTEST
active before insert position 0
as
begin
  if (new.id is null) then
  begin
    new.id = gen_id( gidTest, 1 );
  end
end
```
Problems with trigger version 2:
Some access components have the "bad habit" of auto-filling all the columns in an INSERT. Those not explicitly set by you get default values – usually 0 for integer columns. In that case, the above trigger would not work: it would find that the ID column is not NULL but 0, so it would not generate a new ID. You could still post the record – but only one... the second one would fail. It is a good idea anyway to "ban" 0 as a normal ID value, to avoid any confusion between NULL and 0. You could e.g. use a special row with an ID of 0 to store a default record in each table.
Before Insert trigger, version 3
```sql
CREATE TRIGGER trgTTEST_BI_V3 for TTEST
active before insert position 0
as
begin
  if ((new.id is null) or (new.id = 0)) then
  begin
    new.id = gen_id( gidTest, 1 );
  end
end
```
Well, now that we have a robust, working ID trigger, the following paragraphs will explain why you mostly won't need it at all:
The basic problem with IDs assigned in Before Insert triggers is that they are generated on the server side, after you send the Insert statement from the client. This plainly means there is no safe way to know from the client side which ID was generated for the row you just inserted.
You could grab the generator value from the client side after the Insert, but in multi-user environments you cannot be really sure that what you get is your own row's ID (because of the transaction issue).
But if you get a new value from the generator beforehand and post the Insert with that value, you can simply fetch the row back with a "Select ... where ID = <genvalue>" to see what defaults were applied or whether columns were affected by Insert triggers. This works especially well because you usually have a unique Primary Key index on the ID column, and those are about the fastest indexes you can have – they're unbeatable in selectivity, and mostly smaller than indexes on CHAR(n) cols (for n > 8, depending on character set and collation).
The bottom line to this is:
You should create a Before Insert trigger to make absolutely sure every row gets a unique ID, even if no ID value was supplied from the client side in the Insert statement.
If you have an “SQL-closed” database (that is, your own application code is the only source for newly inserted records), then you can leave out the trigger, but then you should always obtain a new generator value from the database before issuing the Insert statement and include it there. The same, of course, goes for inserts from within triggers and stored procedures.
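That client-side pattern might be sketched like this, reusing the TTEST/GIDTEST names from above (the SOMECOL column and the value 42 are purely illustrative):

```sql
/* step 1: obtain a new ID before the insert */
SELECT GEN_ID(gidTest, 1) FROM RDB$DATABASE;

/* step 2: insert using the value just returned (say it was 42) */
INSERT INTO TTEST (ID, SOMECOL) VALUES (42, 'hello');

/* step 3: fetch the row back to see applied defaults and trigger effects */
SELECT * FROM TTEST WHERE ID = 42;
```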
**What else to do with generators**
Here you can find some ideas for usages of generators other than generating unique row IDs.
**Using generators to give e.g. transfer files unique numbers**
A “classic” usage of generators is to ensure unique, sequential numbers for – well, anything in your application other than the row IDs discussed above. When you have an application that is transferring data to some other system, you can use generators to safely identify a single transfer by labeling it with a generated value. This greatly helps tracking down problems with interfaces between two systems (and, unlike most of the following, this does work safely and exactly).
**Generators as “usage counters” for SPs to provide basic statistics**
Imagine you just built a fantastic new feature into your database with a stored procedure. Now you update your customer's systems and some time later you'd like to know if the users really use this feature and how often.
Simple: make a special generator that only gets incremented in that SP and you're there... with the restriction that you can't know the number of transactions that were rolled back after or while your SP executed. So in this case you at least know how often users tried to use your SP :-)
You could further refine this method by using two generators: one gets incremented at the very start of the SP, another at the very end just before the EXIT. This way you can count how many attempts to use the SP were successful: if both generators have the same value, then none of the calls to the SP failed, etc. Of course you still don't know how many times the transaction(s) invoking your SP were actually committed.
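The two-generator variant might be sketched like this (all names are hypothetical; since in PSQL the result of GEN_ID has to be assigned to something, a dummy variable is used):

```sql
CREATE GENERATOR gen_spFeature_started;
CREATE GENERATOR gen_spFeature_finished;

SET TERM ^;
CREATE PROCEDURE spFeature
AS
DECLARE VARIABLE dummy BIGINT;
BEGIN
  dummy = GEN_ID(gen_spFeature_started, 1);  /* count the attempt */
  /* ... the actual work of the procedure ... */
  dummy = GEN_ID(gen_spFeature_finished, 1); /* count the success */
END^
SET TERM ;^
```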
**Generators to simulate “Select count(*) from...”**
There is the known problem with InterBase and Firebird that a SELECT COUNT(*) (with no Where clause) from a really large table can take quite a while to execute, since the server must count "by hand" how many rows there are in the table at the time of the request. In theory, you could easily solve this problem with generators:
- Create a special “row counter” generator;
- Make a Before Insert trigger that increments it;
- Make an After Delete trigger that decrements it.
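The three steps above might be sketched as follows for a hypothetical large table TBIG (generator, trigger and variable names are illustrative):

```sql
CREATE GENERATOR gen_TBIG_rowcount;

SET TERM ^;
CREATE TRIGGER trgTBIG_count_BI FOR TBIG
ACTIVE BEFORE INSERT POSITION 1
AS
DECLARE VARIABLE dummy BIGINT;
BEGIN
  dummy = GEN_ID(gen_TBIG_rowcount, 1);   /* one more row (if the insert succeeds) */
END^

CREATE TRIGGER trgTBIG_count_AD FOR TBIG
ACTIVE AFTER DELETE POSITION 0
AS
DECLARE VARIABLE dummy BIGINT;
BEGIN
  dummy = GEN_ID(gen_TBIG_rowcount, -1);  /* one row less */
END^
SET TERM ;^

/* the "record count" is then simply: */
SELECT GEN_ID(gen_TBIG_rowcount, 0) FROM RDB$DATABASE;
```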
This works beautifully and makes a “full” record count needless – just get the current generator value. I stressed the “in theory” here because the whole thing goes down the drain when any Insert statements fail, because as said those generators are beyond transaction control. Inserts can fail because of constraints (Unique Key violations, NOT NULL fields being NULL, etc.) or other metadata restrictions, or simply because the transaction that issued the Insert gets rolled back. You have no rows in the table and still your Insert counter climbs.
So it depends – when you know the rough percentage of Inserts that fail (you can kinda get a “feeling” for this), and you're only interested in an estimation of the record count, then this method can be useful even though it's not exact. From time to time you can do a “normal” record count and set the generator to the exact value ("re-synchronize" the generator), so the error can be kept rather small.
There are situations when customers can happily live with an info like “there are about 2.3 million records” instantly at a mouseclick, but would shoot you if they have to wait 10 minutes or more to see that there are precisely 2,313,498 rows...
**Generators to monitor and/or control long-running Stored Procedures**
When you have SPs that e.g. generate report outputs on large tables and/or complex joins, they can take quite a while to execute. Generators can be helpful here in two ways: they can provide you with a “progress counter” which you can poll periodically from the client side while the SP runs, and they can be used to stop it:
```sql
CREATE GENERATOR gen_spTestProgress;
CREATE GENERATOR gen_spTestStop;

set term ^;
CREATE PROCEDURE spTest (...)
AS
BEGIN
  (...)
  for select <lots of data taking lots of time>
  do begin
    GEN_ID(gen_spTestProgress,1);
    IF (GEN_ID(gen_spTestStop,0)>0) THEN Exit;
    (...normal processing here...)
  end
END^
```
Just a rough sketch, but you should get the idea. From the client, you can do a GEN_ID(gen_spTestProgress,0) asynchronously to the actual row fetching (e.g. in a different thread), to see how many rows were processed, and display the value in some sort of progress window. And you can do a GEN_ID(gen_spTestStop,1) to cancel the SP at any time from the “outside”.
Although this can be very handy, it has a strong limitation: it's not multi-user safe. If the SP would run simultaneously in two transactions, they would mess up the progress generator – they would both increment the same counter at the same time so the result would be useless. Even worse, incrementing the stop generator would immediately stop the SP in both transactions. But for e.g. monthly reports that are generated by a single module run in batch mode, this can be acceptable – as usual, it depends on your needs.
If you want to use this technique and allow users to trigger the SP at any time, you must make sure by other means that the SP can not be run twice. Thinking about this, I had the idea to use another generator for that: let's call this one gen_spTestLocked (assuming the initial value of 0 of course):
```sql
CREATE GENERATOR gen_spTestProgress;
CREATE GENERATOR gen_spTestStop;
CREATE GENERATOR gen_spTestLocked;
set term ^;
CREATE PROCEDURE spTest (...)
AS
DECLARE VARIABLE lockcount INTEGER;
BEGIN
lockcount = GEN_ID(gen_spTestLocked,1);
/* very first step: increment the locking generator */
if (lockcount=1) then /* _we_ got the lock, continue */
begin
(..."normal" procedure body here...)
end
lockcount = GEN_ID(gen_spTestLocked,-1); /* undo the increment */
/* make sure the gen is reset at the very end even when an exception happens inside the "normal" procedure body: */
WHEN ANY DO
lockcount = GEN_ID(gen_spTestLocked,-1); /* undo the increment */
exit;
END^
```
**Note:** I'm not yet 100% sure this is absolutely multi-user safe, but it looks rock solid – as long as no EXIT occurs in the normal procedure body, for then the SP would stop and quit, leaving the generator incremented. The WHEN ANY clause handles exceptions, but not normal EXITs. Then you'd have to decrement it by hand – but you could decrement the generator just before the EXIT to avoid this. Given the right precautions, I can't make up any situation where this mechanism could fail... If you can – let us know!
Appendix A:
Document history
The exact file history is recorded in the manual module in our CVS tree; see http://sourceforge.net/cvs/?group_id=9028
Revision History
0.1 4 Apr 2006 FI First edition.
0.2 7 May 2006 PV Added SEQUENCE syntax and other Firebird 2 info. Added information on: the importance of client dialects; the SHOW GENERATOR statement and friends; dropping generators and packing generator space. Edited and extended the following sections more or less heavily: Where are generators stored?, What is the maximum value of a generator?, How many generators...?, Use of generator statements. Further editing, additions and corrections to various sections, mainly in the first half of the document. Light editing in second half (starting at Using generators to create unique row IDs).
Appendix B: License notice
The contents of this Documentation are subject to the Public Documentation License Version 1.0 (the “License”); you may only use this Documentation if you comply with the terms of this License. Copies of the License are available at http://www.firebirdtest.com/file/documentation/reference_manuals/firebird_licenses/Public-Documentation-License.pdf (PDF) and http://www.firebirdtest.com/en/public-documentation-license/ (HTML).
The Original Documentation is titled Firebird Generator Guide.
The Initial Writer of the Original Documentation is: Frank Ingermann.
Copyright (C) 2006. All Rights Reserved. Initial Writer contact: frank at fingerman dot de.
Contributor: Paul Vinkenoog – see document history.
Portions created by Paul Vinkenoog are Copyright (C) 2006. All Rights Reserved. Contributor contact: paul at vinkenoog dot nl.
Contents
1 About Weblate
  1.1 Project goals
  1.2 Project name
  1.3 Project website
  1.4 Authors
2 Usage guide
  2.1 Registration
  2.2 Profile information
  2.3 Projects structure
  2.4 Translation links
  2.5 Translating
  2.6 Dictionary
  2.7 Suggestions
  2.8 Machine translation
  2.9 Checks
3 Installation instructions
  3.1 Requirements
  3.2 Installation
  3.3 Running server
  3.4 Upgrading
4 Configuration
5 Administration
  5.1 Adding new resources
  5.2 Project
  5.3 Subproject
  5.4 Updating repositories
  5.5 Pushing changes
  5.6 Interacting with others
  5.7 Access control
  5.8 Lazy commits
  5.9 Customizing checks
6 Management commands
Contents:
About Weblate
1.1 Project goals
Minimalistic web-based translation with a direct commit to Git on each translation made. There are no plans for heavy conflict resolution, as conflicts should primarily be handled on the Git side.
1.2 Project name
The project is named as a combination of the words web and translate.
1.3 Project website
You can find the project website at <http://weblate.org/>; there is also a demonstration server at <http://demo.weblate.org/>. This documentation can be browsed at <http://weblate.readthedocs.org/>.
1.4 Authors
This tool was written by Michal Čihář <michal@cihar.com>.
Usage guide
This document briefly covers how to translate an application using Weblate.
2.1 Registration
While everybody can browse projects, view translations or suggest changes, only registered users are allowed to actually save them, and they are credited for every translation made.
You can register in a few simple steps:
1. Fill out the registration form with your credentials
2. Activate the registration by following the link in the email you receive
3. Optionally adjust your profile to choose which languages you know
2.2 Profile information
The user profile contains your preferences, name and email. Name and email are used in Git commits, so keep this information accurate.
In the preferences, you can choose the user interface language, the languages you prefer to translate (a list of these will be offered to you on the main page) and secondary languages, whose translations will be shown to you while translating.
2.3 Projects structure
Each project can contain various subprojects. The reason for this structure is that all subprojects in a project are expected to have a lot in common. Whenever a translation is made in one subproject, it is automatically propagated to the others within the same project (this is especially useful when translating multiple versions of the same project).
2.4 Translation links
Once you navigate to a translation, you are shown a set of links which lead into the translation. These are the results of various checks, such as untranslated or fuzzy strings. Should no other checks fire, there will still be a link to all translations. Alternatively, you can use the search field to find the translation you need to fix.
2.5 Translating
On the translate page, you are shown the source string and an edit area for the translation. If the string is a plural, multiple source strings and edit areas are shown, each labelled with its plural form.
Various pieces of extra information can be shown on this page. Most of them come from the project source code (like the context, comments or where the message is used). When you configure secondary languages in your preferences, translations into these languages are shown as well.
Below the translation, suggestions from other users may also be shown; you can accept or delete them.
2.6 Dictionary
Each project can have a dictionary assigned for any language. This can be used for storing terminology for the given project, so that translations stay consistent. Terms occurring in the currently translated string can be displayed in the bottom tabs.
2.7 Suggestions
As an anonymous user, you have no other choice than to make a suggestion. However, even when logged in you can decide to make only a suggestion instead of saving the translation, for example when you are unsure about it and want somebody else to review it.
2.8 Machine translation
Based on the configuration and your language, Weblate provides buttons for the following machine translation tools.
2.8.1 MyMemory
Huge translation memory with machine translation.
See also:
http://mymemory.translated.net/
2.8.2 Apertium
A free/open-source machine translation platform providing translation for a limited set of languages.
See also:
http://www.apertium.org/
2.8.3 Microsoft Translator
Machine translation service provided by Microsoft.
See also:
http://www.microsofttranslator.com/
2.9 Checks
Weblate performs a wide range of consistency checks on translated messages. The following section describes them in more detail. The checks also take into account special rules for different languages, so if you think a result is wrong, please report a bug.
2.9.1 Not translated
The source and translated strings are the same in at least one of the plural forms. This check ignores some strings which are usually the same in all languages.
2.9.2 Starting newline
Source and translated do not both start with a newline.
2.9.3 Trailing newline
Source and translated do not both end with a newline.
2.9.4 Trailing space
Source and translated do not both end with a space.
2.9.5 Trailing stop
Source and translated do not both end with a full stop. Full stop is also checked in various language variants (Chinese, Japanese, Devanagari or Urdu).
2.9.6 Trailing colon
Source and translated do not both end with a colon or colon is not correctly spaced. This includes spacing rules for French or Breton. Colon is also checked in various language variants (Chinese or Japanese).
2.9.7 Trailing question
Source and translated do not both end with question mark or it is not correctly spaced. This includes spacing rules for French or Breton. Question mark is also checked in various language variants (Armenian, Arabic, Chinese, Korean, Japanese, Ethiopic, Vai or Coptic).
2.9.8 Trailing exclamation
Source and translated do not both end with an exclamation mark, or it is not correctly spaced. This includes spacing rules for French or Breton. The exclamation mark is also checked in various language variants (Chinese, Japanese, Korean, Armenian, Limbu, Myanmar or Nko).
2.9.9 Python format
Python format string does not match source.
2.9.10 PHP format
PHP format string does not match source.
2.9.11 C format
C format string does not match source.
2.9.12 Missing plurals
Some plural forms are not translated. Check plural form definition to see for which counts each plural form is being used.
2.9.13 Inconsistent
There are different translations of the same string within one project. This can also lead to inconsistencies in the displayed checks. You can find the other translations of this string on the All locations tab.
Installation instructions
3.1 Requirements
Django https://www.djangoproject.com/
Translate-toolkit http://translate.sourceforge.net/wiki/toolkit/index
GitPython (>= 0.3) https://github.com/gitpython-developers/GitPython
Django-registration https://bitbucket.org/ubernostrum/django-registration/
Whoosh http://bitbucket.org/mchaput/whoosh/
3.2 Installation
Install all required components (see above), adjust settings.py and then run ./manage.py syncdb to create the database structure. Now you should be able to create translation projects using the admin interface. You probably also want to run ./manage.py setuplang to get the default list of languages, and ./manage.py setupgroups to initialize the default groups.
See also:
Access control
3.3 Running server
Running Weblate is no different from running any other Django-based application.
It is recommended to serve static files directly from your web server; you should use it for the following paths:
/media Serves media directory from Weblate.
/static/admin Serves media files for Django admin interface (eg. /usr/share/pyshared/django/contrib/admin/media/).
Additionally you should set up a rewrite rule to serve media/favicon.ico as favicon.ico.
See also:
https://docs.djangoproject.com/en/1.3/howto/deployment/
### 3.3.1 Sample configuration for Lighttpd
The configuration for the Lighttpd web server might look like the following:
```perl
fastcgi.server = (
"/weblate.fcgi" => (
"main" => (
"socket" => "/var/run/django/weblate.socket",
"check-local" => "disable",
),
),
)
alias.url = (
"/media" => "/var/lib/django/weblate/media/",
"/static/admin" => "/usr/share/pyshared/django/contrib/admin/static/admin/",
)
url.rewrite-once = (
"^(/media.*)$" => "$1",
"^(/static.*)$" => "$1",
"^(/favicon\.ico)$" => "/media/favicon.ico",
"^(/robots\.txt)$" => "/media/robots.txt",
"^(/.*)$" => "/weblate.fcgi$1",
)
expire.url = (
"/media/" => "access 1 months",
"/static/" => "access 1 months",
"/favicon.ico" => "access 1 months",
)
```
### 3.4 Upgrading
On upgrade to version 0.6 you should run ./manage.py syncdb and ./manage.py setupgroups --move to set up access control as described in the installation section.
On upgrade to version 0.7 you should run ./manage.py syncdb to set up new tables and ./manage.py rebuild_index to build the index for fulltext search.
On upgrade to version 0.8 you should run ./manage.py syncdb to set up new tables, ./manage.py setupgroups to update the privileges setup and ./manage.py rebuild_index to rebuild the index for fulltext search.
Configuration
All settings are stored in `settings.py` (as usual for Django).
**CHECK_LIST**
List of consistency checks to perform on translation.
See also:
*Checks, Customizing checks*
**COMMIT_MESSAGE**
Message used on each commit Weblate does.
**ENABLE_HOOKS**
Whether to enable anonymous remote hooks.
See also:
*Interacting with others*
**GIT_ROOT**
Path where Weblate will store cloned Git repositories. Defaults to `repos` subdirectory.
**LAZY_COMMITS**
Delay creating Git commits until this is necessary. This heavily reduces number of commits generated by Weblate at expense of temporarily not being able to merge some changes as they are not yet committed.
See also:
*Lazy commits*
**MT_APERTIUM_KEY**
API key for Apertium Web Service, you can register at http://api.apertium.org/register.jsp
**MT_MICROSOFT_KEY**
API key for Microsoft Translator service, you can register at http://www.bing.com/developers/createapp.aspx
**NEARBY_MESSAGES**
How many messages around the current one to show while translating.
**SIMILAR_MESSAGES**
Number of similar messages to look up. This is not a hard limit, just a number Weblate tries to find if possible.
**SITE_TITLE**
Site title to be used on the website and in emails as well.
**WHOOSH_INDEX**
Directory where Whoosh fulltext indices will be stored. Defaults to the `whoosh-index` subdirectory.
See also:
https://docs.djangoproject.com/en/1.3/ref/settings/
Administration
Administration of Weblate is done through the standard Django admin interface, which is available under the /admin/ URL.
### 5.1 Adding new resources
All translation resources need to be available as Git repositories and are organized in a project/subproject structure.
Weblate supports a wide range of translation formats supported by the Translate Toolkit, for example:
- GNU Gettext
- XLIFF
- Java properties
- Windows RC files
- Qt Linguist .ts
- Symbian localization files
- CSV
- INI
See also:
http://translate.sourceforge.net/wiki/toolkit/formats
### 5.2 Project
To add a new resource to translate, you need to create a translation project first. The project is a sort of shelf in which the real translations are stacked. All subprojects in the same project share suggestions and the dictionary, and translations are automatically propagated through all subprojects in a single project.
5.3 Subproject
A subproject is the real resource for translating. You enter the Git repository location and a file mask specifying which files to translate, and Weblate automatically fetches the repository and finds all matching translatable files.
Note: As setting up a translation project includes fetching Git repositories, you might want to preseed these; the repositories are stored in the path defined by GIT_ROOT in settings.py, in <project>/<subproject> directories.
5.4 Updating repositories
You should set up some way for the backend repositories to be updated from their source. You can either use hooks (see Interacting with others) or just run ./manage.py updategit --all regularly.
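For regular updates you could, for example, use a cron entry like the following (the installation path is of course hypothetical):

```
# update all Weblate repositories every 30 minutes
*/30 * * * * cd /path/to/weblate && ./manage.py updategit --all
```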
With Gettext PO files, you might often be bitten by conflicts in the PO file headers. To avoid them, you can use the shipped merge driver (scripts/git-merge-gettext-po). To use it, just put the following configuration into your .gitconfig:
```ini
[merge "merge-gettext-po"]
name = merge driver for gettext po files
driver = /path/to/weblate/scripts/git-merge-gettext-po %O %A %B
```
And enable its use by defining the proper attributes in the given repository (e.g. in .git/info/attributes):
```ini
*.po merge=merge-gettext-po
```
Note: This merge driver assumes the changes in POT files are always done in the branch we're trying to merge.
See also:
http://www.no-ack.org/2010/12/writing-git-merge-driver-for-po-files.html
5.5 Pushing changes
Each project can have a push URL configured, and in that case Weblate offers a button in the web interface to push changes to the remote repository.
In case you use SSH for pushing, you need to have a key without a passphrase (or use ssh-agent for Django), and the remote server needs to be verified by you first; otherwise the push will fail.
5.6 Interacting with others
You can trigger an update of the underlying Git repository for every subproject or project by accessing the URL /hooks/update/project/subproject/ or /hooks/update/project/.
For GitHub, there is a special URL /hooks/github/, which parses GitHub notifications and updates related projects automatically.
Note: The GitHub notification relies on the Git repository URLs you use being in the form git://github.com/owner/repo.git; otherwise automatic detection of the affected repository will fail.
## 5.7 Access control
Weblate uses a privilege system based on Django's. It defines the following extra privileges:
- Can upload translation [Users, Managers]
- Can overwrite with translation upload [Users, Managers]
- Can define author of translation upload [Managers]
- Can force committing of translation [Managers]
- Can update translation from git [Managers]
- Can push translations to remote git [Managers]
- Can do automatic translation using other project strings [Managers]
- Can save translation [Users, Managers]
- Can accept suggestion [Users, Managers]
- Can import dictionary [Users, Managers]
- Can add dictionary [Users, Managers]
- Can change dictionary [Users, Managers]
- Can delete dictionary [Users, Managers]
The default setup (after you run ./manage.py setupgroups) consists of two groups, Users and Managers, which have the privileges described above. All new users are automatically added to the Users group.
To customize this setup, it is recommended to remove privileges from Users group and create additional groups with finer privileges (eg. Translators group, which will be allowed to save translations and manage suggestions) and add selected users to this group. You can do all this from Django admin interface.
## 5.8 Lazy commits
The default behaviour of Weblate (configured by LAZY_COMMITS) is to group commits from the same author into one if possible. This heavily reduces the number of commits; however, you might need to explicitly sync to get the Git repository in sync (you can do this in the admin interface).
In this mode, the changes are committed once one of the following conditions happens:
- somebody else works on the translation
- merge from upstream occurs
- import of translation happens
5.9 Customizing checks
Weblate comes with a wide range of consistency checks (see Checks), though they might not cover 100% of what you want to check. The list of performed checks can be adjusted using CHECK_LIST, and you can also add custom checks. All you need to do is subclass trans.checks.Check, set a few attributes and implement either the check or the check_single method (the former if you want to deal with plurals in your code; the latter does this for you). You will find some examples below.
5.9.1 Checking translation text does not contain "foo"
This is a pretty simple check which just verifies that the translation does not contain the string "foo".
```python
from trans.checks import Check
from django.utils.translation import ugettext_lazy as _

class FooCheck(Check):
    # Used as identifier for check, should be unique
    check_id = 'foo'

    # Short name used to display failing check
    name = _('Foo check')

    # Description for failing check
    description = _('Your translation is foo')

    # Real check code
    def check_single(self, source, target, flags, language, unit):
        return 'foo' in target
```
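Outside of a Weblate installation, the check's logic can be exercised by substituting small stand-ins for trans.checks.Check and Django's ugettext_lazy — a sketch under those assumptions:

```python
# Stand-ins (illustrative only) for trans.checks.Check and
# django.utils.translation.ugettext_lazy, so the check logic
# can run outside Weblate.
class Check:
    pass

def _(text):
    return text

class FooCheck(Check):
    check_id = 'foo'
    name = _('Foo check')
    description = _('Your translation is foo')

    def check_single(self, source, target, flags, language, unit):
        # The check fails (returns True) when the translation contains "foo"
        return 'foo' in target

check = FooCheck()
print(check.check_single('source', 'my foo translation', '', None, None))  # True
print(check.check_single('source', 'clean translation', '', None, None))   # False
```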
5.9.2 Checking Czech translation text plurals differ
A check using language information to verify that the two plural forms in the Czech language are not the same.
```python
from trans.checks import Check
from django.utils.translation import ugettext_lazy as _

class PluralCzechCheck(Check):
    # Used as identifier for check, should be unique
    check_id = 'plurals_czech'

    # Short name used to display failing check
    name = _('Czech plurals check')

    # Description for failing check
    description = _('Two Czech plural forms are the same')

    # Real check code
    def check(self, sources, targets, flags, language, unit):
        if self.is_language(language, ['cs']):
            return targets[1] == targets[2]
        return False
```
Management commands
The ./manage.py script is extended with the following commands:
- **checkgit**
Prints current state of backend git repository.
You can either define which subproject to check (eg. `weblate/master`) or use `--all` to check all existing subprojects.
- **commitgit**
Commits any possible pending changes to backend git repository.
You can either define which subproject to check (eg. `weblate/master`) or use `--all` to check all existing subprojects.
- **cleanuptrans**
Cleans up orphaned checks and translation suggestions.
- **loadpo**
Reloads translations from disk (e.g. in case you did some updates in the Git repository).
- **rebuild_index**
Rebuilds the index for fulltext search. This might be a lengthy operation if you have a huge set of translation units.
You can use `--clean` to remove all words from the database prior to updating.
- **setupgroups**
Configures default groups and (if called with `--move`) assigns all users to default group.
See also:
*Access control*
- **setuplang**
Sets up the list of languages (it has its own list plus all languages defined in translate-toolkit).
- **updatechecks**
Updates all checks for all units. This is useful mainly on upgrades which make major changes to checks.
You can either define which project or subproject to update (eg. `weblate/master`) or use `--all` to update all existing subprojects.
- **updategit**
Fetches remote Git repositories and updates internal cache.
You can either define which project or subproject to update (eg. `weblate/master`) or use `--all` to update all existing subprojects.
7.1 Requests sometimes fail with too many open files error
This happens sometimes when your Git repositories grow too much and you have many of them. Compressing the Git repositories will improve this situation.
The easiest way to do this is to run:
```bash
cd repos
for d in */* ; do
pushd $d
git gc
popd
done
```
7.2 Fulltext search is too slow
Depending on various conditions (frequency of updates, server restarts and others), the fulltext index might get too fragmented over time. It is recommended to rebuild it from scratch from time to time:
```bash
./manage.py rebuild_index --clean
```
7.3 Does Weblate support other VCS than Git?
Not currently. Weblate requires a distributed VCS and could probably be adjusted to work with something other than Git, but somebody has to implement this support.
8.1 weblate 0.8
Released on April 3rd 2012.
- Replaced own full text search with Whoosh.
- Various fixes and improvements to checks.
- New command updatechecks.
- Lot of translation updates.
- Added dictionary for storing most frequently used terms.
- Added /admin/report/ for overview of repositories status.
- Machine translation services no longer block page loading.
- Management interface now also contains useful actions to update data.
- Records log of changes made by users.
- Ability to postpone commit to Git to generate less commits from single user.
- Possibility to browse failing checks.
- Automatic translation using already translated strings.
- New about page showing used versions.
- Django 1.4 compatibility.
- Ability to push changes to remote repo from web interface.
- Added review of translations done by others.
8.2 weblate 0.7
Released on February 16th 2012.
- Direct support for GitHub notifications.
- Added support for cleaning up orphaned checks and translations.
- Displays nearby strings while translating.
- Displays similar strings while translating.
- Improved searching for string.
8.3 weblate 0.6
Released on February 14th 2012.
- Added various checks for translated messages.
- Tunable access control.
- Improved handling of translations with new lines.
- Added client side sorting of tables.
- Please check upgrading instructions in case you are upgrading.
8.4 weblate 0.5
Released on February 12th 2012.
- Support for machine translation using following online services:
- Apertium
- Microsoft Translator
- MyMemory
- Several new translations.
- Improved merging of upstream changes.
- Better handling of concurrent git pull and translation.
- Propagating works for fuzzy changes as well.
- Propagating works also for file upload.
- Fixed file downloads while using FastCGI (and possibly others).
8.5 weblate 0.4
Released on February 8th 2012.
- Added usage guide to documentation.
- Fixed API hooks not to require CSRF protection.
8.6 weblate 0.3
Released on February 8th 2012.
- Better display of source for plural translations.
- New documentation in Sphinx format.
- Displays secondary languages while translating.
- Improved error page to give list of existing projects.
- New per language stats.
8.7 weblate 0.2
Released on February 7th 2012.
- Improved validation of several forms.
- Warn users on profile upgrade.
- Remember URL for login.
- Naming of text areas while entering plural forms.
- Automatic expanding of translation area.
8.8 weblate 0.1
Released on February 6th 2012.
- Initial release.
Copyright (C) 2012 Michal Čihář <michal@cihar.com>
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
Chapter 10
Indices and tables
- genindex
- modindex
- search
Index
Symbols
./manage.py command line option
checkgit, 19
cleanuptrans, 19
commitgit, 19
loadpo, 19
rebuild_index, 19
setupgroups, 19
setuplang, 19
updatechecks, 19
updategit, 20
environment variable
CHECK_LIST, 11, 16
COMMIT_MESSAGE, 11
ENABLE_HOOKS, 11
GIT_ROOT, 11, 14
LAZY_COMMITS, 11, 15
MT_APERTIUM_KEY, 11
MT_MICROSOFT_KEY, 11
NEARBY_MESSAGES, 11
SIMILAR_MESSAGES, 12
SITE_TITLE, 12
WHOOSH_INDEX, 12
C
CHECK_LIST, 16
checkgit
./manage.py command line option, 19
cleanuptrans
./manage.py command line option, 19
commitgit
./manage.py command line option, 19
G
GIT_ROOT, 14
L
LAZY_COMMITS, 15
loadpo
./manage.py command line option, 19
R
rebuild_index
./manage.py command line option, 19
S
setupgroups
./manage.py command line option, 19
setuplang
./manage.py command line option, 19
U
updatechecks
./manage.py command line option, 19
updategit
./manage.py command line option, 20
F-OWL: an Inference Engine for Semantic Web
Youyong Zou, Tim Finin and Harry Chen
Abstract. Understanding and using the data and knowledge encoded in semantic web documents requires an inference engine. F-OWL is an inference engine for the semantic web language OWL, based on F-logic, an approach to defining frame-based systems in logic. F-OWL is implemented using XSB and Flora-2 and takes full advantage of their features. We describe how F-OWL computes ontology entailment and compare it with other description logic based approaches. We also describe TAGA, a trading agent environment that we have used as a test bed for F-OWL and to explore how multiagent systems can use semantic web concepts and technology.
1 Introduction
The central idea of the Semantic Web [Berners-Lee 2001] is to publish documents on the World Wide Web defined and linked in a way that make them both human readable and machine understandable. Human readable means documents in the traditional sense which are intended for machine display and human consumption. Machine understandable means that the data has explicitly been prepared for machine reasoning and reuse across various applications. Realizing the semantic web vision requires well defined languages that can model the meaning of information on the Web as well as applications and services to publish, discover, process and annotate information encoded in them. This involves aspects from many areas, including knowledge representation and reasoning, databases, information retrieval, digital libraries, multi-agent systems, natural language processing and machine learning. The Web Ontology Language OWL [Patel-Schneider, 2003] is part of the growing stack of W3C recommendations related to the Semantic Web. OWL has its origins in DAML+OIL [Hendler 2000] and includes a set of three increasingly complex sub-languages: OWL-Lite, OWL-DL and OWL-Full.
1 This work was partially supported by the Defense Advanced Research Projects Agency under contract F30602-97-1-0215 and by the National Science Foundation under award IIS-0242403.
OWL has a model-theoretic semantics that provides a formal meaning for OWL ontologies and instance data expressed in them. In addition, to support OWL-Full, a second model-theoretic semantics has been developed as an extension to the RDF's semantics, grounding the meaning of OWL ontologies as RDF graphs. An OWL inference engine's core responsibilities are to adhere to the formal semantics in processing information encoded in OWL, to discover possible inconsistencies in OWL data, and to derive new information from known information. A simple example demonstrates the power of inference: Joe is visiting San Francisco and wants to find an Italian restaurant in his vicinity. His wireless PDA tries to satisfy his desire by searching for a thing of type restaurant with a cuisineType property with the value Italian. The goodPizza restaurant advertises its cuisine type as Pizza. These cannot be matched as keywords or even using a thesaurus, since Italian and Pizza are not equivalent in all contexts. The restaurant ontology makes things clearer: Pizza rdfs:SubClassOf ItalianCuisine. By using an inference engine, Joe's PDA can successfully determine that the restaurant goodPizza is what he is looking for. F-OWL, an inference engine for OWL language, is designed to accomplish this task.
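The restaurant example can be sketched as a toy inference over triples (illustrative Python, not F-OWL itself; the helper names are made up):

```python
# Toy illustration of the subclass inference from the restaurant example:
# a query for cuisineType ItalianCuisine succeeds because Pizza is
# declared a subclass of ItalianCuisine.
subclass_of = {'Pizza': 'ItalianCuisine'}  # Pizza rdfs:subClassOf ItalianCuisine
facts = {('goodPizza', 'cuisineType', 'Pizza')}

def is_subclass(c, d):
    # Reflexive-transitive closure of rdfs:subClassOf along the chain.
    while c is not None:
        if c == d:
            return True
        c = subclass_of.get(c)
    return False

def matches(subject, prop, cls):
    # Does subject have prop with a value that is (a subclass of) cls?
    return any(s == subject and p == prop and is_subclass(o, cls)
               for s, p, o in facts)

print(matches('goodPizza', 'cuisineType', 'ItalianCuisine'))  # True
print(matches('goodPizza', 'cuisineType', 'FrenchCuisine'))   # False
```

Plain keyword matching corresponds to `matches` without `is_subclass`, which is exactly what fails for Joe's PDA.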
In the next section, we outline the functional requirements of an OWL inference engine. Section three describes F-OWL, the OWL inference engine in Frame Logic that we have developed. Section four explains how F-OWL is used in a multi-agent test bed for trading agents. Sections five and six conclude this paper with a discussion of the work and results and an outline of some potential future research.
2 OWL Engine
An inference engine is needed for processing the knowledge encoded in the semantic web language OWL. An OWL inference engine should have the following features:
- **Checking ontology consistency.** An OWL concept ontology (e.g., terms defined in the “Tbox”) imposes a set of restrictions on the model graph. The OWL inference Engine should check the syntax and usage of the OWL terms and ensure that the OWL instances (e.g., assertions in the “Abox”) meet all of the restrictions.
- **Computing entailments.** Entailment, including satisfaction and subsumption, are essential inference tasks for an OWL inference engine.
- **Processing queries.** OWL inference engines need powerful, yet easy-to-use, language to support queries, both from human users (e.g., for debugging) and software components (e.g., for software agents).
- **Reasoning with rules.** Rules can be used to control the inference capability, to describe business contracts, or to express complex constraints and relations not directly supported by OWL. An OWL inference engine should provide a convenient interface to process rules that involve OWL classes, properties and instance data.
- **Handling XML data types.** XML data types can be used directly in OWL to represent primitive kinds of data types, such as integers, floating point numbers,
strings and dates. New complex types can be defined using base types and other complex types. An OWL inference Engine must be able to test the satisfiability of conjunctions of such constructed data types.
The OWL language is rooted in description logic (DL), a family of knowledge representation languages designed for encoding knowledge about concepts and concept hierarchies. Description Logics are generally given a semantics that make them subsets of first-order logic. Therefore, several different approaches based on those logics have been used to design OWL inference engines:
- **Using a specialized description logic reasoner.** Since OWL is rooted in description logic, it is not surprising that DL reasoners are the most widely used tools for OWL reasoning. DL reasoners are used to specify the terminological hierarchy and support subsumption, and have the advantage of being decidable. Three well-known systems are FaCT [Horrocks, 1999], Racer [Haarslev 2001] and Pellet, which implement different types of description logic. The Racer system implements SHIQ(D) using a tableaux algorithm; it is a complete reasoner for OWL-DL and supports both Tbox and Abox reasoning. The FaCT system implements SHIQ, but only supports Tbox reasoning. Pellet implements SHIQ(D) and includes a complete OWL-Lite consistency checker supporting both Abox and Tbox queries.
- **Using a full first order logic (FOL) theorem prover.** OWL statements can be easily translated into FOL, enabling one to use existing FOL automated theorem provers to do the inference. Examples of this approach include Hoolet (using the Vampire [Riazanov, 2003] theorem prover) and Surnia (using the Otter theorem prover). In Hoolet, for example, OWL statements are translated into a collection of axioms which is then given to the Vampire theorem prover for reasoning.
- **Using a reasoner designed for a FOL subset.** A fragment of FOL and a general logic-based inference engine can also be used to design an OWL inference engine. Horn logic is the most widely used because of its simplicity and the availability of tools, including Jena, Jess, Triples and F-OWL (using XSB). Other logics, like the higher-order logic in F-OWL (using Flora), can also be used.
As the following sections describe, F-OWL has taken the third approach. An obvious advantage is that many systems have been developed that efficiently reason over expressive subsets of FOL and are easy to understand and use.
### 3 F-OWL
F-OWL is a reasoning system for RDF and OWL that is implemented using the XSB logic programming system [Sagonas, 1994] and the Flora-2 [Kifer, 1995] [Yang 2000] extension that provides an F-logic frame-based representation layer. We have found that XSB and Flora-2 not only provide a good foundation in which to implement an OWL reasoner but also facilitate the integration of other reasoning mechanisms and applications, such as default reasoning and planners.
XSB is a logic programming system developed at Stony Brook University. In addition to providing all the functionality of Prolog, XSB contains several features not usually found in logic programming systems, including tabling, non-stratified negation, higher order constructs, and a flexible preprocessing system. Tabling is useful for recursive query computation, allowing programs to terminate correctly in many cases where Prolog does not. This allows, for example, one to include "if and only if" type rules directly. XSB supports extensions of normal logic programs through preprocessing libraries, including a sophisticated object-oriented interface called Flora-2. Flora-2 is itself a compiler that compiles from a dialect of Frame logic into XSB, taking advantage of the tabling, HiLog [Chen 1995] and well-founded semantics for negation features found in XSB. Flora-2 is implemented as a set of run-time libraries and a compiler that translates a unified language of F-logic and HiLog into tabled Prolog code. HiLog is the default syntax that Flora-2 uses to represent function terms and predicates. Flora-2 is a sophisticated object-oriented knowledge base language and application development platform. The programming language supported by Flora-2 is a dialect of F-logic with numerous extensions, which include a natural way to do meta-programming in the style of HiLog and logical updates in the style of Transaction Logic. Flora-2 was designed with extensibility and flexibility in mind, and it provides strong support for modular software design through its unique feature of dynamic modules.
F-OWL is the OWL inference engine that uses a frame-based system to reason with OWL ontologies. F-OWL is accompanied by a simple OWL importer that reads an OWL ontology from a URI and extracts RDF triples out of the ontology. The extracted RDF triples are converted to a format appropriate for F-OWL's frame style and fed into the F-OWL engine. It then uses rules defined in the Flora-2 language to check the consistency of the ontology and extract hidden knowledge via resolution.
A model theory is a formal theory that relates expressions to interpretation. The RDF model theory [Hayes 2003] formalizes the notion of inference in RDF and provides a basis for computing deductive closure of RDF graphs. The semantics of OWL, an extension of RDF semantics, defines bindings, extensions of OWL interpretations that map variables to elements of the domain:
- The vocabulary $V$ of the model is composed of a set of URI's.
- $LV$ is the set of literal values and $XL$ is the mapping from the literals to $LV$.
- A simple interpretation $I$ of a vocabulary $V$ is defined by:
- A non-empty set $IR$ of resources, called the domain or universe of $I$.
- A mapping $IS$ from $V$ into $IR$.
- A mapping $IEXT$ from $IR$ into the power set of $IR \times (IR \cup LV)$ i.e. the set of sets of pairs $<x,y>$ with $x \in IR$ and $y$ in $IR$ or $LV$. This mapping defines the properties of the triples. $IEXT(x)$ is a set of pairs which identify the arguments for which the property is true, i.e. a binary relational extension, called the extension of $x$.
Informally this means that every URI represents a resource that might be a page on the Internet but not necessarily; it might also be a physical object. A property is a relation; this relation is defined by an extension mapping from the property into a set. This set contains pairs where the first element of a pair represents the subject of a triple and the second element represents the object of a triple. With this system of extension mapping the property can be part of its own extension without causing paradoxes.
Take the triple :goodPizza :cuisineType :Pizza from the pizza restaurant in the introduction as an example. In the set of URIs there will be terms (i.e., classes and properties) like #goodPizza, #cuisineType, #Pizza, #Restaurant, #ItalianCuisine, etc. These are part of the vocabulary V. The set IR of resources includes instances that represent resources on the internet or elsewhere, like #goodPizza. For example, the class #Restaurant might represent the set of all restaurants. The URI refers to a page on the Internet where the domain IR is defined. Then there is the mapping IEXT from the property #cuisineType to the set {(#goodPizza, #Pizza), (#goodPizza, #ItalianCuisine)}, and the mapping IS from V to IR, which maps, for example, :cuisineType to #cuisineType.
A rule $A \rightarrow B$ is satisfied by an interpretation $I$ if and only if every binding that satisfies the antecedent $A$ also satisfies the consequent $B$. An ontology $O$ is satisfied by an interpretation $I$ if and only if the interpretation satisfies every rules and facts in the ontology. A model is satisfied if none of the statements within contradict each other. An ontology $O$ is consistent if and only if it is satisfied by at least one interpretation. An ontology $O_2$ is entailed by an ontology $O_1$ if and only if every interpretation that satisfies $O_1$ also satisfies $O_2$.
One of the main problems in OWL reasoning is ontology entailment. Many OWL reasoning engines, such as Pellet and SHOQ, follow an approach suggested by Ian Horrocks [Horrocks 2003]. By taking advantage of the close similarity between OWL and description logic, the OWL entailment can be reduced to knowledge base satisfiability in the SHOIN(D) and SHIF(D). Consequently, existing mature DL reasoning engines such as Racer [Haarslev 2001] can provide reasoning services to OWL. Ora Lassila suggested a "True RDF processor" [Lassila 2002] in his implementation of Wilbur system [Lassila 2001] in which entailment is defined via the generation of a deductive closure from an RDF graph composed of triples. The proving of entailment becomes the building and searching of closure graph.
With the support of forward/backward reasoning from XSB and frame logic from Flora, F-OWL takes the second approach to compute the deductive closure of a set of RDF or OWL statements. The closure is a graph consisting of every triple $\langle \text{subject}, \text{predicate}, \text{object}\rangle$ that satisfies $\langle \text{subject}, \text{object}\rangle \in IEXT(I(\text{predicate}))$. This is defined as:
$$\langle \text{subject}, \text{predicate}, \text{object}\rangle \in KB \iff \langle \text{subject}, \text{object}\rangle \in IEXT(I(\text{predicate}))$$
2 The W3C says of URIs: "Uniform Resource Identifiers (URIs, aka URLs) are short strings that identify resources in the web: documents, images, downloadable files, services, electronic mailboxes, and other resources." By convention, people understand many URIs as denoting objects in the physical world.
Where \( KB \) is the knowledge base, \( I(x) \) is the interpretation of a particular graph, and \( IEXT(x) \) is the binary relational extension of property as defined in [Hayes 2002].
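A toy sketch of this closure computation over RDF-style triples follows; the two rules shown (transitivity of subClassOf and propagation of type up the class hierarchy) are illustrative stand-ins for F-OWL's much larger rule set:

```python
# Toy deductive-closure computation over a set of triples (s, p, o).
# Rules are applied repeatedly until a fixed point is reached, mirroring
# the idea of building the closure graph described above.
def closure(triples):
    kb = set(triples)
    while True:
        new = set()
        for s, p, o in kb:
            for s2, p2, o2 in kb:
                if p2 == 'subClassOf' and s2 == o:
                    if p == 'subClassOf':
                        # rdfs:subClassOf is transitive
                        new.add((s, 'subClassOf', o2))
                    elif p == 'type':
                        # an instance of a class is an instance of its superclasses
                        new.add((s, 'type', o2))
        if new <= kb:  # fixed point reached: no new triples derivable
            return kb
        kb |= new

kb = closure({('goodPizza', 'type', 'Pizza'),
              ('Pizza', 'subClassOf', 'ItalianCuisine')})
print(('goodPizza', 'type', 'ItalianCuisine') in kb)  # True
```

Once the closure is materialized (in F-OWL, cached in XSB's tables), proving entailment of a triple reduces to a membership test on the closure graph.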
F-OWL is written in the Flora-2 extension to XSB and consists of the following major sets of rules:
- A set of rules that reasons over the data model of RDF/RDF-S and OWL;
- A set of rules that maps XML DataTypes into XSB terms;
- A set of rules that performs ontology consistency checks; and
- A set of rules that provides an interface between the upper Java API calls to the lower layer Flora-2/XSB rules.
F-OWL provides a command line interface, a simple graphical user interface and a Java API to satisfy different requirements. Using F-OWL to reason over an ontology typically consists of the following four steps:
- Loading additional application-related rules into the engine;
- Adding new RDF and OWL statements (e.g., ontologies or assertions) to the engine. The triples (subject, predicate, object) on the OWL statements are translated into 2-ply frame style: subject(predicate, object)@model;
- Querying the engine. The RDF and OWL rules are recursively applied to generate all legal triples. If a query has no variables, a True answer is returned when an interpretation of the question is found. If the query includes variables, the variables are replaced with values from the interpretation and returned;
- The ontology and triples can be removed if desired. Otherwise, the XSB system saves the computed triples in indexed tables, making subsequent queries faster.
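The translation in step two can be sketched as follows (an illustrative helper, not F-OWL's actual importer):

```python
# Illustrative sketch of translating an RDF triple into the 2-ply frame
# style used by F-OWL: subject(predicate, object)@model.
def to_frame(subject, predicate, obj, model):
    return f'{subject}({predicate}, {obj})@{model}'

print(to_frame('goodPizza', 'cuisineType', 'Pizza', 'restaurant'))
# goodPizza(cuisineType, Pizza)@restaurant
```

The `@model` suffix attaches the statement to a specific Flora-2 module, which is how F-OWL keeps triples from different ontologies in separate models.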
4 F-OWL in TAGA
Travel Agent Game in Agentcities (TAGA) [Zou 2003] is a travel market game developed on the foundation of FIPA technology and the Agentcities infrastructure. One of its goals is to explore and demonstrate how agent and semantic web technology can support one another and work together.
TAGA extends and enhances the Trading Agent Competition scenario to work in Agentcities, an open multiagent systems environment of FIPA compliant systems. TAGA makes several contributions: auction services are added to enrich the Agentcities environment, the use of the semantic web languages RDF and OWL improve the interoperability among agents, and the OWL-S ontology is employed to support service registration, discovery and invocation. The FIPA and Agentcities standards for agent communication, infrastructure and services provide an important foundation in building this distributed and open market framework. TAGA is intended as a platform for research in multiagent systems, the semantic web and/or automated trading in dynamic markets as well as a self-contained application for teaching and experimentation with these technologies. It is running as a continuous open game at
http://taga.umbc.edu/ and source code is available on Sourceforge for research and teaching purposes.
The agents in TAGA use OWL in various ways in communication using the FIPA agent content language (ACL) and also use OWL-S as the service description language in FIPA's directory facilitators. Many of the agents in the TAGA system use F-OWL directly to represent and reason about content presented in OWL. On receiving an ACL message with content encoded in OWL, a TAGA agent parses the content into triples, which are then loaded into the F-OWL engine for processing.
When an agent receives an incoming ACL message, it computes the meaning of the message from the ACL semantics, the protocols in effect, the content language and the conversational context. The agent's subsequent behavior, both internal (e.g., updating its knowledge base) and external (e.g., generating a response), depends on the correct interpretation of the message's meaning. Thus, a sound and, if possible, complete understanding of the semantics of the key communication components (i.e., ACL, protocol, ontologies, content language, context) is extremely important. In TAGA, the service providers are independent and autonomous entities, which makes it difficult to enforce a design decision that all use exactly the same ontology or protocol. For example, the Delta Airline service agent may have its own view of the travel business and use class and property terms that extend an ontology used in the industry. This situation parallels that for the semantic web as a whole - some amount of diversity is inevitable and must be planned for lest our systems become impossibly brittle.
The components of a message's meaning (communicative act, protocol, content language, ontologies and context) all play a part in its interpretation. For example, when an agent receives a query message that uses the query protocol, the agent searches its knowledge base for matching answers and returns an appropriate inform message. TAGA uses multiple models to reflect the multiple namespaces and ontologies used in the system. The agent treats each ontology as an independent model in the F-OWL engine.
F-OWL has many usages in TAGA, including the following.
- **As knowledge base.** Upon receiving an ACL message with content encoded in OWL, agents in TAGA parse the content into triples and feed them into their F-OWL engine. The information can be easily retrieved by submitting queries in various query languages.
- **As reasoning engine.** The agent can answer more questions with the help of the F-OWL engine; for example, a restaurant agent can answer the question “what is the average price of a starter” after it understands that “starter” is sameAs “appetizer”.
- **As a service matchmaker.** FIPA platforms provide a directory facilitator service which matches service requests against descriptions of registered services. We have extended this model by using OWL-S as a service description language. F-OWL manages the service profiles and tries to find the best match based on description in the service request.
- **As an agent interaction coordinator.** The interaction protocol can be encoded into an ontology file using the OWL language. F-OWL will advise the agents what to respond based on the received messages and context.
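The “starter”/“appetizer” example above can be sketched as follows; the data and the helper names are illustrative and only mimic the owl:sameAs resolution that F-OWL performs internally:

```python
# Assumed toy data: one owl:sameAs assertion and the known menu prices.
same_as = {"starter": "appetizer"}       # owl:sameAs aliases
prices = {"appetizer": [4.0, 6.0, 8.0]}  # prices of individual appetizers

def average_price(term):
    """Answer "what is the average price of a <term>" by first resolving
    the term through the sameAs assertions, then averaging the prices."""
    canonical = same_as.get(term, term)  # "starter" -> "appetizer"
    values = prices[canonical]
    return sum(values) / len(values)
```

Without the sameAs resolution step, a query for "starter" would fail even though the knowledge about "appetizer" is present.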
5 Discussion
The preceding sections described the design and implementation of F-OWL, an inference engine for the OWL language. F-OWL uses a frame-based system to reason with OWL ontologies: it supports consistency checking of the knowledge base, extracts hidden knowledge via resolution and supports further complex reasoning by importing rules. Based on our experience in using F-OWL in several projects, we found it to be a fully functional inference engine that was relatively easy to use and able to integrate with multiple query languages and rule languages.
There has been much work on OWL inference engines, both from the semantic web research community and from the description logic community. Table 1 compares F-OWL with some of them:
<table>
<thead>
<tr>
<th></th>
<th>F-OWL</th>
<th>Racer</th>
<th>FaCT</th>
<th>Pellet</th>
<th>Hoolet</th>
<th>Surnia</th>
<th>Triple</th>
</tr>
</thead>
<tbody>
<tr>
<td>Logic</td>
<td>Horn, Frame, Higher Order</td>
<td>Description Logic</td>
<td>DL</td>
<td>DL</td>
<td>Full FOL</td>
<td>Horn Logic</td>
<td></td>
</tr>
<tr>
<td>Support</td>
<td>OWL-Full</td>
<td>OWL-DL</td>
<td>OWL-DL</td>
<td>OWL-DL</td>
<td>OWL-Full</td>
<td>RDF</td>
<td></td>
</tr>
<tr>
<td>Based on</td>
<td>XSB/Flora</td>
<td>Lisp</td>
<td>Lisp</td>
<td>Java</td>
<td>Vampire</td>
<td>Otter</td>
<td>XSB</td>
</tr>
<tr>
<td>XML Datatype</td>
<td>Yes</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Decidable</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>No</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>Complete consistency checker</td>
<td>No</td>
<td>Yes (OWL-Lite)</td>
<td>Yes</td>
<td>Yes (OWL-Lite)</td>
<td>No</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Interface</td>
<td>Java, GUI, Command Line</td>
<td>DIG, Java, GUI</td>
<td>DIG, Command Line</td>
<td>DIG, Java</td>
<td>Java</td>
<td>Python</td>
<td>Java</td>
</tr>
<tr>
<td>Query</td>
<td>Frame style, RDQL</td>
<td>Racer query language</td>
<td>RDQL</td>
<td>Horn logic style</td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>Known Limitation</td>
<td>Poor scaling</td>
<td>No Abox support</td>
<td>Poor scaling</td>
<td>Only supports RDF</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
The first thing to notice in Table 1 is that the description logic based systems can only support reasoning over OWL-Lite and OWL-DL statements but not OWL-Full. OWL-Full is a full extension of RDF, which requires support for terminological cycles. For example, a class in OWL-Full can also be an individual or a property. Such cyclic terminological definitions can be recognized and understood in a horn logic or frame logic system.
Table 1 also shows that the three DL-based OWL inference engines, which all use tableau-based algorithms [Baader 2000], are decidable and support complete consistency checking (at least for OWL-Lite). However, [Balaban 1993] argues that DL forms only a subset of F-Logic. The three kinds of formulae in description logic can be transformed into first-class objects and n-ary relationships. F-Logic is thus able to provide a full account of DL without losing any semantics or descriptive nature. We understand that our current F-OWL approach is neither decidable nor complete. However, a complete F-Logic based OWL-DL reasoner is feasible.
The table also shows that the F-OWL system does not scale well when dealing with large datasets, because of the incompleteness of the reasoner. In fact, none of the OWL inference engines listed here scales well when dealing with the OWL test case wine ontology³, which defines thousands of classes and properties and a relatively modest number of individuals. Further research is needed to improve performance and scalability.
Compared with other OWL inference engines, F-OWL has several unique features: tabling, support for multiple logics, and a pragmatic orientation.
**Tabling.** XSB's tabling mechanism gives F-OWL the benefits of a forward chaining system in a backward chaining environment. The triples in a model are computed only when the system needs to know whether or not they are in the model. Once it is established that a triple is in the current model, it is added to the appropriate table, obviating the need to prove that it is in the model again. This mechanism can have a significant impact on the system's performance. While the first few queries may take a long time, subsequent queries tend to be very fast. This is an interesting compromise between a typical forward-only reasoning system and backward-only reasoning systems.
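A minimal sketch of this tabling behaviour, using Python memoization in place of XSB's tables and an illustrative subclass-transitivity rule set (the rule and data are examples, not F-OWL's actual rule base):

```python
from functools import lru_cache

# Assumed toy facts: direct rdfs:subClassOf edges.
EDGES = {("cat", "mammal"), ("mammal", "animal")}

@lru_cache(maxsize=None)  # plays the role of XSB's table
def subclass_of(a, b):
    """Backward-chaining proof that a is a (transitive) subclass of b.
    Once a triple's membership is established, the cached result is
    reused, so repeated queries do not re-derive it."""
    if (a, b) in EDGES:
        return True
    # Try each intermediate class reachable in one step from a.
    return any(subclass_of(mid, b) for (x, mid) in EDGES if x == a)
```

As with XSB's tabling, the first query pays the derivation cost and subsequent identical queries are answered from the table.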
**Multiple logics.** F-OWL supports Horn logic, frame logic and a kind of higher-order logic; all inherited from the underlying XSB and Flora substrates. Working together, these logic frameworks improve F-OWL's performance and capabilities. For example, the F-logic supports non-monotonic (default) reasoning. Another example is higher-order logic. The semantics of higher-order logics, in general, are difficult and in many cases not suitable for practical applications. XSB's Hilog, however, is a simple syntactic extension of first-order logic in which variables can appear in the position of a predicate. In many cases, this simplifies the expression of the statements, rules and constraints, improving the writability and readability of F-OWL and associated programs.
---
3 The wine ontology is used as a running example in the W3C's OWL Web Ontology Language Guide and is available at http://www.w3.org/TR/owl-guide/wine.owl.
**Pragmatic approach.** The aim of the F-OWL system is to be a practical OWL reasoner, not necessarily a complete one. The F-OWL system therefore provides various interfaces to access the engine and supports multiple query and rule languages.
In a stand-alone system inconsistencies are dangerous but can be controlled to a certain degree. Controlling inconsistencies in the Semantic Web is a lot more difficult. During communication, ontology definitions originating from other agents, which are unknown beforehand, may be asserted. Therefore special mechanisms are needed to deal with inconsistent and contradictory information in the Semantic Web. There are two steps: detecting the inconsistency and resolving it.
Detection of an inconsistency is based on the declaration of inconsistency in the inference engine. Restrictions, which impose the possible values and relations that ontology elements can have, lead to inconsistencies. For example, $\text{owl:equivalentClass}$ imposes a restriction that the subject is the same class as the object, while $\text{owl:disjointWith}$ imposes a restriction that the subject is different from the object. The triples $(a\ \text{owl:equivalentClass}\ b)$ and $(a\ \text{owl:disjointWith}\ b)$ do not directly lead to an inconsistency until the detection rule is applied: $(A\ \text{owl:equivalentClass}\ B\ \&\ A\ \text{owl:disjointWith}\ B) \rightarrow \text{inconsistency}$.
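The detection rule above can be sketched directly; the triple-store layout and the function name are illustrative:

```python
def find_inconsistencies(triples):
    """Apply the rule (A owl:equivalentClass B & A owl:disjointWith B)
    -> inconsistency over a list of (subject, predicate, object) triples.
    Returns the set of (A, B) pairs that are declared both equivalent
    and disjoint."""
    equiv = {(s, o) for (s, p, o) in triples if p == "owl:equivalentClass"}
    disjoint = {(s, o) for (s, p, o) in triples if p == "owl:disjointWith"}
    return equiv & disjoint

# The pair (a, b) from the text triggers the rule; (c, d) does not.
store = [("a", "owl:equivalentClass", "b"),
         ("a", "owl:disjointWith", "b"),
         ("c", "owl:equivalentClass", "d")]
```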
When inconsistencies are detected, namespaces can help trace their origin. Suppose John posted “all dogs are human” at his web site, while “all dogs are animals” appears in daml.org’s ontology library; it is clear that the second is more trustworthy. Every web site is identified and treated unambiguously in the semantic web. The inference engine contacts a trust system to evaluate the credibility of the namespaces. [Klyne 2002] and [Golbeck 2003] survey much work and many ideas about how to maintain a trust system in the semantic web. Once the trust evaluation result is available, the agent can take three different actions: (a) accept the one suggested by the inference engine; (b) reject both as neither is trustworthy; (c) ask the human user to select.
6 Conclusion
This paper has described the design and implementation of F-OWL, an inference engine for the OWL language. F-OWL uses a frame-based system to reason with OWL ontologies. F-OWL supports consistency checking, extracts hidden knowledge via resolution and supports further complex reasoning by importing rules. In the TAGA use case, we found F-OWL to be a fully functional inference engine that is easy to use, with support for multiple query languages and rule languages.
In the open web environment, it is generally assumed that the data are not complete and not all facts are known. We will research how this fact affects the implementation of the inference engine. In the semantic web an inference engine may not necessarily serve to generate proofs but should be able to check them. We will work on using F-OWL to resolve trust and proof in the semantic web in the future.
References
[Zou, 2003] Youyong Zou, Tim Finin, Li Ding, Harry Chen, and Rong Pan, TAGA: Trading Agent Competition in Agentcities, Workshop on Trading Agent Design and Analysis, held in conjunction with the Eighteenth International Joint Conference on Artificial Intelligence, Monday, 11 August, 2003, Acapulco MX.
Incremental development for AXE 10
Lars Taxen* and Even-André Karlsson**
Abstract
To become more reactive and flexible to market needs, i.e. shorter lead-times and more flexibility in handling late and changing requirements, Ericsson’s AXE 10 development processes have been adapted to support incremental development. The basic principle is to develop each of the customer features in one executable and testable increment. In order to plan, coordinate and control projects executed in this way, an Incremental Development Method Package, IDMP, has been developed by UAB*** in close cooperation with support groups in development projects. In this paper, we define what we mean by incremental development and describe the main aspects of the package. We also report experiences from using the package in the CMS 30 phase 7 and HELIOS projects. A prototype tool which supports the package is described, together with experiences from using the tool in the pilot project Combined Gateway. Even though the IDMP has been developed for AXE 10, it has proven useful in non-AXE 10 projects, as its principles are rather independent of the development process.
1. Introduction
The telecommunication market is changing very rapidly mainly due to two forces: The deregulation with the entering of many new operators leading to more competition, and the proliferation of new technology, e.g. mobile communications, intelligent networks etc. This has put a demand on the suppliers to be more reactive and flexible to the market needs, i.e. shorter lead-times and more flexibility in handling late and changing requirements. This is a very challenging change considering the size, complexity and in service performance (uptime) requirements of telecommunication systems.
Ericsson has met this challenge by using some type of incremental development in many large projects. Experiences from these projects have been consolidated into a method package for incremental development, which is an extension of Ericsson’s existing development processes.
2. What are increments?
An increment is a well-defined, testable and rather independent piece of functionality in the final system. An increment is preferably a feature offered to the customer. A group of increments is packaged into a build, which forms a new executable system. The sequence of builds provides a system with growing functionality. Usually the projects are organized in 3-6 internal builds with 1-2 month intervals before the system is finally delivered to the customer and put into operation, but we also have examples of intermediate deliveries to the customer.
*Ericsson Utvecklings AB, S-125 25 Älvsjö, email: Lars.Taxen@uab.ericsson.se
** Q-Labs, S-22 370 Lund, email: Even-Andre.Karlsson@q-labs.se
*** UAB has been responsible for the standard AXE 10 development Medax
The figure below illustrates the principle, in this case how 10 customer features are implemented to fulfil the project goals. The 10 features can be grouped in e.g. 4 builds, with features 1, 2 and 3 in the first, 4 and 6 in the second, 5 and 7 in the third, and 8, 9 and 10 in the fourth and last.
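The grouping in this example can be written down directly; the structure below is just a sketch of the build/increment relationship, not part of the IDMP itself:

```python
# The 10 customer features of the example, grouped into 4 builds.
# Each inner list is one build; the sequence of builds yields a
# system with gradually growing functionality.
builds = [
    [1, 2, 3],    # build 1
    [4, 6],       # build 2
    [5, 7],       # build 3
    [8, 9, 10],   # build 4 (final delivery)
]

# Features delivered after the last build: all 10, in build order.
delivered = [feature for build in builds for feature in build]
```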
3. **Customer benefits**
Incremental development provides:
- Customer focus by emphasizing the customer features. The customer can also provide feedback on the intermediate builds.
- Requirements flexibility by allowing changes to features in later builds and also keeping the feature focus during each build.
- Reduced risks by having a system with gradually growing functionality.
- Early feedback through each build, both to the customer as well as the designers. In particular the continuous system test activity with feedback to design has proven beneficial for the final performance and quality of the system.
- Shorter lead-time by the overlap and concurrent work between different builds.
4. **Evolution of the Incremental Development Method Package**
Experiences from ERA, EED, ETO, EPA, MET, ETM, EIN and others were consolidated into a requirement specification for an incremental development method package in the beginning of 1996. During 1996, the package was developed at UAB in close cooperation with the method group from CMS 30 phase 7 at ERA and the TTM 15 initiative associated with the HELIOS project at ETX. The R1 version of the package was released late 1997.
The package was applied in CMS 30 phase 7, switching subsystem. The principles of the package were used in HELIOS. The experiences from these projects are described in detail in chapter 6. One major result was that the package needed tool support. Thus, a prototype was developed during 1997 in close cooperation with several pilot projects. This is further described in chapter 7. During this development, the construction planning in the package was modified. The rest of the package will be updated to release R2 during Q2 1998.
5. **Main aspects of IDMP**
In this section we go through the main aspects of the IDMP:
- Construction planning
- PROPS application
- Configuration management
- Adaptation
Finally, we summarize the content of the IDMP.
**Construction Planning:** A carefully prepared construction plan is essential for a successful incremental development project. The construction planning process determines the possible increments, allocates them to builds and schedules the builds as well as the individual increments in time. The process is designed to satisfy:
- Gradual detailing of the planning through the earlier phases of the project.
- Distribution of responsibility, both horizontally from total project to teams and vertically between subsystems.
- Adaptation of the model and processes to the needs of specific projects
The construction planning is based on a conceptual model, which describes the important concepts for incremental development and how they relate to each other. This model has gained acceptance as the standard model for incremental development of applications within a large design community at BN and BR. The conceptual model is to a large extent independent of the development model.
The construction planning is roughly split into the following phases:
- **Functional analysis**, where the number, scope and order of the increments are determined. Also, a preliminary number of ADs and the dates for these are set.
- **Construction planning - Logical**, where the increments are allocated to AD’s.
- **Construction planning - Real**, where the milestone dates are determined based on effort and resource estimation.
- **AD planning**, where the contents of the test dumps are determined.
The conceptual model is used to indicate what information is produced in each phase. As an example, during Construction planning - Real, when the impacts from each increment on design items (products and documents) are determined, the information concerned is indicated as in the figure below:
[Figure: fragment of the conceptual model, relating project, AD and increment milestones to the responsible parties at the individual, team, LDC, sub-project and project levels.]
We also use the conceptual model to indicate what information is documented in different documents. For example, in the Increment Impact Matrix, we document what design items are impacted by each increment. An example is found below:
<table>
<thead>
<tr>
<th>AD</th>
<th>AD 1</th>
<th>AD 2</th>
</tr>
</thead>
<tbody>
<tr>
<td>Increment Name</td>
<td>Flexible Charging</td>
<td>Software Supervision</td>
</tr>
<tr>
<td>Increment Responsible</td>
<td>LMF</td>
<td>LMF</td>
</tr>
<tr>
<td>Tech. feature spec / type</td>
<td>IP 15941</td>
<td>IP 15941</td>
</tr>
<tr>
<td>Tech. feature spec / nbr</td>
<td>101</td>
<td>FCP 101 006</td>
</tr>
<tr>
<td>Tech. feature spec / rev</td>
<td>R4</td>
<td>R4</td>
</tr>
<tr>
<td>Total effort per incr.</td>
<td>364</td>
<td>520</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Subsystem</th>
<th>Resp.</th>
<th>Pred / Doc Type</th>
<th>Pred / Doc Number</th>
<th>Pred / Doc Title</th>
<th>Base rev.</th>
<th>Total effort per item</th>
</tr>
</thead>
<tbody>
<tr>
<td>ANT210.</td>
<td>LMF MKV.</td>
<td>CNT</td>
<td>2352637</td>
<td>HIRON</td>
<td>R2</td>
<td>150</td>
</tr>
<tr>
<td>ANT210.</td>
<td>LMF MKV.</td>
<td>CAA</td>
<td>179656</td>
<td>HSD</td>
<td>RSA</td>
<td>500</td>
</tr>
<tr>
<td>ANT210.</td>
<td>MKV CDA</td>
<td>CAAZ</td>
<td>1075279</td>
<td>HOTDIP</td>
<td>R5A</td>
<td>50</td>
</tr>
</tbody>
</table>
The Increment Impact Matrix can be looked upon as a projection or view of the following parts of the conceptual model:
[Figure: the parts of the conceptual model covered by the Increment Impact Matrix - customer requirements (set of requirements, technical feature spec), the functional anatomy and design base, AD packages and system issues, increment tasks, milestones and responsible parties, and the impacted products and documents.]
**PROPS Application:** For incremental development projects there are milestones on two levels: the project level and the increment level. The project level milestones must be defined for each project, and they should be connected to the information necessary to take the business decisions. If the development method for each increment is standardized, the definitions of the increment milestones can be the same for all increments. If the feature increments are short, it is recommended to leave out some of the milestone definitions to avoid having too many milestones. Also, feature increments which are developed synchronously towards the same build can use the same milestones.
**Configuration Management Rules:** The purpose of these rules is to make it possible to plan the increments, in the sense that we must know which revision of documents and products belongs to which increment. The basic principle is to step the revision letter of a design document for each increment impacting the document.
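A minimal sketch of this revision-stepping rule, assuming a simplified `R<number><suffix>` revision format (the real Ericsson revision scheme is richer than this):

```python
import re

def step_revision(rev, n_impacting_increments=1):
    """Step a design document's revision once per impacting increment.
    Assumes revisions of the form R<number><optional letters>, e.g.
    "R4" or "R5A"; only the numeric part is stepped here."""
    m = re.fullmatch(r"R(\d+)([A-Z]*)", rev)
    if m is None:
        raise ValueError(f"unrecognized revision format: {rev!r}")
    number, suffix = int(m.group(1)), m.group(2)
    return f"R{number + n_impacting_increments}{suffix}"
```

With this rule, a document at R4 impacted by one increment in the next build would move to R5, which makes the document revision traceable to the increment that caused it.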
**Standard vs Adaptation:** The package is a standard package, which is meant to be adapted to local needs at LDCs / projects. The adaptations needed are usually minor. This principle of adapting a standard product¹ has some very attractive consequences:
- The project will get an adapted method (and tool support) very fast.
- A specific adaption does not have to be coordinated with any other adaption.
- The project will feel that they have contributed to the method (and tool) properties; thus acceptance of the support will be easier.
- It makes it possible to balance between a collective asset (the standard product) and individual assets (the adaptations).
- The maintenance of the combined standard - adaptions product will be less compared to the case where each LDC / project maintains its own support product.
The experiences gained from adapting the standard product in many projects can be incorporated in later revisions of the standard product. Thus, the combined standard - adapted product will be steadily improved.
---
1. This applies to the tool support as well, see chapter 7.
**ID Method Package Product Description:** The incremental development method package contains the following elements:
- Incremental development for AXE 10 - General Description
- Incremental development for AXE 10 - Wall-chart
- Definition and planning of increments, work instruction
- Preparing the Increment Specification Documents, work instruction
- Configuration management in Incremental Development, work instruction
- Incremental development with Medax and current tools, work instruction
- Increment dependency matrix, document instruction
- AD (Build)-plan, document instruction
- Functional Anatomy Description, document instruction
- Increment Task Specification, document instruction
- OH-slides and Teacher’s Guide
In addition the following documents are provided:
- Incremental development—Guidelines and experiences
- Incremental development and PROPS (Ericsson’s project management process)
This package was developed in cooperation with several pilot projects, and was released for general use late 1996. It is based on experiences from about 10 incremental development projects.
The method package has been supported by a series of seminars during its introduction. The largest one attracted 90 people: a three-day seminar with the first day focusing on the methods, the second day covering project experiences and the last day with workgroups looking into special aspects of incremental development, e.g. daily builds, project planning and tracking, configuration management, tool support, etc.
6. Experiences
6.1. The HELIOS project
HELIOS was a large development project (475 000 man-hours) that ran from March 1996 to September 1997. It comprised nine subprojects distributed over seven countries in Europe. It had about 180 feature increments (called assignments) packaged into three builds.
HELIOS decided to go for incremental development early in the project, but there was a lot of discussions about how to use it. Some of the results were:
- The early definition of the Logical Construction Plan was very useful. It helped a lot in planning, and gave a lot of structure to the project.
- Placing difficult features in early builds was good.
- Incremental development cut the lead time, but increased cost due to more administration when blocks were opened in more than one build. Also, the review effort increased.
- It was very difficult to follow up the project in the traditional way. They tried to treat each build as a small project with all the milestones, which led to a milestone passage about every second week. In addition, a lot of changes had to be handled during the project. The result was that it became impossible to follow up the project in the traditional way. To cope with this, HELIOS decided to follow up at the main project level only those documents needed by others; the rest of the follow-up was left to the responsibility of the subprojects. They also tried to synchronize the main deliveries from the subprojects.
6.2. The CMS30 phase 7 project
This was a smaller project of about 60 000 man-hours, distributed over two design centres in Sweden and Finland. The purpose was to deliver 25 features to the Japanese mobile operators. It ran for about 6 months. Two features were removed and five new features were added during the project. Some of their results were:
- Changes were easy to plan. Adding and removing features was quite easy.
- Much better control of costs: the cost of each feature was known.
- Neither MS3 nor MS4 could be used. ID needs a totally new approach to milestones; the conventional model of milestones must be totally forgotten!
- Passing tollgates was a major headache. It was very hard to pass a tollgate, especially TG2 (where the project scope shall be known in all aspects), if you can only foresee the future 3 months ahead due to the contextual insecurity.
- PRIM needs to be updated for each increment (otherwise writing a TR would be impossible). This caused a lot of extra effort.
- SIGMA handling is tricky.
- No tool support for configuration management caused a lot of manual work.
- Lots of inspections!
- There cannot be just one TG2 in turbulent projects like this one.
- TG3 placement is even harder than TG2.
- Using one IP as an increment specification helped project planning, but caused too much inspection overhead.
- Confessing that the project you start with is not the same project you finish with is a step forward. ID can be seen as a confession to this.
7. Tool support for managing incremental development
The purpose of the tool support is to manage all information that is needed to plan, coordinate and control incremental development projects. This will give us the possibility to work in a different manner where information rather than documents are managed. For example, it would enable the generation of reports from the information when needed, rather than maintaining and updating documents.
During the second half of 1997, pilot projects at EPK and ETM were started to get feedback from LDC’s on the IDMP and to collect requirements for a tool supporting the construction planning of increments. In parallel, selected requirements were implemented and tested by building a prototype called CPlprot (Construction Planning prototype). Prototyping promised to be the best approach to get useful, reliable and fast feedback from the pilots and other projects. After the first version of the prototype was available (around October 1997), it was presented to the pilot projects at EPK and ETM. In addition, several demos have been given at different LDC’s and at the 2nd Advanced Incremental Development Seminar in Stockholm. Also, AMC phase 6 has decided to be a pilot, and JDI has also shown interest in the tool.
The prototype was developed in cooperation with the company Technia AB and is built upon a commercial object-oriented PDM (Product Data Management) system called ‘Matrix’ (developed by Matrix-One Inc.). The Matrix platform was chosen because it is very flexible, has a powerful script language, and has both a graphical and a web interface. Furthermore, it has a client/server architecture and so supports the distributed projects at Ericsson. It is also compliant with the CORBA standard.
The conceptual model is directly implemented in the tool, which makes it possible to browse the information exactly as it is shown in the conceptual model. Since each project has its own characteristics and ways of working, the conceptual model and tool support have to be adapted to each project. These adaptations are rather small and can mostly be done quite fast. Implementing the adapted model in the prototype is easy, because the platform of the prototype is object-oriented.
Some of the features implemented in the prototype are:
**Basic**
- Implementation of the IDMP model
- Manipulating the IDMP model
- Manipulating instances of the IDMP model
- Entities of IDMP model graphically represented
- Version mgmnt according to IDMP for docs
- Browse all information through a web interface
- Define and change status of products and docs
**Product Definition**
- import design base from PRIM
- product structure definition
**Dependencies**
- Trace customer features, requirements, assignments, IS’s, IP’s, FF’s
- View Functional Anatomy information
**Web based Cost Table Editor**
- Enter cost of implementation data per design item
**MS Project Integration**
- Effort estimates to MS Project, time and resources back to CPLTool
**Increment Impact Matrix**
- Produce different views of the Increment Impact Matrix as a HTML file
**Reporting**
- List of impacted FS’s, blocks and docs per feature increment
- Produce 1317-list for PL-GAS
- Produce AD plan (Dates, test objects and SW units)
**Milestone Def and Checking**
- Definition of MS’s on document type- and doc. level
Currently we see the following roles which will use and benefit from the tool:
- Project Manager
- Project Administrator
- Construction Planner
- Configuration Manager
- IPFF writer
- Developer/Team
- Test Configuration Management
- Methods&Tools Responsible
The names of the roles can differ slightly between projects, but the tasks are the same.
**Combined Gateway - Pilot project at EPK:** Currently, the prototype is in use in the Combined Gateway (CGW) project, which is an assisted project to AMC phase 5. TG1 was in May 1997, design started in January 1998 and TG2 was at the end of April. TG4 for CGW is planned for the end of January 1999. The size of the execution project (start of design to TG4) is 45-50 kmh and there are 8 design teams and 1 test team. The teams are spread over four different locations.
**Planned Usage Scenario:**
1. All increments and AD-packages are entered. This was updated a few times due to changes in the scope of the project.
2. The design base is imported from PRIM. Minor changes are updated manually, if there are any major changes the design base will be imported from PRIM again.
3. Cost of implementation data from the IPs are entered manually by the project administrator, in this case a consultant from Q-labs. This information is updated by the project if needed.
4. Milestones for all increments are entered and milestone criteria defined for each milestone.
5. Now the teams can export all necessary information regarding their increment to MS Project, where they can make a time plan for the increment. The updated information is stored in the prototype.
6. As soon as the teams have stored their plans, the Project Manager (and everyone else) can check the current status of the project and, if necessary, take actions to make sure that the project follows the overall time plan.
7. The prototype is used for producing milestone checking reports. The report is a list of all documents that should be checked for this milestone.
The IPs were entered manually by Q-labs since the prototype was not available during the feasibility study. If it had been, and all cost information data had been stored in the prototype in an early stage, it would have been possible to get a very early overview of the project.
The prototype also helped in tracking all conflicts between increments, when the same design item is worked on in more than one increment. The main adaptation was to create the possibility for the teams to export their increment to MS Project. This was done since the teams are responsible for making their own detailed planning, and it was natural to store this information in the prototype so it would be available to everyone in the project.
Before the project agreed to be a pilot, there were several meetings with Q-Labs to discuss the usage scenario and the needed adaptations. The prototype was installed on an NT server at the end of January with a few Windows-based clients. SUN-based clients were installed later. During the project, one person from Q-Labs regularly visits EPK to give support and to make sure that the necessary adaptations are made. He also assisted in entering the cost of implementation data from the IPs and importing the design base from PRIM.
**Tool samples:** the following figures show a number of samples from the tool. The first one shows the Logical Construction Plan (which features are planned in each AD) at EPK.

Next, an example of a functional anatomy, which shows the dependencies between the functions in a system (in this case APZ ODEN):

Finally, an example which shows the traceability from customer through features down to single products and documents:

8. Summary
Incremental development is now becoming a standard way to run projects within Ericsson, and we have experience from projects ranging in size from small to very large (e.g. 2 million man-hours spread over 20 sites). Since incremental development is a very flexible concept, we still see a lot of variation in how projects choose increments, but the ID method package gives a common reference. We have collected the first set of serious adaptations of the package, which has provided valuable feedback to the second version of the package planned for 1998. The tool support for managing incremental development projects is gradually maturing, and we now have several pilots running. The general experience with incremental development at Ericsson is very positive, and most projects have experienced several of the benefits mentioned in section 3.
Resource Management for Parallel Adaptive Components
Luc Courtrai, Frédéric Guidec, Nicolas Le Sommer, Yves Mahéo
To cite this version:
Luc Courtrai, Frédéric Guidec, Nicolas Le Sommer, Yves Mahéo. Resource Management for Parallel Adaptive Components. Workshop on Java for Parallel and Distributed Computing (IPDPS’03), Apr 2003, Nice, France. pp.134-141. hal-00342140
HAL Id: hal-00342140
https://hal.archives-ouvertes.fr/hal-00342140
Submitted on 26 Nov 2008
VALORIA, Université de Bretagne-Sud, France
{luc.courtrai|frederic.guidec|nicolas.le-sommer|yves.maheo}@univ-ubs.fr
Abstract
This paper reports the development of the Concerto platform, which is dedicated to supporting the deployment of parallel adaptive components on clusters of workstations. The current work aims at proposing a basic model of a parallel component, together with mechanisms and tools for managing the deployment of such a component. Another objective of this work is to define and implement a scheme that makes it possible for components to perceive their runtime environment. This environment is modelled as a set of resources. Any component can discover and monitor resources, using the services offered by the platform.
1 Introduction
Clusters of computers are now found in an increasing number of laboratories and companies, where they take on various forms. They may for instance play the role of low-cost supercomputers. A group of workstations connected via a high-performance network may also be used as a cluster, even though these workstations are still exploited for common purposes. Such clusters can be interesting resources for Grid Computing. In this perspective, the objective may be to deploy parallel applications on computing infrastructures that contain several clusters. However, proposing a software solution that allows the development of applications that benefit from these architectures is still a challenge.
Several approaches can be envisaged for designing and deploying an application that exploits one or several clusters. Among them, the component approach is worth studying. It allows complex applications to be designed by assembling available components. We consider in this paper the case where each component is designed as a “parallel code” that is to be deployed on a cluster.
Even if it is possible to design a component in an ad hoc manner so that it best exploits a specific architecture, it is preferable to favour the component’s portability. Each component should ideally be deployable on a large range of clusters. To achieve this goal, one may think of extending the notion of virtual machine to the entire cluster in order to hide the specificities of the underlying hardware platforms. Another approach, the one we explore, consists in enforcing the adaptability of the component so that it can allow for the hardware and software specificities of the platforms on which it is deployed.
Component adaptation has several facets. Auto-adaptation occurs when the component itself is responsible for deciding to adapt its behaviour to external conditions. Such a component can be referred to as an adaptive component. In other cases, it is the environment, especially the hosting platform, that enforces an adaptation strategy by imposing that a component modify its behaviour according to explicit directives. In this case, the component must have been designed so as to be able to receive such directives from the platform. Such a component can be referred to as an adaptable component. The difference between these two forms of adaptation is not discussed any further in this paper. Instead, we focus on the basic mechanisms that are necessary in both cases to provide the information that makes the decision process possible and leads to adaptation.
If the adaptive component has the possibility to obtain information on the target architecture, it will be able to choose an appropriate configuration when it is deployed. The state of the platform can be expressed in the form of qualitative and quantitative information covering aspects such as the number of nodes in the cluster, the computation power of these nodes, the bandwidth offered by the communication links, the availability of a given peripheral device, or that of a specific software library. However, an initial configuration of the component may not always be sufficient. Whenever a component cannot be deployed on a dedicated platform, it may have to share resources with other components, and even with other applications. In such circumstances new conditions may arise at runtime, requiring that components change their behaviour accordingly. A component should thus have means to dynamically gather information pertaining to its environment, so that it can adapt itself, for example by redistributing data, by balancing its load differently, or by choosing a new algorithm. The kind of information that can help take such decisions is, for instance, the CPU load observed on a given node, or the bandwidth available on a link.
This paper describes the Concerto software platform. This platform supports the deployment of parallel components on a cluster, and it provides these components with means to adapt themselves. The platform is dedicated to the deployment and the support of parallel components written in Java. Information that feeds adaptation decisions comes from the observation of resources. In this particular context the term “resource” has a broad meaning. We thus envisage to deal with:
- “system” resources such as the memory or the CPU of each cluster node, or a scientific computing library;
- “conceptual” resources related to the application itself, such as the sockets or threads used by a component.
Our aim is to develop mechanisms that make it possible to collect information related to a non-limited set of such resources. The infrastructure we propose is extensible, as it is designed in such a way that new types of resources may easily be allowed for when needed. The Concerto platform can thus evolve so as to take into account new hardware and software specificities of clusters.
The remainder of this paper is organized as follows. Section 2 introduces the basic model of parallel components we propose, and it describes the implementation of the deployment mechanisms. The modelling of resources within the platform and the tools that permit the observation of their state are presented in Section 3. Section 4 concludes the paper.
2 Parallel Components
Component-based application development is already possible through the use of component technologies proposed by industry, such as Microsoft COM [10] or Sun’s Enterprise Java Beans. OMG (Object Management Group) also develops its own solution with the CORBA Component Model [11]. However, these technologies have not been designed to support parallel components, that is, components that involve parallel activities. Some preliminary work has been carried out on models or platforms that target parallel components [1, 5]. In this work the main objective is to reuse large scientific code relying on data parallelism.
The Concerto platform is dedicated to adaptive parallel components. Although the concept of parallel component is central to our project, our aim is not to propose a new component model, but to provide an infrastructure that enforces the adaptability of components. In this perspective, we propose below a minimal definition of what we mean by a parallel component in Concerto.
A programmer who wishes to develop a parallel component for Concerto must design his component as a set of cooperating threads. He must also define the factory part of the component (name, interface, implementation). The platform offers facilities for managing non-functional aspects of the component.
Component interface
A parallel component hosted by the Concerto platform has three interfaces.
- **Factory interface**: No constraints are put on the type of the factory interface. The programmer may for example propose an interface based on Java RMI. The component is then an object implementing the Remote interface, whose methods will be called remotely by the clients of the component. The component may also be a server listening to a port of the machine, on which a client must open a socket. One may also design a distributed interface (that is to say, an interface associated with several objects that each implement a part of the interface) in order to be connected in parallel with another parallel component.
- **Life cycle interface**: Through this interface, the different steps of the life of the component can be controlled. To date, this mainly covers deploying the component on the cluster, and stopping this component. In the future it should be possible to distinguish between several phases within the deployment process, and to propose persistence services.
- **Resource interface**: The component exhibits a resource interface through which the user accesses information related to the resources used by the component. More precisely, the component itself is considered as a resource in the Concerto platform. Hence Concerto components are required to implement an observe() method that returns an observation report (see Section 3 for more details). By default, the observation report generated by a component aggregates the observation reports on all the resources it uses. If the component programmer finds it useful (for example for security reasons), he may restrict the amount of information disclosed to the component’s client by defining a particular type of observation report.
Internal structure of a component
When building a parallel component, the programmer develops a set of Java threads (actually a set of classes implementing the Runnable interface). These threads cooperate to perform the methods of the component’s factory interface.
Threads are gathered into placement (or distribution) entities called fragments. A fragment is a set of threads that belong to a single component, and that are bound to run within the same virtual machine on the same cluster node. These threads will thus be able to share a common object space. Communication and synchronization between the threads of a fragment are performed just like in any other multi-threaded Java program. On the other hand, threads that belong to different fragments must rely on external communication and synchronization mechanisms, such as sockets and RMI.
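The fragment model described above can be illustrated with a minimal, hypothetical sketch (this is not the Concerto API; the class `FragmentSketch` and its methods are invented): a fragment is represented simply as a group of Java threads started in the same JVM, sharing an object (here an `AtomicInteger`) through the fragment's common object space.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch, not the Concerto API: a "fragment" as a set of
// cooperating threads running in the same JVM and sharing an object space.
public class FragmentSketch {

    // Start `nThreads` cooperating threads that each perform `iters`
    // increments on a shared counter, then wait for them to finish.
    static int runFragment(int nThreads, int iters) throws InterruptedException {
        AtomicInteger shared = new AtomicInteger(); // stand-in for the shared object space
        List<Thread> fragment = new ArrayList<>();
        for (int i = 0; i < nThreads; i++) {
            fragment.add(new Thread(() -> {
                for (int j = 0; j < iters; j++) {
                    shared.incrementAndGet(); // plain intra-JVM sharing
                }
            }));
        }
        for (Thread t : fragment) t.start();
        for (Thread t : fragment) t.join();   // intra-fragment synchronization
        return shared.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runFragment(3, 1000)); // prints 3000
    }
}
```

Threads in different fragments would instead communicate through sockets or RMI, as noted above.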
Component deployment
In order to deploy a component on a cluster, one must provide a description file for this deployment. This file describes:
- the component’s structure (expressed in terms of the fragments and threads to be deployed);
- placement directives for the fragments (one can for example duplicate some fragments on all cluster nodes, or assign a fragment to a specific node);
- constraints imposed by the component in order for its deployment to be feasible (availability of a specific version of the JVM, of an RMI registry, etc.).
We have developed an XML dialect that permits the specification of such directives. The Concerto platform can parse this dialect so as to ensure the deployment of components on a cluster.
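The paper does not reproduce the dialect itself; purely as an illustration, a descriptor in such a dialect might look as follows (all element and attribute names here are invented, not the actual Concerto schema):

```xml
<!-- Hypothetical deployment descriptor; element names are invented. -->
<component name="MatrixSolver">
  <constraints>
    <jvm minVersion="1.3"/>
    <service name="rmiregistry"/>
  </constraints>
  <!-- One fragment duplicated on every cluster node -->
  <fragment name="worker" duplicate="all-nodes">
    <thread class="solver.WorkerThread"/>
  </fragment>
  <!-- One fragment pinned to a specific node -->
  <fragment name="master" node="node0">
    <thread class="solver.MasterThread"/>
  </fragment>
</component>
```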
3 Resource Modelling and Control
Motivation and main principles
The main objective of project Concerto is to provide software components with means to perceive their runtime environment so that they can adapt their behaviour to the state of this environment, and to its variations. We are developing in Java a software platform in which the runtime environment of a component is modelled using objects that reify the various resources offered.
As a general rule, we qualify as “resource” any hardware or software entity a software component may use during its execution. The resources considered to date in the Concerto platform include system resources (CPU, system memory, swap, graphical user interface, network interface, etc.) that characterize chiefly the underlying hardware platform, as well as “conceptual resources” (sockets, processes, threads, directories, files, RMI server, etc.) that rather pertain to the applicative environment considered.
Since a software component is liable to use all or some of the resources available in its environment, the Concerto platform must provide mechanisms that make it possible for the component:
- to check the availability of any resource (or kind of resource) in its environment;
- to discover the existence of a specific resource (or kind of resource);
- to ask for the status of a specific resource;
- to ask that the platform notifies the component when a given condition is reached regarding the status of a specific resource.
Since the Concerto platform is dedicated to the deployment of parallel software components on a cluster of machines, the dissemination of resources over the nodes of this cluster must be allowed for. The above-mentioned mechanisms must thus make the distribution of resources transparent for components. A thread belonging to a component must be able to gather information not only about the resources available on the node it runs on, but also about distant resources and resources distributed over the whole cluster.
Modelling of system and conceptual resources
Any kind of resource liable to be used by components deployed on the Concerto platform must be reified as Java objects. We have thus started the development of a class hierarchy in order to model these resources. This hierarchy is partially reproduced in Figure 1. It is meant to be extended as new resource types are allowed for in the platform.
Some of the classes shown in Figure 1 model resources that pertain to the hardware level. Classes CPU, Memory, and NetworkInterface belong to this category. The class ClusterNode is used to aggregate the three former classes, so that any cluster node can be modelled as a single resource object.
Other classes shown in Figure 1, such as classes Socket and Thread, are standard classes defined in the JDK (Java Development Kit). Their implementation was revised in project Concerto in such a way that the state of the conceptual resources they model can be observed at runtime.
Classes Fragment and Component were introduced in order to define functionalities that are specific to project Concerto. With these classes, a parallel component and a component fragment can be perceived as resources in the cluster they are deployed on. One can therefore benefit from the services implemented in Concerto to manage the components deployed on a cluster, as well as the fragments deployed on any cluster node.
Observation reports
Observation reports make it possible to gather information about the state of the many kinds of resources distributed in a cluster in a homogeneous way. A hierarchy of Java classes was developed in order to allow the generation, the collection, and the management of such reports (see Figure 2). Any Java object that models a resource in the Concerto platform implements method `observe()`, which returns a report about the current state of the resource considered. A report is modelled as a Java object that implements the `ObservationReport` interface. Of course the actual content of the report depends on the type of the resource considered. Hence, when method `observe()` is called on a `Thread` object, this object returns a report of type `ThreadReport`. The class `ThreadReport` provides pieces of information that characterize the state of the thread object considered (current priority level, amount of CPU and memory consumed since this thread was started, etc.). Likewise, calling method `observe()` on a `Memory` object returns a `MemoryReport`, which provides information about the current state of the system memory.
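The `observe()`/`ObservationReport` scheme can be sketched as follows. Only the names `ObservationReport`, `observe()`, `Memory` and `MemoryReport` come from the paper; the exact signatures and the report content are assumptions made for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the observation scheme; exact signatures are assumptions.
interface ObservationReport {
    Map<String, Object> values(); // attribute name -> observed value
}

// Every resource object implements observe().
interface Observable {
    ObservationReport observe();
}

// A report type specific to the Memory resource, as in the paper's example.
class MemoryReport implements ObservationReport {
    private final Map<String, Object> values = new HashMap<>();

    MemoryReport(long totalBytes, long freeBytes) {
        values.put("totalBytes", totalBytes);
        values.put("freeBytes", freeBytes);
    }

    @Override
    public Map<String, Object> values() { return values; }
}

// Resource object modelling the system memory.
class Memory implements Observable {
    @Override
    public ObservationReport observe() {
        Runtime rt = Runtime.getRuntime();
        // Report the current state of the (JVM) memory.
        return new MemoryReport(rt.totalMemory(), rt.freeMemory());
    }
}
```

Calling `observe()` on any resource object thus yields a type-specific report, which a component-level report could in turn aggregate.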
Resource identification and tracking
In the Concerto platform, all resources are modelled as Java objects that can be created and destroyed at any time. Nevertheless, any resource object must be identified unambiguously. Therefore, the platform implements mechanisms for identifying and tracking resources at runtime. These mechanisms rely on a naming system that gives any resource a unique name. Whenever a resource object is created on a cluster node, this object is given a unique identifier (object of type `ResourceId`, see Figure 1). Moreover, the resource object created is immediately registered within a resource manager (object of type `ResourceManager`), whose function is to identify and to keep track of any existing resource object.
An instance of class `ResourceManager` is created on each cluster node whenever a new component is deployed. This resource manager permits the identification, location, and collection of observation reports for:
- the conceptual resources used by the component it is associated with;
- the system resources of the cluster (which are considered as global resources shared by all components);
- the other components that have been deployed on the cluster (remember that each component is perceived as a resource, and can therefore produce an observation report on demand).
Search patterns make it possible to search for resources, and to collect observation reports from these resources selectively. They model search strategies as Java objects. For example, one can create a search pattern object that describes a local search strategy (i.e. search limited to a specific node of the cluster), and another object that describes a global search strategy (i.e. search performed on all the nodes of the cluster). The interface `SearchPattern` serves as the root of a hierarchy of classes that each describe a specific search strategy (Figure 3).
The following segment of code shows how a resource manager can be called when looking for specific resources. In this example several kinds of search patterns are used in order to specify that the search should apply (1) to local resources only; (2) to the resources located on a remote node whose identity is specified as an argument; (3) to the whole cluster.
```java
ResourceManager manager = ResourceManager.getManager();
Set localIds  = manager.getResourceIds(new LocalSearch());               // (1)
Set remoteIds = manager.getResourceIds(new LocalSearch(remoteNodeIds));  // (2)
Set allIds    = manager.getResourceIds(new GlobalSearch());              // (3)
```
Once the identity of a resource object has been obtained, one can require that the resource manager collects and returns an observation report for it.
Figure 2. Observation reports modelling.
Resource classification and selection
The resources registered within a resource manager can be of various types (e.g. CPU, Memory, Socket, Thread, File, etc.). The Concerto platform implements mechanisms for classifying and selecting resources based on the notion of “resource pattern”.
The interface ResourcePattern (see Figure 3) defines a function isMatchedBy(), which takes a resource object as a parameter, and returns a boolean whose value depends on whether this object satisfies the considered selection criterion or not. In a simple case resource selection can rely on the actual type of the resource object which is submitted to the test. Hence, in the class CPU_Pattern (which implements interface ResourcePattern), the method isMatchedBy() simply checks that the object passed as a parameter is a CPU object. But one can also implement more sophisticated selection mechanisms. For example the class SocketPattern is implemented so as to achieve the selection of socket objects based on criteria that take into account not only the type of the resource (this object must be of type Socket), but also the local and remote IP address and ports associated with this socket, as well as the number of bytes sent and received via this socket.
The following example shows the creation of three resource patterns. The first pattern permits the search for and selection of parallel components. The second pattern selects specifically those resource objects that model the CPUs in a cluster. The third pattern makes it possible to select only those resource objects that model socket resources and that additionally satisfy the following criteria: the IP address of the remote host must belong to the 195.83.160/24 network, and the remote port must be in the range 0 to 1023. On the other hand, the local IP address and port the socket is bound to can take any value.
```java
ResourcePattern componentPattern = new ComponentPattern();
ResourcePattern cpuPattern = new CPU_Pattern();
ResourcePattern socketPattern =
    new SocketPattern(InetAddress.AnyAddress, "195.83.160/24",
                      PortRange.AnyPort, new PortRange(0, 1023));
```
The resource manager can handle requests that take a ResourcePattern object as a parameter. One can thus request that the resource manager locate a specific set of resources, or collect and return observation reports from these resources. For example, assume that the resource manager receives a search request with the componentPattern defined in the previous example. It will then search for those resources that match this pattern, and return only the identities of component resources. On the other hand, if the resource manager is required to search for those resources that match the socket pattern, it will only consider the sockets created by the calling component, and will select among these sockets those whose characteristics (IP addresses, port numbers, etc.) match the SocketPattern.
Implementation details
The mechanisms we use in Concerto for modelling resources and allowing their observation have a larger scope than that of adaptive parallel components. These mechanisms have been gathered in an environment called RAJE (Resource-Aware Java Environment). RAJE permits the reification and the observation of system resources. Moreover, it defines the main schemes for identifying, localizing and observing resources. The Concerto platform is based on the RAJE environment which is extended in order to allow for specific notions such as parallel components or fragments.
The RAJE environment and the Concerto platform are presently implemented on Linux and rely on a variant of the Kaffe 1.0.6 JVM. Details on RAJE can be found in [7], which describes how resource observation is implemented and how consumption shares are imputed to Java threads. Article [9] also presents RAJE (and the JAMUS platform built upon RAJE); in particular, the services provided by RAJE are compared to those offered by other tools such as JRes [4], GVM [3], KaffeOS [2], Naccio [6], Ariel [8], etc.
4 Conclusion
This article presents the Concerto platform, which aims at allowing the deployment and the support of parallel components on clusters of workstations. Ongoing work focuses on proposing a basic parallel component model, as well as tools for the deployment of such components. Our objective is, in a first stage, to adopt as simple and versatile a model as possible. We rather put the emphasis on the development of mechanisms useful to adaptation. Indeed, the clusters targeted by the Concerto platform are mainly non-dedicated clusters composed for example of several workstations that are shared by several components, and even several applications and users. The runtime environment of parallel components is heterogeneous and may vary along the component’s execution. So we have designed tools that provide the components with means to perceive their execution environment and its variations. The components’ environment is likened to a set of resources. Each component can discover the existence of a particular resource and observe its state thanks to the services the platform offers.
The development of the Concerto platform is still in progress. The deployment tool, which includes a graphical frontend, should be extended. Besides, we intend to make interaction mechanisms available to the components so that they can instruct the platform to notify them of changes in the state of resources. This implies the definition of a formalism that components can use to describe events of interest, and the implementation of a notification scheme that allows for the distributed aspects of the components.
Acknowledgment
This work is supported by the French Ministry of Research in the framework of the ACI GRID program.
Custom Graphics
Boxing Text
Clipping
Rotating and scaling
Text along a path
Text as graphic
Online \LaTeX\ Tutorial
Part II – Graphics
PSTricks
E Krishnan, CV Radhakrishnan and AJ Alex constitute the graphics tutorial team. Comments and suggestions may be mailed to tutorialteam@tug.org.in
©2004, The Indian \TeX\ Users Group
This document is generated by \pdfTeX\ with the hyperref, pstricks, pdftricks and pdfscreen packages on an Intel PC running GNU/Linux and is released under LPPL.
The Indian \TeX\ Users Group
Floor III, SJP Buildings, Cotton Hills
Trivandrum 695014, India
http://www.tug.org.in
9. Tricks with Text
In our discussions so far, we’ve been focusing on graphic objects and we’ve treated text only incidentally in Chapter 6, as labels in pictures. We now see how text can be manipulated in various ways using PSTricks.
9.1. Boxing Text
\LaTeX{} has various macros for putting text in boxes (or putting boxes around text) and PSTricks defines its own boxing macros. The advantage of using these is the ease of adorning these boxes using colors, shadows and so on. The simplest of such commands is the \texttt{\textbackslash psframebox} as in the example below:
\begin{verbatim}
\psframebox[fillstyle=solid,%
  fillcolor=Cyan,%
  linecolor=RoyalBlue]%
  {\color{Red}\LARGE\bfseries Text In A Box}
\end{verbatim}
The distance between the sides of the box and the enclosed text is controlled by the \texttt{framesep} parameter. By default, its value is 3 point, but as with other parameters, can be set to any desired value, as shown in the next example:
\begin{verbatim}
\psframebox[framesep=10pt,%
  fillstyle=solid,%
  fillcolor=Cyan,%
  linecolor=RoyalBlue]%
  {\color{Red}\LARGE\bfseries Text In A Box}
\end{verbatim}
A variant of the \texttt{\textbackslash psframebox} is the \texttt{\textbackslash psdblframebox} which, as the name indicates, doubles each line of the frame.
Recall that the \texttt{doublesep} parameter determines the width of the space between the double lines and \texttt{doublecolor} the color of this space, as mentioned in Chapter 3. The default value of \texttt{doublesep} for the \texttt{\textbackslash psdblframebox} is \texttt{pslinewidth} and the default value of \texttt{doublecolor} is white.
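No source example for \texttt{\textbackslash psdblframebox} survives at this point in the text; the following is a minimal sketch modeled on the \texttt{\textbackslash psframebox} examples above (the colour and \texttt{doublesep} choices are our own):

\begin{verbatim}
\psdblframebox[framesep=10pt,%
  fillstyle=solid,%
  fillcolor=Cyan,%
  linecolor=RoyalBlue,%
  doublesep=2pt,%
  doublecolor=Apricot]%
  {\color{Red}\LARGE\bfseries Text In A Double Box}
\end{verbatim}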
Another variant is the \texttt{\textbackslash psshadowbox} which, obviously enough, draws a (single) frame with a shadow, as shown below:
\begin{verbatim}
\psshadowbox[framesep=10pt,%
  fillstyle=solid,%
  fillcolor=Cyan,%
  linecolor=RoyalBlue,%
  shadowcolor=Blue,%
  shadowsize=5pt]%
  {\color{Red}\LARGE\bfseries Text In A Shadow Box}
\end{verbatim}
Note that the parameters \texttt{shadowsize} and \texttt{shadowcolor} are discussed in Chapter 3.
If you are tired of plain old rectangular boxes, you can try \texttt{\textbackslash psdiabox}, which draws a diamond-shaped box:
\begin{verbatim}
\psdiabox[framesep=10pt,%
  fillstyle=solid,%
  fillcolor=Cyan,%
  linecolor=RoyalBlue,%
  doublecolor=Apricot,%
  doublesep=3pt]%
  {\color{Red}\LARGE\bfseries Text In A Box}
\end{verbatim}
or \texttt{\textbackslash pstribox}, which draws a triangular box:
\begin{verbatim}
\pstribox[fillstyle=gradient,%
  gradbegin=CornflowerBlue,%
  gradend=Apricot,%
  gradmidpoint=1,%
  linecolor=Cyan]{%
  \color{Red}%
  \begin{tabular}{c}
    Text\\
    In A Triangle Box
  \end{tabular}}
\end{verbatim}
Recall that the \texttt{gradient} style of filling requires the \texttt{pst-grad} package, as explained in Chapter 2.
For those who are more inclined towards curves than angles, there's a \texttt{\textbackslash pscirclebox}:
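The original example is not reproduced here; a minimal sketch in the style of the earlier boxes (the colours are our own choice):

\begin{verbatim}
\pscirclebox[fillstyle=solid,%
  fillcolor=Cyan,%
  linecolor=RoyalBlue]%
  {\color{Red}\bfseries In A Circle}
\end{verbatim}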
or even a \psovalbox:
\psovalbox[fillstyle=solid,%
  fillcolor=Orange,%
  linecolor=BrickRed]{%
  \color{SpringGreen}%
  \large\bfseries
  \renewcommand{\arraystretch}{1.2}%
  \begin{tabular}{c}
    Text\\
    In An\\
    Oval\\
    Box
  \end{tabular}}
Another parameter for the various boxes is \texttt{boxsep}, whose default value is \texttt{true}. In this case, the box that is produced (in the \TeX{}nical sense) is the size of the “frame” around it. If it is set to \texttt{false}, then the box produced is the size of what's inside, so that the frame is transparent to \TeX{}. This is apparent only when the boxes are used within some surrounding text, as illustrated below:
Thus we find that $x+y=3$ and using this together with $x^2+y^2=5$ found earlier, we see that $x=2$ and $y=1$.
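The source of that inline example is not shown; its effect can be sketched along the following lines (the parameter values and colours here are our own assumptions):

\begin{verbatim}
... and using this together with
\psframebox[boxsep=false,linecolor=RoyalBlue]{$x+y=3$}
found earlier, we see that ...
\end{verbatim}

With \texttt{boxsep=false} the frame does not enlarge the line spacing, so the boxed formula sits in the running text as if it were unboxed.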
Each of the boxing commands above has a \emph{starred} version, which draws a solid shape around the enclosed text instead of just a frame. This is similar to the starred versions of graphic objects we've seen earlier, but the color of the boxes is determined by \texttt{fillcolor} instead of \texttt{linecolor} as for other graphic objects.
Text In A \psframebox
Text In A \psframebox*
(Here, the command pair \texttt{\textbackslash SaveVerb} and \texttt{\textbackslash UseVerb} come from the package \verb+fancyvrb+ and are used to get the control sequence strings \verb+psframebox+ and \verb+psframebox*+ as arguments of the commands.)
We've mentioned in Chapter 6 that the \verb+rput*+ command puts the text first in a \verb+psframebox*+. But there are occasions when we have to use both \verb+rput+ and \verb+psframebox+ together instead of a single \verb+rput*+, as in the example below:
\psset{linecolor=Blue}
\begin{pspicture}(0,0)(4,5)
\pspolygon[fillstyle=solid, %
fillcolor=Cyan]
(0,0)(4,0)(2,4)
\psline(2,0)(2,4)
\psset{linecolor=Mahogany, %
linestyle=dotted, %
dotsep=1pt, %
arrows=<>}
\psline(2.3,0)(2.3,4)
\rput(2.3,2){ \rput*(1,-0.3){\color{Red} $r$}}
\end{pspicture}
Note that here, we cannot use \rput* directly for the label $h$, since the default color of \psframebox* is white, (which is OK for the label $r$) but we want the color of the box for $h$ to be cyan, to blend it with its background.
While on the topic of “putting”, we should also mention the command \cput (and of course \cput*) which combines the functions of \pscirclebox and \rput (or \rput*), as shown in the next example:
\Large
\psset{fillstyle=solid,unit=2cm}
\begin{pspicture}(0,-1)(1,1)
\SpecialCoor
\pspolygon[linecolor=Cyan](0,1)(1;210)(1;330)
\cput*[fillcolor=Red](0,1){\color{White} A}
\cput*[fillcolor=Green](1;210){\color{White} B}
\cput*[fillcolor=Blue](1;330){\color{White} C}
\end{pspicture}
9.2. Clipping
We can clip text, that is, cut off everything outside a specified boundary, using the \verb+\psclip+\ldots\verb+\endpsclip+ commands. A simple example is given below:
\psclip{%
\psdiamond[linecolor=Red, fillstyle=solid, fillcolor=Yellow]
(2,0.25)(2.5,0.5)
}\color{Blue}\Huge Cut Diamond
\endpsclip
By careful use of coordinates, we can create an overlay effect with suitable clipping, as in the next example:
\begin{pspicture}(0,0)(6,2)
\rput[b](0.5,1){\color{Blue}\Huge Cut Diamond}
\psclip{\psdiamond*[linecolor=Yellow]
(3,1.25)(2.25,0.5)}
\rput[b](0.5,1){\color{OliveGreen}\Huge Cut Diamond}
\endpsclip
\end{pspicture}
Or special effects like this:
Here, the custom style bluestyle and the custom curve \tearcurve are defined as follows:
\newpsstyle{bluestyle}{
linecolor=Blue,%
fillstyle=solid,%
fillcolor=Cyan}
\newcommand{\tearcurve}{
\pscurve(1,0)(1.1,0.3)(1.2,1)(1.3,1)
(1.4,1.2)(1.6,1.6)}
Incidentally, note that the \verb+\psclip+\ldots\verb+\endpsclip+ commands can be used to clip not only text, but graphic objects also, as shown in the example below:
```latex
\psset{unit=0.66,linecolor=Red}
\begin{pspicture}(0,-7)(6.5,2)
\coloraxes(0,0)(0,-2)(7,2)
\plotsqsin
\plotabssin
\psclip{\ycirc(3.1416,0){0.5}}
\coloraxes(0,0)(0,-2)(7,2)
\plotsqsin
\plotabssin
\endpsclip
\dotline(2.6416,0)(1.1416,-4)
\dotline(3.6416,0)(5.1416,-4)
\psclip{\ycirc(3.1416,-4){2}}
\psset{origin={3.1416,4},unit=1.33cm}
\coloraxes(0,0)(0,-2)(7,2)
\plotsqsin
\plotabssin
\endpsclip
\end{pspicture}
```
where the various customized commands used are as follows:
```latex
\newcommand{\plotsqsin}{%
  \psplot[plotpoints=500,plotstyle=curve,linecolor=Blue]%
    {0}{6.2832}{57.2958 x mul sin 2 exp}}
\newcommand{\plotabssin}{%
  \psplot[plotpoints=500,plotstyle=curve,linecolor=Green]%
    {0}{6.2832}{57.2958 x mul sin abs}}
\definecolor{PaleYellow}{cmyk}{0,0,0.2,0}
\newpsobject{ycirc}{pscircle}{fillstyle=solid,fillcolor=PaleYellow}
\newpsobject{dotline}{psline}{linestyle=dotted,dotsep=1pt}
\newpsobject{coloraxes}{psaxes}{linestyle=solid,linecolor=Apricot,labels=none,ticks=none}
```
9.3. Rotating and scaling
There are also ready-to-use commands for rotating text left, right or down, leaving the needed amount of space.
\begin{verbatim}
go straight
\rotateright{\textcolor{Red}{go down}}
\rotatedown{\textcolor{Green}{turn upside down}}
\rotateleft{\textcolor{Blue}{go up}}
go straight again
\end{verbatim}
A better effect can be obtained by computing the heights of various upright boxes and raising and lowering them by the appropriate lengths:
\begin{verbatim}
\newlength{\dlen}
\settoheight{\dlen}{\rotateright{\textcolor{Red}{go down}}}
\newlength{\ulen}
\settoheight{\ulen}{\rotateleft{\textcolor{Blue}{go up}}}
go straight
\raisebox{-\dlen}{\rotateright{\textcolor{Red}{go down}}}
\rotatedown{\textcolor{Green}{turn upside down}}
\rotateleft{\textcolor{Blue}{go up}}
go straight again
\end{verbatim}
For such manipulation of long pieces of text, these commands also have the “environmental” forms \verb+\begin{Rotateleft}+\ldots\verb+\end{Rotateleft}+ and others.
Text can also be \emph{scaled}, using the command \texttt{\textbackslash scalebox}. The general form of this command is
\verb+\scalebox{number1 number2}{text}+
where \textit{number1} is the horizontal scaling and \textit{number2} is the vertical scaling. If only one number is specified, it is used for scaling in both directions. This is illustrated in the examples below:
\begin{verbatim}
\scalebox{0.8 4}{\color{Red}tall and lean}
\scalebox{4 0.8}{\color{Green}short and fat}
\scalebox{2}{\color{Blue}large but proportional}
\end{verbatim}
Using negative numbers for scaling, we can flip text around either axis:
\begin{verbatim}
\scalebox{-2}{\color{Blue}large but proportional}
\end{verbatim}
We also have the \texttt{\textbackslash scaleboxto} command with the general form
\verb+\scaleboxto(number1,number2){text}+
With this command text is scaled to have width number1 units and height plus depth equal to number2 units. If one of the numbers is set to 0, then the box is scaled to have width and height (plus depth) equal to the other number. (Of course, we cannot set both numbers equal to 0).
\scaleboxto(1.5,1){\color{Red}
tall and lean}
\bigskip
\scaleboxto(7,0.2){\color{Green}
short and fat}
\bigskip
\scaleboxto(3,0){\color{Blue}
small but proportional}
9.4. Text along a path
One of the interesting features of the PostScript language is that it treats text as a graphical object. This allows various manipulations of text. The package \texttt{pst-text} provides the command \texttt{\textbackslash pstextpath} to set text along a specified path. Look at this example:
\begin{pspicture}(0,0)(3,2)
\pstextpath{
\psframe[framearc=0.3,linestyle=none,linewidth=0.3mm,framecolor=Blue](0,0)(3,1.4)}{
\color{Red}\Large Now we have text going around a box
}\end{pspicture}
Note that the general form of the command \texttt{\pstextpath} is
\begin{verbatim}
\pstextpath{graphic}{text}
\end{verbatim}
where, \texttt{graphic} specifies the path along which the specified \texttt{text} is to be set.
By default, \texttt{\textbackslash pstextpath} also draws the graphic specified, but this can be suppressed by setting \texttt{linestyle=none}, as shown below:
\begin{pspicture}(-3,-1)(3,5)
\psaxes[labels=none](0,0)(-3,-1)(3,5)
\pstextpath{
\psplot[linestyle=none]{-2}{2}{4 x 2 exp sub}}{
\color{Red} This is the graph of the equation $y=4-x^2$ for $-2 \leq x \leq 2$
}\end{pspicture}
What if we need something like this?
\begin{pspicture}(-3,-1)(3,5)
\coloraxes[labels=none](0,0)(-3,-1)(3,5)
\psset{linecolor=Blue}
\psplot{-2}{2}{4 x 2 exp sub}
\psset{linestyle=none,unit=1.12cm}
\pstextpath{\psplot{-2}{2}{4 x 2 exp sub}}{This is the graph of the equation $y=4-x^2$ for $-2\le x\le 2$}
\end{pspicture}
The trick is to first draw the curve and then use \texttt{pstextpath} to set the text along a slightly scaled up version of the curve, without actually drawing the second curve:
\begin{verbatim}
\begin{pspicture}(-3,-1)(3,5)
\coloraxes[labels=none](0,0)(-3,-1)(3,5)
\psset{linecolor=Blue}
\psplot{-2}{2}{4 x 2 exp sub}
\psset{linestyle=none,unit=1.12cm}
\pstextpath{\psplot{-2}{2}{4 x 2 exp sub}}{This is the graph of
  the equation $y=4-x^2$ for $-2\le x\le 2$}
\end{pspicture}
\end{verbatim}
But this is not exactly what we want. The trouble is that the command \texttt{\textbackslash pstextpath}, by default, places the beginning of the text at the beginning of the path; however, it has an optional parameter which can be used to shift the position of the text:
\begin{pspicture}(-3,-1)(3,5)
\psaxes[labels=none](0,0)(-3,-1)(3,5)
\psset{linecolor=Blue}
\psplot{-2}{2}{4 x 2 exp sub}
\psset{linestyle=none,xunit=1.13cm,yunit=1.05cm}
\pstextpath[c]{%
  \psplot{-2}{2}{4 x 2 exp sub}}{%
  \color{Red}This is the graph of the equation $y=4-x^2$ for $-2\leq x\leq 2$}
\end{pspicture}
Note how we used the optional value \texttt{c} to center the text relative to the curve. (Note also the slight difference in \texttt{xunit} and \texttt{yunit} to get the text at the top just right). Other optional values are \texttt{l} (the default) for left justification and \texttt{r} for right justification. These are illustrated in the next example:
\begin{pspicture}(-3,-1)(3,5)
\coloraxes[labels=none](0,0)(-3,-1)(3,5)
\psset{linecolor=Blue}
\psplot{-2.2}{2.2}{4 x 2 exp sub}
\psset{unit=1.15cm,linestyle=none}
\pstextpath[l]{\psplot{-2.1}{2.1}{4 x 2 exp sub}}{\color{Red}\textit{increasing}}
\pstextpath[r]{\psplot{-2.1}{2.1}{4 x 2 exp sub}}{\color{Red}\textit{decreasing}}
\psset{unit=1.07cm}
\pstextpath[c]{\psplot{-2.1}{2.1}{4 x 2 exp sub}}{\color{Red}\textit{turning}}
\end{pspicture}
9.5. Text as graphic
The package \texttt{pst-char} provides the command \texttt{\textbackslash pscharpath}, which can be used to embellish text with colors and the like just as if it were a graphic object. We give a couple of examples to illustrate this:
\begin{pspicture}(0,-1)(8,2)
\DeclareFixedFont{\bigrm}{T1}{ptm}{m}{n}{1.5cm}
\pscharpath[fillstyle=solid,%
fillcolor=SkyBlue,%
linecolor=Red]
{\bigrm PSTricks}
\end{pspicture}
Here, the command \texttt{\DeclareFixedFont} is the \LaTeX{} way of specifying the font to be used.
\begin{pspicture}(0,-1)(8,2)
\DeclareFixedFont{\bigsf}{T1}{phv}{b}{n}{1.5cm}
\pscharpath[linecolor=Yellow,%
fillstyle=gradient,%
gradbegin=Yellow,%
gradend=Red,%
gradmidpoint=1,%
gradangle=5]%
{\bigsf PSTricks}
\end{pspicture}
This package also contains the command \texttt{\textbackslash pscharclip}\ldots\texttt{\textbackslash endpscharclip} which, like the \texttt{\textbackslash psclip}\ldots\texttt{\textbackslash endpsclip} pair, clips any object within them, but this time to the shape of the specified text:
(Here, the text to be clipped is “PostScript” written 500 times, in small font, specified by \smallrm, which is generated by the code starting with \newcounter, put in a box 8 centimeters wide and turned through ninety degrees.)
As in the case of \texttt{\textbackslash psclip}, this can also be used to produce an overlay effect.
Here, the command \firstpara is defined by
\newcommand{\firstpara}{%
  \scriptsize
  \LaTeX\ has only limited drawing capabilities, while PostScript is a page description language which has a rich set of drawing commands; and there are programs (such as \textsf{dvips}) which translate the \texttt{dvi} output to PostScript. So, the natural question is whether one can include PostScript code in a \TeX\ source file itself for programs such as \textsf{dvips} to process after the \TeX\ compilation? This is the idea behind the \textsf{PSTricks} package of Timothy van Zandt. The beauty of it is one need not know PostScript to use it---the necessary PostScript code can be generated by \TeX\ macros defined in the package}
just typesets the opening paragraph of our tutorial in \scriptsize\ font in an 8 centimeter wide box.
|
Archetypical Approaches of Fast Software Development and Slow Embedded Projects
Ulrik Eklund
Malmö University
School of Technology, Dept. Computer Science
Malmö, Sweden
Email: ulrik.eklund@mah.se
Jan Bosch
Chalmers University of Technology
Dept. Computer Science and Engineering
Göteborg, Sweden
Email: jan.bosch@chalmers.se
Abstract—This paper describes the problem context of software development for mass-produced embedded systems, with distinguishing factors such as the co-design of software and hardware, strong focus on manufacturing aspects, supplier involvement and safety-critical functionality. In this context there is a need for a holistic model to explain the failures and successes of industrial projects, where just investigating a single dimension, e.g. chosen ways-of-working or architecture is not sufficient.
The main contribution is a holistic model consisting of five archetypical approaches to embedded software development, based on a mapping study over industrial cases in literature. The approaches range from “traditional” stage-gate projects focusing on product qualities and large integration efforts, to fast development in short loops by autonomous teams based on a composable software platform. The model aligns the processes with the architecture of the embedded software, and the implications on the business and the organisation. The model allows an R&D organisation to identify where it is positioned and to evolve its software development approach. The model is elucidated by two empirical cases from a Swedish company.
Keywords—embedded software; software engineering; software architecture; business; companies
I. INTRODUCTION
Software is prevalent in many products manufactured today; cars, washing machines, mobile phones, airplanes and satellites [1]. Typically these products are developed in large and complex industrial projects where the embedded software may be critical for the success of the product but the manufacturing and delivery of the product is a heavier investment than the R&D budget. This in turn tends to drive the entire R&D process, and software just follows the process logic of the manufacturing setup.
Ebert & Jones [1] mention factors contributing to the complexity in their survey of the state of embedded software development: “combined software/hardware systems equipped with distributed software, computers, sensors, and actuators”, which points to the integration aspects of these systems. They list “high demands on availability, safety, information security, and interoperability” as typical quality attributes. Liggesmeyer & Trapp [2] reaffirm this view, stating that embedded software is one of many elements in a product consisting of mechanics, electrics and electronics, and software. They also mention quality attributes such as software safety, reliability, and timeliness, all of which must be taken into account in the development process. Manhart & Schneider [3] describe agile development of software in buses at Daimler-Chrysler. They mention for example that “equipment, functions, or parameter sets are implemented by integrating different proportions of third party- and OEM manufactured components”, indicating supplier involvement.
Examples like this, together with our own research [4], allows us to describe the domain of large industrial development of mass-produced embedded systems (MPES) by five characteristics:
- Deep integration between hardware and software for significant parts of the functionality
- Strong focus on manufacturing aspects of the product in the development
- Strong supplier involvement
- Some parts realise safety-critical functionality
- Long production life-time
In our previous research on MPES development there were significant characteristics of the studied cases that could not be explained only by looking at the used development process or chosen architecture [5]–[7]. For example, in [5] there was a desire to simplify the working method and develop functionality top-down from a customer’s point of view. But the architecture used made for cumbersome integration of subsystems, and this, together with the development process used, “led to monster documentation”. This in turn affected the ability to outsource module development: suppliers had difficulty in handling and meeting the specifications, which affected cost negatively.
In the third case in [7] the goal was to share modules with other car manufacturers to gain advantages of scale, and to outsource a significant part of module development. But the distributed architecture meant the interfaces between the modules were complex. In combination with the stage-gate process used initially, this meant that the development effort was underestimated. Later in the project shorter development iterations were introduced, which probably “saved” the project launch date.
The complexities of the studied cases establish the need for a holistic model to explain the failures and successes of these and other industrial projects. We propose a model, built by
analysing existing industrial approaches of embedded software through the four dimensions of Business, Architecture, Process and Organisation (BAPO). These four dimensions originate from the framework by van der Linden et al. [8], aimed at evaluating product families. The use of BAPO allows the construction of the model and identifies different approaches to development of MPES products.
The main contribution of this paper is a holistic model for aligning software development processes with the architecture of the embedded software, and the implications these have on the business and the organisation. The model allows an R&D organisation to identify where it is positioned and how to evolve its software development approach in terms of architecture, process and organisation. The model is illustrated through two industrial cases. In addition to this, the paper provides a rich insight into the context and challenges for development of industrial systems and their embedded software both by empirical evidence from first-hand industrial cases and by published literature through a mapping study.
II. RESEARCH METHODOLOGY AND PROBLEM
The paper builds an explanatory holistic model with the properties described in Section I, through a 3-stage research process. The first stage of the research process identifies the research questions to be investigated:
1) What approaches to embedded software development are used in industry?
a) How do these approaches relate to used architectural styles?
b) What business and organisational implications does it have for an organisation to develop software with a specific approach?
2) What would a model of the processes and architectural styles and their relationship to business and organisation look like?
The second stage, described in detail in Section III, is a mapping study identifying published industrial cases relevant to the research questions above.
The last stage of the process builds a qualitative model, described in detail in Section IV. The model articulates viable approaches an R&D organisation takes to embedded software development. The model is also elucidated through two industrial cases in Sections IV-A1 and IV-B1. These first-hand cases were captured at Volvo Car Corporation (VCC) in a qualitative manner. The case studies took advantage of the fact that the first author was native to the case company at the time (as defined by [9]), and acted as a participant/observer.
III. MAPPING STUDY OF DEVELOPMENT APPROACHES
The mapping study surveyed existing literature with the aim to identify industrial cases relevant to the research question. The study was performed similarly to a systematic literature review [10], with a focus on the steps emphasized below:
1) Planning of the mapping study
• Establish the need of a review
• Define the search questions
• Define the review protocol
• Evaluate the protocol
2) Conducting the mapping study
• Identify relevant cases
• Choose primary papers based on relevance in case description
• Assess qualities of chosen papers
• Extract relevant categories of development approaches and architectures from the chosen papers
• Synthesise the results
3) Reporting the mapping study
A. Define Search Questions
The aim was to identify and model the relationship between business, architecture, process and organisation when developing embedded software, and we iteratively refined the search queries until we ended up with the final phrase used in the Scopus article database.
Search phrase 1: Search phrase used in Scopus.
We limited the search to the last ten years (2003-2012), and excluded everything not within the Subject Areas of Computer Science, Engineering or Business, Management and Accounting. This initial search resulted in 117 papers.
B. Review Protocol
From the list of 117 papers we applied the inclusion and exclusion criteria in Table I. These criteria were applied in two rounds, the first round by just reading the abstracts, which left 53 papers. The second round evaluated the full papers, and this resulted in 23 papers, of which the authors were involved in 4 papers [6], [11]–[13].
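The two-round screening described above can be sketched as a simple filter pipeline. This is an illustrative sketch only, not the authors' actual tooling; the `Paper` record and the predicate functions are hypothetical names standing in for the criteria of Table I:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Paper:
    title: str
    abstract: str
    full_text: str


def screen(papers: List[Paper],
           passes_abstract: Callable[[Paper], bool],
           passes_full_text: Callable[[Paper], bool]) -> List[Paper]:
    """Two-round screening: round one reads only the abstracts,
    round two evaluates the full text of the papers that survived."""
    round_one = [p for p in papers if passes_abstract(p)]
    return [p for p in round_one if passes_full_text(p)]
```

In the study, round one reduced 117 candidates to 53 on abstracts alone, and the full-paper round reduced those to the final 23.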
C. Quality Assessment
We were looking for examples where an organisation develops a system with embedded software in an industrial context, and for the range of development approaches and different styles of architecture used therein. We therefore selected case studies as a search criterion, since we wanted empirical studies “that investigates a contemporary phenomena within its real-life context...” [14]. A number of papers claimed to do a case study without providing any information about the context of development, e.g. the purpose of the system. Nor did they say anything about the organisation doing the development: was it done by an industrial team, by PhD students, or by a mix of academic and industrial participants? We classified these papers as academic proof-of-concept prototypes, and they were therefore excluded according to EC1. These papers were a significant part of those excluded, resulting in the final set of 23 papers.
Our insider knowledge allowed us to identify when multiple papers described the same case, e.g. developing a new car year model on an existing platform at Volvo Car Corporation [6], [15]–[17], or the continuous evolution of a heavy vehicle product line [6], [18].
D. Study results and limitations
The final selection of papers resulted in 28 cases presented in 23 papers. Each case was categorised by how it addressed the dimensions of business, architecture, process and organisation (BAPO).
A limitation is the difficulty of assessing whether other, not found, industrial cases would expand or alter the resulting model. Another limitation is the scope of the found cases: the search query finds cases that call themselves “embedded”, but this definition is probably not uniform, which affects the applicability of the resulting model across various domains. A third limitation is that few papers had information on all four categories when describing the cases.
Despite the limitations it is still possible to draw some general qualitative conclusions: The common way of working is to follow a sequential order of activities, which is characterised as a V-model [19] or as a stage-gate process [6], [20]. Some papers describe cases of agile development, either for a system as a whole [11] or by individual teams within the organisation [20]. When described, most case architectures focus on enabling product quality attributes, usually domain-specific such as safety, cost and variability in the automotive domain [13], security in defence [21], and dependability in space [22]. Some organisations utilise a product line architecture to enable tailoring of the system to a particular customer [23], [24]. An important architectural issue is the integration of modules or sub-systems to a working whole [6], [12], [18], [25].
We can conclude that there is no single approach of how software in embedded systems is developed, but there is rather a range of various approaches.
IV. APPROACHES TO DEVELOP EMBEDDED SYSTEMS
The last stage of the research process resulted in a model able to answer the research questions in Section II. By evaluating how the cases are aligned in the four dimensions of BAPO we identified five archetypical R&D approaches that are feasible within industrial contexts, archetypical in the sense that they are the original pattern or model of which all development approaches of the same type are representations or copies. These five approaches constitute the model (Figure 1). The model is described with a narrative of each approach, starting from the right (E), since this is where most cases are. The 28 cases also imply an evolution of how an organisation can move along the range of the five R&D approaches, from right to left in our model (from E to A). We saw no case in the mapping study describing an organisation evolving in the opposite direction.
Primarily the model is explanatory in the sense that it conveys how industrial projects align their concerns of business, architecture, process and organisation, and is thus an empirical instance of the BAPO framework in [26]. Secondarily, the model can be prescriptive in two ways: it suggests to an organisation how others align the BAPO concerns, and it suggests a path of evolution to alternative R&D approaches.
A. Approach E: Rorqual Organisations
Organisations using this approach run development projects demanding a lot of investment in technology during R&D, both what goes into the product and what technology is required to manufacture it. This can be considered the standard practice by which MPES software is developed, with six cases clearly falling into this category (described in papers [6], [13], [15]–[17], [19], [21], [27], [28]). An additional twelve cases (in papers [22], [23], [25], [29]–[31]) fall either into this category or category D.
Business: One major business driver is to minimise the risk associated with the technology investments. Software may not
<table>
<thead>
<tr>
<th>Inclusion criteria</th>
<th>Exclusion criteria</th>
</tr>
</thead>
<tbody>
<tr>
<td>IC1. Papers, technical reports, theses, industry white papers and presentations describing industrial software development approaches and architectures for embedded systems.</td>
<td>EC1. Studies that did not report on projects in an industrial context, e.g. student projects, open source communities, prototypes developed in academia, etc.</td>
</tr>
<tr>
<td>IC2. If a search result contained several cases, each case was counted separately.</td>
<td>EC2. Studies only focusing on hardware development.</td>
</tr>
<tr>
<td>IC3. If a case was published several times, only the most recent publication was included.</td>
<td>EC3. Studies only evaluating specific tools, notations or other modelling techniques.</td>
</tr>
<tr>
<td></td>
<td>EC4. Studies only evaluating testing or other verification practices, such as formal methods.</td>
</tr>
<tr>
<td></td>
<td>EC5. Survey papers over other papers.</td>
</tr>
<tr>
<td></td>
<td>EC6. Textbooks and proceeding summaries.</td>
</tr>
<tr>
<td></td>
<td>EC7. Material not accessible either freely through the world wide web or through available library resources.</td>
</tr>
<tr>
<td></td>
<td>EC8. Material not in English, Swedish or Dutch.</td>
</tr>
</tbody>
</table>
Table I
USED INCLUSION & EXCLUSION CRITERIA.

even be considered as a major risk compared to e.g. hardware or manufacturing investments.
Architecture: The architectures and technology used for subsystems and their embedded software optimise the desired product requirements rather than being concerned with the difficulty for the organisation to develop and maintain them. As a result much of the architecting effort is spent on integration issues.
Process: The process model used is a stage-gate model [32]. The project planning follows a template based on calendar time, which has evolved from experiences from previous projects. Gate progression corresponds to design artefacts, e.g. user requirements, system requirements, system & software architecture, component requirements, software implementation (i.e. code), and verification & validation, i.e. a V-model even if the artefacts can be updated as the project progresses.
Organisation: The development organisation is functionally structured, clustering domain specialists together. The organisation resembles a hub with spokes, with a central systems engineering or architecture team responsible for the complete product properties; new or updated features are negotiated along the spokes before being incorporated in the product. Coordination of the involved teams is done through synchronisation of processes. It is common practice to outsource part of the electronics and software development to subcontractors.
1) Example: Development Project of an Infotainment System: VCC decided to deliver a new generation of infotainment systems to strengthen its competitive position. The development organisation had to deal with several prerequisites which had a major effect on the project: the systems were to be sold by more than one brand within Ford Motor Company, and some developed components (both hardware and software) were to be shared between brands while maintaining a brand-specific HMI. This was to leverage sourcing with other (unrelated) components from suppliers. There was also a desire to minimise the requirements elicitation effort in terms of spent man-hours. The project changed setup midway, since the initial development approach had problems delivering according to schedule. A post mortem analysis was done to identify the major factors influencing the architecture and the causal relationships between them and the used process. Management at the concerned department at VCC ordered the study to learn from this case. The case was previously published in [7].
Business: The main business driver to develop a completely new system was to “keep up” with the technological evolution and competitive pressure within the in-vehicle infotainment domain. The release of the new system coincided with the release of a new car model, the Volvo S60 to leverage marketing.
Architecture: The architecture changed, unintentionally, compared to the previous generation infotainment system. Some of the most complex customer features were distributed between two electronic control units (ECUs), which led to a complex interface shared between two software suppliers. The architecture was based on established technologies in the automotive domain, e.g. Media Oriented Systems Transport (MOST) [33] for communication buses, while the application interface on top of this did not follow standard MOST services. The main driver for the separation of HMI to one ECU while having the core functionality in another ECU was to allow sharing of the latter ECU between different car brands while allowing for a brand-specific HMI.
Process: The initial process approach was typical waterfall development, with the software specifications for each ECU reused from the previous generation to minimise the effort of writing new specifications. Initially the focus was on component development, i.e. on each ECU with its deployed software. The project progress was initially measured in implemented customer features; in spite of the component focus, this meant that delivery of infrastructure software necessary for integration and testing was initially de-emphasized. It was difficult to plan and manage the integration occasions necessary for validation and verification since there was no overall view of feature realisation.
Organisation: All software was to be outsourced, while in the previous system generation the software for the main ECU was developed in-house at VCC. This meant that new development practices for working with suppliers had to be established.
The project in this case started at the rightmost E position, but without performing some key elements common to successful projects in this position, e.g. a clear architecture. The project changed approach midway, with shorter sprints and a limited set of features verified after each sprint rather than planning against large integrations. The setup of the teams changed from being focused on component development to cross-functional teams focused on feature development. These changes were vital in keeping the launch date.
B. Approach D: Autonomous Teams
This category is where individual teams are allowed to define their own ways-of-working to facilitate speed, short iterations and delivery quality. This can be seen in domains where by necessity the full product development project cannot move as fast as the individual development teams. Three cases were found that clearly fall into this category [20], [34].
Business: The organisation as a whole is focused on physical delivery of a product (or thousands of products) to the customer, so the ability of individual teams to continuously deploy new software is not seen in the business. Project risk management is focused on minimising technological risks.
Architecture: The architecture of the product is tailored towards satisfying product qualities, requiring a lot of architecting effort spent on defining, verifying and maintaining interfaces, as well as integration of subsystems throughout the project.
Process: Since the overall R&D approach used in the organisation is governed by a stage-gate process, or V-model, effort is spent on aligning the practices of the individual teams to the overall process, as described by [20]. The short iterations on the module level are never visible in the large stage-gate process, since the deliveries are still planned towards
scheduled integration points. If the teams adopt e.g. any agile software practices these are only seen at a module or sub-system level, where the short iterations take place, as described in Figure 2.
Organisation: The initiative for the teams to be more self-directed in terms of development practices usually comes from the teams themselves, e.g. they want to adopt agile practices from XP or Scrum [20]. This means these teams are more or less isolated in their approach compared to other teams they are interacting with.
1) Example: Climate Control Software: The case concerns in-house development of climate control software at VCC, whereas it was outsourced for the previous generation of cars. The target car was not in production at the time of publication. The study was performed to provide feedback to software process improvements at VCC, and was previously published in [7].
It was the team themselves that wanted to use agile practices, so the initiative came from the “grass-roots” of the organisation. The upper management of R&D approved this for the following reasons:
- Shorter lead-times: Having the ability to introduce new features in a controlled manner with the right quality close to launch date of new year models.
- Better to have 80% of the desired software with really good quality than 100% with so-so quality.
- Increase in competence: Focus on development teams with continuous learning and improvement will increase the level of competence among the employees.
- Attractive workplace: There are reports of other organisations introducing agile methods having more satisfied employees. Software engineers learn these methods at universities today, it will be easier to recruit and retain competent people.
- A natural progression of the software process improvements already being implemented.
The software runs on a hardware platform with basic software delivered by the hardware supplier of the HVAC (Heating, Ventilating, and Air Conditioning) unit. Most of the algorithms are developed in Simulink, from which C code is generated. Both the control software and the standardised basic software are based on the AUTOSAR software architecture [35], and the interfaces to other systems, including the HMI, are stable.
The development team applies most of the Scrum practices [36], [37], since this was a natural evolution of existing team practices, especially in the light of going from outsourced to in-house development. The team adjusted their sprint schedule to suit the integration events of the complete electrical system.
An unforeseen benefit seen already after three sprints was better prediction of future gate fulfilment compared to previous ways-of-working.
The governance structure was simple with few different concerned stakeholders. The product owner resides at the interior department, in cooperation with one person from product planning at VCC. The development team of nine persons and the Scrum master are part of the electrical department with extensive domain expertise of climate control.
This case is a typical description of an isolated team trying to achieve shorter feedback loops within a large industrial project, i.e. using a D approach. The team defined their own way-of-working while still meeting the overall stage-gate process. Contributing factors to the success were a stable architecture, especially in the interfaces to other sub-systems, and good domain knowledge among the developers.
C. Approach C: Adaptive Processes
At this position an organisation adapts its overall product development process to utilise the possibilities software offers compared to hardware development and manufacturing. This does not seem to be a common position for an organisation to operate at, none of the found cases mention e.g. continuous deployment from autonomous teams, but one case describes how software and hardware development “work according to different methodologies (which) makes it harder to synchronize the work between them.” [38].
Business: The development is focused on technology and technological innovation while still preserving key product quality attributes common in the domain.
Architecture: Similar to the previously described positions; the architecture of the product is tailored towards satisfying product qualities.
Process: The process adaptation can take many forms, but a typical measure is to adjust the schedule to the size and scope of what is being developed, and as a result the software and hardware development processes are usually decoupled. This in turn enables software deployment independent of the hardware manufacturing, even to the point of deployment post-manufacturing.
Organisation: Even more so compared to the previous approach, the teams are self-directed, both in their ways-of-working and in defining the deployment date for their deliverables.
D. Approach B: Architecture for Composition
This approach is not very common either; only two cases fall into this category [11], [12].
Business: The business is focused on minimising risk with technology investments, i.e. the R&D organisation has adapted its architecture, processes and organisation to fast development in short iterations, but the product management and business model is still “traditional”, focusing on delivering and getting paid for each product.
Architecture: The main difference compared to the previous approaches is a shift in which qualities are emphasized when designing the architecture. In the previous approaches there is an emphasis on product qualities discernible at run-time, including cost of ownership, but in this approach qualities affecting the speed, effort and cost of the development are weighed against the former two categories. Typical quality attributes driving the architecture are composability, deployability, maintainability and configurability. These are not immediately discernible to the end user, but facilitate desired ways of working for the development organisation and its teams. There is still value in the product-centric quality attributes, and some may still be vital in heavily regulated domains, but organisations balance these against the development-centric attributes. Typically the architecture style is application- or component-based on top of a supporting software platform providing both infrastructure mechanisms and domain-specific services utilised in new innovations. The software platform provides hardware abstractions of sensors and actuators and also exercises control over which features can access various hardware. The platform evolves on a different, usually slower, schedule compared to the products utilising it, and may be supplied by a separate organisation than those developing the products.
Process: The process is similar to that of the previous approach, with the organisation as a whole developing software in short iterations and likely applying continuous integration of software.
Organisation: Similar to the previous approach, teams are self-directed.
E. Approach A: Marlin Organisations
There is just one case that describes an organisation working with this R&D approach [12]. This case is on the borderline of what can be called an embedded system, describing mobile smart phones. Nevertheless this category is important, as it may be a forerunner to more “open” embedded products utilising a development approach through an open software ecosystem.
Business: One of the major business drivers in this category is to minimise the risk associated with offering the wrong product or developing undesirable features. Software is seen as a major differentiator in the ability to attract and maintain customers, and therefore short leadtimes from idea to deployment is needed to stay competitive.
Architecture: The architectures used optimise the ability to develop products and features with the shortest possible leadtime. Typical properties of the architecture are to emphasise modularity and composability of software from different teams. Standardised platforms are the norm, either industry-wide encompassing 3rd parties or developed within the large organisation.
Process: The process model used is highly iterative aiming to take small development steps and validate these in each iteration. Planning follows a template based on continuous evolution of the product instead of calendar time. For software development a wide-spread agile process such as XP or Scrum can be used.
Organisation: The organisation consists of self-directed and self-organised teams containing cross-functional competences of product management, architecture, design & implementation and testing, capable of invention and launch of new features. The teams operate autonomously and coordination and integration of interfaces is ensured by the underlying platform and its architecture, allowing the teams to choose their own process and activities. Development of new features realised by software is not outsourced since this is seen as a highly competitive skill.
V. RELATED WORK
[26] presents a systematic literature review of alignment between the four dimensions of BAPO, and concludes that there is a research gap on BAPO alignment in software product development. The short paper does not prescribe any model for organisations to aid alignment or adjust misalignment. The model in Figure 1 describes how companies have aligned their BAPO concerns in the framework of Betz and Wohlin.
[39] presents the “stairway to heaven”, a pattern for how companies evolve their software development practices. A comparison of their five steps with the model in Figure 1 shows almost a 1-to-1 mapping, with “Traditional development” corresponding to E in the model, “Agile R&D organisation” to D, “Continuous integration” to C, and “Continuous deployment” and “R&D as an experiment system” to A. The two models complement each other since they have different foci: the “stairway to heaven” describes how the software development iterations shorten and involve customer collaboration, while the model presented here describes how the process dimension relates to the other dimensions of business, architecture and organisation.
There are frameworks to analyse software processes such as the Zachman framework [40] and others based on it, for example [41] and [42]. The former of these two is similar to what is proposed here, but has a narrower scope on just software processes. The latter does not define concrete approaches and therefore does not support identification of “where to be”. Both frameworks provide some dimensions used to analyse an organisation, but do not propose any possible positions or movements along those dimensions.
[43] use a framework approach to analyse and optimise IT systems and business processes for mobile business-to-employee applications for large workforces. They give some example cases and provide some typical usages of the framework, e.g. understanding business objectives, calculating return on investment, analysing requirements, and modelling and optimising processes.
The characteristics used to distinguish between “Traditional approaches” and “Internet/intranet development” in [44] span the same range of software development approaches as Figure 1. The two approaches could be interpreted as a simplification of the five positions proposed above. The mitigation strategies for the deadly risks would then be suggestions of how to evolve along the spectrum of positions. [45] explores the context of agile development, providing an analysis framework, but has no explicit prescription on how to introduce agile development outside what he calls “the agile sweet-spot”. In essence he explores in depth approach D above and how suitable it is for various types of development. [46] identifies a set of six limitations with agile development. These are of concern when developing software with approach D above. [47] describes how to scale agile practices, specifically program management and product backlog administration, in large organisations with several development teams contributing to the final product. This would correspond to evolving from approach D to C in the model above.
VI. CONCLUSIONS
Fast development is a competitive advantage also for MPES products. But when the manufacturing-driven development process is also applied to software development, it may cause outdated functionality at the time of product introduction. Several studies report on successful implementation of agile methods in the development of software systems with strong user interaction, e.g. web-based shops [48], [49]. However, in the domain of MPES, iterative development aiming at minimising the risk of delivering unused or unwanted user features remains the uncommon exception. In this paper, we provided an overview of the problem context of software development for MPES, with distinguishing factors such as the co-design of software and hardware, a strong focus on manufacturing aspects, supplier involvement and safety-critical functionality. The complexity of the projects in this domain suggests the need for a holistic model to choose a suitable approach to software development. We performed a mapping study to identify industrial software development approaches, and identified a model consisting of five distinct approaches. They ranged from “traditional” stage-gate projects focusing on product qualities and large integration efforts, to fast development in short iterations by autonomous teams based on a composable software platform.
The first key contribution of the paper is the model of five archetypical approaches to embedded software development, all of which have been used in industry. The model describes how the concerns of business, architecture, process and organisation were aligned in the projects found. The model was elucidated by two empirical cases from Volvo Car Corporation. The second contribution is the empirically grounded insight that successful software projects have aligned their business, architecture, software processes and organisation, while misalignment seems to cause difficulties even though each dimension in isolation seems reasonable. A first-hand case was presented where the dimensions were not aligned, which caused difficulties in the project. The third contribution is that the mapping study suggests a direction of how organisations evolve from one approach to another. The evolution is driven both by extrinsic factors, such as changed business and innovation goals, and by intrinsic factors such as the inherent complexity of managing and integrating the work of an increasing number of development teams.
ACKNOWLEDGMENT
This work has been financially supported by the Swedish Agency for Innovation Systems (VINNOVA) and Volvo Car Corporation within the partnership for Strategic Vehicle Research and Innovation (FFI).
A detailed VM profiler for the Cog VM
Sophie Kaleba, Clément Bera, Alexandre Bergel, Stéphane Ducasse
To cite this version:
Sophie Kaleba, Clément Bera, Alexandre Bergel, Stéphane Ducasse. A detailed VM profiler for the Cog VM. International Workshop on Smalltalk Technology IWST’17, Sep 2017, Maribor, Slovenia. 2017, IWST ’17 Proceedings of the 12th edition of the International Workshop on Smalltalk Technologies. <hal-01585754>
HAL Id: hal-01585754
https://hal.archives-ouvertes.fr/hal-01585754
Submitted on 11 Sep 2017
Abstract
Code profiling enables a user to know where in an application or function the execution time is spent. The Pharo ecosystem offers several code profilers. However, most of the publicly available profilers (MessageTally, Spy, GadgetProfiler) largely ignore the activity carried out by the virtual machine, thus producing inaccurate results and missing important information, such as the just-in-time compiler activity.
This paper describes the motivations and the latest improvements carried out in VMProfiler, a code execution profiler hooked into the virtual machine, that performs its analysis by monitoring the virtual machine execution. These improvements address some limitations related to assessing the activity of native functions (resulting from a Just-in-time compiler operation): as of now, VMProfiler provides more detailed profiling reports, showing for native code functions in which bytecode range the execution time is spent.
1. Introduction
Although computers tend to get faster and faster, improving software performance, especially in terms of execution time, remains a major goal when developing software. This statement applies of course to virtualised environments, and ensuring the performance of the virtual machine (VM) is critical when aiming for good overall performance. Thus, it is crucial to know where the time is spent in the VM during execution: indeed, it helps identify where to tune its settings to actually get better results.
To get a precise idea of the program behavior, critical information can be collected by profiling code. Such profiling tools are already available in Pharo, like MessageTally [BCDL13] and VMProfiler [Mir08b]: they provide statistical/graphical reports, showing the methods in which most of the execution time is spent, and how much of this time is spent in garbage collection [BCDL13]. However, VMProfiler, unlike MessageTally, provides statistical data about the time spent in the Cog VM. Cog [Mir08a] is a virtual machine designed for Smalltalk, and currently used for other similar languages, such as Pharo [BDN+09] and Squeak [BDN+07]. Cog features a bytecode interpreter and a just-in-time compiler (JIT). A careful monitoring of the interpreter and the JIT is crucial to adequately estimate the time taken to execute a portion of code.
The existing VMProfiler cannot track down precisely where the time is spent when executing the code generated by the JIT. It tracks down in which methods the time is spent, but it cannot track down in which part of those methods the time is spent. For example, assuming there is a frequently used method with multiple loops, VMProfiler mentions that most of the time is spent in this method (it is indeed frequently used), but it cannot mention in which loop the time is spent. This problem becomes more significant as new optimisations, based on the work of Hölzle and Ungar [HU94], are added to the JIT. The development branch of the JIT now features speculative inlining. In this context, the JIT generates a single machine code method for multiple unoptimised bytecode methods (the optimised method and the inlined methods). The VM profiler shows that most of the time is spent in optimised code, but it is currently not possible to know in which inlined method most of the time is spent. So while we get a faster and more performant VM, the profiler mostly ignores optimisations when computing and reporting its analysis.
To increase the level of detail of the profile, the existing VMProfiler has to be enhanced to show specifically where the time is spent in a method. To do so, we use the API usually used for debugging, that maps machine code program counter (pc) to bytecode program counter. This way, we can tell for each method appearing in the report in which bytecode range most of the time is spent.
In this paper, we will first discuss the existing Squeak VMProfiler, how it works and how the granularity of its reports
when optimised native code functions are profiled raises a problem. Then, we describe the proposed solution, a bytecode level profiling, to address this problem. Eventually, we mention other profiling tools in Smalltalk and other programming languages and compare them against VMProfiler.
2. Profiling jitted code
This section first defines the terminology used in the paper, then describes the existing VMProfiler available in the Cog VM clients such as Squeak or more recently Pharo, and the debugger mapping and lastly states the problem analysed in the rest of the paper.
2.1 Terminology
**Function.** In the paper, we use the term function to refer to executable code, in our case, methods or block closures.
**Bytecode function.** The term bytecode function is used to refer specifically to the compiled function in the form of bytecode, for example, instances of CompiledMethod in the case of methods. Bytecode functions are regular objects accessible from the Squeak/Pharo runtime and are present in the heap with all other objects. These functions are executable by the VM.
**Native function.** We use the term native function to refer to the representation of a function generated by Cog’s JIT, which includes the native code. Native functions are not regular objects and are allocated in a specific memory zone (called the machine code zone), which is executable. These functions are directly executable by the processor.
2.2 Existing VM profiler
VMProfiler has been available for almost a decade in Squeak and has been recently ported to Pharo. VMProfiler allows one to track down where the time is spent in the VM when executing a specific portion of code. VMProfiler computes where the time is spent in the compiled C code of the VM, in the VM plugins and in the native functions. All the results are available as a statistical report. A typical report includes two main sections: the time spent in generated code, i.e., in native functions, and the time spent in the compiled C code. The time spent in the compiled C code includes the time spent in the bytecode interpreter, in the garbage collector and in the JIT.
**Machine code zone.** As depicted in Figure 1, the machine code zone is composed of three areas (in order, the numbers match the numbers on the figure):
1. The first area includes all the trampolines and enilopmarts generated at VM start-up. Trampolines are native code routines. Some of them are used as discovery routines at VM start-up to know which instructions the processor supports. The other trampolines are called from native functions, either to switch to the C runtime of the VM or to execute specific uncommon code. Enilopmarts (“trampoline” written backwards) are native code routines called from the C runtime of the VM to switch to native functions generated by the JIT.
2. The second area is composed of native functions (CogMethods or CogFullBlocks, depending on whether a method or a block closure is compiled) and polymorphic inline caches (PICs): ClosedPICs, represented as a jump table of up to 6 cases, or OpenPICs, represented as a hash map search with an 8-entry hash map [HCU91].
3. The last area is a linked list of native functions or PICs referencing young objects. This list is used by the scavenger.

At runtime, part of the machine code zone is unused. When the machine code zone is full and a new function needs to be compiled, a code compaction happens, freeing a quarter of the machine code zone using a naive least-recently-used algorithm. Hence, while running a normal application, up to a quarter of the machine code zone is free.
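A minimal sketch of such a least-recently-used compaction, in Python for illustration (the function names, sizes and timestamps are invented, and Cog's actual implementation differs in detail):

```python
# Hypothetical sketch of a naive LRU code compaction (not Cog's code):
# evict the least recently used native functions until a quarter of the
# machine code zone has been freed.
def compact(functions, zone_size):
    """functions: list of dicts with 'name', 'size' and 'last_use'.
    Returns (names of evicted functions, surviving functions)."""
    target = zone_size // 4                                  # free a quarter
    by_age = sorted(functions, key=lambda f: f['last_use'])  # oldest first
    freed, evicted = 0, []
    while freed < target and by_age:
        victim = by_age.pop(0)
        evicted.append(victim['name'])
        freed += victim['size']
    return evicted, by_age

functions = [{'name': 'Integer>>benchFib', 'size': 300, 'last_use': 5},
             {'name': 'Object>>at:put:',   'size': 200, 'last_use': 1},
             {'name': 'SmallInteger>>+',   'size': 400, 'last_use': 3}]
evicted, survivors = compact(functions, zone_size=1600)
print(evicted)  # ['Object>>at:put:', 'SmallInteger>>+']
```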
We use three keywords to identify different addresses in the machine code zone. CogCode is the beginning of the machine code zone, before the trampolines and enilopmarts. CCFree is the beginning of the unused part of the machine code zone. CCEnd is the last address of the machine code zone before the linked list of young referrers.
**Implementation.** Implementation-wise, VMProfiler is a sampling profiler. When profiling is started, a separate high-priority OS thread is started and collects the instruction pointers of the VM OS thread in a large circular buffer at an approximate cadence of 1.4kHz. Once the profiling ends (i.e., once the profiled code has been executed), a primitive method is available to gather the samples from the VM into a Bitmap. To understand to which functions the samples correspond to, VMProfiler requests:
- The symbol table of the VM executable and all external plugins.
- A description of the native functions currently present in the machine code zone.
Each function (indifferently, a native function or a C function) is represented as a function symbol (the name of the function, either the Smalltalk function name with the method class and the selector or the C function symbol), a start address and the last address. VMProfiler uses these ranges to find out in which function each profiling sample corresponds to, as shown in Figure 2. Then, the profiler generates a report, either in the form of a string or through a user interface.
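This range lookup can be sketched as follows (an illustrative Python sketch, not the profiler's actual Smalltalk code; the symbol names, addresses and samples are invented for the example):

```python
# Attributing instruction-pointer samples to function symbols by their
# address ranges, as VMProfiler does conceptually.
from bisect import bisect_right

class FunctionSymbol:
    def __init__(self, name, start, limit):
        self.name = name    # Smalltalk selector or C function symbol
        self.start = start  # start address of the function
        self.limit = limit  # last address of the function

def attribute_samples(symbols, samples):
    """Count, per function, the samples falling in [start, limit)."""
    ordered = sorted(symbols, key=lambda s: s.start)
    starts = [s.start for s in ordered]
    counts = {s.name: 0 for s in ordered}
    for pc in samples:
        i = bisect_right(starts, pc) - 1     # rightmost start <= pc
        if i >= 0 and pc < ordered[i].limit:
            counts[ordered[i].name] += 1
    return counts

symbols = [FunctionSymbol('Behavior>>#new', 0x8901200, 0x8901400),
           FunctionSymbol('Integer>>benchFib', 0x8901400, 0x8901800)]
samples = [0x89012F0, 0x8901450, 0x8901460]
print(attribute_samples(symbols, samples))
# {'Behavior>>#new': 1, 'Integer>>benchFib': 2}
```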
**Primitive Cog Constituents.** The primitiveCollectCogCodeConstituents provides VMProfiler with the description of the native functions currently present in the machine code zone it needs. This primitive answers an array of pair-wise elements as shown in Figure 3. The first element of the pair is either:
- the name of a trampoline/enilopmart, or
- a function pointer, or
- the name of a selector (for PICs), or
- annotations (i.e., CCFree, CCEnd).
The second item of the pair is the start address of this element in the machine code zone.
primitiveCollectCogCodeConstituents is called once the profiling samples have been gathered. These samples are mapped to the data answered by the primitive. For instance, if a sample is equal to `0x89012F0`, one can find out that it refers to `Behavior>>#new` thanks to the primitive, as shown in Figure 3.
### 2.3 Debugger mapping
To be able to debug native functions as if they were executed as bytecode functions by the bytecode interpreter, when Cog’s JIT generates a native function for a given bytecode function, it generates a list of annotations [Bér16] allowing one to reconstruct the interpreter state of the function activation at any point where the code execution can be interrupted (message sends, conditional branches or back jumps). Each annotation includes a mapping between the pc in the machine code and the pc in the bytecode of the method. The VM is able to use these annotations to reify function activations and provide it to Pharo.
### 2.4 Problem
The VM development team is currently working on the implementation of an optimising JIT for the Cog VM. These optimisations include speculative inlining, as described in the work of Hölzle and Ungar [HU94], in a similar way to production VMs such as Java’s hotspot VM (the hotspot VM is the default virtual machine for Java [PVC01]) and Javascript’s V8 engine (V8 is mostly used as the Javascript engine for Google Chrome and NodeJS [Goo08]). The optimising JIT is designed as a bytecode-function-to-bytecode-function runtime optimising compiler, re-using the existing JIT as a back-end to generate native functions. In this context, optimised functions are present both in the form of

bytecode functions and native functions, using the extended bytecode set described in the work of Béra et al. [BM14]. When profiling optimised code for benchmarks such as the Games benchmark [GB04], VMProfiler now shows that all the time is spent in a single function (the function where the rest of the functions used by the benchmark are inlined). To improve performance and tune the optimising JIT, the VM development team requires more information about where the time is spent in optimised functions, for example, in which range of bytecodes the time is spent.
**Problem statement: how to provide detailed profiling information for large profiled native functions?**
To address this problem, we propose an implementation that takes advantage of an API used for debugging, to map machine code pc to bytecode pc, to be able to identify in which bytecode range the time is spent in a function. The implementation is specific to the Cog VM, with a working version in both Squeak and Pharo. A similar design could apply in other VMs featuring similar debugging capabilities.
3. Solution
To accurately profile large native functions, we re-use the API available for debugging to identify in which section of a function the time is spent. The solution is split into two steps. First, we enhanced the primitive providing the description of the native functions present in the machine code zone so that it provides a mapping between machine code pcs and bytecode pcs in addition to the start address of each function. Second, we used this mapping to identify in which range of bytecodes each sample falls.
3.1 Improved primitive
In the improved version of the primitive, if the native function has at least one machine code pc mapped to a bytecode pc, the primitive answers for the function, instead of the start address, an array starting with the start address followed by pairs of machine code pc and the corresponding mapped bytecode pc.
For example, the function `foobarbaz` in Figure 4 sends 3 messages. Once translated to bytecode, there are indeed 3 send bytecodes, each responsible for the sending of a message (on bytecode pc 26, 29 and 32, in bold in Figure 4). For this function, the original primitive was answering 2 elements: the pointer to the native function and its start address in the machine code zone.
As you can see in Figure 5, the improved primitive still answers 2 elements, but, while the first one remains unchanged and still refers to the name of the function, the second one is an array, because the 3 send bytecodes are mapped. The first element of this array is the starting address of the function in the machine code zone. The other elements come in pairs: the first one is the machine code pc, the second one is the bytecode pc. The results answered by the improved primitive are then used to determine bytecode ranges in the function. In Figure 5, there are 4 bytecode ranges, each delimited by a mapped bytecode pc.
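Decoding the pairwise array answered by the improved primitive into bytecode ranges can be sketched like this (a hypothetical Python sketch, not the VM's code; the start address and mapped pcs follow the foobarbaz example, and the limit address is the one shown in Figure 6):

```python
# Turn the improved primitive's per-function array,
# [startAddress, mcpc1, bcpc1, mcpc2, bcpc2, ...], into bytecode ranges.
def bytecode_ranges(desc, limit):
    """Return (mc_start, mc_end, (bc_from, bc_to)) tuples, one per range
    delimited by the mapped bytecode pcs; bc_to is None for the tail."""
    start, pairs = desc[0], desc[1:]
    ranges = []
    prev_mc, prev_bc = start, 1  # bytecode pcs start at 1
    for mc, bc in zip(pairs[0::2], pairs[1::2]):
        ranges.append((prev_mc, mc, (prev_bc, bc)))
        prev_mc, prev_bc = mc, bc
    ranges.append((prev_mc, limit, (prev_bc, None)))  # tail range
    return ranges

# foobarbaz (Figure 5): start address, then three mapped send bytecodes
desc = [0x0000FF45, 0x0000FF47, 26, 0x0000FF52, 29, 0x0000FF63, 32]
for mc_start, mc_end, bc in bytecode_ranges(desc, limit=0x00010A25):
    print(hex(mc_start), hex(mc_end), bc)
```

As in the paper's example, the three mapped pcs delimit four bytecode ranges.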

3.2 Accurate mapping and report
To compute the profiling statistics, the profiler uses the primitive to create a description of the native functions currently present in the machine code zone. Each function is represented by a `FunctionSymbol` object, characterized by the name of the function and its starting and limit addresses in the machine code zone. A new field, `mcpcbcpcmap` (standing for machine code program counter to bytecode program counter map), has been added to `FunctionSymbol` to take the results of the modified primitive into account. This dictionary associates a machine code pc with a bytecode pc.
As shown in Figure 6, this new field helps with identifying where the execution time is spent. For instance, we know that `foobarbaz` starts at `0x0000FF45`, and that the first mapped bytecode pc (26) is at `0x0000FF47`: it means that the 12
samples within this address range refer to the bytecode instructions between 1 and 26. The same applies to the other entries: the single sample between 0x0000FF47 and 0x0000FF52 refers to the bytecode instructions between 26 and 29.
In Figure 6, 1932 samples were gathered in total and 20 of them referred to foobarbaz. Among these 20 samples, 12 fell between foobarbaz's bytecode pcs 1 and 26. Therefore, 60% of the time spent in foobarbaz was spent between these bytecode pcs.
<table>
<thead>
<tr>
<th>aFunctionSymbol</th>
<th>sampleBag</th>
</tr>
</thead>
<tbody>
<tr>
<td>
name: foobarbaz<br>
address: 0x0000FF45<br>
limit: 0x00010A25<br>
mcpcbcpcmap: 0x0000FF47 -> 26<br>
0x0000FF52 -> 29<br>
0x0000FF63 -> 32
</td>
<td>
0x0000FF47<br>
0x0000FF49<br>
0x0000FF62<br>
0x00010A55<br>
0x0002EAC3
</td>
</tr>
</tbody>
</table>
Figure 6: Mapping with new feature
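The statistic described above can be sketched in Python as follows (an illustrative sketch; the individual sample addresses are invented, but the counts of 12 samples in the first range, 1 in the second and 20 in total follow the foobarbaz example, the remaining counts being arbitrary):

```python
# Bucket a function's samples into its bytecode ranges and report the
# percentage of the function's time spent in each range.
from bisect import bisect_right

def range_percentages(start, mapping, samples):
    """mapping: list of (machine code pc, bytecode pc) pairs, sorted by
    machine code pc. Returns the percentage of samples per range."""
    boundaries = [start] + [mc for mc, _bc in mapping]
    counts = [0] * len(boundaries)
    for pc in samples:
        counts[bisect_right(boundaries, pc) - 1] += 1
    total = len(samples)
    return [round(100.0 * c / total, 2) for c in counts]

# Mapped pcs of foobarbaz (Figure 6); 12 samples land before the first
# mapped pc and 1 between the first and second, as in the example.
mapping = [(0x0000FF47, 26), (0x0000FF52, 29), (0x0000FF63, 32)]
samples = ([0x0000FF46] * 12 + [0x0000FF50] +
           [0x0000FF60] * 3 + [0x0000FF70] * 4)
print(range_percentages(0x0000FF45, mapping, samples))
# [60.0, 5.0, 15.0, 20.0]
```

The first entry reproduces the 60% figure computed in the text for the bytecode range 1 to 26.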
4. Example
In this section, we present a concrete example of profiling a benchmark.
We profiled the following benchmark: 10 tinyBenchmarks, first using the existing VMProfiler, and then the detailed VMProfiler. Figure 7 puts the two profiling reports side by side. Among the jitted methods, Integer>>benchFib was the one in which most of the execution time was spent (around 45% of the total time).
In the original version of the profiler (left-hand side of Figure 7), one cannot identify where those 45% of the total execution time are spent. In the detailed version (right-hand side), however, the method is decomposed into 8 bytecode ranges: one can then identify in which bytecode range(s) most of the time is spent. Here, 57.95% of the time is spent in the entry range. The next significant part of the time is spent in the last bytecode instructions (12% starting from bytecode pc 38).
In the Integer>>benchmark function, most of the time is spent in the 74 - 78 bytecode range, referring to the following bytecode instructions:
- 75 <6B> popIntoTemp: 3
- 76 <13> pushTemp: 3
- 77 <10> pushTemp: 0
- 78 <84> send: <=
In the next bytecode instructions of the Integer>>benchmark method, one can find the 90 <A3> jumpTo: 76 instruction. It indicates that there is a loop between the 76 and 90 bytecode instructions. Thus, we can assume that the time is mostly spent in the 76, 77 and 78 bytecode instructions.
5. Related Work
The field of execution profiling is vast and has received a large attention from the research community. This section presents the different works related to the effort presented in this paper.
5.1 Standard Pharo profilers
Smalltalk, and therefore Pharo, offers a sophisticated reflective API. Threads are openly exposed and the stack of each active thread may be introspected. In addition, the next thread in the execution queue may be accessed. MessageTally and AndreasSystemProfiler are two standard profilers in Pharo that exploit this advanced reflective API. Both follow the same principle: a high-priority thread is run and regularly samples the thread queue. The frequency of the samples typically ranges from 1 to 10 milliseconds. After the program execution, the frequency of method context frames is determined and used to estimate the time spent in each frame.
Both profilers essentially rely on the Smalltalk reflective API. Since most of the computation happens within the image, the profiler's overhead is likely to be high and intrusive (e.g., an application takes longer to execute when being profiled). It is known that building a high-precision sampling profiler is difficult and prone to error [MSHD08, MDHS10, Ber11].
5.2 Support in the Virtual Machine
The Java Virtual Machine Tool Interface (JVM TI) is a native programming interface offered by the JVM to build dedicated profiling and debugging tools [Ora02]. JVM TI provides both a way to inspect the state and to control the execution of Java applications.
A JVM TI client, defined as an agent, can be notified of particular events emitted by the JVM. In total, a client may receive 31 different kinds of JVM events. These events cover breakpoints, class loading, garbage collection, method execution, monitor, and virtual machine start up and destruction.
JVisualVM [Ora08] is a visual profiling tool and framework. JVisualVM offers a large set of features, including
**Existing profiler report**
/media/sophie/Data/GSOC/Part2-Precision/VM_Test/pharo-vm/lib/pharo/5.0-201706131152/pharo 2017-06-19 22:19:34
eden size: 3,801,936 stack pages: 50 code size: 1,048,576
7.126 seconds; sampling frequency 1450 hz
10305 samples in the VM (10332 samples in the entire program) 99.74% of total
10007 samples in generated vm code 97.11% of entire vm (96.85% of total)
298 samples in vanilla vm code 2.89% of entire vm (2.86% of total)
% of generated vm code (% of total) (samples) (cumulative)
46.55% (45.68%) Integer>>benchFib (4658) (46.55%)
20.40% (19.75%) Integer>>benchmark (2041) (66.94%)
17.21% (16.67%) Object>>at:put: (1722) (84.15%)
10.10% (9.79%) SmallInteger>>+ (1011) (94.25%)
5.68% (5.50%) Object>>at: (568) (99.93%)
0.03% (0.03%) SequenceableCharacterStream>>to:put:(3) (99.96%)
0.02% (0.02%) Array>>replac...m:to:put:(2) (99.98%)
0.02% (0.02%) ...others... (2) (100.0%)
% of vanilla vm code (% of total) (samples) (cumulative)
47.65% (1.37%) primitiveStringReplace (142) (47.65%)
27.18% (0.78%) instantiateClassIndexableSize (81) (74.83%)
8.72% (0.25%) scavengesReferencesOf (26) (83.56%)
5.70% (0.16%) copyAndForward (17) (89.26%)
3.36% (0.10%) doScavenge (6) (95.02%)
1.99% (0.06%) heartbeat_handler (6) (97.01%)
1.99% (0.06%) doScavenge (6) (97.01%)
1.99% (0.06%) addressAfter (6) (97.01%)
1.88% (0.05%) heartbeat_handler (5) (98.31%)
1.34% (0.04%) bytesInObject (4) (97.65%)
0.67% (0.02%) shouldRemapObj (2) (98.32%)
1.68% (0.05%) ...others... (5) (100.0%)
**Memory**
old +157,616 bytes
free +0 bytes
**GCs**
full 0 totaling 0ms (0.0% elapsed time)
scavenges 182 totaling 57ms (0.8% elapsed time), avg 0.313ms
tenures 0
root table 0 overflows
**Compiled Code Compactions**
0 totaling 0ms (0.0% elapsed time)
**Events**
Process switches 38 (5 per second)
ioProcessEvents calls 390 (49 per second)
Interrupt checks 3745 (526 per second)
Event checks 3743 (525 per second)
Stack overflows 209 (29 per second)
Stack page divorces 0 (0 per second)
---
**Accurate profiler report**
/media/sophie/Data/GSOC/Part2-Precision/VM_Test/pharo-vm/lib/pharo/5.0-201706131152/pharo 2017-06-19 22:16:11
eden size: 3,801,936 stack pages: 50 code size: 1,048,576
6.948 seconds; sampling frequency 1495 hz
10368 samples in the VM (10389 samples in the entire program) 99.80% of total
10067 samples in generated vm code 97.10% of entire vm (96.90% of total)
301 samples in vanilla vm code 2.90% of entire vm (2.90% of total)
% of generated vm code (% of total) (samples) (cumulative)
47.03% (45.58%) Integer>>benchFib (4735) (47.03%)
57.95% 1->23 (2744) (57.95%)
5.89% 24->30 (279) (63.84%)
2.89% 30->31 (137) (66.74%)
8.24% 31->34 (390) (74.97%)
3.38% 34->35 (160) (78.35%)
6.59% 35->36 (122) (84.94%)
3.06% 36->38 (145) (88.00%)
12.00% 38->6667 (668) (100.0%)
20.62% (19.98%) Integer>>benchmark (2076) (67.66%)
0.05% 44->50 (1) (0.05%)
11.32% 52->60 (235) (11.37%)
3.90% 61->65 (81) (15.27%)
4.03% 65->68 (84) (19.32%)
6.17% 66->70 (125) (25.46%)
2.94% 70->74 (61) (28.42%)
35.36% 74->78 (734) (63.78%)
8.24% 79->84 (171) (72.01%)
8.62% 84->88 (179) (80.64%)
8.67% 88->90 (180) (89.31%)
2.12% 90->94 (44) (91.43%)
5.15% 94->98 (107) (96.58%)
3.37% 98->100 (70) (99.95%)
0.05% 100->104 (1) (100.0%)
15.36% (14.88%) Object>>at:put: (1546) (63.01%)
100.0% 1->85 (1546) (100.0%)
10.78% (10.44%) SmallInteger>>+ (1085) (93.79%)
100.0% 1->22 (1085) (100.0%)
6.07% (5.88%) Object>>at: (611) (99.86%)
100.0% 1->57 (611) (100.0%)
0.04% (0.04%) SequenceableCharacterStream>>to:put:(4) (99.90%)
0.03% (0.03%) Array class>>new: (3) (99.93%)
0.02% (0.02%) Array>>replac...m:to:put:(2) (99.95%)
0.02% (0.02%) SmallInteger>>-- (2) (99.97%)
0.03% (0.03%) ...others... (3) (100.0%)
% of vanilla vm code (% of total) (samples) (cumulative)
45.18% (1.31%) primitiveStringReplace (136) (45.18%)
30.56% (0.89%) instantiateClassIndexableSize (92) (75.75%)
9.30% (0.27%) scavengesReferencesOf (28) (85.05%)
3.99% (0.12%) copyAndForward (12) (89.04%)
1.99% (0.06%) addressAfter (6) (91.03%)
1.99% (0.06%) doScavenge (6) (93.02%)
1.99% (0.06%) heartbeat_handler (6) (95.02%)
1.99% (0.06%) moveFramesInObject>>firstPage.isra.stereotype (6) (96.98%)
0.66% (0.02%) handleStackOverflow (2) (97.67%)
2.33% (0.07%) ...others... (7) (100.0%)
**Memory**
old +2,425,712 bytes
free +0 bytes
**GCs**
full 0 totaling 0ms (0.0% elapsed time)
scavenges 192 totaling 53ms (0.783% elapsed time), avg 0.291ms
tenures 0
root table 0 overflows
**Compiled Code Compactions**
0 totaling 0ms (0.0% elapsed time)
**Events**
Process switches 38 (5 per second)
ioProcessEvents calls 340 (49 per second)
Interrupt checks 3656 (526 per second)
Event checks 3660 (527 per second)
Stack overflows 15183 (2185 per second)
Stack page divorces 0 (0 per second)
---
**Figure 7: Comparison of profiling reports for 10 tinyBenchmark**
remote debugging / profiling, thread analysis, heap snapshot, and garbage collection monitoring.
5.3 Generic profilers
The profiling tool described in this paper is intended to be used to address performance issues. The software engineering community has produced software execution monitoring techniques to profile various aspects related to an execution [RBN12, RBNR12].
A common profiling technique is to use instrumentation instead of sampling [MLG05]. For example, Spy [BBRR11] is a framework to build domain-specific profilers, with applications ranging from memory consumption analysis [IB15] to test coverage [BP12].
6. Conclusion and Future Work
In this paper, we have presented the evolutions carried out in VMProfiler, a profiler that identifies where execution time is spent on the VM side, i.e. in the C code of the VM (interpreter, garbage collector) and in the jitted functions.
As this kind of tool is typically used to find out where to boost performance, the existing VMProfiler needed improvement: it could not provide detailed profiling data for large native code functions. Indeed, it could report that most of the execution time was spent in a function, but not where in that function the time was spent. This problem grew more significant as the JIT performed more and more optimisations: inlining, especially, makes the jitted functions harder to profile accurately.
This paper describes a way to address this problem: an API used for debugging purposes offers a mapping between machine code pcs and bytecode pcs. This mapping is used to determine bytecode ranges in a large native code function, and thus to attribute each sample to one range or another. VMProfiler now provides detailed statistical profiling reports.
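As an illustration, the attribution of samples to bytecode ranges can be sketched as a binary search over the range start addresses (all names and the data layout here are hypothetical; in the real implementation the range boundaries come from the machine-code-pc to bytecode-pc mapping of the debugging API):

```cpp
#include <algorithm>
#include <cstdint>
#include <map>
#include <vector>

// Hypothetical sketch: bucket machine-code sample pcs into bytecode ranges.
// 'range_starts' holds the machine-code start pc of each bytecode range of
// one jitted method, sorted ascending.
std::map<std::size_t, int> bucketSamples(
    const std::vector<uintptr_t>& range_starts,
    const std::vector<uintptr_t>& sample_pcs) {
  std::map<std::size_t, int> counts;  // range index -> sample count
  for (uintptr_t pc : sample_pcs) {
    // upper_bound finds the first range starting after pc;
    // the sample belongs to the range just before it.
    auto it = std::upper_bound(range_starts.begin(), range_starts.end(), pc);
    if (it == range_starts.begin()) continue;  // pc before the first range
    counts[static_cast<std::size_t>(it - range_starts.begin() - 1)]++;
  }
  return counts;
}
```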
Further improvements are currently being considered:
• For now, VMProfiler is available in Pharo only headless. A graphical user interface could be implemented to present profiling data from another perspective.
• Customers in a Windows environment sometimes request profiling, yet VMProfiler is currently available for Mac and Linux only. Implementing VMProfiler for Windows would address this.
• Currently, VMProfiler shows PICs without distinguishing closed PICs from open PICs. It would be useful to extend it to show this information (this requires changes in primitiveCollectCogCodeConstituents).
Acknowledgments
We thank Eliot Miranda for the original implementation of VMProfiler and his support during the evolution.
This work was supported by Ministry of Higher Education and Research, Nord-Pas de Calais Regional Council, CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020.
References
---
Title Miscellaneous Functions in C++
Version 0.3.0
Description Provides utility functions that are simple and frequently used, but may require higher performance than what can be obtained from base R. Incidentally provides support for 'reverse geocoding', such as matching a point with its nearest neighbour in another array. Used as a complement to package 'hutils' by sacrificing compilation or installation time for higher running speeds. The name is a portmanteau of the author and 'Rcpp'.
URL https://github.com/hughparsonage/hutilscpp
BugReports https://github.com/hughparsonage/hutilscpp/issues
License GPL-2
Encoding UTF-8
LazyData true
LinkingTo Rcpp
Imports Rcpp, data.table, hutils, utils
RoxygenNote 6.1.1
Suggests bench, testthat (>= 2.1.0), TeXCheckR, covr
NeedsCompilation yes
Author Hugh Parsonage [aut, cre]
Maintainer Hugh Parsonage <hugh.parsonage@gmail.com>
Repository CRAN
Date/Publication 2019-10-16 11:20:02 UTC
R topics documented:
anyOutside
are_even
as_integer_if_safe
bench_system_time
cumsum_reset
anyOutside
Are any values outside the interval specified?
Usage
anyOutside(x, a, b, nas_absent = NA, na_is_outside = NA)
Arguments
x
A numeric vector.
a, b
Single numeric values designating the interval.
nas_absent
Are NAs known to be absent from x? If nas_absent = NA (the default), x will be searched for NAs; if nas_absent = TRUE, x will not be checked; if nas_absent = FALSE, the answer is NA_integer_ if na.rm = FALSE, otherwise only non-NA values outside [a,b] are considered.
If nas_absent = TRUE but x has missing values then the result is unreliable.
na_is_outside
(logical, default: NA) How should NAs in x be treated?
If NA, the default, then the first value in x that is either outside [a,b] or NA is detected: if it is NA, then NA_integer_ is returned; otherwise the position of that value is returned.
If FALSE then NA values are effectively skipped; the position of the first known value outside [a,b] is returned.
If `TRUE` the position of the first value that is either outside \([a,b]\) or NA is returned.
Value
\(0L\) if no values in \(x\) are outside \([a,b]\). Otherwise, the position of the first value of \(x\) outside \([a,b]\).
Examples
```r
anyOutside(1:10, 1L, 10L)
anyOutside(1:10, 1L, 7L)
# na_is_outside = NA
anyOutside(c(1:10, NA), 1L, 7L) # Already outside before the NA
anyOutside(c(NA, 1:10, NA), 1L, 7L) # NA since it occurred first
anyOutside(c(1:7, NA), 1L, 7L, na_is_outside = FALSE)
anyOutside(c(1:7, NA), 1L, 7L, na_is_outside = TRUE)
```
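The core semantics (here for the `na_is_outside = TRUE` case, with NaN standing in for NA; a sketch, not the package's C++ source) amount to an early-exit scan:

```cpp
#include <cmath>
#include <vector>

// Sketch of anyOutside with na_is_outside = TRUE semantics: return the
// 1-based position of the first value outside [a, b] or NaN (standing in
// for NA); 0 if every value lies inside the interval.
int any_outside(const std::vector<double>& x, double a, double b) {
  for (std::size_t i = 0; i < x.size(); ++i) {
    if (std::isnan(x[i]) || x[i] < a || x[i] > b)
      return static_cast<int>(i + 1);  // early exit at first hit
  }
  return 0;
}
```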
---
**are_even**
*Are even*
Description
Are even
Usage
```r
are_even(x, check_integerish = TRUE)
which_are_even(x, check_integerish = TRUE)
```
Arguments
- `x` An integer vector. Double vectors may also be used.
- `check_integerish` (logical, default: `TRUE`) Should the values in \(x\) be checked for non-integer values if \(x\) is a double vector. If `TRUE` and values are found to be non-integer a warning is emitted.
Value
For `are_even`, a logical vector the same length as \(x\), `TRUE` whenever \(x\) is even.
For `which_are_even` the integer positions of even values in \(x\).
as_integer_if_safe
Coerce from double to integer if safe
Description
The same as `as.integer(x)` but only if `x` consists only of whole numbers and is within the range of integers.
Usage
as_integer_if_safe(x)
Arguments
x
A double vector. If not a double vector, it is simply returned without any coercion.
Examples
```r
N <- 1e6 # run with 1e9
x <- rep_len(as.double(sample.int(100)), N)
alt_as_integer <- function(x) {
xi <- as.integer(x)
if (isTRUE(all.equal(x, xi))) {
xi
} else {
x
}
}
bench_system_time(as_integer_if_safe(x))
#> process real
#> 6.453s 6.452s
bench_system_time(alt_as_integer(x))
#> process real
#> 15.516s 15.545s
bench_system_time(as.integer(x))
#> process real
#> 2.469s 2.455s
```
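The safety condition itself reduces to three per-element checks; a C++ sketch (an assumed formulation, not the package's exact rule — note R additionally reserves `INT_MIN` for `NA_integer_`):

```cpp
#include <climits>
#include <cmath>
#include <vector>

// Sketch: is it safe to coerce this double vector to int without loss?
bool safe_as_integer(const std::vector<double>& x) {
  for (double v : x) {
    if (!std::isfinite(v)) return false;           // NaN/Inf cannot coerce
    if (v != std::floor(v)) return false;          // not a whole number
    if (v < INT_MIN || v > INT_MAX) return false;  // outside int range
  }
  return true;
}
```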
bench_system_time
Evaluate time of computation
Description
(Used for examples and tests)
Usage
```r
bench_system_time(expr)
```
Arguments
- `expr`
Passed to `system_time`.
---
**cumsum_reset**
*Cumulative sum unless reset*
**Description**
Cumulative sum unless reset
**Usage**
```r
cumsum_reset(x, y = as.integer(x))
```
**Arguments**
- `x`
A logical vector indicating when the sum should continue.
- `y`
Optional: a numeric vector the same length as `x` to cumulatively sum.
**Example**
```r
cumsum_reset(c(TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE))
cumsum_reset(c(TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE),
             c(1000, 1000, 10000, 10, 20, 33, 0))
```
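Assuming that a FALSE in `x` restarts the running sum at the current `y` value (an interpretation consistent with the examples, not the package's stated definition), the semantics can be sketched as:

```cpp
#include <vector>

// Sketch of cumsum_reset: cumulative sum of y that restarts whenever the
// corresponding element of x is false (assumed reset semantics).
std::vector<double> cumsum_reset(const std::vector<bool>& x,
                                 const std::vector<double>& y) {
  std::vector<double> out(y.size());
  double run = 0.0;
  for (std::size_t i = 0; i < y.size(); ++i) {
    run = x[i] ? run + y[i] : y[i];  // continue the sum, or restart at y[i]
    out[i] = run;
  }
  return out;
}
```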
do_pmaxC
Internal pmaxC helpers
Description
Internal functions used when the overheads of assertions would be too expensive. The _IP_ flavours modify in place.
Usage
- `do_pmaxC_dbl(x, a, in_place = FALSE)`
- `do_pmaxC_int(x, a, in_place = FALSE)`
- `do_pmax0(x, in_place = FALSE)`
- `do_pmaxIPnum0(x)`
- `do_pmaxIPint0(x)`
Arguments
- `x`: A numeric/integer vector.
- `a`: A single numeric/integer.
- `in_place`: Modify x in place?
---
do_pmaxV
Parallel maximum in C++
Description
A faster pmax().
Usage
- `do_pmaxNumNum(x, y, in_place = FALSE)`
- `do_pmaxIntInt(x, y, in_place = FALSE)`
Arguments
- `x`: A numeric vector.
- `y`: A numeric vector, the same length as x.
- `in_place`: (bool, default: false) Should the function operate on x in-place?
**Value**
The parallel maximum of the input values.
---
**do_pminC**
*Parallel minimum*
**Description**
A faster `pmin()`.
**Arguments**
- `x`
A numeric vector.
- `a`
A single numeric value.
- `in_place`
(bool, default: false) Should the function operate on `x` in-place?
**Value**
The parallel minimum of the input values. The 0 versions are shortcuts for `a = 0`.
**Note**
This function will always be faster than `pmin(x, a)` when `a` is a single value, but can be slower than `pmin.int(x, a)` when `x` is short. Use this function when comparing a numeric vector with a single value.
---
**do_pminV**
*Parallel minimum*
**Description**
A faster `pmin()`.
**Usage**
- `do_pminV_dbl(x, y, in_place = FALSE)`
- `do_pminV_int(x, y, in_place = FALSE)`
**Arguments**
- `x`
A numeric vector.
- `y`
A numeric vector, the same length as `x`.
- `in_place`
(bool, default: false) Modify `x` in-place?
Value
The parallel minimum of the input values.
helper
Description
Helper
Usage
helper(expr)
Arguments
expr An expression
Value
The expression evaluated.
Examples
x6 <- 1:6
helper(x6 + 1)
is_constant
Is a vector constant?
Description
Efficiently decide whether an atomic vector is constant; that is, contains only one value.
Equivalent to
data.table::uniqueN(x) == 1L
or
forecast::is.constant(x)
Usage
is_constant(x)
isntConstant(x)
**Arguments**
x
An atomic vector. Only logical, integer, double, and character vectors are supported. Others may work but have not been tested.
**Value**
Whether or not the vector x is constant:
- `is_constant` TRUE or FALSE. Missing values are considered to be the same.
- `isntConstant` If constant, 0L; otherwise, the first integer position at which x has a different value to the first.
This has the virtue that `(!isntConstant(x)) == is_constant(x)`.
**Examples**
```r
library(hutilscpp)
library(data.table)
N <- 1e9L
N <- 1e6 # to avoid long-running examples on CRAN
## Good-cases
nonconst <- c(integer(1e5), 13L, integer(N))
bench_system_time(uniqueN(nonconst) == 1L)
#> process real
#> 15.734s 2.893s
bench_system_time(is_constant(nonconst))
#> process real
#> 0.000 0.000
bench_system_time(isntConstant(nonconst))
#> process real
#> 0.000 0.000
## Worst-cases
consti <- rep(13L, N)
bench_system_time(uniqueN(consti) == 1L)
#> process real
#> 5.734s 1.202s
bench_system_time(is_constant(consti))
#> process real
#> 437.500ms 437.398ms
bench_system_time(isntConstant(consti))
#> process real
#> 437.500ms 434.109ms
nonconsti <- c(consti, -1L)
bench_system_time(uniqueN(nonconsti) == 1L)
#> process real
#> 17.812s 3.348s
bench_system_time(is_constant(nonconsti))
#> process real
#> 437.500ms 431.104ms
```
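The speed of `isntConstant` comes from a single early-exit pass comparing each element with the first; a sketch (an empty vector is treated as constant here):

```cpp
#include <vector>

// Sketch of isntConstant: 0 if the vector is constant, otherwise the
// 1-based position of the first element that differs from the first.
std::size_t isnt_constant(const std::vector<int>& x) {
  for (std::size_t i = 1; i < x.size(); ++i)
    if (x[i] != x[0]) return i + 1;  // early exit at first mismatch
  return 0;
}
```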
logical3
Vectorized logical with support for short-circuits
Description
Vectorized logical with support for short-circuits
Usage
and3(x, y, z = NULL, nas_absent = FALSE)
or3(x, y, z = NULL)
Arguments
x, y, z Logical vectors. If z is NULL the function is equivalent to the binary versions; only x and y are used.
nas_absent (logical, default: FALSE) Can it be assumed that x, y, z have no missing values? Set to TRUE only when you are sure that is the case; setting it to TRUE falsely has no defined behaviour.
Value
For and3, the same as x & y & z; for or3, the same as x | y | z, designed to be efficient when component-wise short-circuiting is available.
**match_nrst_haversine**
*Match coordinates to nearest coordinates*
**Description**
When geocoding coordinates to known addresses, an efficient way to match the given coordinates with the known is necessary. This function provides this efficiency by using C++ and allowing approximate matching.
**Usage**
```r
match_nrst_haversine(lat, lon, addresses_lat, addresses_lon,
Index = seq_along(addresses_lat), cartesian_R = NULL,
close_enough = 10, excl_self = FALSE, as.data.table = TRUE,
.verify_box = TRUE)
```
**Arguments**
- `lat, lon`: Coordinates to be geocoded. Numeric vectors of equal length.
- `addresses_lat, addresses_lon`: Coordinates of known locations. Numeric vectors of equal length (likely to be a different length than the length of `lat`, except when `excl_self = TRUE`).
- `Index`: A vector the same length as `lat` to encode the match between `lat`, `lon` and `addresses_lat`, `addresses_lon`. The default is to use the integer position of the nearest match to `addresses_lat`, `addresses_lon`.
- `cartesian_R`: The maximum radius of any address from the points to be geocoded. Used to accelerate the detection of minimum distances. Note, as the argument name suggests, the distance is in cartesian coordinates, so a small number is likely.
- `close_enough`: The distance, in metres, below which a match will be considered to have occurred. (The distance that is considered "close enough" to be a match.) For example, `close_enough = 10` means the first location within ten metres will be matched, even if a closer match occurs later. May be provided as a string to emphasize the units, e.g. `close_enough = "0.25km"`. Only km and m are permitted.
- `excl_self`: (bool, default: FALSE) For each `x_i` of the first coordinates, exclude the `y_i`-th point when determining closest match. Useful to determine the nearest neighbour within a set of coordinates, viz. `match_nrst_haversine(x, y, x, y, excl_self = TRUE)`.
- `as.data.table`: Return result as a `data.table`? If FALSE, a list is returned. TRUE by default to avoid dumping a huge list to the console.
- `.verify_box`: Check the initial guess against other points within the box of radius $\ell^\infty$.
Value
A list (or data.table if as.data.table = TRUE) with two elements, both the same length as lat, giving for point lat,lon:
pos the position (or corresponding value in `Index`) in addresses_lat, addresses_lon nearest to lat,lon.
dist the distance, in kilometres, between the two points.
Examples
lat2 <- runif(5, -38, -37.8)
lon2 <- rep(145, 5)
lat1 <- c(-37.875, -37.91)
lon1 <- c(144.96, 144.978)
match_nrst_haversine(lat1, lon1, lat2, lon2, 0L)
match_nrst_haversine(lat1, lon1, lat1, lon1, 11:12, excl_self = TRUE)
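The distance kernel behind the matching is the haversine great-circle formula; a sketch assuming a mean Earth radius of 6371 km:

```cpp
#include <cmath>

// Sketch of the haversine great-circle distance in kilometres.
double haversine_km(double lat1, double lon1, double lat2, double lon2) {
  const double R = 6371.0;                    // mean Earth radius, km (assumed)
  const double d2r = std::acos(-1.0) / 180.0; // degrees -> radians
  double dlat = (lat2 - lat1) * d2r;
  double dlon = (lon2 - lon1) * d2r;
  double a = std::sin(dlat / 2) * std::sin(dlat / 2) +
             std::cos(lat1 * d2r) * std::cos(lat2 * d2r) *
                 std::sin(dlon / 2) * std::sin(dlon / 2);
  return 2.0 * R * std::asin(std::sqrt(a));
}
```

One degree of longitude at the equator is about 111.2 km, which gives a quick sanity check.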
pmaxC
Description
Faster pmax() and pmin().
Usage
pmaxC(x, a, in_place = FALSE)
pmax0(x, in_place = FALSE, sorted = FALSE)
pmaxV(x, y, in_place = FALSE)
pmax3(x, y, z, in_place = FALSE)
Arguments
x A numeric vector.
a A single numeric value.
in_place (logical, default: FALSE) Should x be modified in-place.
sorted If TRUE, x is assumed to be sorted, so the position of the first zero determines where the zeroes start or end.
y, z Other numeric vectors the same length as x
**Value**
The parallel maximum/minimum of the input values. `pmax0(x)` is shorthand for `pmaxC(x, 0)`, i.e. convert negative values in `x` to 0.
**Note**
This function will always be faster than `pmax(x, a)` when `a` is a single value, but can be slower than `pmax.int(x, a)` when `x` is short. Use this function when comparing a numeric vector with a single value.
Use `in_place = TRUE` only within functions when you are sure it is safe, i.e. not a reference to something outside the environment.
If `x` is nonnegative, so that `pmax0(x)` is simply `identity(x)`, the function will be much faster still, as the C++ code only starts allocating once a negative value is found.
**Examples**
```
pmaxC(-5:5, 2)
```
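The allocation note can be illustrated by a scan-then-write strategy: no element is written until the first negative value is found (a sketch of the idea only; the real code avoids copying the R vector entirely when no negative exists):

```cpp
#include <vector>

// Sketch of the pmax0 fast path: scan first, write only from the first
// negative value onwards; a fully nonnegative input needs no writes.
std::vector<double> pmax0(std::vector<double> x) {
  std::size_t i = 0;
  while (i < x.size() && x[i] >= 0.0) ++i;  // pure scan, no writes
  for (; i < x.size(); ++i)
    if (x[i] < 0.0) x[i] = 0.0;             // clamp negatives to zero
  return x;
}
```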
---
**pminC**
**Description**
Parallel minimum
**Usage**
```
pmin0(x, in_place = FALSE)
pminV(x, y, in_place = FALSE)
pminC(x, a = 0L, in_place = FALSE)
pmin3(x, y, z, in_place = FALSE)
```
**Arguments**
- `x`: A numeric vector.
- `in_place`: (logical, default: FALSE) Should $x$ be modified in-place.
- `y, z`: Other numeric vectors.
- `a`: A single number.
**Details**
The type of $x$ is preserved as far as possible.
Value
`pmin0(x)` is the same as `pmin(x, 0)`.
`pminV(x, y)` = `pmin(x, y)`.
`pminC(x, a)` = `pmin(x, a)` for length-one `a`.
`pmin3(x, y, z)` = `pmin(x, pmin(y, z))`.
Examples
```r
pminV(10:1, 1:10)
pmin0(-5:5)
seq.out <- function(x, y) seq(x, y, length.out = 10)
pmin3(seq.out(0, 10), seq.out(-5, 50), seq.out(20, -10))
```
---
poleInaccessibility Find a binary pole of inaccessibility
Description
Find a binary pole of inaccessibility
Usage
poleInaccessibility2(x = NULL, y = NULL, DT = NULL, x_range = NULL, y_range = NULL, copy_DT = TRUE)
poleInaccessibility3(x = NULL, y = NULL, DT = NULL, x_range = NULL, y_range = NULL, copy_DT = TRUE, test_both = TRUE)
Arguments
x, y Coordinates.
DT A data.table containing LONGITUDE and LATITUDE to define the x and y coordinates.
x_range, y_range Numeric vectors of length-2; the range of x and y. Use this rather than the default when the 'vicinity' of x,y is different from the minimum closed rectangle covering the points.
copy_DT (logical, default: TRUE) Run copy on DT before proceeding. If FALSE, DT will have additional columns updated by reference.
test_both (logical, default: TRUE) For 3, test both stretching vertically then horizontally and horizontally then vertically.
Value
poleInaccessibility2 A named vector containing the xmin, xmax and ymin, ymax coordinates of the largest rectangle of width an integer power of two that is empty.
poleInaccessibility3 Starting with the rectangle formed by poleInaccessibility2, the rectangle formed by stretching it out vertically and horizontally until the edges intersect the points x, y.
Examples
library(data.table)
library(hutils)
# A square with a 10 by 10 square of the northeast corner removed
x <- runif(1e4, 0, 100)
y <- runif(1e4, 0, 100)
DT <- data.table(x, y)
# remove the NE corner
DT_NE <- DT[implies(x > 90, y < 89)]
DT_NE[, poleInaccessibility2(x, y)]
DT_NE[, poleInaccessibility3(x, y)]
range_rcpp
Description
Range of a vector using Rcpp.
Usage
range_rcpp(x, anyNAx = anyNA(x), warn_empty = TRUE, integer0_range_is_integer = FALSE)
Arguments
x A vector for which the range is desired. Vectors with missing values are not supported and have no defined behaviour.
anyNAx (logical, default: anyNA(x), computed lazily). Set to FALSE only if x is known to contain no missing values (including NaN).
warn_empty (logical, default: TRUE) If x is empty (i.e. has no length), should a warning be emitted (like range)?
integer0_range_is_integer (logical, default: FALSE) If x is a length-zero integer, should the result also be an integer? Set to FALSE by default in order to be compatible with range, but can be set to TRUE if an integer result is desired, in which case range_rcpp(integer()) is (INT_MAX, -INT_MAX).
Value
A length-4 vector, the first two positions give the range and the next two give the positions in x where the max and min occurred.
This is almost equivalent to `c(range(x), which.min(x), which.max(x))`. Note that the type is not strictly preserved, but no loss should occur. In particular, logical x results in an integer result, and a double x will have double values for `which.min(x)` and `which.max(x)`.
A completely empty, logical x returns `c(NA, NA, NA, NA)` as an integer vector.
Examples
```r
x <- rnorm(1e3) # Not noticeable at this scale
bench_system_time(range_rcpp(x))
bench_system_time(range(x))
```
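The single pass can be sketched as tracking the extrema and their first 1-based positions together (no NA handling, matching the "missing values not supported" caveat; position order follows `c(range(x), which.min(x), which.max(x))`):

```cpp
#include <vector>

// Sketch of range_rcpp: one pass over a non-empty vector returning
// {min, max, which_min, which_max} with 1-based positions.
std::vector<double> range4(const std::vector<double>& x) {
  double mn = x[0], mx = x[0];
  std::size_t imn = 0, imx = 0;
  for (std::size_t i = 1; i < x.size(); ++i) {
    if (x[i] < mn) { mn = x[i]; imn = i; }  // strict <: keep first minimum
    if (x[i] > mx) { mx = x[i]; imx = i; }  // strict >: keep first maximum
  }
  return {mn, mx, double(imn + 1), double(imx + 1)};
}
```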
---
**squish**
*Squish into a range*
Description
Squish into a range
Usage
`squish(x, a, b, in_place = FALSE)`
Arguments
- `x`: A numeric vector.
- `a`, `b`: Lower and upper bounds
- `in_place`: (logical, default: FALSE) Should the function operate on `x` in place?
Value
A numeric/integer vector with the values of x "squished" between a and b; values above b replaced with b and values below a replaced with a.
Examples
`squish(-5:5, -1L, 1L)`
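Squishing is elementwise clamping; a sketch using C++17 `std::clamp`:

```cpp
#include <algorithm>
#include <vector>

// Sketch of squish: clamp each element into [a, b].
// Values below a become a; values above b become b.
std::vector<double> squish(std::vector<double> x, double a, double b) {
  for (double& v : x) v = std::clamp(v, a, b);
  return x;
}
```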
sum_isna
Number of missing values
Description
The count of missing values in an atomic vector, equivalent to `sum(is.na(x))`.
Usage
`sum_isna(x, do_anyNA = TRUE)`
Arguments
- `x` An atomic vector.
- `do_anyNA` Should `anyNA(x)` be executed before an attempt to count the NA's in `x` one-by-one? By default, set to `TRUE`, since it is generally quicker. It will only be slower when NA is rare and occurs late in `x`.
Examples
`sum_isna(c(1:5, NA))`
which3
which of three vectors are the elements (all, any) true?
Description
which of three vectors are the elements (all, any) true?
Usage
`which3(x, y, z, And = TRUE, anyNAx = anyNA(x), anyNAY = anyNA(y), anyNAz = anyNA(z))`
Arguments
- `x, y, z` Logical vectors. Either the same length or length-1.
- `And` Boolean. If `TRUE`, only indices where all of `x, y, z` are `TRUE` are returned; if `FALSE`, any index where `x, y, z` are `TRUE` is returned.
- `anyNAx, anyNAY, anyNAz` Whether or not the inputs have NA.
which_first
Where does a logical expression first return TRUE?
**Description**
A faster and safer version of `which.max` applied to simple-to-parse logical expressions.
**Usage**
`which_first(expr, verbose = FALSE)`
**Arguments**
- `expr`: An expression, such as `x == 2`.
- `verbose`: (logical, default: `FALSE`) If `TRUE` a message is emitted if `expr` could not be handled in the advertised way.
**Details**
If `expr` is of the form `LHS <operator> RHS`, where `LHS` is a single symbol, `operator` is one of `==`, `!=`, `>`, `>=`, `<`, `<=`, or `%in%`, and `RHS` is a single numeric value, then `expr` is not evaluated directly; instead, each element of `LHS` is compared individually.
If `expr` is not of the above form, then `expr` is evaluated and passed to `which.max`.
Using this function can be significantly faster than the alternatives when the computation of `expr` would be expensive, though the difference is only likely to be clear when `length(x)` is much larger than 10 million. But even for smaller vectors, it has the benefit of returning `0L` if none of the values in `expr` are `TRUE`, unlike `which.max`.
Compared to `Position` for an appropriate choice of `f` the speed of `which_first` is not much faster when the expression is `TRUE` for some position. However, `which_first` is faster when all elements of `expr` are `FALSE`. Thus `which_first` has a smaller worst-case time than the alternatives for most `x`.
**Value**
The same as `which.max(expr)` or `which(expr)[1]` but returns `0L` when `expr` has no `TRUE` values.
**Examples**
```r
N <- 1e5
# N <- 1e8 ## too slow for CRAN
# Two examples, from slowest to fastest,
# run with N = 1e8 elements
# seconds
x <- rep_len(runif(1e4, 0, 6), N)
```
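The early-exit behaviour for the `LHS == RHS` form can be sketched as follows (returning 0 when no element matches, unlike `which.max`):

```cpp
#include <vector>

// Sketch of which_first for the 'x == a' form: return the 1-based
// position of the first match, or 0 when there is none. The scan exits
// early and never materialises the full logical vector.
std::size_t which_first_eq(const std::vector<double>& x, double a) {
  for (std::size_t i = 0; i < x.size(); ++i)
    if (x[i] == a) return i + 1;
  return 0;
}
```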
which_true_onwards
At which point are all values true onwards
Description
At which point are all values true onwards
Usage
which_true_onwards(x)
Arguments
x
A logical vector. NA values are not permitted.
Value
The position of the first TRUE value in x at which all the following values are TRUE.
Examples
which_true_onwards(c(TRUE, FALSE, TRUE, TRUE, TRUE))
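One way to realise this (a sketch; the result for a vector ending in a FALSE value is assumed to be 0) is to walk back over the trailing run of TRUE values:

```cpp
#include <vector>

// Sketch of which_true_onwards: the 1-based position from which every
// remaining value is true; 0 when the vector ends in false (assumed).
std::size_t which_true_onwards(const std::vector<bool>& x) {
  std::size_t i = x.size();
  while (i > 0 && x[i - 1]) --i;        // skip the trailing run of TRUEs
  return (i == x.size()) ? 0 : i + 1;   // no trailing TRUEs -> 0
}
```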
**xor2**
*Exclusive or*
**Description**
Exclusive or
**Usage**
`xor2(x, y, anyNAx = TRUE, anyNAy = TRUE)`
**Arguments**
- `x, y` Logical vectors.
- `anyNAx, anyNAy` Could `x` and `y` possibly contain NA values? Only set to `FALSE` if known to be free of NA.
Index
and3 (logical3), 10
anyOutside, 2
are_even, 3
as_integer_if_safe, 4
bench_system_time, 4
copy, 14
cumsum_reset, 5
do_pmax0 (do_pmaxC), 6
do_pmaxC, 6
do_pmaxC_dbl (do_pmaxC), 6
do_pmaxIntInt (do_pmaxV), 6
do_pmaxIPint0 (do_pmaxC), 6
do_pmaxIPnum0 (do_pmaxC), 6
do_pmaxNumNum (do_pmaxV), 6
do_pmaxV, 6
do_pminC, 7
do_pminV, 7
do_pminV_dbl (do_pminV), 7
do_pminV_int (do_pminV), 7
helper, 8
is_constant, 8
isntConstant (is_constant), 8
logical3, 10
match_nrst_haversine, 11
or3 (logical3), 10
pmax0 (pmaxC), 12
pmax3 (pmaxC), 12
pmaxC, 12
pmaxV (pmaxC), 12
pmin0 (pminC), 13
pmin3 (pminC), 13
pminC, 13
pminV (pminC), 13
poleInaccessibility, 14
poleInaccessibility2
(poleInaccessibility), 14
poleInaccessibility3
(poleInaccessibility), 14
Position, 18
range, 15
range_rcpp, 15
squish, 16
sum_isna, 17
system_time, 5
which3, 17
which_are_even (are_even), 3
which_first, 18
which_true_onwards, 19
xor2, 20
---
Evolutionary Testing for Crash Reproduction
Mozhan Soltani, Annibale Panichella and Arie van Deursen
Report TUD-SERG-2016-013
Mozhan Soltani
Delft University of Technology
The Netherlands
mozhan.soltani@gmail.com
Annibale Panichella
Delft University of Technology
The Netherlands
a.panichella@tudelft.nl
Arie van Deursen
Delft University of Technology
The Netherlands
arie.vandeursen@tudelft.nl
ABSTRACT
Manual crash reproduction is a labor-intensive and time-consuming task. Therefore, several solutions have been proposed in the literature for automatic crash reproduction, including generating unit tests via symbolic execution and mutation analysis. However, various limitations adversely affect the capabilities of the existing solutions in covering a wider range of crashes, because generating helpful tests that trigger specific execution paths is particularly challenging.
In this paper, we propose a new solution for automatic crash reproduction based on evolutionary unit test generation techniques. The proposed solution exploits crash data from collected stack traces to guide search-based algorithms toward the generation of unit test cases that can reproduce the original crashes. Results from our preliminary study on real crashes from Apache Commons libraries show that our solution can successfully reproduce crashes which are not reproducible by two other state-of-the-art techniques.
Keywords
Crash Reproduction, Genetic Algorithm, Search-Based Software Testing, Test Case Generation
1. INTRODUCTION
Debugging is the process of locating and fixing defects in software source code, which requires deep understanding about that code. Typically, the first step in debugging is to reproduce the software crash, which can be a non-trivial, labor-intensive and time-consuming task. Therefore, several automated techniques for crash reproduction have been proposed, including the use of core dumps to generate crash reproducible test cases [6, 9], record-replay approaches [1, 8, 10], post-failure approaches [2, 5], and approaches based on crash stack traces [3, 11].
However, the techniques mentioned above present some limitations which may adversely impact their capabilities in generating crash-reproducing test cases. For example, core dumps are not always generated by software applications at crash time, which may reduce the applicability of approaches that rely solely on core dumps [6, 9]. Record-replay approaches apply dynamic mechanisms to monitor software executions, thus leading to higher performance overhead [1, 8]. STAR [3] and MuCrash [11] are two novel approaches designed to deliver test cases that can reproduce target software crashes by relying on crash stack traces. STAR relies on backward symbolic execution to compute the crash-triggering precondition [3]. However, inferring the initial condition of certain types of exceptions may be a complex task for STAR to accomplish. On the other hand, MuCrash applies mutation to update existing test cases to reproduce crashes [11]. While MuCrash can reproduce some crashes that STAR can also reproduce, it fails on certain others which are reproducible by STAR. As reported by Xuan et al. [11], the main reason for this failure is that reproducing those crashes requires frequent method calls which cannot be recreated by directly applying mutation operators.
In this paper, we propose a novel approach for automatic crash reproduction through the usage of evolutionary search-based techniques, and crash stack traces. We implemented our solution as an extension of EvoSuite [4], and evaluated it on well-known crashes from the Apache Commons libraries. The main contributions of our paper can be summarized as follows:
- We provide a first formulation of the stack-trace-based crash replication problem as a search-based problem;
- We define a novel fitness function to evaluate how close the generated test cases are to replicate the target crashes relying on stack traces only;
- We report the results of a preliminary study which shows the effectiveness of our solution compared to STAR and MuCrash.
The rest of the paper is structured as follows: Section 2 provides an overview of existing approaches to crash replication and provides background notions on search-based software testing. Section 3 presents our approach, while Section 4 describes our preliminary study. Finally, conclusions and future work are discussed in Section 5.
2. BACKGROUND AND RELATED WORK
In this section, we describe the two main related techniques for automatic crash reproduction, namely STAR [3] and MuCrash [11]. In addition, we provide an overview of search-based software testing and Genetic Algorithms.
Stack Trace based Crash Reproduction. STAR is an approach proposed by Chen and Kim [3] to identify crash triggering preconditions. It combines backward symbolic execution with a novel method sequence technique to create test cases that can produce test inputs to satisfy the identified crash triggering preconditions. The goal in STAR is to produce test cases that can crash at the same position and can generate stack traces as similar to target stack traces as possible. Chen and Kim also describe an empirical evaluation involving real crashes from three well-known open source projects. The results showed that STAR can successfully exploit 31 out of 52 crashes (59.6%) reported for the open source projects. Out of those exploitable crashes, 42.3% were successful reproduction of the reported crashes that revealed the crash triggering defect [3].
MuCrash is a more recent approach to automatic crash reproduction proposed by Xuan et al. [11]. It applies test case mutation to update existing test cases so that they can reproduce crashes, rather than generating new test cases, which is the general strategy used in STAR [11]. Given a stack trace, MuCrash executes all the existing test cases on the program and selects test cases that cover the classes in the stack trace. Each selected test case produces a set of test case mutants, using a set of predefined mutation operators. The resulting test cases are executed on the program and the ones that can reproduce crashes are delivered to developers for debugging. MuCrash has been evaluated on 12 crashes reported for the Apache Commons Collections library [11]. The result of the evaluation showed that MuCrash could successfully replicate 7 crashes out of 12.
We notice that neither of the two approaches above provides an explicit formulation of the crash reproduction problem as a search-based problem; thus, they do not use any search-based algorithm to deal with crash reproduction. In this paper we conjecture that the usage of evolutionary test case generation techniques can be effective in reproducing software crashes upon the definition of a specific fitness function, which is the key contribution of our paper.
Search-Based Software Testing (SBST) applies search-based optimization algorithms to seek solutions for various kinds of software testing problems. In the 1990s, there was a dramatic increase in work on metaheuristic search techniques, and since then, SBST has been applied in various testing problems [7], such as integration testing, functional testing, mutation testing, etc.
So far, the main metaheuristic search algorithms that have been applied in SBST include Hill Climbing, Simulated Annealing, and Genetic Algorithms [7].
Genetic Algorithms are closely related to the concept of survival of the fittest [7]. Solutions in the search space are referred to as “individuals” or “chromosomes”, which collectively form a “population”. Since Genetic Algorithms maintain a population of solutions, multiple starting points are provided to the search with a correspondingly larger sample of the search space (compared to local searches) [7]. The first population is randomly generated, and then iteratively recombined and mutated to evolve throughout subsequent iterations, called “generations”. After a population is generated, the best individuals are selected as parents for reproduction via crossover [7]. The selection is guided by a fitness function, which is problem-specific. While fitter individuals are favored, a too strong bias towards them may result in their dominance in future generations [7]. Consequently, the chance of premature convergence on a particular area of the search space may increase. This cyclic process of generating and selecting individuals goes on until either the Genetic Algorithm finds a solution or the allocated resources are consumed.
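The generational cycle described above (random initial population, fitness-guided selection, crossover, mutation) can be sketched as follows. This is an illustrative toy on a one-max bit-string problem, not the encoding used by any of the tools discussed; all names and parameters here are our own assumptions.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Illustrative skeleton of a generational Genetic Algorithm.
// The bit-string encoding and one-max fitness are placeholder assumptions.
public class SimpleGeneticAlgorithm {
    static final Random RNG = new Random(42);

    // Toy fitness: count of true genes (higher is fitter).
    public static int fitness(boolean[] individual) {
        int score = 0;
        for (boolean gene : individual) if (gene) score++;
        return score;
    }

    static boolean[] randomIndividual(int length) {
        boolean[] ind = new boolean[length];
        for (int i = 0; i < length; i++) ind[i] = RNG.nextBoolean();
        return ind;
    }

    // Tournament selection: the fitter of two random individuals becomes a parent.
    static boolean[] select(List<boolean[]> population) {
        boolean[] a = population.get(RNG.nextInt(population.size()));
        boolean[] b = population.get(RNG.nextInt(population.size()));
        return fitness(a) >= fitness(b) ? a : b;
    }

    // Single-point crossover followed by uniform mutation.
    static boolean[] reproduce(boolean[] p1, boolean[] p2) {
        boolean[] child = new boolean[p1.length];
        int cut = RNG.nextInt(p1.length);
        for (int i = 0; i < child.length; i++) {
            child[i] = i < cut ? p1[i] : p2[i];
            if (RNG.nextDouble() < 1.0 / child.length) child[i] = !child[i]; // mutation
        }
        return child;
    }

    // Evolve for a fixed number of generations and return the fittest survivor.
    public static boolean[] evolve(int length, int populationSize, int generations) {
        List<boolean[]> population = new ArrayList<>();
        for (int i = 0; i < populationSize; i++) population.add(randomIndividual(length));
        for (int g = 0; g < generations; g++) {
            List<boolean[]> next = new ArrayList<>();
            for (int i = 0; i < populationSize; i++)
                next.add(reproduce(select(population), select(population)));
            population = next;
        }
        return Collections.max(population, Comparator.comparingInt(SimpleGeneticAlgorithm::fitness));
    }

    public static void main(String[] args) {
        boolean[] best = evolve(20, 30, 50);
        System.out.println("Best fitness: " + fitness(best));
    }
}
```

In a real test generation setting the individuals would be test cases and the fitness would be a coverage- or crash-oriented measure, but the control flow is the same.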
3. SEARCH-BASED CRASH REPLICATION
The key ingredient for a successful application of search-based techniques is the formulation of a proper fitness function to guide the search toward reaching the test goal. Then, such a function is optimized by search techniques, such as Genetic Algorithms, which use specific search operators to promote tests closer to cover the target goal and penalize tests with weak fitness values.
An optimal test case for crash reproduction has to crash at the same location as the original crash and produce stack traces as close to the original one as possible. Therefore, our fitness function has to exploit the information available in stack traces to measure the closeness of a test case to replicate the target crash. Usually a stack trace contains (i) the type of the exception thrown, and (ii) the list of methods being called at the time of the crash. For each called method, the stack trace also provides names and line numbers where the exception was generated. The first method in the trace is the root cause of the exception while the last one is the location where the exception was actually thrown. Therefore, the class containing the last method in the trace is the class to target for generating unit test, i.e., the class under test.
There are three main conditions that must hold to replicate a crash: (i) the line (statement) where the exception is thrown has to be covered, (ii) the target exception has to be thrown, and (iii) the generated stack trace must be as similar to the original one as possible. Therefore, our fitness function has to consider the three conditions above. Formally, let \( t \) be a given test to evaluate, we define the following fitness function:
\[
f(t) = 3 \times d_s(t) + 2 \times d_{except}(t) + d_{trace}(t) \tag{1}
\]
where \( d_s(t) \) denotes how far \( t \) is to execute the target statement, i.e., the location of the crash; \( d_{except}(t) \in \{0, 1\} \) is a binary value indicating whether the target exception is thrown or not; and \( d_{trace}(t) \) measures the distance between the generated stack trace (if any) and the expected trace.
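As an illustration, the weighted combination of the three components above can be sketched directly. The assumptions that each distance is normalized into \([0, 1]\) and that the binary term is 0 exactly when the target exception is thrown (so that a replicating test reaches fitness 0) are ours; this is a sketch, not the tool's implementation.

```java
// Sketch of combining the three fitness components of the crash-reproduction
// fitness function. Weights 3, 2, 1 mirror the relative importance in the text.
public class CrashFitness {
    // dLine: distance to the crash statement (assumed normalized into [0,1]);
    // exceptionThrown: whether the target exception type was raised;
    // dTrace: normalized stack-trace distance.
    public static double fitness(double dLine, boolean exceptionThrown, double dTrace) {
        double dExcept = exceptionThrown ? 0.0 : 1.0; // binary term (our reading)
        return 3 * dLine + 2 * dExcept + dTrace;
    }

    public static void main(String[] args) {
        // A test that reaches the crash line, throws the right exception, and
        // reproduces the exact trace has fitness 0, i.e. the crash is replicated.
        System.out.println(fitness(0.0, true, 0.0)); // prints 0.0
    }
}
```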
For the line distance \( d_s(t) \), we use the approach level and the branch distance, which are two well-known heuristics to guide the search for branch and statement coverage [7]. The approach level measures the distance (i.e., minimum number of control dependencies) between the path of the code executed by \( t \) and the target statement. The branch distance uses a set of well-established rules [7] to score how close \( t \) is to satisfy the branch condition for the branch on which the target statement is directly control dependent.
For the trace distance \( d_{trace}(t) \), we define a new distance function as reported below. Let \( S^* = \{e_1^*, \ldots, e_k^*\} \) be the target stack trace to replicate, where \( e_i^* = (C_i^*, m_i^*, l_i^*) \) is the \( i \)-th element in the trace, composed of class name \( C_i^* \), method name \( m_i^* \), and line number \( l_i^* \). Let \( S = \{e_1, \ldots, e_n\} \) be the stack trace (if any) generated when executing the test \( t \). We define the distance between the expected trace \( S^* \) and
Table 1: Real-world bugs used in our study
<table>
<thead>
<tr>
<th>Bug ID</th>
<th>Version</th>
<th>Exception</th>
<th>Priority</th>
</tr>
</thead>
<tbody>
<tr>
<td>ACC-4</td>
<td>2.0</td>
<td>NullPointerException</td>
<td>Major</td>
</tr>
<tr>
<td>ACC-28</td>
<td>2.0</td>
<td>NullPointerException</td>
<td>Major</td>
</tr>
<tr>
<td>ACC-35</td>
<td>2.1</td>
<td>UnsupportedOperationException</td>
<td>Major</td>
</tr>
<tr>
<td>ACC-48</td>
<td>3.1</td>
<td>IllegalArgumentException</td>
<td>Major</td>
</tr>
<tr>
<td>ACC-53</td>
<td>3.1</td>
<td>ArrayIndexOutOfBoundsException</td>
<td>Major</td>
</tr>
<tr>
<td>ACC-70</td>
<td>3.1</td>
<td>NullPointerException</td>
<td>Major</td>
</tr>
<tr>
<td>ACC-77</td>
<td>3.1</td>
<td>IllegalStateException</td>
<td>Major</td>
</tr>
<tr>
<td>ACC-104</td>
<td>3.1</td>
<td>ArrayIndexOutOfBoundsException</td>
<td>Major</td>
</tr>
<tr>
<td>ACC-331</td>
<td>3.2</td>
<td>NullPointerException</td>
<td>Minor</td>
</tr>
<tr>
<td>ACC-377</td>
<td>3.2</td>
<td>NullPointerException</td>
<td>Minor</td>
</tr>
</tbody>
</table>
the actual trace \( S \) as follows:
\[
D(S^*, S) = \sum_{i=1}^{\min\{k,n\}} \varphi(\text{diff}(e^*_i, e_i)) + |n - k| \tag{2}
\]
where \( \text{diff}(e^*_i, e_i) \) measures the distance between the two trace elements \( e^*_i \) and \( e_i \) in the traces \( S^* \) and \( S \) respectively; finally, \( \varphi(x) \in [0,1] \) is the widely used normalizing function \( \varphi(x) = x/(x + 1) \) [7]. We say that two trace elements are equal if and only if they share the same trace components. Therefore, we define \( \text{diff}(e^*_i, e_i) \) as follows:
\[
\text{diff}(e^*_i, e_i) = \begin{cases}
3 & C^*_i \neq C_i \\
2 & C^*_i = C_i \text{ and } m^*_i \neq m_i \\
\varphi(|l^*_i - l_i|) & \text{Otherwise}
\end{cases}
\tag{3}
\]
Therefore, \( \text{diff}(e^*_i, e_i) \) is equal to zero if and only if the two trace elements \( e^*_i \) and \( e_i \) share the same class name, method name and line number. Similarly, \( D(S^*, S) \) in Equation 2 is zero if and only if the two traces \( S^* \) and \( S \) are equal, i.e., they share the same trace elements. Starting from the function in Equation 2, we define the trace distance \( d_{\text{trace}}(t) \) as the normalized \( D(S^*, S) \) function:
\[
d_{\text{trace}}(t) = \varphi(D(S^*, S)) = D(S^*, S)/(D(S^*, S) + 1) \tag{4}
\]
Consequently, our fitness function \( f(t) \) assumes values within the interval \([0, 5]\), reaching a zero value if and only if the evaluated test \( t \) replicates the target crash.
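The trace distance defined in Equations 2-4 translates directly into executable form. The sketch below is our illustration (not the tool's implementation); it reuses `java.lang.StackTraceElement` for trace elements, keeping only class name, method name, and line number.

```java
// Sketch of the stack-trace distance (Equations 2-4 in the text).
public class TraceDistance {
    // Normalizing function phi(x) = x / (x + 1), mapping [0, inf) into [0, 1).
    static double phi(double x) {
        return x / (x + 1);
    }

    // Equation 3: element-wise distance between an expected and an actual element.
    static double diff(StackTraceElement expected, StackTraceElement actual) {
        if (!expected.getClassName().equals(actual.getClassName())) return 3;
        if (!expected.getMethodName().equals(actual.getMethodName())) return 2;
        return phi(Math.abs(expected.getLineNumber() - actual.getLineNumber()));
    }

    // Equation 2: sum of normalized element distances up to min(k, n),
    // plus the difference in trace lengths.
    static double D(StackTraceElement[] expected, StackTraceElement[] actual) {
        int m = Math.min(expected.length, actual.length);
        double sum = 0;
        for (int i = 0; i < m; i++) sum += phi(diff(expected[i], actual[i]));
        return sum + Math.abs(actual.length - expected.length);
    }

    // Equation 4: normalized trace distance, zero iff the traces are identical.
    public static double dTrace(StackTraceElement[] expected, StackTraceElement[] actual) {
        return phi(D(expected, actual));
    }

    public static void main(String[] args) {
        // Hypothetical trace elements, for illustration only.
        StackTraceElement e1 = new StackTraceElement("com.example.Foo", "crash", "Foo.java", 42);
        StackTraceElement e2 = new StackTraceElement("com.example.Foo", "crash", "Foo.java", 45);
        System.out.println(dTrace(new StackTraceElement[]{e1}, new StackTraceElement[]{e2}));
    }
}
```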
4. PILOT STUDY
To evaluate the effectiveness of our solution for crash reproduction, we selected 10 bugs from the Apache Commons Collections library, a popular real-world Java project with 25,000 lines of code. The selection of these bugs was not random: they have been used in previous studies on automatic crash reproduction evaluating symbolic execution [3] and mutation analysis [11], which allows us to compare the results. The characteristics of the bugs, including type of exception and priority, are summarized in Table 1.
Prototype Tool We have implemented our fitness function in EvoSuite [4], a popular unit test generation framework, widely used in research to generate unit tests targeting code coverage (e.g., statement coverage) or mutation score as testing criteria to maximize. Specifically, we defined a new coverage criterion (in addition to the traditional coverage criteria already existing in EvoSuite) consisting of maximizing the number of bugs (stack traces) to replicate. As a search strategy, we used the traditional one-target-at-a-time approach, which consists of targeting a single bug (and the corresponding stack trace) at a time and running meta-heuristics, and genetic algorithms in particular, to optimize the fitness function. A value of zero for the fitness function means that the generated test case is able to replicate the targeted crash and, thus, it can be directly presented to developers for debugging purposes. The encoding schema is the same as used in EvoSuite at the test case level. Thus, a chromosome is a randomly generated test case consisting of a variable sequence of method calls with random input. Random tests are then evolved as usual in genetic algorithms through selection, crossover and mutation operators. Pairs of existing tests (parents) are selected using tournament selection according to their fitness function scores. New tests (offspring) are then generated from their parents using a single-point crossover, which randomly exchanges statements between the two parents. Finally, test cases are mutated by the uniform mutation that randomly adds, deletes, or changes statements with a given small probability. For all parameter values, we use the default settings in EvoSuite since they provide good performance in traditional test case generation applications [4].
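The single-point crossover on statement sequences described above can be sketched as follows. The string encoding of statements and the example statements are illustrative placeholders of our own, not EvoSuite's actual internal representation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of single-point crossover on test cases encoded as statement
// sequences (simplified here to lists of strings): the two offspring
// exchange their tails at a cut point.
public class TestCaseCrossover {
    public static List<List<String>> crossover(List<String> p1, List<String> p2, int cut) {
        int cut1 = Math.min(cut, p1.size());
        int cut2 = Math.min(cut, p2.size());
        List<String> child1 = new ArrayList<>(p1.subList(0, cut1));
        child1.addAll(p2.subList(cut2, p2.size()));
        List<String> child2 = new ArrayList<>(p2.subList(0, cut2));
        child2.addAll(p1.subList(cut1, p1.size()));
        return Arrays.asList(child1, child2);
    }

    public static void main(String[] args) {
        // Hypothetical statement sequences, for illustration only.
        List<String> parent1 = Arrays.asList(
                "TreeList t = new TreeList();", "t.add(null);", "t.listIterator().previous();");
        List<String> parent2 = Arrays.asList(
                "TreeList u = new TreeList();", "u.size();");
        System.out.println(crossover(parent1, parent2, 1));
    }
}
```

Since parents can have different lengths, the offspring lengths vary too, which is why the cut point is clamped to each parent's size.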
Experimental Procedure We applied our prototype tool to the selected crashes in order to generate test cases for reproducing them. In our pilot study, we set a maximum search budget of 2 minutes. Therefore, the search ended when either zero fitness was achieved or the timeout was reached. Given the randomized nature of genetic algorithms, the search for each target bug/crash was repeated 30 times in order to verify whether crashes are consistently replicated or not. To assess whether the generated test cases are really helpful to fix the bugs (other than triggering the same stack trace), we performed a manual validation following the guidelines in [3, 11].
4.1 Results
Table 2 reports the number of times our prototype is able to replicate the target crashes (column 2). It also compares our results with two state-of-the-art methods, namely (i) STAR [3], and (ii) MuCrash [11]. The former uses symbolic execution while the latter is based on mutation analysis. As shown in Table 2, genetic algorithms allow us to reproduce 8 out of 10 crashes. Based on our manual check, all reproduced crashes are useful for fixing the bugs. For example, for bug ACC-70 our prototype generated within 10 seconds of search (on average) the test case depicted in Listing 1. According to our test, the crash is caused by a call to previous() when a TreeListIterator is instantiated with the first parameter (parent of the tree) set to null. Since inside the method previous() there is no check condition on such a parameter, a null pointer exception is generated. A simple fix would consist of adding a check condition to verify that the parent of the tree is not null.
```java
public void test12() throws Throwable {
    TreeList treeList0 = new TreeList();
    treeList0.add((Object) null);
    TreeList.TreeListIterator treeList_TreeListIterator0 = new TreeList.TreeListIterator(treeList0, 732);
    treeList_TreeListIterator0.previous();
}
```
Listing 1: Generated test for ACC-70.
For six bugs, our prototype consistently replicates the crash in all 30 independent runs. For ACC-53, there are only two out of 30 runs where a replication is not achieved. Finally, we find that the replication for ACC-331 is achieved only for some of the runs (33%). However, for such a class we notice that a specific method call sequence is required to be regenerated. Since our prototype does not restrict itself to the methods and classes involved in the crash, it has a minimal chance of calling the right methods or instantiating the correct objects. While this choice is useful to maintain diversity, it can have certain drawbacks. One natural extension would be to change the mutation operator in EvoSuite in order to focus the search by using methods and objects of interest more frequently than others.
Comparing our results with those achieved by STAR [3] and MuCrash [11], we observe that there are bugs that can be reproduced by our technique and not by the alternative ones. In particular, for ACC-70 our prototype generates a test case (see Listing 1) which helps in replicating and fixing the bug. However, for such a bug both STAR and MuCrash are not able to generate useful tests. Crashes due to bugs ACC-53 and ACC-77 are replicable using our technique while they are not replicable using MuCrash [11]. Finally, STAR fails to reproduce ACC-331, which is instead covered by our prototype.
The results of our pilot study show the strength of evolutionary testing techniques, and evolutionary test case generation tools in particular, with respect to symbolic execution based on precondition analysis and mutation analysis. Theoretically speaking, evolutionary testing should imply a higher overhead of computing resources since tests have to be generated and executed. However, we notice that in our pilot study all crashes have been replicated in a few (<10) seconds on average.
5. CONCLUSION
Manual crash reproduction is a labor-intensive and time-consuming task. Therefore, in this paper we propose a new search-based approach for generating unit test cases to replicate software crashes. Our solution uses a novel fitness function suitably defined for crash reproduction and implemented as an extension of EvoSuite. By exploiting crash information from crash stack traces, the novel fitness function is used to guide test case generation algorithms toward the generation of tests directly consumable by developers to find the cause of the crash and fix the bugs.
This paper also reports the results of a preliminary study based on ten real crashes (and stack traces) related to bugs affecting the well-known Apache Commons libraries. The achieved results show that our solution is able to generate helpful tests for eight out of ten crashes. Moreover, our search-based solution is able to successfully replicate crashes not replicable using two state-of-the-art techniques for crash reproduction, namely STAR and MuCrash.
Considering the promising results achieved in this paper, the future work may have several possible directions. First, we plan to evaluate our search-based techniques on a wider sample of real crashes. We also plan to improve the fitness function and mutation operators in order to increase the likelihood of generating helpful test cases. Finally, a combination of genetic algorithms and symbolic execution is part of our future agenda.
6. REFERENCES
Requirements Engineering for Model-Based Enterprise Architecture Management with ArchiMate
Dominik Bork, Aurona Gerber, Elena-Teodora Miron, JP van Deventer, Alta van der Merwe, Dimitris Karagiannis, Sunet Eybers and Anna Sumereder
Published in:
Enterprise and Organizational Modeling and Simulation, 14th International Workshop, EOMAS 2018, Held at CAiSE 2018, Tallinn, Estonia, June 11–12, 2018, Selected Papers
The final authenticated publication is available online at
https://doi.org/10.1007/978-3-030-00787-4_2
Dominik Bork\textsuperscript{1}, Aurona Gerber\textsuperscript{2,3}, Elena-Teodora Miron\textsuperscript{1}, Phil van Deventer\textsuperscript{2}, Alta van der Merwe\textsuperscript{2}, Dimitris Karagiannis\textsuperscript{1}, Sunet Eybers\textsuperscript{2}, and Anna Sumereder\textsuperscript{1}
\textsuperscript{1} University of Vienna, Research Group Knowledge Engineering
Waehringer Street 29 1090 Vienna, Austria
firstname.lastname@univie.ac.at.
\textsuperscript{2} University of Pretoria, Department of Informatics
Hatfield, 0083 Pretoria, South Africa
firstname.lastname@up.ac.za.
\textsuperscript{3} CSIR Center for AI Research (CAIR), Brummeria, Pretoria
Abstract. The role of information systems (IS) evolved from supporting basic business functions to complex integrated enterprise platforms and ecosystems. As a result, enterprises increasingly adopt enterprise architecture (EA) as a means to manage complexity and support the ability to change. We initiated a study that investigates the pivotal role of enterprise architecture management (EAM) as an essential strategy to manage enterprise change and, within this larger context, specifically how the ArchiMate modeling language can be enhanced with capabilities that support EAM. This paper reports on the evaluation of an EA modeling tool (TEAM) which has been enhanced with EAM capabilities. The evaluation was performed by a focus group of enterprise architects that attended a workshop and applied the tool to an EAM case study. The evaluation results, requirements as well as a conceptualization for further development are presented and are of value for both enterprise architecture researchers and enterprise architects.
Key words: Enterprise Architecture Management, ArchiMate, Requirements Engineering, Focus Group
1 Introduction
"The digitization of our society changes the way society work, communicate and collaborate." [1] Similarly, digitization or digital transformation changes the way enterprises create value. Traditionally, enterprises created value by selling products or by providing services to customers with direct and simple business models. The digital transformation significantly changed these business models (e.g., toward platform ecosystems [2]), customer involvement (e.g., value co-creation [3]), and product/service systems [4]. These changes are either driven
or supported by information systems and therefore directly influence the enterprise architecture (EA). Thus, it is of utmost interest for enterprises to manage their EA as well as to manage their enterprise using EA, collectively termed enterprise architecture management (EAM) [5, 6].
The Open Group Architecture Framework (TOGAF) and the ArchiMate [7] modeling language are widely adopted EA standards. However, both have limited support for corporate EA management because of their sole focus on the methodological and modeling language aspects of EA, respectively. Supporting these standards with computerized modeling environments creates opportunities to support EAM by, for instance, exploiting conceptual models as a knowledge base for advanced management support [8]. Our study therefore investigates how EA modeling with proper tooling supports enterprise architecture management.
Adopting the action design research paradigm that incorporates evolutionary design with short evaluation/feedback loops [9], we implemented a first prototype of the TOGAF-based Enterprise Architecture Management (TEAM) modeling tool\(^1\) that implements the ArchiMate 3.0.1 standard [7]. This paper reports on an evaluation/feedback loop of TEAM that used a carefully designed focus group. The focus group introduced eight EA experts to both EAM and the TEAM tool using a case study in a workshop scenario. In-depth feedback was collected from the EA experts on the functionality of the tool, as well as input on how a modeling platform could support the two focus areas of EAM, namely: 1) managing the EA of an enterprise, and 2) managing the enterprise using EA. This feedback was consolidated into advanced requirement themes for the second prototype version of TEAM.
The remainder of this paper is structured as follows: foundations are presented in Section 2 and in Section 3 the research design for the evaluation of TEAM is discussed. Section 4 consolidates the results by means of a set of requirement themes for advanced EAM. Finally, Section 5 concludes the paper.
\section*{2 Foundations}
\subsection*{2.1 Enterprise Architecture Management}
Enterprise Architecture Management (EAM) is a relatively recent perspective within the domain of EA. EAM is broadly defined as \textit{management practice that establishes, maintains and uses a coherent set of guidelines, architecture principles and governance regimes that provide direction for and practical help with the design and the development of an enterprise's architecture in order to achieve its vision and strategy} [6]. In the 80s John Zachman, often described as the father of EA, adopted a systems engineering approach to develop the Zachman Framework for Enterprise Architecture or Zachman Framework (ZFEA) [10]. The ZFEA had as primary goal the specification of a universal set of descriptive representations
\footnote{\textsuperscript{1} The tool is freely available on the OMiLAB TEAM project site at: \url{http://austria.omilab.org/psm/content/team/info}, last visit: 08.05.2018}
from different views for enterprises as socio-technical systems [10, 11]. Originally, EAM was thus focused on the development of the enterprise architecture itself in an attempt to manage the complexity of modern enterprises [6, p. 13].
In the 90s the focus of EAM shifted from modeling the enterprise towards alignment of the different aspects within an enterprise [6, p. 14]. To assist with this alignment, several EA frameworks were proposed and EAM literature discussed various enterprise alignment aspects e.g. the execution of strategy through business-IT alignment [12, 13, 14, 15]. Lapalme [16] summarized the EAM notions of the time by describing three schools of thought related to EA namely: 1) Enterprise-wide IT platform (EIT), concerned with effective enterprise strategy execution and operation through IT-Business alignment; 2) Enterprise (E), concerned with effective enterprise strategy implementation through execution coherency; and 3) Enterprise-in-environment (EiE), concerned with fostering organizational learning by aligning the various facets of the enterprise such as governance structures and IT capabilities [16].
Fig. 1. EAM Building Blocks [6].
The most recent developments in EAM include the use of EA for strategic business management [6, 17]. This strategic EAM standpoint incorporates all the previous EAM perspectives but specifically adopts the extended view that EA is a management philosophy and executive management and governance function that should, for instance, be used to manage holistic and sustainable enterprise transformation, alignment and integration [6, p. 57]. Given this perspective, EAM is a multidimensional function that influences all aspects of an enterprise, including its organizational culture, communication practices and operations. Ahlemann et al. [6, p. 42] proposed a model that depicts the essential EAM
building blocks. As is shown in Figure 1, the main and outside container for EAM indicates the soft factors that are important within an organization. Stakeholder buy-in to EAM is crucial when, for instance, altering organizational culture and changing individual behavior. Figure 1 furthermore depicts the role of EAM as a chief executive officer agenda at the top. The EAM governance and organization role deals with the manner in which EAM is institutionalized within an organization. Furthermore, the integration of EA into organizational processes includes the embedding of EAM into strategic planning, project life cycles and organizational operations and monitoring, which all have to do with the day-to-day operations of an enterprise. EAM building blocks have to include EA frameworks, modeling and tools, which represent and include the existing body of knowledge and best practices regarding enterprise architecture [6, p. 42]. Since ArchiMate is one of the dominantly used EA languages, conceptual modeling methods in general and ArchiMate in particular are briefly introduced in the following to establish a theoretic foundation for the rest of the paper.
2.2 ArchiMate, TOGAF and Conceptual Modeling Methods
ArchiMate is a standard of the Open Group that describes an enterprise architecture modeling language [18]. ArchiMate was originally developed by a team from Telematica Institute in the Netherlands to model an EA within and across business domains [19]. ArchiMate adopts a layered view on an enterprise depicted in the ArchiMate Framework where the core entities of an enterprise are categorized along two dimensions (layers and aspects) as shown in Figure 2. In addition, ArchiMate adopts a service-oriented model where each layer provides services to the layers above it. ArchiMate focuses on specifying a modeling standard for enterprise architecture. By contrast, TOGAF, the Open Group Architecture Framework specifies guidelines for designing, planning, implementing, and governing an enterprise information technology architecture [14]. When the implementation of ArchiMate is discussed, it is often done within the TOGAF approach to provide the context of an enterprise architecture project [20].
Conceptual modeling methods such as ArchiMate facilitate the management of complexity by applying abstraction. According to [21], modeling methods are composed of modeling language, modeling procedure, and mechanisms & algorithms. A modeling language can be further decomposed into: syntax, the available language elements; notation, the graphical representation of syntactic elements; and semantics, the meaning of the syntactic elements. The modeling procedure describes steps and results of utilizing a modeling method in order to create valid models. Lastly, mechanisms & algorithms define the model processing functionality that is provided by the modeling method (e.g., simulations and queries).
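This decomposition can be sketched as a simple data structure. The following Python sketch is purely illustrative; the class and field names are ours and are not part of ArchiMate or any modeling standard:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the modeling-method decomposition described above
# (language + procedure + mechanisms & algorithms); names are hypothetical.
@dataclass
class ModelingLanguage:
    syntax: list[str]             # available language elements
    notation: dict[str, str]      # element -> graphical representation
    semantics: dict[str, str]     # element -> meaning of the element

@dataclass
class ModelingMethod:
    language: ModelingLanguage
    procedure: list[str]                                  # steps to create valid models
    mechanisms: list[str] = field(default_factory=list)   # e.g. simulations, queries

lang = ModelingLanguage(
    syntax=["business service"],
    notation={"business service": "rounded rectangle"},
    semantics={"business service": "explicitly defined exposed business behavior"},
)
method = ModelingMethod(language=lang, procedure=["identify services", "relate layers"])
```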
Conceptual modeling methods are used to create abstract representations of some part of the real world for "human users, for purposes of understanding and communication" [22]. This traditional view is still valid, however, nowadays conceptual models are also viewed as a formalized knowledge base that enables
machine processing and intersubjective understanding [23]. Conceptual modeling methods therefore not only target the best abstraction level for a specific domain by means of a metamodel, but also the enrichment of the modeling language with proper functionality to increase the value of the models. This approach to conceptual models is adopted by OMiLAB, the platform used for the development of TEAM, which is discussed in the next section.
2.3 The Open Models Laboratory (OMiLAB)
The Open Models Laboratory (OMiLAB, www.omilab.org) is an open platform for the conceptualization of modeling methods, combining open source and open communities with the goal of fostering conceptual modeling. OMiLAB comprises a large number of international contributors [24]. Almost 50 different modeling methods have already been successfully conceptualized within OMiLAB [25], such as Multi-Perspective Enterprise Modeling (MEMO) [26] and SOM [27]. A more comprehensive view on successful conceptualizations within OMiLAB is given in [25] \(^2\). The TEAM tool was implemented as a project within OMiLAB.
3 Research Design: Focus Group Evaluation
As stated, we report on the evaluation of the first prototype version of the TEAM modeling tool. In order to obtain the in-depth feedback required, we adopted a focus group (FG) as research method. An FG is a qualitative research method that is effective when collecting data about the opinions of people or how they think, feel, or act regarding a specific topic [28]. The method is particularly useful for collecting data in complex scenarios where specialized knowledge is required. Using an FG for data collection in our evaluation of TEAM was therefore applicable because EAM has an extensive scope and we were particularly interested in the opinions of the participants (EA experts and practitioners) regarding EAM requirements when using TEAM. As a prerequisite, the FG needs to be designed in such a way that participants are able to provide high-quality, in-depth feedback. We therefore designed the FG as a workshop specifically aimed at EA experts and practitioners with several years of experience, and we included carefully developed feedback mechanisms that triangulate in order to collect data. Because the experience of the participants varied, we created a baseline by introducing the necessary background in the workshop. The workshop was structured as follows:
1. **Session 1: Enterprise Architecture Management**: During this session the theory, history and focus of EAM were introduced, followed by the focus areas of EAM namely 1) managing the EA of an enterprise; and 2) managing the enterprise using the EA.
\(^2\) Full method repository is available at [http://austria.omilab.org/psm/tools](http://austria.omilab.org/psm/tools), last visit: 08.05.2018
2. **Session 2: ArchiMate and TEAM**: This session consisted of two parts namely: 1) an overview of Archimate (most participants were familiar with ArchiMate and TOGAF); and 2) an introduction to the TEAM tool.
3. **Session 3: Focus Group Case Study**: In this session a detailed case study was introduced where participants were guided to use the TEAM tool. For more details of the case study, see Section 3.1.
4. **Session 4: Focus Group Feedback**: In this session the participants were asked to give high-level feedback on the TEAM tool, EAM and further development, especially given their experience, see Section 4.
The data was collected from eight workshop participants, of which seven were established EA specialists either working full-time as enterprise architects within organizations or as EA consultants responsible for projects initiating EA at various levels within organizations. The group included: (a) professional consultants and trainers who specialized in EA and ArchiMate; (b) professional users who employ EAM frameworks and tools in their respective enterprise or public administration and who are in charge of the EAM management; as well as (c) academics who research and teach EAM at graduate and post-graduate level but with previous experience in EA implementation. The next section presents the case study which was used to evaluate the TEAM tool.
### 3.1 Focus Group Case Study
The *Charlie's Auto Repair Shop* case study was employed to evaluate prototype one of TEAM. After an introduction to TEAM the experts were asked to model each of the three parts of the case. Forty-five minutes were allocated for each modeling task and 15 minutes were used for discussion. A final one-hour session was dedicated to discussing: (a) the quality and eventual shortcomings of the case itself given EAM; (b) the completeness and accuracy of the mapping between the ArchiMate standard and the tool; (c) usability of the current, and requirements for future versions of the TEAM tool; and (d) usefulness of the TEAM tool functionality for EAM.
**Case Description** In line with the idea that EA and its management play a pivotal role in enterprise transformations, the case study’s focus is on the transformation of a traditional car repair SME into a car repair-as-a-service business - strongly reliant on IT and the business opportunities enabled by it. *Charlie’s Car Repair Shop*’s original business model focused on providing parts and specialized repair for old-timers. Information technology played a marginal role in the back office of the business for administrative and bookkeeping activities. A management change triggered the modification of the business model. The assets of the old business - repair facility and machinery, spare parts, and mechanical expertise - will now be leveraged with the support of IT to realize a car repair-as-a-service business model where old-timer owners can book the assets to work on their cars. The customers will be charged usage-based fees for the different service components.
The underlying motivation is to monetize the old-timer owner’s love and knowledge about cars. These persons are known to the repair shop as having two characteristics important for the repair-as-a-service business model. They tend to be financially well off and are able to invest in the costly maintenance and repairs. Moreover, they care about a particular car and also have a lot of knowledge about its mechanics.
Following a general introduction to the case, the first part of the case study detailed the new strategy defining goals, the expected outcomes as well as the necessary capabilities. The second part then derived exemplary business services to be offered to the clients, technology services as well as business processes necessary for the provisioning of the new services. The identified services were also linked to their technology assets like software and hardware. Lastly, the third part described the physical elements which establish the "execution environment" for the services, like repair spaces, repair machinery etc. These physical elements were linked to the previously defined technology assets.
In alignment with the ArchiMate 3.0.1 standard and following the TOGAF framework, the case also includes Internet of Things and physical assets, thus expanding the EAM space considered in previous versions of the standard.
**Exemplary Case Solutions** TEAM provides the full spectrum of the ArchiMate 3.0.1 modeling language. The language concepts are grouped into the ArchiMate 3.0.1 layers - called model types: strategy layer, business layer, application layer, technology layer, physical layer, implementation/migration layer, motivation layer, and analysis model. While each of the model types contains only the concepts specific to it, e.g. a business service class is included in the business layer, the analysis model is a container of all ArchiMate 3.0.1 classes thus allowing a top-to-bottom model for the whole EA. For the purposes of this case study, participants were encouraged to use the analysis model type. Readability within the model is increased by using the grouping class to graphically compose objects which also belong semantically together (seen in Figure 4 by the dotted boxes).
In the first part, "The Strategy", the participants needed to cognitively differentiate between a goal and an outcome as well as between a capability and a resource. To ease the identification of the correct ArchiMate concepts, cues are provided in the case description, for example "...the need to build up the auto shop's IT Operations and Management capabilities", which points the participant to the concept to use, i.e., capability, and its name, IT Operations and Management. One solution to the first part of the case study is represented in Figure 3.
Business services need to support the goals defined for the new strategy. In turn, they must be supported by appropriate business processes as well as technology services. For example, the newly instituted Repair space rental service triggers a newly defined business process which in itself employs the Billing technology service. In addition, not shown in the case, one could include a Rental space booking application running on a web-based client-server hardware which allows customers to book their repair slots on-line. For practicing purposes
and due to limited time, only an excerpt of the services and processes involved was discussed in the case study. One possible solution of the second part is presented in Figure 4.
The new business model also triggered changes on equipment level (see Figure 5). While previously the machinery necessary for mechanical repairs did not need any ICT, now, with the time-dependent billing of usage, each machine must be able to "identify" at least the client to be billed as well as the start and end time of the rental. To this end, car repair machines must be equipped with card reading devices and enabled to transmit the necessary information to the Billing application and ultimately to the Billing technical service.
The new language concepts available in ArchiMate 3.0.1 on the strategy and the physical layer enable the enterprise architects to create a comprehensive model stack on which different analytics can be applied, both at design and at "run" time, thus enabling the enterprise architect's management capabilities.
4 Evaluation and Advanced Requirements for EAM
The evaluation feedback was obtained during the focus group case study and feedback sessions. During the case study session where the tool was used by the participants, feedback was obtained through interaction and discussion with the participants, as well as through documented observations by the research team when supporting the participants.
4.1 Focus Group Case Study Evaluation Results
The participants were asked to evaluate the TEAM tool, EAM and further development, especially given their experience. Questions were prepared to guide the feedback. During the session, the discussion was recorded and transcribed. All feedback is described in the following and collated into the requirement themes reported in Section 4.2.
Workshop participants easily found their way through the first two parts of the case study as it used familiar concepts and terminology. The third part, which relies heavily on new modeling constructs defined in ArchiMate 3.0.1, required a bit more working time.
TEAM was easily understood and handled by the participants. They remarked positively on the intuitive use of modeling concepts and their graphical representation. Moreover, participants found it useful that the use of connectors was limited by the tool only to those allowed according to ArchiMate 3.0.1.
4.2 Advanced Requirements for EAM
For eliciting the requirements, we analyzed the focus group feedback from the workshop participants from both the case study and the feedback sessions and condensed the feedback into four advanced EAM requirement themes. Each requirement theme is described using the aspects: Rationale, detailing the reasoning behind it; Metamodel Requirements, describing the requirements on metamodel level; Implementation, indicating how the requirement theme should be implemented in a modeling tool; and Execution, exemplifying the execution by the modeler. Finally, we indicate in Sections 4.3 and 4.4 how these requirements could be incorporated conceptually into the next versions of the TEAM tool.
Theme 1 - Information Management
Rationale: It is reasonable to have the possibility of attaching comments and descriptions to the ArchiMate concepts. The generic nature of these attributes enables the modeler to document further properties - besides solely the name - for each concept. Moreover, such metadata can be used for analysis as well as possible future developments. For example, the descriptions can reveal which additional attributes might be required.
Metamodel Requirements: Two new attributes, termed Description and Notes of datatype string, should be introduced into the TEAM metamodel. They should be provided for each ArchiMate concept.
Implementation: The two attributes shall be added to the metamodel and their values should be stored with the models. Visualization and editing of these attributes shall be enabled.
Execution: The modeler shall be able to see and edit description and notes in the properties of each modeled concept.
Theme 2 - Lifecycle Management
Rationale: When dealing with ICT, lifecycle management plays a vital role. Questions like "until when are software systems supported with updates?", or "when does a certain component become invalid?" are crucial for EAM. There should be different kinds of dates in the various layers. For example, the application layer components should have attributes for licenses, which can be outdated or invalid. Time elements in the model should offer possibilities regarding queries and a kind of lifecycle management in the model.
Metamodel Requirements: In general, there should be one time attribute for nearly all ArchiMate concepts. In addition, the attributes' purpose and name should vary from layer to layer, as there are specific requirements and types of dates. A valid until date should be used for application layer concepts.
Implementation: The new attributes should be visualized to the modeler for editing. Additionally, two queries should be realized that enable the modeler to efficiently list in-/valid application components of the current model.
Execution: The modeler should define a date at the beginning of the query execution. The tool then lists all instances that fulfill the query criteria. It should be possible to click on the elements in the list to navigate directly to the corresponding instance in the model.
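The lifecycle query of Theme 2 can be sketched as follows. This is a hypothetical illustration, not TEAM's actual implementation; the class and attribute names (e.g. `valid_until`) are assumptions:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical application-layer component carrying a 'valid until' date,
# as suggested by the Theme 2 metamodel requirements.
@dataclass
class ApplicationComponent:
    name: str
    valid_until: date

def invalid_components(components, as_of):
    """Return all components whose validity ended before the given date."""
    return [c for c in components if c.valid_until < as_of]

components = [
    ApplicationComponent("Billing application", date(2019, 12, 31)),
    ApplicationComponent("Booking application", date(2025, 6, 30)),
]
# The modeler defines a date; the tool lists all matching instances.
print([c.name for c in invalid_components(components, date(2020, 1, 1))])
```

The navigate-to-instance behavior described above would, in a real tool, hang off each list entry; it is omitted here.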
Theme 3 - Responsibility Management
Rationale: The assignment of responsibilities should foster a higher level of engagement and ease EAM. Thus, technology layer components should be assigned to actors/roles in the business layer. To this end, a visualization functionality shall be realized that displays the connections between the components on the different layers.
Metamodel Requirements: To combine business and technology layer, semantic links between concepts of those two layers should be added. Such semantic links might be realized as references or pointers that are specified at the corresponding metamodel classes.
Implementation: Reference attributes between technology and business layer should be added for selected elements of the two layers. Furthermore, a functionality shall be provided that generates, starting from a technology layer model, the list of corresponding actors/roles of the business layer.
Execution: The modeler shall be enabled to edit the specific reference attributes in order to semantically link concepts of the two layers. Moreover, the modeler shall be enabled to generate the list of responsibilities. All list items shall enable direct navigation to the corresponding instances in the models.
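A minimal sketch of the Theme 3 responsibility listing, assuming reference attributes on technology-layer elements that point to business-layer actors/roles (all names here are illustrative):

```python
# Hypothetical technology-layer elements with 'responsible' reference
# attributes linking to business-layer actors/roles (Theme 3 sketch).
tech_elements = [
    {"name": "Billing server", "responsible": ["IT Operations role"]},
    {"name": "Card reader", "responsible": ["Workshop manager"]},
    {"name": "Legacy printer", "responsible": []},   # no link defined yet
]

def responsibility_list(elements):
    """Return (technology element, actor/role) pairs for all defined links."""
    return [(e["name"], actor) for e in elements for actor in e["responsible"]]

for tech, actor in responsibility_list(tech_elements):
    print(f"{tech} -> {actor}")
```

Elements without a reference simply do not appear in the list, mirroring the fact that semantic links are optional in the metamodel requirement.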
Theme 4 - Business Continuity Management
Rationale: In today’s fast changing business models, built on top of complex ecosystems, failure and service unavailability is inevitable. Enterprises therefore aim to establish a business continuity management (BCM) strategy. Conceptual modeling and modeling tools can play a vital role in BCM [29, 30]. A prerequisite for managing crisis events is to be aware of
the mutual effects different EA instances have on each other. A semantic link between business and technology models should be established. The goal is to identify the impact of a technology element (e.g., function, process, interface, event, service) on a business layer element of the same type.
Metamodel Requirements: This especially concerns function, process, interface, collaboration, interaction, event and service of the technology and the business layer. A reference attribute which relates the elements of these two layers shall be added.
Implementation: 'Influence on' reference attributes shall be used to define relationships between elements of the technology layer and the business layer.
Execution: The reference attributes shall be editable by the modeler, thereby enabling the efficient specification of relationships. Moreover, a functionality shall be realized that queries the models for these attribute values and lists all relationships. This functionality shall be parameterizable by the types of concepts of interest. The modeler may, e.g., mark a certain business function as out of order and receive a list of technology components related to this function.
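The parameterizable Theme 4 query can be sketched as a filter over 'influence on' links. Again, this is an illustrative assumption about the data shape, not TEAM's actual mechanism:

```python
# Hypothetical 'influence on' links between technology- and business-layer
# elements, tagged with their ArchiMate concept types (Theme 4 sketch).
links = [
    {"tech": "Billing technology service", "tech_type": "service",
     "business": "Repair space rental service", "biz_type": "service"},
    {"tech": "Card reading function", "tech_type": "function",
     "business": "Usage billing function", "biz_type": "function"},
]

def influences(links, concept_types):
    """Return links whose business-side concept type is of interest."""
    return [l for l in links if l["biz_type"] in concept_types]

# E.g., which technology elements influence business *functions*?
for l in influences(links, {"function"}):
    print(f"{l['tech']} influences {l['business']}")
```

The `concept_types` parameter corresponds to the multi-select parameterization the tool offers before executing the query.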
4.3 Conceptualization of Modeling Tools with ADOxx
Metamodeling platforms are used for the development of modeling tools. They raise the abstraction level of modeling tool development to a level that is adequate for method engineers. The goal is to enable non-programmers as well to realize their modeling tools. This is achieved by providing a rich set of pre-configured functionality that the user then only needs to adapt to his/her domain. Moreover, users can benefit from existing tool developments on a certain platform.
ADOxx [31] is a metamodeling platform that has been successfully used in research and industry. The aim of the platform is to raise the abstraction level of modeling tool development to a less implementation-specific level [32]. ADOxx takes care of all domain-independent and non-functional requirements like model management, user management, storage, and user interaction. What is left to be done by the tool engineer is, according to [33]: 1) configure the specific modeling language by mapping its concepts to the meta concepts of the platform; 2) provide a proper visualization for the concepts and combine concepts into logical clusters, i.e., model types; and 3) realize additional functionality like model transformations, model queries, or simulations.
4.4 The TEAM Tool
Figure 6 visualizes a screenshot of the TEAM modeling tool realized with the ADOxx metamodeling platform. TEAM realizes all layers of ArchiMate 3.0.1 following the TOGAF framework, as well as the requirement themes described in Section 4. This enables TEAM to do basic ArchiMate modeling and TOGAF support as well as acting as a facilitator for EAM. Besides the modeling palette,
listing the available ArchiMate language concepts of the currently opened model on the left side, the tool also comes with an intuitive context menu that features the model queries - e.g., for the lifecycle management - and the additional functionality - e.g., for the business continuity management.
At the top of Figure 6, indicated by the letter 'a', is the menu bar implemented for the business continuity management and responsibility management. When clicking on 'a', the modeler is presented with a multi-select box (see Figure 6 'b') where he/she can select the ArchiMate concepts he/she is interested in, thereby parameterizing the query. After confirming the selection, TEAM executes the query and visualizes the query result window (see Figure 6 'c') at the bottom. The results window lists the relationships between the selected business object type instances and the technology objects of the currently opened models (in Figure 6 Business service and Business function were selected).
5 Conclusions and Future Research
This paper reported on an action design science research project that targeted the identification and conceptualization of requirements for an advanced enterprise architecture management approach that integrated the TOGAF framework with the ArchiMate 3.0.1 modeling language. The data was collected using a workshop focus group design where in-depth feedback was obtained during tool
use in a case study and a feedback session. The feedback was obtained from eight EAM experts and practitioners and was condensed into a set of requirement themes for advanced EAM. Finally, the realization of these requirements with the ADOxx metamodeling platform as a project within the Open Models Laboratory (OMiLAB, www.omilab.org) was briefly illustrated.
Intuitive usage of the modeling tool was evaluated positively by the focus group. Results for the modeling tasks differed. The case study showed that practitioners were able to create good models for commonly used ArchiMate layers like application and technology. By contrast, support by the moderators was necessary to achieve good results for the new ArchiMate 3.0 layers like motivation. Focus group participants expressed a strong need to support managers and enterprise architects not only with a methodology like TOGAF and an existing language like ArchiMate, but also with a full-fledged modeling environment. Based on the expert feedback, the paper specified requirement themes for advancing model-based EAM. Consequently, EAM can evolve from being limited to IT experts towards becoming a management tool fostering efficient business operations and the ability to change. This paper finally introduced a first prototype of the TEAM tool, aiming for a tool-based application of advanced EAM.
This research also comes with some limitations. The number of experts was quite low; however, we ensured a homogeneous set of participants in the workshop and the discussion. Moreover, some feedback might be biased by the tool that has been used. It is important to differentiate in future design cycles more clearly between the conceptual approach and the tool support.
In future research we will extend the case study with tasks that utilize some of the advanced features. This extended case study shall then be used to evaluate the second TEAM prototype - eventually leading to a mature modeling environment for advanced EAM. Moreover, we will consider extending the functionality, e.g., with semantic technologies as proposed in [34, 35] and mechanisms for ensuring consistency between the multiple ArchiMate layers [36, 37, 38].
Acknowledgment
Part of this research has been funded through the South Africa / Austria Joint Scientific and Technological Cooperation program with the project number ZA 11/2017.
References
fANFARE: Autonomic Framework for Service-based Pervasive Environment
Yoann Maurel, Stéphanie Chollet, Vincent Lestideau, Jonathan Bardin, Philippe Lalanda, André Bottaro
To cite this version:
HAL Id: hal-00693315
https://hal.archives-ouvertes.fr/hal-00693315
Submitted on 8 Nov 2012
fANFARE: Autonomic Framework for Service-based Pervasive Environment
Yoann Maurel*, Stéphanie Chollet†, Vincent Lestideau‡, Jonathan Bardin‡, Philippe Lalanda‡ and André Bottaro*
*Orange Labs Grenoble
28 Chemin du Vieux Chêne
F-38243 Meylan, France
{firstname.lastname}@orange.com
†Laboratoire de Conception et d’Intégration des Systèmes
F-26902, Valence cedex 9, France
stephanie.chollet@lcis.grenoble-inp.fr
‡Laboratoire d’Informatique de Grenoble
F-38041 Grenoble cedex 9, France
{firstname.lastname}@imag.fr
Abstract—The ability to react quickly to unpredictable changes in the environment is a key requirement in pervasive computing. This paper presents fANF ARE, a framework for the autonomic management of service-oriented applications in pervasive environments. Specifically, it focuses on the configuration and optimization of pervasive applications deployed on OSGi platforms. We propose to handle runtime administration through a hierarchy of autonomic managers, that is a platform manager and a number of application managers and dependency managers. Our approach has been implemented and validated on pervasive use cases within the MEDICAL project funded by the French Ministry of Industry.
Keywords—Autonomic Computing, Integration middleware, Formal Concept Analysis
I. INTRODUCTION
As Mark Weiser’s [1] vision is becoming more and more concrete, a new world of applications emerges with the proliferation of devices available in the home, building or cities. This comes along with an increasing number of interconnected computing platforms, characterized by their heterogeneity and volatility. These platforms include set-top-boxes, Internet boxes, smartphones, tablets, etc. They are more and more used to run applications orchestrating and sometimes managing a number of sensors/effectors. In a near future, applications will be more and more available for deployment on those platforms in order to manage energy, security, entertainment devices, etc.
The dynamic aspect of pervasive environments justifies the use of service-oriented approaches [2] as a base paradigm for the development and execution of pervasive applications. The use of services offers the possibility to create, deploy and manage multiple configurations of the same application. This certainly increases the complexity of the systems but, in return, it provides the flexibility demanded by context-aware pervasive applications. Pervasive applications raise however major challenges in terms of administration. This problem is made particularly severe in distributed environments made of a number of computing platforms and volatile devices. In addition, many applications are critic in the sense that they directly intervene in the physical environment. Humans have to be protected against inappropriate actions caused by incorrect administration (after badly controlled updates for instance). We believe that administration actions should be done automatically as much as possible. In particular, adaptations and optimisation due to context evolution should be dealt with programmatically. However, such optimizations can be expressed in the form of high-level objectives fixed by the users and/or administrators. It seems clear to us that before being distributed to the general public, pervasive platforms and applications should be managed in an autonomic way.
This paper presents fANF ARE, a framework for the autonomic management of service-oriented applications in pervasive environments. Its purpose is to provide a solution to three major issues in pervasive computing: 1) the discovery and use of services seamlessly by hiding their technology heterogeneity, 2) the classification and the autonomic selection of services at runtime and 3) the supervision of resources and platform components to allow evaluation of the behaviour of services and applications. We believe that these three challenges are very much related. Indeed, being able to integrate all sorts of services, regardless of their technology, dramatically increases the number of usable services when dynamically building an application. This makes the basic “all-or-nothing” selection techniques ineffective and obsolete. In this context, service selection becomes a central issue for the configuration, optimisation and repair of pervasive applications. Complementarily, services elected for execution must be carefully supervised. They come from multiple providers, multiple servers, possibly in the cloud, and cannot be completely and definitively trusted. Also, unforeseeable interactions between applications can result in undesirable actions. First thing to allow the dynamic composition of applications and their efficient monitoring...
is, obviously, to detect the available services in the environment. To do so, we rely on an extensible integration middleware called RoSe [3]. RoSe is available in open source and used in many industrial projects. The extensibility feature is absolutely necessary. Indeed, pervasive environments are characterised by the number of available services but also by the heterogeneity of these services. New protocols, new formats are regularly needed and must be integrated rapidly in communication middleware like RoSe. As we will see, in the MEDICAL project, we extended RoSe to support such protocols as Modbus for instance.
Second thing is the ability to evaluate and select the services dynamically discovered by RoSe. Here, our approach is based on FCA (Formal Concept Analysis) in order to classify and identify equivalent services sets regarding some high level goals expressed by a user/administrator. Specifically, we endow every service with a local, very efficient, manager in charge of the dynamic management of service dependencies. Such a manager uses FCA structures and can decide to replace a service at anytime to better meet the current objectives, which can be also changed anytime. It is noteworthy that this notion of high level goals can be rather complicated in the sense that goals can be in opposition. It is then needed to define context-based cost functions to make a decision.
Finally, our approach is complemented with supervision mechanisms. We have defined and implemented on top of OSGi a set of monitoring mechanisms that can activated on the fly. This allows a dynamic focus of the monitoring activities depending on the context (current bugs or misbehaviours to be handled for instance). It also permits to adjust the cost of monitoring to the situation. The paper is structured as it follows. First Section II outlines our approach. In the three following sections, we detail the approach. Specifically, we discuss the discovery service in Section III, the dependency manager in Section IV and the monitoring tool in Section V. Before concluding, Section VII discusses related works.
II. GLOBAL APPROACH
Our work is based on OSGi platforms. This framework, supporting the dynamic execution of service-oriented components, is today recognized as a solution of choice for pervasive computing. OSGi platforms generally host business components and are able to integrate devices using diverse technologies. In a typical application, the business logic is implemented as OSGi bundles while most of the sensors use UPnP or DPWS technologies.
OSGi and service component models such as OSGi Declarative Services [4] or iPOJO [5] establish a clear distinction between bundles, components, service references, service factories and service instances. Typically a bundle contains several components that provide or use services. A service is associated with one or many service references. References refer to the Java interfaces provided by the service and service properties. The service is provided by a service object (Java object instance) that can be retrieved via the registry. For simplicity sake, we make no distinction here between a bundle providing a service and a service instance, between a bundle requiring a service and a service client, and between a service instance and a service reference.
This paper focuses on the administration of applications deployed on the OSGi framework. We propose to manage such applications through a hierarchy of managers (see Figure 1) including a platform manager and a number of application and service managers. This hierarchy is derived from the generic architecture described by [6].

Let us see the role and duties of these managers. The platform manager, first, has two main responsibilities. First, it manages the global behaviour of the platform through the join optimisation of the different applications. With respect of the scope of this paper, it influences the service selection policies of the dependency managers. To do so, it is able to dynamically modify the administration goals of the application managers, which, in turn, implies internal re-configurations at the application level. For a given platform, an administrator can define mandatory and optional features, including non-functional aspects like cost per use, location, reliability, performances or security. The platform manager also holds a prioritized list of applications. Applications are ranked by the administrator depending on their criticality. Depending on these priorities and application specific goals, the platform manager configures each application managers’ goals.
The platform manager is also in charge of the management of the available resources, like CPU or memory. Values characterizing the resources are collected by several monitoring mechanisms that have been implemented within the OSGi framework. CPU-monitoring is progressive and
localized: each mechanism can be enabled or disabled on-the-fly. When CPU-intensive applications are discovered, the platform manager sends an alert to the concerned application manager. This alert includes a list of suspected bundles that should be stopped or replaced. If the application manager is not able to solve the problem in a limited time, the platform manager can send an alert to the administrator or stop the suspected bundles depending on the application criticality.
Application managers are in charge of the management of groups of bundles. The mapping between bundles and application managers is done manually by the administrator at deployment time or automatically on the basis on meta-information provided by bundles (providers and dependencies). The bundles may change groups during the execution if required by the administrator. In particular, since applications are typically distributed among multiple platforms and that bundles and services are shared across multiple applications, an application may in fact be managed by several application managers. For instance, when a bundle is shared by multiple applications, the administrator can assign it to a new application manager. This allows avoiding conflicts in the management. The main advantage of grouping bundles into consistent sets is to enable the application of specific policies to a particular group of bundles or applications.
Application managers also deal with a large part of service selection. They play the role of smart registries for their associated bundles and dependencies managers. Depending on the required services, they subscribe and discover services in the OSGi registry using LDAP filters. This allows the selection of relevant services only. The application manager first ensures the consistency between properties. Each property of the small subset of interesting services are then discretized if necessary depending on the application context. Discretization is performed using a set of thresholds defined by the administrator. Properties interpretation is thus application-dependant. For instance the interpretation of "high-cost per use" may vary between critical and comfort applications. The application manager then organizes services using the FCA algorithm. This structure is divided into substructure and subgoals that are sent to dependency managers. One substructure (or decision lattice) is computed per services bindings. These structures help dependency managers to find the most suited services depending on the goals.
Components are endowed with a dependency manager in charge with the proper selection of services using the FCA structure and goals provided by the application manager. Decision structures allow to easily partition discovered services into equivalence classes. Goals include a set of mandatory properties (i.e., the desired functionality) and an ordered set of optional properties (i.e., cost, reliability, UPnP). The dependency manager then uses the equivalent classes provided by FCA to choose the most suitable services. Benefits of this approach are that the decision structures are small and their inspection to choose services is fast. This allows the fast substitution of services when a service becomes unavailable. Additionally this approach avoids the selection of no service by providing equivalence classes. Finally as the structures are updated by application managers only when needed, that is on a context change or when the list of available services has undergone too many changes. For instance, changing the order of priorities of optional properties does not require changing the structure.
Next sections are organized as follows. Section III describes the RoSe middleware for managing service heterogeneity and discovery. Section IV explains why FCA is a method of choice for service selection and how this technique is used by application and dependencies managers. Finally, Section V focuses on the resource management and how the platform manager carries the progressive monitoring of the system.
III. RoSe: Heterogeneity Management
Service discovery is based on the integration middleware named RoSe [3]. RoSe handles in a transparent way services distribution and heterogeneity. It reacts to the arrival and departures of services (Figure 2). It is an OSGi-based open source middleware\(^1\).
It allows service discovery, the automatic instantiation of a specific proxy with different strategies, the management of the proxies life-cycle and the ability to automatically generate a proxy for well-identified services. Proxies representing remote services are directly inserted in the OSGi registry. Consequently, remote services are accessible as regular OSGi services, that is to say programmers are able to access remote services just like local ones.
Concretely, when RoSe discover an available service (published by a device for instance), a proxy is dynamically created providing a local delegate of this service. Different protocols are supported here, including Web Service, UPnP, DPWS, etc. Finally when a remote service is no longer available, its corresponding proxy is destroyed. To describe the expected features and find corresponding services, RoSe
---
\(^1\)http://wiki.chameleon.ow2.org/xwiki/bin/view/Main/Rose
provides a query language to define configurations as pairs of properties/values. By applying the different configurations, we obtain the matching service(s). It is possible to define LDAP filters among the set of available configurations to refine the search. In addition, RoSe allows the dynamic modification of filters and configurations at runtime.
IV. SELECTION THROUGH THE USE OF FCA
We propose to use the Formal Concept Analysis method to classify the dependencies between components. This classification is stored in each dependency manager as a decision structure. First, we present the computation of this decision structure. Then, we detail the use of this structure in an autonomic context.
A. Formal Concept Analysis Foundations
Formal Concept Analysis (FCA) [7] is a theoretical and mathematical framework used to classify items. We very shortly define the main concepts of FCA. The purpose of FCA is to build a partially ordered structure, called concept lattice, from a formal context.
Definition 1: A formal context $K$ is a set of relations between objects and attributes. It is denoted by $K = (O, A, R)$ where $O$ and $A$ are respectively sets of Objects and Attributes, and $R$ is a Relation between $O$ and $A$ (Figure 3).
Definition 2: A formal concept $C$ is a pair $(E, I)$ where $E$ is a set of objects called Extent, $I$ is a set of attributes called Intent, and all the objects in $E$ are in relation $R$ with all the attributes in $I$.
Thus, the Extent of a concept is the set of all objects sharing a set of common attributes, and the Intent is the set of all attributes shared by the objects of the Extent. Formally:
- $E = \{o \in O, \forall i \in I, (o, i) \in R\}$,
- $I = \{a \in A, \forall e \in E, (e, a) \in R\}$.
Consequently, a formal concept $C = (E, I)$ is made of the objects in $E$ which are exactly the set of objects sharing the attributes in $I$. For example, $\{(1, 2, 3); \{c\}\}$ is a formal concept in the context of Figure 3.
Let $X$ a set of attributes. We define the function $\text{Closure}_X(X)$ which associates to $X$ the concept made of the set of objects sharing $X$ and the other attributes shared by this set of objects. Note that the computation of a formal concept from a set of attributes $X$ of size $n$ has a complexity of $O(n \times m)$ where $m$ is the number of objects.
Definition 3: A concept lattice is defined as the pair $(\mathcal{C}(K), \leq)$ Let two concepts $(E_1, I_1)$ and $(E_2, I_2)$ we say that $(E_2, I_2)$ is a successor of $(E_1, I_1)$ if $(E_1, I_1) <_C (E_2, I_2)$. Given $I_1$ a subset of $A$, we note by $\text{successors}(I_1)$ the set of successors of the concept $(E_1, I_1)$. The concept lattice can be represented by a particular graph called Hasse Diagram (Figure 3).
Note that the computation of a concept lattice from a formal context has a complexity of $O((n+m) \times m \times |\mathcal{C}(K)|)$ where $n$ is the number of attributes and $m$ is the number of objects [8]. Most of the time we have $n << m$ and the complexity becomes $O(m^2 \times |\mathcal{C}(K)|)$.
The set $\mathcal{C}(K)$ of all concepts induced by a context can be ordered using the following partial order relation: $(E_1, I_1) <_C (E_2, I_2)$ if $E_2 \subset E_1$ and $I_1 \subset I_2$.
B. Computation of Decision Structure
In [9], [10], we propose to apply FCA to service selection in pervasive environment. The usability of the FCA is based on the computation of the interesting concepts of the concept lattice. First, we transform the data extracted from the OSGi registry in a formal context where: service functionalities and properties are attributes, available services are objects.
All the computed concepts of the lattice are not interesting. Two exclusive groups are proposed:
- concepts with no real meaning. These concepts contain in their intent a set of properties which is not usable.
- concepts with sense. Contrary to the previous group, the intent of the concepts makes sense, i.e. the intent contains coherent information according to the application context.
In addition, the computation of the lattice is evaluated with a significant complexity. Consequently, we propose to compute only interesting concepts according to the selection request (Figure 4). The ordered interesting concepts, named decision structure, are a sub-set of the concept lattice. Obviously, a concept lattice contains many decision structures and these structures can share common formal concept(s).
The interest to compute formal concept(s) is that the extent contains all the services providing the properties of the intent. For example, in the concept $(\{S_1, S_4, S_7\}; \{Temperature, DPWS\})$, all the services provides the functionality Temperature implemented in DPWS. This allows to define equivalent classes of services, i.e. a service can be replaced by another in case of failure (departure, breakdown...). The decision structure expresses the difference between the services, i.e. services are classified by refinement.
The computation of decision structures naturally contributes to reduce the computation time and also the number of computed concepts in spite of theoretical complexities. Figure 5 shows the using such structure is feasible. We have evaluated the number of computed concepts according to the available services and the size of the request. The request contains:
- No constraint, i.e., equivalent to compute the entire lattice,
- One functional constraint, i.e., the minimal use case because the functionality is always known to select a service.
For this experimentation, we count the number of computed concepts from a context composed by 24 attributes (11 functionalities and 13 properties).
We note that the computation of only interesting concepts (decision structure) largely decreases the number of computed concepts. For a request based on one functionality, the decrease is 92% in average.
### C. Autonomic Dependency Management
At a given time, an application is executed in order to address a given goal. This goal can change and, as a result, it may be necessary to change one or several components or dependencies. It is the role of application manager and dependency managers to handle these evolutions. Figure 6 illustrates the relations between application manager and dependency manager.
**Application Manager.** In our approach each application comes with its own application manager. It provides all services that can be used in the application, somehow playing the role of a local registry. The application manager use the RoSe “Service Discovery” feature in order to get the list of available services. The expected functionalities to be searched are described using the RoSe descriptors. Thus, this local registry is a view of the available services filtered according to the current goals of the application. We propose to use the formal context approach to store the services characteristics. These characteristics are aligned with ontologies [11] and discretized to get numerical properties [12], [13]. The advantage of this definition is that the characteristics are defined by application. Consequently, one application can express that the cost is high even if it is more than 100$ while for another application a cost of 100$ is low. Each application is composed by a set of related services (components). For each dependency (represented in the form of a binding in OSGi), application managers generate a decision structure. The chosen level of granularity (dependency service) ensures that the decision structure is relevant (meaning and size). For each dependency, the global goal is transformed into a more specific goal sent to the dependency manager concerned.
**Dependency Manager.** Each service consumers (i.e. component or bundle) is endowed with a local dependency manager. The dependency manager is used to dynamically change the dependencies between components. To help make “good” decisions, it has a goal and a structure of decision provided by the application manager. A goal is an ordered list of mandatory and optional properties. For instance, we
can express that we want a service providing temperature with a reduced cost and possibly with reliability characteristics and in DPWS technology. As previously explained, the decision structure allows to classify services and to group them into equivalent classes. To resolve a dependency, a dependency manager searches the decision structure and it can thus select the "best" service related to this goal.
Note that the evolution of the goal and the decision structure is independent. A goal change does not imply a new computation of decision structure. Consequently, this allows fast reactivity to the context modifications and to priority changes regarding the global goals of the application.
Dependency managers have been implemented by extending actual iPOJO dependency manager [5] as suggested by [6]. For legacy bundles (i.e., not using iPOJO) we relied on the new services hook provided by OSGi [4] – this way it is possible to control the service discovery.
V. RESOURCES MANAGEMENT
The share of the same software execution environment by competing applications requires that the execution of software modules from an actor does not harm the execution of other modules. Hardware resources management has thus to be provided at software module level above the JVM. Given the wide range of deployable modules and their interactions, it is hardly possible to test exhaustively all the possible combinations and even a rigorous testing of every bundle before deployment is not sufficient. This problem is exacerbated by the use of remote services in the cloud whose evolution can have unpredictable consequences on the behavior of applications hosted on the platform. For instance, we had trouble on a fleet of platform when performances of a cloud services have improved significantly: some applications have been so overwhelmed that they caused CPU consumption problems.
Our approach is to build a self-optimizing monitoring system that can dynamically activate specific monitoring mechanisms when issues are detected. The platform manager tunes monitoring mechanisms accuracy and frequency and decides when to enable/disable monitoring mechanisms on the fly. By sparingly using resource-intensive monitoring mechanisms, it is possible to get the necessary accuracy while limiting the average resource consumption. This helps to detect performance issues with a minimal overhead in the long run. These information can then be used by an autonomic manager to take decisions, i.e., stopping a bundle, changing implementation, or restarting a device.
In this paper, we focus on CPU usage monitoring. To achieve this goal, we have developed several mechanisms that are activated in the following order:
1) global load average and system CPU (M1): this mechanism is always activated and uses system calls to make the system compute load average. The collect frequency is increased depending on previous values.
2) CPU usage per bundle (M2): this mechanism provides an estimate of CPU usage per bundle. It is activated only when load average is high. This mechanism has been implemented following ideas described in [14]. The framework has been modified so that each new created threads are attached to the proper bundle.
3) building dependency graph (M3): when a suspect has been found, a dedicated task determines bundle dependencies. This information is used to determine the impact of uninstalling a bundle on other bundles.
4) monitoring service dependencies (M4): this mechanism uses service proxy injection to refine the analysis and try to determine the actual source of CPU load. Dynamically injecting proxies in OSGi raises some issues. It requires to force the consumer to release the providers it uses so that it uses proxies instead. However, according to the specification [15], once a consumer holds a reference on a service object, the only ways to force it to discard the services are (a) to stop the providers or (b) to stop the consumer. Stopping the providers has an impact on all their consumers and potentially on applications or the whole system. Stopping the consumer may also have an impact on other bundles and may lead to bundle state loss. We propose some modifications in the OSGi framework so as to create a proxy-aware registry. We take advantage of the loose coupling offered by the SOA by modifying some mechanisms and in particular the way bundles are notified of the arrival and departure of services. Basically, service binding monitoring implies three steps: (a) unbinding existing services by pretending they are no longer available, (b) pretending they are available again, (c) substituting original providers by proxies when consumers ask for them.
5) monitoring package dependencies (M5): the CPU consumption may be the result of the usage of a poorly coded library. Sampling is performed by an external mechanism observing system threads activity on a regular basis. The subset of threads to be monitored is determined using the M3 mechanism: the task monitors threads attached to the monitored bundle. At a fixed pace (10ms in our case), we request the JVM to generate the stack trace for monitored threads. This stack trace contains information on the stack frame. It is thus possible to infer the time spent calling a class by comparing stack traces. This gives an estimate of the time taken calling an outside package. This is then matched up with package dependencies. Usage statistics are then calculated.
The Figure 7 shows the average monitoring impact on CPU usage for different monitoring configuration on a system running typical bundles (50 bundles). Mechanisms (M1 to M5) are activated progressively from C1 to C5 respec-
tively. In C4 and C5, services and packages are monitored on a single bundle. In C6 all bundles are monitored. Overall, these results confirm that using progressive monitoring is generally more CPU-efficient than using always-enabled traditional monitoring systems. When idle, the impact of monitoring is way below the one with traditional systems. Most of the time, the platform is not charged by useless computations. When active, the impact is significant but localized on a single bundle. Additionally the overhead on non-monitored bundle is limited. The benefits of localization is visible when comparing C5 (local monitoring on a single bundle) and C6 (monitoring all bundles). Moreover, the difference is even more pronounced on a platform with more bundles. Therefore, always-enabled monitoring, done by many OSGi monitoring systems, is not reasonable on end-user systems.
With all the reported values, it is easy to build simple heuristic for the detection of CPU-intensive bundles. Automatically, the platform manager controls the activation of monitoring mechanisms and fires alerts to the concerned application managers. The latter are then responsible for the interruption and replacement of suspect bundles. We tested these policies on simple applications on sample pervasive applications. The manager has been implemented using the Ceylon [16] framework. This proves to be effective to avoid platform crashes caused by CPU-intensive applications. A future work is to use the information provided by the system to automatically rank bundles and services and therefore influences the selection process depending on this ranking.
VI. RELATED WORK
The problem of service selection, depending on service classification and FCA, has been studied by a few authors. Bruno et al. [17] propose an approach based on machine-learning techniques to support service classification and annotation. Peng et al. [18] classify Web Services in a concept lattice. Services are classified according to their functional operations regardless of non-functional aspects. Azmeh et al. [19] classify Web Services by their calculated QoS levels and composability modes. In these approaches the concept lattice is computed only once whereas in the pervasive domain, services regularly appear and disappear, which means recalculating the lattice. Moreover, these approaches can not manage simultaneously different technologies (UPnP, DPWS...). Ait Ameur [11] proposes to adapt the registry to a semantic registry in which semantic Web Services are stored. The introduction of ontologies allows to define a subsumption relationship between services that expresses a substituability relationship between these services. In our approach, ontologies can be added in the filter of the service registry in order to minimize the number of attributes in the context model.
Monitoring an OSGi-based platform is challenging [20]: the specification [15] does not define any means to isolate or monitor bundles. Existing Java tools (e.g., JVM-Ti² or TTP3) cannot be used as is since gathered information is too fine-grained and thus not relevant. Existing OSGi tools are not suitable for embedded in-production environment because (i) they target development environments [14] or rich platforms [4], or (ii) they require heavy modifications of the JVM or underlying operating system [21], and (iii) they generally induces a persistent strong overhead of at least 20%. Our main contributions are to propose flexible monitoring mechanisms. In particular, we refine (i) existing techniques to attach threads to bundles, (ii) propose a novel approach to inject proxies on-the-fly without stopping bundles by building a proxy-aware registry, and (iii) proposes a method to monitor package dependencies by using localized sampling. The proposed system competes well with traditional monitoring systems: the overhead when idle is under 2% and is comparable when fully active (20% on a typical system). Moreover, the overhead is localized: it mostly impacts the targeted bundles and has limited consequences on the others.
VII. CONCLUSION AND FUTURE WORK
This article presents a framework dedicated to the execution of pervasive service-based applications and their automated management. The framework addresses three major overlapping challenges of service-based pervasive environments: (i) the management of heterogeneous distributed services, performed by RoSe, an integration tool that generates the bridges necessary for publishing or integrating services; (ii) autonomic service selection, through the use of FCA to classify available services and the addition of an autonomic dependency manager to each service consumer; and (iii) the evaluation of applications and autonomic resource management, made possible by several monitoring mechanisms that are activated on-purpose and on-the-fly by a platform manager. The platform is controlled by a hierarchy of managers that divides administrative objectives into sub-goals and separates the concerns of applications from those of the platform. This platform has been implemented
²http://docs.oracle.com/javase/6/docs/technotes/guides/jvmti/
³http://www.eclipse.org/tptp/
using OSGi technologies, in particular by extending the associated service-oriented component model. Each contribution has been evaluated separately, and the platform is currently being used in the MEDICAL project. Future work includes the management of a fleet of platforms and inter-platform communication between application managers and platform managers, so as to coordinate the management of multiple platforms.
Hoare logic
Lecture 6: Extending Hoare logic
Jean Pichon-Pharabod
University of Cambridge
CST Part II – 2018/19
Recap
Last time, we looked at how separation logic enables modular reasoning.
In this lecture, we will consider extending Hoare logic in other directions:
- We will look at extending partial correctness triples to enforce termination, and at adapting the Hoare logic rules for partial correctness to total correctness.
- We will look at how to handle (a crude form of) procedures.
- We will look at how to reason about simple forms of concurrency.
Total correctness
So far, we have concerned ourselves only with partial correctness, and not with what happens when the program diverges.
However, in many contexts where we care about correctness enough to use Hoare logic for verification, we also care about termination.
There is no standard notation for total correctness triples; we will use $[P] \ C \ [Q]$.
The total correctness triple $[P] \ C \ [Q]$ holds if and only if:
- assuming $C$ is executed in an initial state satisfying $P$,
- then the execution terminates,
- and the terminal state satisfies $Q$.
A total correctness triple asserts that when the given command is executed from an initial state that satisfies the precondition, then any execution must terminate, and that any terminal state satisfies the postcondition:
\[
\models [P]\ C\ [Q] \overset{\text{def}}{=} \forall s.\ s \in \llbracket P \rrbracket \Rightarrow \left( \neg(\langle C, s \rangle \rightarrow^{\omega}) \land \forall s'.\ \langle C, s \rangle \rightarrow^{*} \langle \textbf{skip}, s' \rangle \Rightarrow s' \in \llbracket Q \rrbracket \right)
\]
Semantics of total correctness triples
Since WHILE is safe and deterministic, this is equivalent to
$$\forall s.\ s \in \llbracket P \rrbracket \Rightarrow \exists s'.\ \langle C, s \rangle \rightarrow^{*} \langle \textbf{skip}, s' \rangle \land s' \in \llbracket Q \rrbracket$$
Assume $s \in \llbracket P \rrbracket$ and $\langle C, s \rangle \rightarrow^* \langle \text{skip}, s' \rangle$.
Since WHILE is safe and deterministic, $\neg(\langle C, s \rangle \rightarrow^\omega)$. Moreover, since WHILE is deterministic, for all $s''$ such that $\langle C, s \rangle \rightarrow^* \langle \text{skip}, s'' \rangle$, $s'' = s'$, so $s'' \in \llbracket Q \rrbracket$.
Examples of total correctness triples
- The following total correctness triple is valid:
\[
\vdash [X \geq 0]\ \textbf{while } X \neq 0 \textbf{ do } X := X - 1\ [X = 0]
\]
the loop terminates when executed from an initial state where \( X \) is non-negative.
- The following total correctness triple is not valid:
\[
\not\vdash [\top]\ \textbf{while } X \neq 0 \textbf{ do } X := X - 1\ [X = 0]
\]
the loop only terminates when executed from an initial state where \( X \) is non-negative, but not when executed from an initial state where \( X \) is negative.
Corner cases of total correctness triples
\([P] \ C \ [\top]\)
- this says that \(C\) always terminates when executed from an initial state satisfying \(P\).
\([\top] \ C \ [Q]\)
- this says that \(C\) always terminates, and ends up in a state where \(Q\) holds.
Rules for total correctness
**while** commands are the commands that introduce non-termination.
Except for the loop rule, all the rules of Hoare logic (from the first lecture) are sound for total correctness as well as partial correctness.
\[
\frac{}{\vdash [P]\ \textbf{skip}\ [P]}
\qquad
\frac{\vdash [P]\ C_1\ [Q] \quad \vdash [Q]\ C_2\ [R]}{\vdash [P]\ C_1;\ C_2\ [R]}
\]
\[
\frac{\vdash [P \land B]\ C_1\ [Q] \quad \vdash [P \land \lnot B]\ C_2\ [Q]}{\vdash [P]\ \textbf{if } B \textbf{ then } C_1 \textbf{ else } C_2\ [Q]}
\]
\[
\frac{\vdash P_1 \Rightarrow P_2 \quad \vdash [P_2]\ C\ [Q_2] \quad \vdash Q_2 \Rightarrow Q_1}{\vdash [P_1]\ C\ [Q_1]}
\]
The loop rule that we have for partial correctness is not sound for total correctness. Taking the invariant to be $\top$, we can derive:
\[
\frac{\vdash \{\top \land \top\}\ \textbf{skip}\ \{\top\}}{\vdash \{\top\}\ \textbf{while } \top \textbf{ do skip}\ \{\top \land \lnot \top\}}
\]
Since $\top \land \lnot \top \Rightarrow \bot$, the rule of consequence then gives
\[
\vdash \{\top\}\ \textbf{while } \top \textbf{ do skip}\ \{\bot\}
\]
If the loop rule were sound for total correctness, then this would show that $\textbf{while } \top \textbf{ do skip}$ always terminates in a state satisfying $\bot$.
We need an alternative total correctness loop rule that ensures that the loop always terminates.
The idea is to show that some non-negative integer quantity decreases on each iteration of the loop.
If this is the case, then the loop terminates, as there would otherwise be an infinite decreasing sequence of natural numbers.
This decreasing quantity is called a variant.
In the rule below, the variant is $t$, and the fact that it decreases is specified with an auxiliary variable $n$:
\[
\frac{\vdash [P \land B \land (t = n)]\ C\ [P \land (t < n)] \quad \vdash P \land B \Rightarrow t \geq 0}{\vdash [P]\ \textbf{while } B \textbf{ do } C\ [P \land \lnot B]}
\]
The second hypothesis ensures that the variant is non-negative.
The variant $t$ does not have to occur in $C$.
Using the rule of consequence, we can derive the following backwards reasoning total correctness loop rule:
\[
\frac{\vdash P \Rightarrow I \quad \vdash I \land \lnot B \Rightarrow Q \quad \vdash I \land B \Rightarrow t \geq 0 \quad \vdash [I \land B \land (t = n)]\ C\ [I \land (t < n)]}{\vdash [P]\ \textbf{while } B \textbf{ do } C\ [Q]}
\]
Consider the factorial computation we looked at before:
\[
[X = x \land X \geq 0 \land Y = 1]
\]
\[
\textbf{while } X \neq 0 \textbf{ do } (Y := Y \times X;\ X := X - 1)
\]
\[
[Y = x!]
\]
By assumption, \(X\) is non-negative and decreases in each iteration of the loop.
To verify that this factorial implementation terminates, we can thus take the variant \(t\) to be \(X\).
Total correctness: factorial example
\[
[X = x \land X \geq 0 \land Y = 1]
\]
\[
\text{while } X \neq 0 \text{ do (} Y := Y \times X; X := X - 1 \text{)}
\]
\[
[Y = x!]
\]
Take \( I \) to be \( Y \times X! = x! \land X \geq 0 \), and \( t \) to be \( X \).
Then we have to show that
- \( X = x \land X \geq 0 \land Y = 1 \Rightarrow I \)
- \([I \land X \neq 0 \land (X = n)] \ Y := Y \times X; X := X - 1 [I \land (X < n)] \)
- \( I \land \neg(X \neq 0) \Rightarrow Y = x! \)
- \( I \land X \neq 0 \Rightarrow X \geq 0 \)
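As a sanity check (testing, not proving), these obligations can be exercised by running the loop in a small Python sketch that asserts the invariant $I$ and the strict decrease of the variant $t = X$ on every iteration:

```python
from math import factorial

def check_factorial_obligations(x):
    """Run  while X != 0 do (Y := Y * X; X := X - 1)  from X = x, Y = 1,
    checking the invariant I = (Y * X! == x! and X >= 0) and that the
    variant t = X strictly decreases on every iteration."""
    X, Y = x, 1
    assert Y * factorial(X) == factorial(x) and X >= 0   # P implies I
    while X != 0:
        n = X                       # snapshot of the variant (auxiliary n)
        assert X >= 0               # I and B imply t >= 0
        Y, X = Y * X, X - 1         # loop body
        assert Y * factorial(X) == factorial(x) and X >= 0   # I preserved
        assert X < n                # variant strictly decreased
    assert Y == factorial(x)        # I and not B imply the postcondition
    return Y

for x in range(8):
    check_factorial_obligations(x)
```

Such a check can only refute a candidate invariant or variant on the inputs tried; the rule above is what turns the per-iteration facts into a proof for all inputs.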
Total correctness, partial correctness, and termination
Informally: total correctness = partial correctness + termination. This is captured formally by:
- If $\vdash \{P\} C \{Q\}$ and $\vdash [P] C [\top]$, then $\vdash [P] C [Q]$.
- If $\vdash [P] C [Q]$, then $\vdash \{P\} C \{Q\}$.
It is often easier to show partial correctness and termination separately.
Termination is usually straightforward to show, but there are examples where it is not: no one knows whether the program below terminates for all values of $X$:
```plaintext
while $X > 1$ do
if ODD($X$) then $X := 3 \times X + 1$ else $X := X \div 2$
```
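This is the Collatz iteration; a direct Python transcription lets us check termination empirically for small values of $X$ (a check, not a proof):

```python
def collatz_steps(x):
    """Transcription of: while X > 1 do
         if ODD(X) then X := 3 * X + 1 else X := X div 2.
    Returns the number of iterations performed. It terminates for every
    value tried so far, but no termination proof for all X is known."""
    steps = 0
    while x > 1:
        x = 3 * x + 1 if x % 2 == 1 else x // 2
        steps += 1
    return steps

assert collatz_steps(1) == 0
assert collatz_steps(6) == 8   # 6, 3, 10, 5, 16, 8, 4, 2, 1
assert all(collatz_steps(x) >= 0 for x in range(1, 10_000))
```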
Microsoft’s T2 tool is used to prove termination of systems code.
We have given rules for total correctness, similar to those for partial correctness.
Only the loop rule differs: the premises of the loop rule require that the loop body decreases a non-negative expression.
It is even possible to do amortised, asymptotic complexity analysis in Hoare logic:
- A Fistful of Dollars, Armaël Guéneau et al., ESOP 2018
Functions (not examinable)
Consider an extension of our language with the following form of functions:
\[ C ::= \ldots | \textbf{let } F(\mathit{X}_1, \ldots, \mathit{X}_n) = \mathit{C}_1 \textbf{ in } \mathit{C}_2 | \mathit{F}(\mathit{X}_1, \ldots, \mathit{X}_n) \]
For this to work, we need to be careful to not have aliasing between program variables. We will elide this.
\[
\{ \mathit{X} = \mathit{x} \land \mathit{x} > 0 \} \\
\textbf{let } \mathit{F}(\mathit{X}, \mathit{N}) = \\
\hspace{1em} (\textbf{if } \mathit{X} > 1 \textbf{ then } (\mathit{X} := \mathit{X} - 1; \mathit{N} := \mathit{N} \times \mathit{X}; \mathit{F}(\mathit{X}, \mathit{N})) \textbf{ else skip}) \textbf{ in } \\
\hspace{1em} \mathit{N} := \mathit{X}; \\
\hspace{1em} \mathit{F}(\mathit{X}, \mathit{N}) \\
\{ \mathit{N} = \text{fact}(\mathit{x}) \}\]
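For illustration, here is a Python transcription of this program (with the mutable store made explicit; a sketch, not part of the logic), checked against the mathematical factorial:

```python
from math import factorial

def fact_program(x):
    """Transcription of:
       let F(X, N) = (if X > 1 then (X := X - 1; N := N * X; F(X, N))
                      else skip)
       in N := X; F(X, N)
    The dict plays the role of the store of program variables."""
    state = {"X": x, "N": 0}

    def F():
        if state["X"] > 1:
            state["X"] -= 1            # X := X - 1
            state["N"] *= state["X"]   # N := N * X
            F()                        # recursive call F(X, N)

    state["N"] = state["X"]            # N := X
    F()
    return state["N"]

# Precondition of the triple: x > 0.
assert all(fact_program(x) == factorial(x) for x in range(1, 10))
```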
We also need to extend our judgment $\vdash$ with a component $\mathcal{F}$ to keep track of the pre- and postconditions of functions:
\[
\frac{\mathcal{F}(F) = \langle P, Q \rangle \quad \ldots}{\vdash_{\mathcal{F}} \{ P[Z_1/X_1, \ldots, Z_n/X_n] \}\ F(Z_1, \ldots, Z_n)\ \{ Q[Z_1/X_1, \ldots, Z_n/X_n] \}}
\]
\[
\frac{\vdash_{\mathcal{F}[F \mapsto \langle P', Q' \rangle]} \{ P' \}\ C_1\ \{ Q' \} \quad \vdash_{\mathcal{F}[F \mapsto \langle P', Q' \rangle]} \{ P \}\ C_2\ \{ Q \} \quad \ldots}{\vdash_{\mathcal{F}} \{ P \}\ \textbf{let}\ F(X_1, \ldots, X_n) = C_1\ \textbf{in}\ C_2\ \{ Q \}}
\]
Recursive function pre- and postconditions are like loop invariants, but with a “gap”.
Concurrency (not examinable)
Concurrent composition
Consider an extension of our WHILE language with a concurrent composition construct (also “parallel composition”), $C_1 \parallel C_2$, which executes the two statements $C_1$ and $C_2$ concurrently.
For our simple form of concurrency, the statement $C_1 \parallel C_2$ reduces by interleaving execution steps of $C_1$ and $C_2$, until both have terminated:
\[
\frac{\langle C_1, \langle s, h \rangle \rangle \rightarrow \langle C_1', \langle s', h' \rangle \rangle}{\langle C_1 \parallel C_2, \langle s, h \rangle \rangle \rightarrow \langle C_1' \parallel C_2, \langle s', h' \rangle \rangle}
\qquad
\frac{\langle C_2, \langle s, h \rangle \rangle \rightarrow \langle C_2', \langle s', h' \rangle \rangle}{\langle C_1 \parallel C_2, \langle s, h \rangle \rangle \rightarrow \langle C_1 \parallel C_2', \langle s', h' \rangle \rangle}
\]
For instance, $(X := 0 \parallel X := 1); \text{print}(X)$ is allowed to print 0 or 1.
Final states are now of the form $F ::= \textbf{skip} | F_1 \parallel F_2$.
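The interleaving semantics can be illustrated by enumerating all schedules of two straight-line threads, each given as a list of atomic state updates (a Python sketch, not the formal semantics):

```python
def interleave(a, b):
    """All interleavings of two sequences of atomic steps, preserving
    each thread's program order."""
    if not a:
        return [list(b)]
    if not b:
        return [list(a)]
    return ([[a[0]] + rest for rest in interleave(a[1:], b)] +
            [[b[0]] + rest for rest in interleave(a, b[1:])])

def final_states(a, b, init):
    """Run every interleaved schedule from the initial state and collect
    the possible final states."""
    states = set()
    for schedule in interleave(a, b):
        state = init
        for step in schedule:
            state = step(state)
        states.add(state)
    return states

# X := 0 || X := 1   (the state is just the value of X)
outcomes = final_states([lambda x: 0], [lambda x: 1], init=None)
assert outcomes == {0, 1}   # print(X) may print 0 or 1
```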
Concurrency disciplines
Adding concurrency complicates reasoning by introducing the possibility of concurrent interference on shared state.
While separation logic does extend to reason about general concurrent interference, we will focus on two common idioms of concurrent programming with limited forms of interference:
- disjoint concurrency, and
- well-synchronised shared state.
Disjoint concurrency
Disjoint concurrency refers to multiple commands potentially executing concurrently, but all working on disjoint state.
Parallel implementations of divide-and-conquer algorithms can often be expressed using disjoint concurrency.
For instance, in a parallel merge sort, the recursive calls to merge sort operate on disjoint parts of the underlying array.
The proof rule for disjoint concurrency requires us to split our assertions into two disjoint parts, $P_1$ and $P_2$, and give each parallel command ownership of one of them:
$$\frac{\vdash \{P_1\}\ C_1\ \{Q_1\} \quad \vdash \{P_2\}\ C_2\ \{Q_2\} \quad mod(C_1) \cap FV(P_2, Q_2) = \emptyset \quad mod(C_2) \cap FV(P_1, Q_1) = \emptyset}{\vdash \{P_1 \ast P_2\}\ C_1 \parallel C_2\ \{Q_1 \ast Q_2\}}$$
The third hypothesis ensures that $C_1$ does not modify any program variables used in the specification of $C_2$; the fourth hypothesis ensures the symmetric condition for $C_2$.
Here is a simple example to illustrate two parallel increment operations that operate on disjoint parts of the heap:
\[
\begin{array}{c}
\{X \mapsto 3 \ast Y \mapsto 4\} \\
\begin{array}{c|c}
\{X \mapsto 3\} & \{Y \mapsto 4\} \\
A := [X];\ [X] := A + 1 & B := [Y];\ [Y] := B + 1 \\
\{X \mapsto 4\} & \{Y \mapsto 5\}
\end{array} \\
\{X \mapsto 4 \ast Y \mapsto 5\}
\end{array}
\]
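A direct Python rendering of this example, with threads standing in for $\parallel$ (a sketch; the point is that the two threads touch disjoint heap cells, so no synchronisation is needed and the result is deterministic):

```python
import threading

# The "heap": two disjoint cells, one owned by each thread.
heap = {"X": 3, "Y": 4}

def inc(cell):
    a = heap[cell]        # A := [X]   (resp. B := [Y])
    heap[cell] = a + 1    # [X] := A + 1

t1 = threading.Thread(target=inc, args=("X",))
t2 = threading.Thread(target=inc, args=("Y",))
t1.start(); t2.start()
t1.join(); t2.join()

assert heap == {"X": 4, "Y": 5}
```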
Well-synchronised concurrency
Well-synchronised shared state refers to the common concurrency idiom of using locks to ensure exclusive access to state shared between multiple threads.
To reason about locking, concurrent separation logic extends separation logic with **lock invariants** that describe the resources protected by locks.
When acquiring a lock, the acquiring thread takes ownership of the lock invariant and when releasing the lock, must give back ownership of the lock invariant.
To illustrate, consider a simplified setting with a single global lock. We write $I \vdash \{P\} C \{Q\}$ to indicate that we can derive the given triple assuming the lock invariant is $I$. We have the following rules:
\[
\frac{FV(I) = \emptyset}{I \vdash \{\text{emp}\}\ \textbf{lock}\ \{I \ast \text{locked}\}}
\qquad
\frac{FV(I) = \emptyset}{I \vdash \{I \ast \text{locked}\}\ \textbf{unlock}\ \{\text{emp}\}}
\]
The \textit{locked} resource ensures the lock can only be unlocked by the thread that currently has the lock.
Well-synchronised shared state example
To illustrate, consider a program with two threads that both access a number stored in shared heap cell at location $X$ concurrently.
Thread $A$ increments $X$ by 1 twice, and thread $B$ increments $X$ by 2. The threads use a lock to ensure their accesses are well-synchronised.
Assuming that location $X$ initially contains an even number, we wish to prove that the contents of location $X$ is still even after the two concurrent threads have terminated.
A non-synchronised interleaving would allow $X$ to end up odd.
First, we need to define a lock invariant.
The lock invariant needs to own the shared heap cell at location $X$ and should express that it always contains an even number:
$$I \equiv \exists n. x \mapsto 2 \times n$$
We have to use an indirection through $X = x$ because $I$ is not allowed to mention program variables.
We can temporarily violate the invariant when holding the lock.
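A Python rendering of this example with a real lock (a sketch of the idiom, not of the logic): the invariant "the cell holds an even number" may be broken while a thread holds the lock, but is re-established before each release.

```python
import threading

lock = threading.Lock()
cell = {"X": 10}   # lock invariant: the cell holds an even number

def add_one_twice():
    # Thread A: increments by 1 twice; releases the lock only once the
    # evenness invariant has been re-established.
    with lock:
        cell["X"] += 1   # invariant temporarily violated under the lock
        cell["X"] += 1   # invariant restored before release

def add_two():
    # Thread B: increments by 2; the invariant is preserved in one step.
    with lock:
        cell["X"] += 2

threads = [threading.Thread(target=add_one_twice),
           threading.Thread(target=add_two)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert cell["X"] % 2 == 0   # the invariant holds after both threads finish
assert cell["X"] == 14
```

Without the lock, an interleaving could observe or leave the cell in the odd intermediate state between thread A's two increments.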
Summary of concurrent separation logic
We have seen how concurrent separation logic supports reasoning about concurrent programs. The rule for disjoint concurrency enables reasoning about the parts of the state that are not shared, and the rules for locks enable reasoning about the parts of the state that are shared but guarded by locks.
Concurrent separation logic can also be extended to support reasoning about general concurrency interference.
Papers of historical interest:
- Peter O’Hearn. Resources, Concurrency and Local Reasoning.
Conclusion
• Verification of the seL4 microkernel assembly:
• The RustBelt project:
https://plv.mpi-sws.org/rustbelt/
• The iGPS logic for relaxed memory concurrency:
http://plv.mpi-sws.org/igps/
• The Iris higher-order concurrent separation logic framework,
implemented and verified in a proof assistant:
http://iris-project.org/
• Facebook’s bug-finding Infer tool:
http://fbinfer.com/
We have seen that Hoare logic (separation logic, when we have pointers) enables specifying and reasoning about programs.
Reasoning remains close to the syntax, and captures the intuitions we have about why programs are correct.
It’s all about invariants!
MBSE Applicability Analysis
Bita Motamedian
Abstract—"Model Based Software/Systems Engineering (MBSE)" is growing rapidly in the systems engineering (SE) domain for large, complex projects, as a way to minimize risks and avoid late-stage changes: digital product models help companies and manufacturers integrate engineering processes across production networks. Although model-based development is well established in specific engineering fields like software, electronics, and mechanics, its role in SE, improving manufacturing productivity by enabling efficient integration of engineering and manufacturing applications, is still evolving. I sought voluntary information from industries that have applied systems engineering techniques in their projects. My purpose in doing this survey is to highlight the position of MBSE in real projects, to roughly clarify the popularity of the MBSE concept among engineers, especially systems engineers, and to report on the usage, advantages, barriers and concerns of using "modeling languages" and "modeling tools" in MBSE efforts across various industries.
Index Terms—MBSE, Model Based Software Engineering, Model Based System Engineering, Model Based Development, Modeling Language, Modeling Tool
1 INTRODUCTION
The purpose of this survey is a review of the usage of Model-Based Software/Systems Engineering in various industries among different types of engineering groups and the related obstacles and concerns in using MBSE. It should be mentioned that I have done this survey for the sake of introducing MBSE to those engineers who are not familiar with this concept. A further objective is an update, for those who are using it, about the current status and popularity of using modeling methods in various industries.
A study by the “Aberdeen Group” demonstrates significant time and cost savings in using model-based techniques in comparison with conventional engineering practices.1
While the literature on model-based development and modeling techniques is extensive in its technical and structural aspects, much work remains to be done on revealing the benefits of MBSE and the difficulties of adopting it as a mandatory, fundamental phase of production lines, which could smooth its adoption in the near future.
This study may contain useful information about the applicability of MBSE, but it has a number of limitations. For instance, the survey was conducted only in some specific regions and focused on specific job positions. Although I have covered a relatively broad range of countries, further research on other job positions should be done to verify the results of this study on a broader scale.
2 SURVEY METHODOLOGY
I distributed my online questionnaire, which includes 11 questions [see Appendix A] among different groups of engineers via the following social networks: LinkedIn, Xing, IBM Rhapsody Forum, SysML forum and Facebook.
The survey was a set of questions administered through a Google Docs Form, consisting of multiple-choice, checkbox and grid questions. It was conducted over a period of 17 days, from 15th November to 3rd December 2012. In all, 55 people responded anonymously to the survey. The responses came from a range of regions; I did not receive any response from South America, the Caribbean, or Middle East Arab countries.
Roughly 60% of the responses were from European countries, which shows that the usage of MBSE is growing rapidly in Europe.
1 “The Transition from 2D Drafting to 3D Modeling Benchmark Report – Improving Engineering Efficiency”, Aberdeen Group, September 2006.
2 I prepared this questionnaire based on my studies of some “MBSE” articles, which I have mentioned in the References part.
3 SURVEY RESULTS
In this part I summarize my study and analysis of the received responses.
3.1 Respondents
In order to segment the respondents’ answers and to understand what type of engineers and which positions are the most common users of MBSE, two questions of my questionnaire were about the respondents’ job position and the industry in which they were working. The sample is heavily biased towards my social networks.
Amazingly only 38% of respondents (21 out of 55) are using MBSE in their current position.
Among the users of MBSE, 57.1% were systems engineers and 33.3% were in other fields of engineering.
Moreover, many of the respondents held management positions, and around 31% of them had not heard about MBSE. This shows that we should carry out more feasibility studies to highlight the advantages of MBSE among managers, who have the power to control projects.
3.2 Active Industries
Based on the received responses, the “Aircraft industry” stands in first position on the list of MBSE users, the “Defense industry” in second position, and the “Banking/Payment and IT industries” in third position in using MBSE in their projects.
I should add here that, according to my research, the “Automotive and Shipping industries” have high potential to use MBSE techniques in their processes, although I did not receive any response from the Shipping industry.
3.3 Know-how Level of MBSE Users
To analyze the degree of know-how among MBSE users, I defined 4 levels of technical knowledge based on practical MBSE experience, as below:
- Basic → Just started
- Intermediate → Between 1 and 2 years of experience
- Advanced → Between 2 and 5 years of experience
- Expert → More than 5 years of experience
As you can see in the following pie chart, almost 43% of the MBSE users who participated in my survey have more than 5 years of practical experience in their jobs. Remarkably, all respondents from the “Defense industry” were in the expert category of users, which illustrates that MBSE is being seriously used in defense systems by experts. The “Aircraft industry” seems more dynamic in this regard, having MBSE users at all levels from beginners to experts. It can be concluded that the Aircraft industry is trying to benefit from MBSE experts while also giving newcomers to MBSE a chance.
3.4 The extent of MBSE usage
To measure the popularity of using MBSE in practice, I divided the answer options of my questionnaire into 4 categories:
- Awareness by training courses, workshops, seminars
- Applying MBSE on Pilot projects
- Applying MBSE on R&D projects
- Adopting MBSE in real programs/projects
Based on the received responses, the “Defense and Aircraft industries” seem to be actively using MBSE in real projects more than the other industries; however, the “Aircraft industry” is not as active as the “Defense industry” in increasing MBSE know-how through training, R&D and pilot projects.
From my working experience in aerospace and from my research, I can add here that the next generation of requirements engineering projects tries to improve the requirements engineering process in connection with MBSE.
Remarkably, the “Banking/Payment industry” seems to be very active in the awareness part of MBSE activity, owing to the fact that online global financial transactions are becoming more and more complex given the wide range of regulations and rules in different countries and states, which requires deeper analysis both in software (security of data transactions) and in hardware (payment devices, embedded card readers).
I summarized the responses, based on the highest and the lowest MBSE focus among the industries actively using MBSE, in the table below:
<table>
<thead>
<tr>
<th>The Extent of MBSE Usage</th>
<th>Highest</th>
<th>Lowest</th>
</tr>
</thead>
<tbody>
<tr>
<td>Awareness</td>
<td>Defense, Banking/Payment Industries More than 81%</td>
<td>Aircraft Industry Around 10%</td>
</tr>
<tr>
<td>Pilot projects</td>
<td>Defense Industry More than 81%</td>
<td>Aircraft Industry Around 10%</td>
</tr>
<tr>
<td>R&D projects</td>
<td>Various Industries 51% - 80%</td>
<td>Aircraft Industry Around 10%</td>
</tr>
<tr>
<td>Real programs/projects</td>
<td>Defense & Aircraft Industries 51% - 80%</td>
<td>Automotive Around 10%</td>
</tr>
</tbody>
</table>
3.5 The Organizations’ Focus on MBSE
To be able to investigate which part of MBSE is more favorable in practice, I asked the question “On which part of the MBSE does your organization focus?” and gave the respondents the possibility of multiple choices. The results show that 56% of users apply MBSE to “System Design”, with “Requirements Management” and “Simulation” in second and third position in practice, respectively.
This result illustrates that most organizations and companies using MBSE focus on the “System Design” features more than on other aspects. Hence, it can be highlighted that MBSE provides valuable benefit at the beginning of production lines, as it creates an ideal product-development architecture for designing, calibrating, and testing the different parts of a system, both individually and in relation to other elements, in a simulation environment.
3.6 Modeling Language & Modeling Tool
To clarify the most popular and most favored modeling languages and modeling tools, I asked a question allowing multiple-choice answers; here is my analysis of the received responses:
“Rhapsody” and “SysML” were selected as the most used modeling tool and modeling language, with 15% choosing the “always” option, while “UML” and “MATLAB”, with 15% and 13% respectively, were the language and tool most frequently marked as used “often”.
I should mention that, based on the received responses, “Rhapsody” and “SysML” seem to be the experts’ first choices for practical use of MBSE. It might be concluded that these two are growing rapidly in the MBSE life cycle of real projects. On the other hand, UML, as the basis of modeling languages, can still be the first step toward model-based engineering.
I would like to point out here that the distribution of my questionnaire in Rhapsody and SysML forums could be suspected of biasing the data collected for this survey; however, according to my experience, these two tools are widely applied in practice, so such a biasing effect can be regarded as rather limited in the overall picture. Further, the possible bias seems justified when weighed against the survey’s benefit of reaching more experts in real projects, disclosing the advantages of MBSE. All in all, these arguments do not disguise the fact that, for a more complete picture, more reliable results should be obtained in similar future research that includes forums for other tools, languages, and techniques.
3.7 MBSE related training
To recognize how much effort organizations and companies officially make to improve the theoretical side of MBSE among their technical teams by offering related trainings and workshops, I asked a question; the responses show that, incredibly, 62% of active MBSE users never received any official training from their organization, and only 2% received a complete technical course financially covered by their company.
In more detail, around 43% of respondents among the MBSE users never had any training in “Modeling Languages”, while around 43% had 2-5 day courses in modeling languages.
52% of respondents among the MBSE users never received any training in the “MBSE Method”, and only 29% had 2-5 day courses in it.
43% of respondents among the MBSE users had 2-5 day courses in “MBSE Tools”, but 38% never had any training in “MBSE Tools”.
This finding may explain why MBSE is not highly popular in general, despite being able to benefit all manufacturing production lines and almost all industries. It can be considered an investment in know-how by managers of organizations, companies, and manufacturers who are seeking a better solution to minimize risk and costs and maximize reliable outputs, besides keeping technical experts motivated by supporting them in improving their skills along their professional path.
3.8 The Value of Modeling Effort
To give the MBSE experts and users the opportunity to share their opinions about the value of modeling efforts based on their experiences in practice and real projects, I asked for the field(s) in which they believe that MBSE can be highly beneficial.
The answers show that “Systems Engineering”, with 89%, is the first field that can obviously benefit from using MBSE. “Software Engineering” and “Hardware Engineering”, with 85% and 74% respectively, reached the second and third positions. This result clarifies the importance not only of system and software engineers being familiar with MBSE but also of their ability to use its techniques in real, complex projects. It should be noted that improving the knowledge of technical teams in the mentioned fields needs strong management support.
3.9 The Barriers in using MBSE
The most important technical barrier to using MBSE is the “lack of related knowledge and skills” in practice, together with difficult access to experts; this reason appeared in around 57% of the responses to my question about what prevents MBSE adoption. The second reason, with 48% of the received answers, is the “lack of perceived value of MBSE”, tied at 48% with “resistance to change”.
It shows that each of us should be more active in making this valuable concept more transparent. We can introduce the benefits of MBSE not only through presentations but also by presenting successful projects that used MBSE in their lifecycle. Universities and the modeling forums can play vital roles in this regard.
The most important managerial barrier to using MBSE, in case the management teams do support the modeling efforts, is the “inability to sufficiently merge and integrate multiple engineering applications involved in the design, production, and inspection of products across the production network”. In the contrary case, the “lack of managerial support” received 43% in my survey as one of the main factors preventing the use of MBSE.
In this regard, we might need to give impressive presentations of successful outputs of real projects to the top managers of organizations, companies, and manufacturers, to highlight the benefits of using MBSE in reducing the risk of failure, especially in production processes, through its simulation feature.
4 CONCLUSION
Model-Based Engineering applies modeling methods and simulation technologies to integrate and manage not only the requirements but also all processes and functions related to product design, development, test, and production. The aim of Model-Based Engineering (MBE) is to minimize risks and avoid late-stage changes and, therefore, the related costs, which can be very critical.
MBSE helps engineers and managers control and handle a project more precisely in early design phases by using simulation features, which can bring out the virtual result of the design before a costly production phase starts. In this case, any amendment, correction, or even change in design can be made much more easily and at lower cost than after the production line has started. Moreover, in some industries, like space and aircraft, testing the final produced version is impossible without using simulation capabilities. Even testing some parts of these huge systems in practice is impossible for various reasons. MBSE is the only solution in this regard.
In conclusion, a model is an ideal representation of various characteristics of a real-world system, such as its structure, requirements relations, behavior, function, and operation. It simulates functionality or behavior and can merge design and development information in a virtual world at a lower budget. It should be noted that, for a high-quality and reliable model-based approach, additional detailed data is vital for downstream analysis, design and development, manufacturing, and control/monitoring processes. For example, in a manufacturing context, a product model is a container not only of the nominal CAD (Computer-Aided Design) geometry, but also of process specifications, GD&T (Geometric Dimensions and Tolerances), material specifications, and many more.
In the end, the analysis I have done can be useful for MBSE providers, marketers, and analysts to understand the position of MBSE; for technical managers to understand that their industries need to benefit from the advantages of MBSE; and for manufacturers to figure out the risk minimization and cost savings offered by using MBSE in their design and production lines.
The small sample of my survey implies that the highly segmented data should not be relied on without further verification and a more controlled selection process. However, it yields a first valid indication for trends in the field and gives a guideline for how to conduct further research in this field, aiming towards an adoption of on-demand MBSE technologies.
Based on my analysis, in order to use MBSE seriously, it is vital to invest more in communicating the MBSE concept and its benefits through training and workshops. Advertising this technology is crucial in order to reduce doubts and ambiguities about it and to increase both its usage and its support.
5 End Sections
5.1 Appendix: “MBSE Applicability Analysis”
Questionnaire
1. In which industry are you working? If your industry is not in the list, please enter it in the “Other” field.
<table>
<thead>
<tr>
<th>Industry Options</th>
<th>Frequency</th>
</tr>
</thead>
<tbody>
<tr>
<td>Aircraft</td>
<td>0%</td>
</tr>
<tr>
<td>Automotive</td>
<td>0%</td>
</tr>
<tr>
<td>Banking / Payment Solutions</td>
<td>0%</td>
</tr>
<tr>
<td>Defense</td>
<td>0%</td>
</tr>
<tr>
<td>IT</td>
<td>0%</td>
</tr>
<tr>
<td>Marine / Shipping</td>
<td>0%</td>
</tr>
<tr>
<td>Medical</td>
<td>0%</td>
</tr>
<tr>
<td>Renewable Energy (Solar, Wind, etc.)</td>
<td>0%</td>
</tr>
<tr>
<td>Space Systems</td>
<td>0%</td>
</tr>
<tr>
<td>Telecommunication</td>
<td>0%</td>
</tr>
<tr>
<td>Other:</td>
<td>100%</td>
</tr>
</tbody>
</table>
2. Are you using MBSE?
- Yes
- No
3. What is your job position? If your job position is not in the list, please enter it in the “Other” field.
<table>
<thead>
<tr>
<th>Job Position Options</th>
<th>Frequency</th>
</tr>
</thead>
<tbody>
<tr>
<td>Business Manager</td>
<td>1%</td>
</tr>
<tr>
<td>Engineer (Except system engineer)</td>
<td>1%</td>
</tr>
<tr>
<td>Executive Manager</td>
<td>1%</td>
</tr>
<tr>
<td>Project Manager</td>
<td>1%</td>
</tr>
<tr>
<td>System Engineer</td>
<td>1%</td>
</tr>
<tr>
<td>Other:</td>
<td>96%</td>
</tr>
</tbody>
</table>
4. In which country are you working now?
- Europe - Schengen Countries
- Europe - Non Schengen Countries
- North America
- South America
- Middle East - Arab countries
- Middle East - Non Arab countries
- Africa
- Asia
- Oceania Countries (Australia, New Zealand, ...)
- Caribbean Countries (Bermuda, Haiti, ...)
5. How knowledgeable are you in MBSE?
- I have no idea about MBSE.
- Basic (Just started)
- Intermediate (between 1 and 2 years of experience)
- Advance (between 2 and 5 years of experience)
- Expert (More than 5 years of experience)
6. To what extent has your company/organization done the following activities?
<table>
<thead>
<tr>
<th>Activity Options</th>
<th>Frequency</th>
</tr>
</thead>
<tbody>
<tr>
<td>Applying MBSE on Pilot projects</td>
<td>0%</td>
</tr>
<tr>
<td>Applying MBSE on R&D projects</td>
<td>0%</td>
</tr>
<tr>
<td>Adopting MBSE on programs/ projects</td>
<td>0%</td>
</tr>
<tr>
<td>Increasing MBSE awareness (e.g. by training courses, workshops, seminars)</td>
<td>0%</td>
</tr>
</tbody>
</table>
ISSN 2229-5518
http://www.ijsr.org
7. On which part of the MBSE does your organization focus? (Multiple choices are acceptable) If your focus of using MBSE is not listed below, please enter it in the "Other" field.
- System Design
- System Validation
- Simulation
- Requirements Management
- Verification Planning/ Test Execution
- Trade-off Studies
- None of above
- Other:
8. To what extent are you using the following "modeling languages" and/ or "modeling tools" in your MBSE effort? If you are not using MBSE at all, please choose the last row (None of Above) and first column (Never).
<table>
<thead>
<tr>
<th>Language</th>
<th>Never</th>
<th>Sometimes</th>
<th>Often</th>
<th>Always</th>
</tr>
</thead>
<tbody>
<tr>
<td>AADL</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Enterprise Architecture</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>FFBD</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>IDEF0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>MATLAB</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>OPM</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Rhapsody</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Simulink</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Statemate</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>SysML</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>UPDM</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>UML</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Visio</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>XWiki</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Eclipse</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Other COTS model</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>
9. Is any of the following trainings offered by your company/ organization officially to the team members involved in the modeling effort?
- Home grown modeling tools
- None of Above
10. Do you believe that there is any value of the MODELING effort in the following fields? (Multiple choices are acceptable) Please add any other field that you think the modeling effort can be beneficial.
- Hardware engineering
- Project management
- Software engineering
- Systems engineering
- Test engineering
- Mechanical Engineering
- Other:
11. Which of the following issues are/were preventing your company/ organization from using MBSE? (Multiple choices are acceptable) If you encounter other obstacles that are not in the list, please enter them in the "Other" field.
- Concerns about MBSE learning curve
- Lack of management support
- Lack of perceived value of MBSE
- Lack of related knowledge and skills
- Lack of related tools accessibility
- Resistance to change
- Risk associated with the adoption of MBSE
- Other:
A Protocol for Distributed Collision Detection
Tom Ching Ling Chen
School of Computer Science
McGill University
Montreal, Quebec, Canada H3A 2A7
ching.chen@mail.mcgill.ca
Clark Verbrugge
School of Computer Science
McGill University
Montreal, Quebec, Canada H3A 2A7
clump@cs.mcgill.ca
Abstract—Scalability of multiplayer games can be improved by client-side processing of game actions. Consistency becomes a concern, however, in the case of unpredictable but important events such as object interactions. We propose here a new motion-lock protocol for distributed collision detection and resolution. The motion-lock protocol improves performance of motion prediction by giving stations time to communicate and agree on the detected collisions. This reduces the divergence of distributed object states and post-collision trajectories. Offline and online simulation results show the motion-lock protocol results in qualitative and quantitative improvements to consistency, with negligible network impact and a minimal sacrifice in the responsiveness of player controls. Our design can be used to hide latency and reduce server load in current multiplayer online games, improving scalability and furthering fully distributed designs.
I. INTRODUCTION
To reduce bandwidth and mask network latency, many multiplayer games make use of dead-reckoning algorithms to predict the motion of game objects. Adaptive designs can be quite successful, predicting behaviour with low error for many complex, if locally smooth motions [1]. Dead-reckoning is least successful, however, in the presence of unpredictable, dynamic interactions such as collisions. Errors in dead-reckoned motion can affect the time, location, and even detection of collisions, easily resulting in large visual errors as game state deviates and is eventually synchronized. Figure 1 shows examples where errors introduced by dead-reckoning cause a collision to be missed or detected erroneously (respectively). The potential for this behaviour is an important aesthetic concern in client/server architectures, and has a major impact on overall game consistency in peer-to-peer contexts.
Fig. 1. Missed and false collisions when error in the location of one object is greater than the sum of the radius of the objects.
Here we investigate a new protocol for improving collision consistency between pairs of distributed objects. Our motion-lock protocol works by observing object behaviour and preventing unpredictable local object movements in the presence of potential collisions. By matching limitations on object activity to network delay, the prediction of future collisions can be greatly improved, in optimal cases resulting in perfect collision synchrony. This simple design has few drawbacks, adding only minimal additional network cost and no discernible user impact.
We evaluate our design experimentally, measuring and comparing behaviour to an industry-standard dead-reckoning design. Our design shows significant benefit to game consistency, halving the inconsistency times over more straightforward approaches on average, and noticeably improving visual appearance.
A. Contributions
Our work makes the following specific contributions:
- We present a new protocol for improving collision consistency in a distributed, peer-to-peer environment. Our approach has low network and user impact, but provides significant benefits to collision consistency.
- Our protocol is examined in both offline and online simulation, and we experimentally evaluate behaviour under our approach in comparison to other designs. Our motion-lock protocol greatly reduces inconsistency time and improves visual appearance.
In the next section we give background and related work on distributed motion prediction and collision detection. Section III describes our main protocol design, and Section IV gives experimental data comparing our design with others. We conclude and discuss future work in Section V.
II. BACKGROUND AND RELATED WORK
Modern multiplayer games are based on various designs, extending from basic client/server to more distributed implementations. In either context we assume game object data is either managed locally, or replicated from a remote master.
Within such a distributed context, inconsistencies can occur as network messages are lost or delayed. Several designs have been proposed that improve state consistency. Optimistic approaches, such as TimeWarp [2], rewind and recalculate state when new information is discovered; this helps correctness [3], although it is not always suitable for real-time games. A more practical technique is Local lag, which introduces a delay in
local event processing, providing time for the corresponding network messages to reach their destinations [2]. A similar approach is used in bucket synchronization, which also delays events, assigning them to discretized-time buckets for current or future processing [4]. This allows stations to move at variable frame-rates and further accommodates network latency. Our design for motion-lock is based on similar delay principles, although we apply it to predicted rather than known events, and only to collision events rather than all events.
In more distributed contexts such as DIS [5] (and in actual games), latency and packet loss are compensated by dead reckoning, extrapolating the future state of an object based on past data and an underlying motion model. The same design of course also works to limit bandwidth, only sending updates when a master believes a replica has a sufficient deviation. The Position History-Based Dead Reckoning (PHBDR) protocol [1] builds on this design, further reducing bandwidth and improving remote tracking.
Many improvements to position estimation have been proposed. Tumanov et al. use a Kalman filter to predict the future state of the master on the sender side [6]. The predicted master state is then sent to replicas such that it will arrive in time without the need for extrapolation at the receiver side. Other work has focused on developing heuristics that can be applied to the basic dead-reckoning approach. Cai et al. adaptively change the error threshold of the dead reckoning algorithm depending on the distance between the objects. When two objects enter each other’s area of interest such that the distance between them is small, the error threshold is reduced so that the update frequency is increased to reduce prediction error [7]. Similarly, the pre-reckoning algorithm overrides the error threshold and sends updates immediately to the replica if the motion of the master shows certain behaviours, such as starting to move, coming to a stop, and making a sharp turn [8]. Kenny et al. calculate the deviation error of the replica and send back the error to the master, so that the master can change the error threshold accordingly. This creates a closed-loop control system to adjust the update frequency [9].
Some research into distributed collision detection has also been performed, although primarily in terms of improving position estimates at collision points. Ohlenburg adaptively increases the rate of updates from master to replicas when objects are close to each other and may potentially collide [10]. The results show great improvement in object collisions, but also show that by increasing the update rate the bandwidth usage increases dramatically. For more predictable, non-player controlled objects, the Deterministic Object Position Estimator uses an object’s past trajectory to predict the motion of the object and the collision point and time [11]. The estimator gives accurate results for objects that follow predictable trajectories, but of course does not apply to player-controlled objects or other objects with less predictable movements. Our technique offers a heuristic solution to collision handling that allows game peers to detect and resolve collisions as locally as possible, based on the general principle of avoiding server or other indirection. For distributed hybrid and P2P games this has been an effective technique, previously applied to reducing the communication cost of interest management [12] as well as the cost of position updates [13].
We perform part of our experimental testing within NetZ, a distributed game middleware for multiplayer games [14]. NetZ uses a master duplica (replica) approach, and a publish-subscribe model to automate the object replication process. NetZ implements a number of features to help synchronize game objects and maintain consistency, including PHBDR, local-lag, and bucket-synchronization [15].
III. MOTION-LOCK PROTOCOL
Our work is intended to improve dead-reckoning, allowing either predictive or distributed resolution of object collisions. We assume a core scenario consisting of two objects undergoing potential collision, with one or both objects not under local control. Network latency and jitter mean the positions of the master and duplica objects can deviate, and so in the absence of central authority multiple stations need to come to agreement on the existence of a collision, and if so on the resulting state.
Performance and real-time requirements further complicate this scenario. Consistency can be provided by, for instance, each station simply informing others when collisions are detected. Network latency, however, means distant stations display collisions late, resulting in visually confusing deviations and subsequent corrections to the (post-collision) state.
Below we present our motion-lock protocol solution. We first discuss the basic protocol, and then present two extensions, addressing multi-object collision scenarios and improving post-collision behaviour.
A. Motion-Lock
A naive solution to distributing collision resolution suffers from network latency, with distant stations being forced to make use of late data. Our motion-lock protocol improves behaviour by making predictions of collisions, allowing sufficient time for objects to receive notifications. To ensure predicted behaviour is not invalidated by local client (player) actions or other events, we briefly lock the motion of objects in such situations. The resulting guaranteed and future-scheduled collision can be transmitted to other stations ahead of time, improving game consistency.
Predicting collisions is straightforward, and we use a simple linear model, leaving more complex modeling for future work. To determine a potential collision, at every frame the future trajectories of objects are estimated using linear extrapolation. Estimated trajectories are checked for intersection, and if a pair of objects will collide at a future time, the collision point and time are determined.
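For two circular objects under linear extrapolation, the prediction step above has a closed form: in the relative frame, the objects touch when a quadratic in time has a non-negative root. The sketch below is a minimal illustration of this idea, not the paper's implementation; the function name, circular collision volumes, and 2D setting are assumptions.

```python
import math

def predict_collision(p1, v1, r1, p2, v2, r2):
    """Earliest future time at which two linearly moving circles touch.

    In the frame of object 1, with relative position p and relative
    velocity v, the circles touch when |p + v*t| = r1 + r2, i.e.
    (v.v) t^2 + 2 (p.v) t + (p.p - (r1+r2)^2) = 0.
    Returns None if the extrapolated trajectories never intersect.
    """
    px, py = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = v2[0] - v1[0], v2[1] - v1[1]
    rsum = r1 + r2
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - rsum * rsum
    if a == 0.0:                      # no relative motion
        return 0.0 if c <= 0.0 else None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:                    # closest approach is wider than rsum
        return None
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0.0 else None

# Two unit circles approaching head-on from 10 units apart close the
# 8-unit gap at a combined speed of 2, so they touch at t = 4:
t_hit = predict_collision((0, 0), (1, 0), 1.0, (10, 0), (-1, 0), 1.0)
```

Taking the smaller quadratic root gives the first moment of contact rather than the (unphysical) exit time.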
To prevent external events acting on locally mastered objects from altering predictions, object motions are locked for the duration of the protocol, ensuring regular and predictable movement. Naturally, if object motions are locked for too long, players may feel a loss of control over game objects. We thus define a maximum locking threshold $T_{\text{lock}}$: when a collision is
predicted for two objects, their motions are locked only when the time remaining before the collision is no more than \( T_{\text{lock}} \). Following known thresholds in user tolerance of latency, and matching our own experience, a threshold of around 100ms is maximal [16]. A practical value for \( T_{\text{lock}} \) depends of course on the specific game and expected network latency.
Once the objects’ motions are locked they are committed to the collision, and details of the scheduled collision are sent to all relevant stations. Under optimal circumstances all stations will receive notifications prior to the collision; more generally the protocol will reduce the duration of local inconsistency by up to \( T_{\text{lock}} \).
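The lock decision itself reduces to comparing the predicted time-to-collision against \( T_{\text{lock}} \). The following sketch shows one way a station might make that commitment; the class, method names, and the 100ms default (following the user-tolerance bound cited above) are illustrative assumptions, not the paper's code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MotionLock:
    """Per-object lock state; names and structure are illustrative."""
    t_lock: float = 0.100          # ~100 ms: upper bound on perceived loss of control
    committed_at: Optional[float] = None

    def on_prediction(self, now, collision_time):
        """Lock and commit only once the collision is at most t_lock away."""
        if self.committed_at is not None:
            return False           # already committed to an earlier collision
        if collision_time - now <= self.t_lock:
            self.committed_at = collision_time
            return True            # lock motion now; broadcast the scheduled event
        return False               # too early: keep the object responsive

lock = MotionLock()
lock.on_prediction(now=0.0, collision_time=0.5)    # 500 ms away: no lock yet
lock.on_prediction(now=0.42, collision_time=0.5)   # within T_lock: commit
```

Deferring the lock until the collision is inside the window keeps the player in control for as long as possible while still leaving up to \( T_{\text{lock}} \) for notifications to propagate.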
B. Spatial-temporal Bucket Synchronization
Our basic protocol considers only pair-wise collisions, as is common in practice. In dense situations, however, an object may experience multiple collisions in short periods, and our collision predictions can be negated or altered by other nearby collisions inducing new interactions. Best accuracy would be achieved by discarding commitments and revising predictions, but this has large costs in reversing previous notifications.
A more efficient design is achieved by prioritizing the first collision, and ignoring any later detected collisions with a locked object. Visually, however, the objects will appear to penetrate each other as collisions are ignored. We thus augment our basic protocol with spatial-temporal synchronization for improving the appearance and consistency of multi-object collision scenarios.
This technique works by grouping overlapping collisions and resolving them in detection order. When an object \( X \) is found to collide with an already locked object \( Y \), if the projected collision time is prior to \( Y \)’s existing, committed collision time, the collision of \( X \) and \( Y \) is delayed accordingly. \( X \) locks its motion and becomes committed to the new collision time. The end result is that all three objects resolve their collisions at the same time. This design extends naturally to larger groups of colliding objects.
Note that all the objects newly joined to the collision must have their original collision time within the \( T_{\text{lock}} \) threshold of the first collision. This means that all overlapping collisions are both spatially and temporally close to the first pair of colliding objects. Since humans do not perceive collision points accurately [17], the small difference in collision point implied by reordering collisions is not observable, and outweighs the very noticeable visual confusion of inter-penetrated objects.
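The grouping rule above can be sketched as follows (a hedged illustration; the function name and data shapes are our assumptions, not the paper's code):

```python
# Sketch of spatial-temporal grouping: a collision newly detected against an
# already-locked object is delayed to the existing committed time, so the
# whole group resolves together; later collisions are ignored, as in the
# basic protocol.

def merge_collision(committed_time, new_time, bucket, obj):
    """Return the resolution time for a collision detected against a locked
    object, adding obj to the group when the collision is absorbed."""
    if new_time <= committed_time:
        bucket.add(obj)
        return committed_time   # resolve with the committed group
    return None                 # later collision: ignored (basic protocol)
```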
C. Post-Collision Trajectory Agreement
Identification of a collision event in time is not entirely sufficient for good visual consistency. Whether due to interpolation error or local events, stations may resolve a collision with objects in slightly different states, and small differences in object position or orientation can result in large deviations in post-collision trajectories. This can be corrected by communicating full state information, either pre- or post-collision, although doing so can result in visual discrepancies.
Our design improves appearance by instead only partially correcting object state. As well as collision time we thus also send a post-collision direction vector, and use this to ensure objects end up heading in the same direction. After the collision objects on both the detected and any informed stations will then travel in at least parallel if not identical directions. Later position corrections may still be required, but with similar trajectories the differences tend to be smaller and later position corrections less noticeable.
It is possible of course for two stations to both detect the same collision and send conflicting final trajectories to each other. Further agreement protocols could be added to our protocol to handle this situation; in our implementation work we give the local trajectory priority. A more detailed investigation of this problem is left for future work.
IV. EXPERIMENTAL ANALYSIS
To evaluate the effectiveness of the motion-lock protocol, we implemented our design in an offline simulator and in an online multiplayer middle-ware used in the gaming industry. The offline simulator allows us to easily test the protocol in different network conditions, while the online middle-ware allows us to test the protocol in a real network. For comparison, in both offline and online systems we also set up control and naive-send protocols. We experimentally compare the duration of inconsistency due to collisions, as well as the deviation in post-collision trajectories.
Our offline simulator has adjustable network latency and packet-loss rates. The simulator is written in python and modeled using the Discrete Event System Specification (DEVS) formalism; further details can be found in [18]. The station module consists of a simple 2D physics simulation that contains circular objects moving and colliding with each other. A position history-based dead reckoning protocol with a polynomial motion model is implemented in the simulator to synchronize the motion states of the master and the replica. The machine used for the offline simulator is an Intel Core 2 2GHz machine with 2GB of memory and runs on 32 bit Windows Vista.
We base our online investigation on Quazal’s NetZ middle-ware [14]. NetZ is a framework for managing distributed game states, and has been used by gaming companies to provide online multiplayer functionality. NetZ includes a number of techniques, such as local-lag and PHBDR, to help reduce the effect of network latency. We implemented the protocols within NetZ, building a test suite from a supplied 3D physics simulation involving simple spherical objects. Tests were run on two machines connected through residential ISPs to the internet, the machine above with a 6Mbps upload/800Kbps download connection and an Intel Core 2 Quad 2.5GHz machine with 4GB of memory running 64-bit Windows 7 and using a 3Mbps/512Kbps connection.
A. Offline Experiment Setup
For our offline experiments we implemented control and naive-send protocols to compare with motion-lock. The control protocol only has the underlying PHBDR protocol to synchronize the motion states. No collision agreement mechanism is implemented, and if a collision is missed it will only be corrected in terms of position updates from later messages. This represents a worst case for distributed collision handling. The naive-send protocol sends out a message whenever a collision is detected. All participating stations will eventually be consistent (assuming the message is actually received), but the interval in which a given collision is not represented at both stations is bounded only by message latency. We compare with the naive-send protocol in order to evaluate the effect of reducing the collision inconsistency interval in motion-lock.
All protocols behave well if movement of all objects is highly predictable, such as with purely linear movement. We thus set up scenarios of greater complexity, interacting combinations of circular and linear movement. All scenarios involve two stations, A and B, such that A contains master \( M_a \) and replica \( R_b \) and B contains master \( M_b \) and replica \( R_a \).
- **CIRCULAR–LINEAR–COLLIDE (CLC)** In this scenario, \( M_b \) is moving in a straight line while \( M_a \) on A is moving in a circle to simulate less predictable movement. The extrapolated state of \( R_a \) is thus inaccurate. The objects will collide at one point; however, \( R_a \)’s deviation may cause it to miss the collision with \( M_a \). This scenario evaluates how the protocols deal with missed collisions.
- **CIRCULAR–LINEAR–PASS (CLP)** Motion of the objects is similar to CLC except the objects do not collide and instead miss each other by at least 1 object radius. Again, however, the state of \( R_a \) is inaccurate and in this case may cause false collisions.
- **CIRCULAR–CIRCULAR–COLLIDE (CCC)** Here both objects are moving in circles and the objects will collide at one point. This is to evaluate how the protocols perform when the states of both replicas are inaccurate.
- **CIRCULAR–CIRCULAR–PASS (CCP)** This scenario is similar to the CCC scenario except the objects do not collide, missing each other by at least an object radius.
Three different network conditions are considered for each scenario. 1) The good network condition has 50ms latency and 10% packet-loss rate, 2) the moderate network condition has 100ms latency and 20% packet-loss rate, and 3) the congested network condition has 150ms latency and 40% packet-loss rate. Each combination of scenarios and conditions is run 50 times with different random seeds to generate 50 collisions. If a collision is detected, the run continues for 500ms more before terminating. If there is no collision, the run terminates after 3 seconds. The time of collision and deviation errors are recorded. The difference between the collision times of the two stations is the collision inconsistency interval. The post-collision deviation error is calculated by summing the deviation error from the time of collision to the end of the 500ms interval, sampled at every 20ms.
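The two measurements described above can be expressed as simple functions (an illustrative sketch, not the simulator's actual code; the 500ms window and 20ms sampling follow the text, and positions are 1-D functions of time here):

```python
# Sketch of the two metrics: the collision inconsistency interval (difference
# in collision times across stations) and the post-collision deviation summed
# every 20 ms over a 500 ms window.

def inconsistency_interval(t_collision_a, t_collision_b):
    return abs(t_collision_a - t_collision_b)

def summed_deviation(master_pos, replica_pos, t_collision,
                     window=0.5, step=0.02):
    """Sum |master - replica| every `step` seconds for `window` seconds after
    the collision; master_pos/replica_pos map a time to a position."""
    total, t = 0.0, t_collision
    while t <= t_collision + window + 1e-9:
        total += abs(master_pos(t) - replica_pos(t))
        t += step
    return total
```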
**B. Results**
Qualitatively, the results show significant visual improvement, with the benefit magnified in cases of greater latency and packet loss. For the control protocol large errors are generated when the stations fail to both detect or miss a collision, as one master reacts to the collision while the other ignores it. This is improved by naive-send, although the delay in notification still causes noticeable gaps in the position of objects at the time of collision, as objects continue to move. Motion-lock further reduces the gap, resulting in objects showing more natural collisions. We note that setting \( T_{\text{lock}} \) appropriately here is important as well; if set too large, objects can appear to pull toward each other due to their motion being locked into the movement dictated by the prediction model.
Quantitatively, we measured the time of collision on each station to determine the duration of the inconsistency interval caused by the collision, as well as the positions of the objects after collisions to determine the post-collision deviation. Below we present and discuss numerical results from our offline simulation, followed by our online results where we extend the problem to a multi-object collision scenario.
1) **Collision Inconsistency Interval:** To determine how consistent stations are in terms of displaying collisions, we compute for each of our scenarios a collision inconsistency interval. In each case we recorded the time of collision detected on each station, calculating the relative differences to determine the interval. Here we compare the collision inconsistency intervals of only the naive-send and motion-lock protocols and not the control, since without an agreement protocol in the control the interval grows arbitrarily large if one station misses a collision.
Figures 2 and 3 show the collision inconsistency interval in our offline simulation for the CLC, CLP, CCC, and CCP scenarios respectively. Naive-send, as expected, suffers from the fact that if both stations do not simultaneously detect the collision the communication has the network latency as a lower bound. The motion-lock protocol generally and sometimes dramatically reduces the collision inconsistency interval, and even in high congestion scenarios the motion-lock protocol keeps the inconsistency time well below network latency.
2) **Post-Collision Trajectory Deviation:** The collision consistency interval measures consistency between stations. The visual disruption to a specific station, however, is better measured in terms of the difference in position between master and replica, since that bounds the amount of required visual correction. To compare the protocols we thus measured the deviation errors of the replicas for 400ms after each collision.
Figure 4 shows that the motion-lock protocol has significantly smaller deviation errors than other protocols. The improvement is naturally reduced in the presence of fast and hard-to-predict motion, where the accuracy of collision prediction has a strong impact on the ability to detect collisions early. In Figure 5 we can see that our simple linear prediction model is unable to improve performance beyond that of naive-send for the CCC and CCP tests.
**C. Online Experiment**
In our online experiment we evaluate the behaviour of our protocol within NetZ, applied to more complex, multi-object collision scenarios. This allows us to test the effectiveness of the spatial-temporal bucket synchronization algorithm, and to do so under real network conditions. As well as the control, naive-send, and motion-lock protocols, we thus also have data for two different versions of the motion-lock protocol, with and without the spatial-temporal bucket synchronization.
As a multi-object collision scenario we set up two stations connected through the internet. Station $A$ contains one master and 7 replicas, while station $B$ contains 7 masters and 1 replica. The objects are repeatedly and synchronously given forces that cause them to all collide at (approximately) the same point and time. After each collision, a 3 second period is given for the objects to finish bouncing, and the process is repeated. For each protocol the scenario runs for 20 minutes.
For deviation error analysis, object positions are recorded every 20ms between frames. Replica deviation errors are calculated by processing the locally stored data after the tests have finished. Since stations run at slightly different frame rates, we interpolate master positions at intermediate times required for matching replica actions. The scenario can also create many collisions, and so instead of isolating each collision and measuring the post-collision deviation, we sum the deviation for each object from the beginning to the end of each 20-minute run. To estimate the impact on user responsiveness, we recorded the proportion of direction change commands discarded due to motion-lock.
1) **Results:** Two types of collisions can be observed in our scenario: actual multi-object collisions, involving several objects at a single point and time, and consecutive collisions, where one object experiences a rapid series of collisions with several others. The latter in particular amplifies error for the control protocol, and complex collisions show significant visual errors, with objects undergoing corrections of up to several object diameters. This problem is reduced, but still present, in the case of naive-send; long collision inconsistency intervals allow object states to diverge noticeably.
In the motion-lock protocol without the spatial-temporal bucket synchronization, collisions between a locked object and an unlocked object are ignored. Thus, although it provides an obvious visual improvement over naive-send, we still observed many object penetrations. With the addition of spatial-temporal bucket synchronization, no penetrations are observed. This represents a further significant improvement, although the benefit is slightly mitigated by more actual correction jumps, induced by the manipulation of collision time inherent in spatial-temporal bucket synchronization.
Experimental data showing total, average and maximum deviation error and average and maximum inconsistency is shown in Table I. From this we can see that motion-lock improves both inconsistency time and distance error. Numerically, spatial-temporal bucket synchronization shows some degradation over the base motion-lock, correlating with the increased number of corrections it requires.
Network performance is given for station A, having 1 master interact with 7 replicas, and shown in the right side of Table I. Neither the naive-send nor the motion-lock protocols require significantly more network bandwidth than the control. Unlike state updates, data required for collision agreement are only sent around the collision time, and thus form a small proportion of overall bandwidth costs.
We also considered the impact of locking motion on user control. For the two motion-lock protocols, around 3% to 4% of the commands to change the objects’ motion are ignored due to motion-locking. Further real-game and real-player testing is of course required, but the ratio is low, and could be reduced further at the cost of reduced collision consistency.
V. CONCLUSIONS
In multiplayer games ensuring stations agree on the existence and result of each collision is important for game consistency, and critical to game immersion and ensuring fairness. We have presented a design that improves the scalability of collision detection, providing an efficient and practical means for client stations to resolve collisions independent of any central authority. This applies to P2P designs, but also as an optimistic approach for increasing visual responsiveness in client/server architectures. Our technique is validated in simulation under a variety of conditions, as well as through testing with actual game network middle-ware in a real network.
There are a number of variations and possible improvements to the basic design we presented here. Specific game contexts, for instance, will strongly affect how well motion-lock performs—in very sensitive collision contexts, such as steering a fast object through a dense field of obstacles, even the small reduction in user-control we observe may be excessive. Further validation is required, and improvements may be possible by specifying or identifying specific motions and game environments, locking some object motions but not others. Future work is also required to address cheating concerns; our protocol is designed for simplicity and efficiency more than security, and practical game contexts would require the protocol be hidden or well protected to prevent abuse.
### Acknowledgments
This work is supported by the Natural Science and Engineering Research Council of Canada. We thank Quazal for the use of their NetZ middleware.
### References
University of California at Berkeley
College of Engineering
Department of Electrical Engineering and Computer Science
CS 162
Spring 2011
I. Stoica
FIRST MIDTERM EXAMINATION
Wednesday, March 9, 2011
INSTRUCTIONS—READ THEM NOW! This examination is CLOSED BOOK/CLOSED NOTES. There is no need for calculations, and so you will not require a calculator, Palm Pilot, laptop computer, or other calculation aid. Please put them away. You MAY use one 8.5” by 11” double-sided crib sheet, as densely packed with notes, formulas, and diagrams as you wish. The examination has been designed for 80 minutes/80 points (1 point = 1 minute, so pace yourself accordingly). All work should be done on the attached pages.
In general, if something is unclear, write down your assumptions as part of your answer. If your assumptions are reasonable, we will endeavor to grade the question based on them. If necessary, of course, you may raise your hand, and a TA or the instructor will come to you. Please try not to disturb the students taking the examination around you.
We will post solutions to the examination as soon as possible, and will grade the examination as soon as practical, usually within a week. Requests for regrades should be submitted IN WRITING, explaining why you believe your answer was incorrectly graded, within ONE WEEK of the return of the examination in class. We try to be fair, and do realize that mistakes can be made during the grading process. However, we are not sympathetic to arguments of the form “I got half the problem right, why did I get a quarter of the points?”
(Signature): __________________________________________
(Name—Please Print!): __________________________________________
SID: _______________________________
Discussion Section (Day/Time): _______
<table>
<thead>
<tr>
<th>QUESTION</th>
<th>POINTS ASSIGNED</th>
<th>POINTS OBTAINED</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>10</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>15</td>
<td></td>
</tr>
<tr>
<td>3</td>
<td>15</td>
<td></td>
</tr>
<tr>
<td>4</td>
<td>20</td>
<td></td>
</tr>
<tr>
<td>5</td>
<td>20</td>
<td></td>
</tr>
<tr>
<td>6</td>
<td>20</td>
<td></td>
</tr>
<tr>
<td>TOTAL</td>
<td>100</td>
<td></td>
</tr>
</tbody>
</table>
**Question 1. Miscellaneous (10 points)**
For each of the following statements, indicate whether the statement is True or False, and provide a very short explanation of your selection (2 points each).
a. Several threads can share the same address space.
**Rationale:** True. Two threads in the same process will share the process’ address space.
b. Changing the order of semaphores’ operations in a program does not matter.
**Rationale:** False — the order matters. If a semaphore is initialized to 0, a thread that calls P() first will block until some other thread calls V(), so P() and V() cannot be freely reordered. For another example in which you cannot change the order of semaphore operations see Lec5.13.
c. Paging leads to external fragmentation.
**Rationale:** False. All pages have the same size, so there is no external fragmentation (paging suffers from internal fragmentation instead).
d. FIFO scheduling policy achieves lowest average response time for equal size jobs.
**Rationale:** True. If all jobs have equal size, FIFO is equivalent to SRTF, which is the optimal discipline for minimizing response time.
e. LRU exhibits the Belady anomaly.
**Rationale:** False. LRU guarantees that the set of pages in a cache of size X is a subset of the pages held by a cache of size X+1, so adding memory can never increase misses (see Lec11.24).
Question 2. Deadlock (15 points)
Consider a system with four processes P1, P2, P3, and P4, and two resources, R1, and R2, respectively. Each resource has two instances. Furthermore:
- P1 allocates an instance of R2, and requests an instance of R1;
- P2 allocates an instance of R1, and doesn’t need any other resource;
- P3 allocates an instance of R1 and requires an instance of R2;
- P4 allocates an instance of R2, and doesn’t need any other resource.
(5 points each question)
(a) Draw the resource allocation graph.
(b) Is there a cycle in the graph? If yes, name it.
Yes. P2 and P4 are running, P1 is waiting for R1, and P3 is waiting for R2; the cycle is P1 → R1 → P3 → R2 → P1.
(c) Is the system in deadlock? If yes, explain why. If not, give a possible sequence of executions after which every process completes.
There is a cycle, but no deadlock.
- P2 finishes and releases R1;
- P4 finishes and releases R2;
- P1 acquires R1, finishes, and releases R1 and R2;
- P3 acquires R2, finishes, and releases R1 and R2.
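The reduction sequence above generalizes to a simple detection algorithm: repeatedly run any process whose outstanding request fits in the currently available instances; the system is deadlocked only if some processes can never finish. A short sketch (ours, not part of the exam):

```python
# Deadlock detection by graph reduction for multi-instance resources.
# total: {resource: instances}; alloc/request: {process: {resource: n}}.

def deadlock_free(total, alloc, request):
    avail = {r: total[r] - sum(a.get(r, 0) for a in alloc.values())
             for r in total}
    done, progress = set(), True
    while progress:
        progress = False
        for p in alloc:
            if p in done:
                continue
            if all(avail[r] >= n for r, n in request[p].items()):
                for r, n in alloc[p].items():   # p finishes, releases all
                    avail[r] += n
                done.add(p)
                progress = True
    return len(done) == len(alloc)

# The question's system: two instances each of R1 and R2.
total = {"R1": 2, "R2": 2}
alloc = {"P1": {"R2": 1}, "P2": {"R1": 1}, "P3": {"R1": 1}, "P4": {"R2": 1}}
request = {"P1": {"R1": 1}, "P2": {}, "P3": {"R2": 1}, "P4": {}}
```

Running `deadlock_free(total, alloc, request)` confirms the system above is not deadlocked despite the cycle.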
Question 3. Synchronization (15 points)
Consider a set of queues as shown in the above figure, and the following code that moves an item from a queue (denoted “source”) to another queue (denoted “destination”). Each queue can be both a source and a destination.
```c
void AtomicMove(Queue *source, Queue *destination) {
Item thing; /* thing being transferred */
if (source == destination) {
return; // same queue; nothing to move
}
source->lock.Acquire();
destination->lock.Acquire();
thing = source->Dequeue();
if (thing != NULL) {
destination->Enqueue(thing);
}
destination->lock.Release();
source->lock.Release();
}
```
Assume there are multiple threads that call AtomicMove() concurrently. (5 points each question)
(a) Give an example involving no more than three queues illustrating a scenario in which AtomicMove() does not work correctly.
If one thread transfers from A to B, another from B to C, and another from C to A, then you can get deadlock if they all acquire the lock on their first queue before any of them acquires the second.
(b) Modify AtomicMove() to work correctly.
One solution is to impose a total order on how locks are acquired/released. The following code uses the source/destination object addresses to impose such an order, i.e., the lock of the object with the higher address is always acquired first (the modified code is in bold):
```c
void AtomicMove (Queue *source, Queue *destination) {
Item thing; /* thing being transferred */
if (source == destination) {
return; // same queue; nothing to move
}
if (source > destination) {
source->lock.Acquire();
destination->lock.Acquire();
} else { // source < destination
destination->lock.Acquire();
source->lock.Acquire();
}
thing = source->Dequeue();
if (thing != NULL) {
destination->Enqueue(thing);
}
if (source > destination) {
source->lock.Release();
destination->lock.Release();
} else { // source < destination
destination->lock.Release();
source->lock.Release();
}
}
```
(c) Assume now that a queue can be either a source or a destination, but not both. Is AtomicMove() working correctly in this case? Use no more than two sentences to explain why, or why not. If not, give a simple example illustrating a scenario in which AtomicMove() (given at point (a)) does not work correctly.
The code presented at point (a) will work correctly in this case, as it cannot lead to deadlock. This is because AtomicMove() will always acquire the lock of the source first and the lock of the destination second.
(Next, we give a “proof”; this proof wasn’t required for receiving full score.) The fact that AtomicMove() always acquires the source lock first guarantees that you cannot end up with a cycle. Indeed, assume this is not the case, i.e., thread T1 holds the lock of queue 1 and requests the lock of queue 2, T2 holds the lock for queue 2 and requests the lock of queue 3, …, and Tn holds the lock for queue n and waits for the lock of queue 1. Since T1 holds the lock of queue 1 but not queue 2, it follows that queue 1 is a source queue, while queue 2 is a destination queue. Furthermore, since Tn holds the lock of queue n but not queue 1, it follows that queue 1 is a destination queue. But a queue cannot be both a source and a destination at the same time, which invalidates the hypothesis that the code can lead to deadlock.
Question 4. Scheduling (20 points)
Consider three threads that arrive at the same time and they are enqueued in the ready queue in the order T1, T2, T3.
Thread T1 runs a four-iteration loop, with each iteration taking one time unit. At the end of each iteration, T1 calls yield; as a result, T1 is placed at the end of the ready queue. Threads T2 and T3 both run a two-iteration loop, which each iteration taking three time units. At the end of first iteration, T2 synchronizes with T3, i.e., T2 cannot start the second iteration before T3 finishes the first iteration, and vice versa. While waiting, T2 (T3) is placed in the waiting queue; once T3 (T2) finishes its first iteration, T2 (T3) is placed at the end of the ready queue. Each process exits after finishing its loop.
Assume the system has one CPU. On the timeline below, show how the threads are scheduled using two scheduling policies (FCFS and Round Robin). For each unit of time, indicate the state of the thread by writing “R” if the thread is running, “A” if the thread is in the ready queue, and “W” if the thread is in the waiting queue (e.g., T2 waits for T3 to finish the first iteration, before T2 can run its second iteration).
(a) (6 points) FCFS (No-preemption) FCFS always selects the thread at the head of the ready queue. A thread only stops running when it calls yield or waits to synchronize with another thread. What is the average completion time?
(b) (6 points) Round Robin (time quantum = 2 units) When a thread is preempted it is moved at the end of the ready queue. What is average completion time?
(c) (8 points) Assume there are two processors P1 and P2 in the system. The scheduler follows the FCFS policy with no preemption. When assigning tasks, the scheduler always assigns a task to P1 before assigning to P2. Instead of using “R” to mark running, use “P1” or “P2” to indicate where the task runs. What is the average completion time?
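As a simplified illustration of the Round Robin bookkeeping used in part (b), the sketch below computes completion times for independent CPU bursts only; it deliberately ignores the yield and synchronization behaviour specific to this question:

```python
from collections import deque

# Simplified round-robin simulator: each job is a single CPU demand, the
# quantum is in time units, and a preempted job goes to the tail of the
# ready queue (as in part (b) of the question).

def round_robin(burst_times, quantum):
    """Return the completion time of each job, indexed as in burst_times."""
    ready = deque(range(len(burst_times)))   # arrival order
    remaining = list(burst_times)
    completion = [0] * len(burst_times)
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                  # preempted: back of the queue
        else:
            completion[i] = t
    return completion
```

For example, three jobs of lengths 4, 6, and 6 with a quantum of 2 complete at times 8, 14, and 16 (average 38/3); under FCFS the same jobs would complete at 4, 10, and 16.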
**Question 5. Paging (20 points)** Consider a memory architecture using two-level paging for address translation. The format of the virtual address, physical address, and PTE (page table entry) are below:
<table>
<thead>
<tr>
<th>Virtual address:</th>
<th>9 bits</th>
<th>9 bits</th>
<th>14 bits</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>virtual page #</td>
<td>virtual page #</td>
<td>offset</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Physical address:</th>
<th>10 bits</th>
<th>14 bits</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>physical page #</td>
<td>offset</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>PTE:</th>
<th>10 bits</th>
<th>6 bits</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>physical page #</td>
<td>perm. bits</td>
</tr>
</tbody>
</table>
(4 points each question)
(a) What is the size of a page?
\[ 2^{14} \text{ bytes} = 16384 \text{ bytes} = 16 \text{ KB} \]
(b) What is the size of the maximum physical memory?
\[ 2^{24} \text{ bytes} = 16 \text{ MB} \]
(c) What is the total memory needed for storing all page tables of a process that uses the entire physical memory?
There are \( 2^{10} = 1024 \) physical pages. There is one page table at the first level, and up to \( 2^9 = 512 \) page tables at the second level. Since the physical address is 3 bytes, the size of the first-level table is \( 2^9 \times 3 \text{ bytes} = 1{,}536 \text{ bytes} \). Furthermore, the PTE is 2 bytes, so the size of a second-level table is \( 2^9 \times 2 \text{ bytes} = 1{,}024 \text{ bytes} \). All in all, the page tables use \( 1{,}536 \text{ bytes} + 512 \times 1{,}024 \text{ bytes} = 525{,}824 \text{ bytes} \) of memory.
(Notes: We gave full credit to answers assuming that the entries at the first level page are 2 bytes, as well. Indeed, the last 9 bits of an address to a page table are typically 0, and don’t need to be stored.)
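The part (c) arithmetic can be checked directly (using the 3-byte first-level entries and 2-byte PTEs assumed above):

```python
# Quick check of the page-table sizes in part (c): one first-level table with
# 2^9 three-byte entries, plus 2^9 second-level tables of 2^9 two-byte PTEs.

first_level = 2**9 * 3              # 512 entries x 3 bytes = 1,536 bytes
second_level = 2**9 * (2**9 * 2)    # 512 tables x 1,024 bytes = 524,288 bytes
total_bytes = first_level + second_level
```

The 1,536-byte first-level table is tiny next to the 512 KB of second-level tables.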
(d) Assume a process that is using 512KB of physical memory. What is the minimum number of page tables used by this process? What is the maximum number of page tables this process might use?
The process uses \( 512\text{KB} / 16 \text{ KB} = 32 \) physical pages. Since a second level page can hold up to 512 PTEs, in the best case scenario we use only 2 page tables: 1st level page + a 2nd level page.
In the worst case, the process may use a little bit of every physical page (e.g., 0.5 KB of each physical page), and all page tables will be populated. Thus, the process ends up using \( 1 + 512 = 513 \text{ page tables} \).
(Note: We have also given full credit to people who assumed that the process fully uses each physical page. In this case the answer is \( 1 + 32 = 33 \text{ page tables} \).)
(e) Assume that instead of a two-level paging we use an inverted table for address translation. How many entries are in the inverted table of a process using 512KB of physical memory?
The inverted table maintains one entry per physical page. In the worst case, the process uses all physical pages, which yields \( 1024 \text{ entries} \). In the best case, the process fully uses each physical page, which yields \( 32 \text{ entries} \). (Note: We gave full credit to people who only answered: \( 32 \text{ entries} \).)
**Question 6. Caches (20 points)** A tiny system has 1-byte addresses and a 2-way associative cache with four entries. Each block in the cache holds two bytes. The cache controller uses the LRU policy for evicting from cache when both rows with the same “index” are full.
(a) Use the figure below to indicate the number of bits in each field.
<table>
<thead>
<tr>
<th>cache tag</th>
<th>index</th>
<th>byte select</th>
</tr>
</thead>
<tbody>
<tr>
<td>6 bits</td>
<td>1 bit</td>
<td>1 bit</td>
</tr>
</tbody>
</table>
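The field split can be expressed directly in code. With 2-byte blocks the lowest bit selects the byte, the next bit selects one of the two sets, and the remaining 6 bits form the tag:

```python
def split_address(addr: int):
    """Split an 8-bit address into (tag, index, byte_select) for a
    2-way cache with two sets and 2-byte blocks."""
    byte_select = addr & 0x1     # lowest bit: byte within the block
    index = (addr >> 1) & 0x1    # next bit: which of the 2 sets
    tag = addr >> 2              # remaining 6 bits: cache tag
    return tag, index, byte_select

# 0xff -> tag 0x3f, set 1, byte 1; its block spans 0xfe-0xff
print(split_address(0xff))  # (63, 1, 1)
print(split_address(0x24))  # (9, 0, 0)
```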
(b) Assume the following access sequence to the memory: 0xff, 0x22, 0x27, 0x24, 0x27, 0xff, 0xf0, 0x24, 0x27, 0x22. Fill in the following table with the addresses whose content is in the cache. Initially assume the cache is empty. The first entry (i.e., the one corresponding to address 0xff) is filled for you.
<table>
<thead>
<tr>
<th></th>
<th></th>
<th>0xff</th>
<th>0x22</th>
<th>0x27</th>
<th>0x24</th>
<th>0x27</th>
<th>0xff</th>
<th>0xf0</th>
<th>0x24</th>
<th>0x27</th>
<th>0x22</th>
</tr>
</thead>
<tbody>
<tr>
<td>Set 0 (index 0)</td>
<td>way 0</td>
<td></td>
<td></td>
<td></td>
<td>0x24, 0x25</td>
<td>0x24, 0x25</td>
<td>0x24, 0x25</td>
<td>0x24, 0x25</td>
<td>0x24, 0x25</td>
<td>0x24, 0x25</td>
<td>0x24, 0x25</td>
</tr>
<tr>
<td></td>
<td>way 1</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>0xf0, 0xf1</td>
<td>0xf0, 0xf1</td>
<td>0xf0, 0xf1</td>
<td>0xf0, 0xf1</td>
</tr>
<tr>
<td>Set 1 (index 1)</td>
<td>way 0</td>
<td>0xfe, 0xff</td>
<td>0xfe, 0xff</td>
<td>0x26, 0x27</td>
<td>0x26, 0x27</td>
<td>0x26, 0x27</td>
<td>0x26, 0x27</td>
<td>0x26, 0x27</td>
<td>0x26, 0x27</td>
<td>0x26, 0x27</td>
<td>0x26, 0x27</td>
</tr>
<tr>
<td></td>
<td>way 1</td>
<td></td>
<td>0x22, 0x23</td>
<td>0x22, 0x23</td>
<td>0x22, 0x23</td>
<td>0x22, 0x23</td>
<td>0xfe, 0xff</td>
<td>0xfe, 0xff</td>
<td>0xfe, 0xff</td>
<td>0xfe, 0xff</td>
<td>0x22, 0x23</td>
</tr>
</tbody>
</table>
(c) How many cache misses did the access sequence at point (b) cause? What is the hit rate?
7 misses, hit rate = 3/10 = 30%
(d) How many compulsory misses (i.e., misses which could never be avoided) did the access pattern at point (b) cause?
5 (0xff, 0x22, 0x27, 0x24, 0xf0)
(e) Assuming the cache access time is 10ns, and that the miss time is 100ns, what is the average access time assuming the access pattern at point (b)?
10ns * 3/10 + 100ns * 7/10 = 73ns
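The whole access pattern can be replayed with a small LRU simulator; the set/tag decomposition, timings, and replacement policy mirror the parameters above:

```python
from collections import OrderedDict

def simulate(accesses, num_sets=2, ways=2, block_bytes=2):
    """2-way set-associative cache with LRU replacement per set."""
    sets = [OrderedDict() for _ in range(num_sets)]  # tag -> None, in LRU order
    hits = misses = 0
    for addr in accesses:
        block = addr // block_bytes
        index = block % num_sets
        tag = block // num_sets
        s = sets[index]
        if tag in s:
            hits += 1
            s.move_to_end(tag)        # refresh LRU position on a hit
        else:
            misses += 1
            if len(s) == ways:
                s.popitem(last=False)  # evict the least recently used block
            s[tag] = None
    return hits, misses

seq = [0xff, 0x22, 0x27, 0x24, 0x27, 0xff, 0xf0, 0x24, 0x27, 0x22]
hits, misses = simulate(seq)
print(hits, misses)                    # 3 7
avg_ns = (10 * hits + 100 * misses) / len(seq)
print(avg_ns)                          # 73.0
```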
CHAPTER THREE
ACCESSIBILITY & INCLUSIVE DESIGN IN IMMERSIVE EXPERIENCES
This new chapter of the developers’ guide continues to focus on those developing platforms and applications for XR through establishment of an evolving set of best practices — this time with an emphasis on the importance and necessity of creating programs that are accessible to people with disabilities.
XR hardware is evolving rapidly, and while there are facets of XR hardware that are unique to each manufacturer, all are working to maximize accessibility in conjunction with software development partners. As software developers look to develop platforms for XR and/or create programs across multiple platforms, the concepts of inclusive and ergonomic design are helping to provide a strong, guiding principle for that software development. Both hardware and software developers have a shared interest in and commitment to incorporating iterative practices and to working closely with people with disabilities to test out advancements at each stage of development.
This update is intended as a baseline of best practices for accessibility and as a compilation of guidance for platform and application developers for consideration during the development process. It was created through contributions of member company representatives with expertise in the XR space as well as input from disability advocacy groups and members of the disabled community. This guide is not meant as an exhaustive source on designing for accessibility and inclusivity, and we recognize that these best practices must continually evolve and will require ongoing input from relevant stakeholders, including standards setting bodies who are also working to establish industry standards in the area of accessibility for people with disabilities.
Additionally, many countries around the world have established laws governing accessibility for software, and we urge software developers to follow all disability and accessibility laws and regulations in their applicable jurisdictions as they create innovative designs for XR.
An essential component of ensuring safe and comfortable navigation of virtual or augmented spaces is the inclusion of accessibility features for people with disabilities. If a device or an experience is not accessible, having a disability can significantly impact not just how someone uses XR technology, but if they can use it at all.
USING INCLUSIVE DESIGNS TO ENSURE ALL USERS CAN BENEFIT FROM XR
The most effective way to ensure that all users, including those with disabilities, can easily navigate XR environments is to create inclusive and ergonomic designs which take into consideration the differences many users may have in their abilities to experience different aspects of the technology. Those differences may include permanent disabilities — such as vision and hearing loss, mental, cognitive, and intellectual disabilities, or physical disabilities — as well as temporary or situational limitations, such as a broken or sprained limb, muscle soreness, or sun glare.
Designing for an “average” user can lead to designs that are inflexible and constraining for all, while inclusive designs often produce technology that is more adaptable and flexible. Inclusive design goes beyond simply making technology accessible to people with disabilities. Rather than creating a separate experience, tool, or plug-in specifically aimed at a particular disability, inclusive design aims to create a universal XR experience that integrates tools that all users can enjoy. Because of that, inclusive design should be a goal from the outset, during the platform and app development stage, to ensure a consistent experience across the different applications being used on the platform or app.
Inclusive features, such as vibration alerts, voicemail transcription, voice recognition, and haptic feedback solutions, have become common smartphone features used by everyone, not just people with hearing or vision loss for whom such functions are vital. Designs that allow users to control how they want to accomplish tasks and how they experience the platform or application should be built in from the beginning — a process that often results in cutting-edge innovations that improve the experience for everyone.
While this guide endeavors to provide developers with suggestions for how to ensure their designs are accessible and inclusive, it is also vital to solicit input from people with disabilities during both the development and testing phases of your platform and/or application designs. Solving problems in advance of deployment is much more efficient than trying to retroactively patch issues and supports accessibility from the outset.
Best practice suggests providing comparable experience for all users. Give users control of their experience by providing them with various options for how to complete tasks and/or how to alter their XR environment to fit their needs or desires. Solicit and incorporate the input of people with disabilities during both the development and the testing phases of your platform and/or application designs.
XR WORKPLACE APPLICATIONS
XR workplace applications are rapidly being developed and used for industrial, educational, medical, marketing, communication and other business uses. This includes immersive experiential job training programs, headsets that augment the information a worker can access while doing their job, VR market research, immersive or augmented school and university learning applications, and virtual medical treatments, diagnoses, and therapies, to name but a few.
The need for enterprise applications to be accessible to all workers or users in such settings is paramount to ensure equal opportunity employment and advancement, and so that enterprises receive the efficiencies and benefits of such technology. In addition to the fact that many companies want broadly accessible designs to accommodate a diverse workforce, many countries also require employers to provide accommodations for employees with disabilities, including in the technology they use. Because many companies have a multi-country or global presence, it is imperative that an enterprise XR application meet accessibility standards in order to be considered for use by companies around the world. Developers should consult those standards when working on enterprise apps to ensure they meet minimum standards for the companies, industries, and jurisdictions for which they are designed.
GENERAL ACCESSIBILITY
Accessibility in XR requires the creation of a flexible environment in which users can control the way they experience a platform or application. Later in this chapter we will discuss some suggested options that are specific to common types of disability, but it’s also important to remember that some users may have disabilities in more than one category. Below are some software solutions to help make interfaces more inclusive and accessible to users across disability types.
REMOVING OR REDUCING BACKGROUND DETAILS AND AUDIO
Those who are visually-impaired or have cognitive or intellectual disabilities may have difficulty discerning the most important experience options or tasks amidst rich background visuals. Similarly, for people with hearing loss or those with cognitive or intellectual disabilities, background audio that is not essential to the experience could be confusing or disorienting. By providing users the option to remove or reduce background visual and audio detail, users may better distinguish the most important activities or tasks in the application. As discussed in the Visual Accessibility portion of this chapter, allowing users to turn on audio captioning features should also be an option.
UNDO/REDO FUNCTIONS
Regardless of disability, all people make mistakes when using XR platforms and apps. Allowing users to undo or redo actions they’ve made in error or because of imprecision would aid all users, but is especially helpful to improve the experience for users with physical, cognitive, visual or auditory disabilities. For example, users who have physical dexterity disabilities, perhaps tremors or a broken finger, may be more likely to inadvertently make imprecise choices when using certain hardware. Additionally, users with disabilities may also benefit from a function that requires them to confirm an action before it happens, so they can correct an error that otherwise would be irreversible.
REDUCING SPEED AND SETTING UP ACTION SEQUENCES
Users may at times have difficulty quickly and accurately reacting to prompts, experience options, and/or physical or reflex challenges due to mobility, vision, auditory or cognitive disabilities. To enable user progress, it may be helpful to allow users to reduce the speed of the app or to increase the time allotted for making decisions or completing challenges.
Similarly, allowing users to pause the app or game to set up action sequences for tasks that require several steps may aid them in ensuring they can accurately respond to each challenge.
BYPASS FUNCTIONS
XR experiences that include physical or reflex challenges and/or complex puzzles or other decision-making tasks may be taxing for some users with physical or cognitive disabilities. Additionally, timed tasks put pressure on users who cannot move or make decisions quickly. Adding a bypass function would permit users to skip challenging or timed experiences while still allowing them to progress in the app. Other users with visual or hearing loss may too want to bypass tasks that prove frustrating or time-consuming.
SAVE PROGRESS
Users benefit from being allowed to save their progress in an XR experience for a variety of reasons, such as unexpected real world interruptions, difficulty completing tasks in the app, or just because they are ready to end the experience. For users with disabilities, having to end the experience and restart later may require them to repeat experiences that may have been challenging for them to complete in the first instance. Therefore, it is recommended that platform and app developers include a function that allows users to save their progress at any time to avoid the need to repeat challenging actions or simply to allow them to pick up where they left off on the experience. Developers also should allow users to skip challenging actions or reduce the difficulty of challenging tasks.
VISUAL ACCESSIBILITY
According to the World Health Organization (WHO), as many as 1.2 billion people globally may have vision impairment or blindness that cannot be corrected with medical intervention. The types of vision loss or low vision that may affect a user’s ability to experience XR apps include blurred vision, loss of peripheral vision, light sensitivity, monocular vision (loss of vision in one eye), blindspots created by a loss of central vision, eye injuries, and color blindness, among others.
However, vision loss need not be a barrier to utilizing or experiencing XR if tools are provided to adjust visual elements and text in the app.
ALTERING THE SIZE OF OBJECTS, ELEMENTS AND TEXT
There are a number of ways developers can allow users to control the visual elements in an app that would aid low vision users in completing tasks and/or enhancing their experience. These include:
- Allowing users to magnify or reduce objects and text to make them larger or smaller
- Allowing users to change fonts for more easily readable text
- Allowing users to add contrasts or edge enhancements to highlight objects and text
- Allowing users to change foreground or background colors of text
- Allowing users to change the brightness levels in the app
- Allowing users to employ peripheral maps to show objects outside of the field of vision
AUDIO AUGMENTATION AND TEXT-TO-SPEECH
Audio augmentation is an important feature that should be available to users with vision loss. Text-to-speech (TTS), also known as “read aloud,” programs may work especially well to ensure that users who otherwise cannot read text instructions, labels, or other written elements in an app are able to understand and interact with the app effectively. TTS is already a built-in feature of operating systems for computers, smartphones and tablets, and developers should consult existing software solutions when designing their own XR TTS technology and/or build their platform to natively support an existing TTS technology. Developers also should include optical character recognition as a feature of TTS, so that words included in images that may be used in XR apps can be deciphered by low vision users.
In addition to TTS, audio augmentation elements should include labeling objects or elements and allowing users to have those objects audibly identified as they encounter or explore those objects in the platform or app.
COLOR FILTERS AND SYMBOLS
To support users that cannot discern color, developers should either allow users to recolor the interface and objects, provide shapes or symbols alongside meaningful colors, or provide textures on objects or elements to help distinguish information in app. These methods allow users to comprehend information in the app communicated by color.
SCRIM OR SCRIM-LIKE OVERLAYS
A scrim is a translucent gradient layer that aids in making text more readable against background pictures, colors, objects and other elements that might affect a user’s ability to read it. Where other methods of making text more readable — such as blurring underlying images or using text boxes — can obscure background information and elements, a scrim’s semi-transparent layer still allows the user to see the image or object behind it, while providing text that is readable.
For programs that require readable texts and/or captioning for deaf or hard of hearing users, using a scrim-like overlay is a potential solution for developers to help ensure all users can read and understand the text display. However, it is important to also ensure that scrims or scrim-like overlays do not introduce color gradients that may make the text unreadable by users with vision loss, create other difficulties in reading the text, or prevent the user from otherwise experiencing the virtual environment.
DEAF AND HARD OF HEARING
Auditory disabilities occur in 5 percent of people worldwide, according to the WHO. It may be the result of aging, prolonged exposure to loud noises, congenital deafness, illnesses that affect the ears, and even temporary factors, such as excess fluid in the ear, among other things. To ensure that users who are deaf or hard of hearing can utilize XR technology, developers should provide multiple ways for users to understand and control the audio features of XR platforms and apps.
CAPTIONING AUDIO FEATURES
One of the most common ways to make XR accessible to the deaf and hard of hearing is by providing captions or subtitles for audio features. However, there are several considerations developers should take into account when providing captions to ensure the captions are readable given the dynamism of XR technology.
For example, developers may want to consider allowing users to choose where to place captions and allow users to move them to ensure other visual aspects of the app are observable. Developers also should allow users to change the font as well as the colors of captions and their background to make them easier to read, if the background colors in the interface dynamically change.
If a feature in the app involves more than one speaker, the captions should clearly indicate or label which speaker is talking.
In addition to the above recommendations, there are many useful guides that are publicly available for how to assure high-quality captions that meet industry standards, which have been developed over decades for captioning of television audio. While there are no standard captioning guidelines specifically for XR, television broadcast captioning guidelines may prove helpful to XR developers. As a starting point, developers should consult captioning guidelines, which provide information on the recommended number of characters per line, characters per second, and standards for punctuation, among other things. Some government agencies, such as the U.S. Federal Communications Commission, have published specific recommendations for broadcast captioning, and the WCAG 2.1 has guidelines and resources for online accessibility and captioning that may be helpful in developing captioning for XR.
While standard two-dimensional captioning for media is relatively straightforward, developing three-dimensional captioning for XR poses an added challenge, given the difficulty in predicting where a user may look or turn at any given moment.
USING ICONS TO IDENTIFY AUDIO FEATURES
Developers should use icons or other indicators to identify for users how they should move their heads or reorient their focus to ensure they are able to see the direction from which verbal and non-verbal audio features are emanating.
Additionally, developers may want to use icons or captions to indicate background sounds or other non-speech indicators, but they should ensure such indicators specify the source or direction from which the sound is coming. In gaming, for example, this may include indicating the direction of incoming gunfire or approaching characters.
When creating icons for XR, however, it's important to remember that there is no standardized iconography across geographic regions or cultures. Developers should ensure when creating icons that those icons are culturally and geographically sensitive and do not evoke different, or even offensive, connotations in different cultures and regions of the world.
SIGN LANGUAGE
Developers may want to consider augmenting their captions with an option to persistently display sign language interpretation within the app. Just as with captioning, developers should allow users to control the placement of the sign language visual to ensure other visual information is not obscured.
Whatever features developers include, they also should provide a way for users to turn these various features on or off — such as captions, sign language, background noise icons, etc., so that users can customize their experience and choose the information that best suits their needs.
MONO AUDIO
Users with hearing loss in only one ear may not be able to hear everything in a stereo recording, which splits audio into left and right channels, particularly when using headphones. Platform developers should include a feature that allows users to switch from stereo to mono audio so both stereo channels can be heard in either ear. Keep in mind mono audio will no longer contain information on the directionality of an audio source so the directionality will need to be communicated using other methods, such as with icons or other indicators. For reference, a “mono audio” feature is already included on most smartphones as an accessibility feature.
MOBILITY DISABILITIES
Mobility disabilities may be permanent or temporary and affect a person’s physical ability to walk, stand, move comfortably, use their hands and arms to grip, hold, lift, and interact with objects, or generally use and control their extremities and body movements. A person may have a mobility disability because of accident or injury, disease, or congenital or neuromuscular disorders. Mobility disabilities include paralysis, tremors, loss of one or more limbs or digits, recurrent seizures, loss of motor control or poor motor control, muscle weakness, and movement tics, among other things.
A study in 2017 by the Disability Visibility Project found that users of all studied disabilities listed mobility challenges as one of the most difficult barriers to using virtual reality programs, including activities such as standing, crouching, arm movements, and general locomotion and/or rotation of the body. The following are some options for platform and app developers to help improve access to XR programs for users with mobility disabilities.
SETTINGS AND MENU OPTIONS
Being able to configure usability preferences when initially setting up their XR experience is an important feature for those with mobility disabilities, as is allowing users to save those preferences for future interactions with the program. Some of those preferences should include:
- Allowing users to choose to have the app assist them in navigating the interface and in helping them to complete any tasks, such as might occur in workplace training programs or in gaming.
- Allowing users to receive assistance in some aspects of the program by enabling a separate controller or sensor.
- Allowing users to access the experience from a seated, reclining, or stationary position, if the application otherwise would require standing or body movements to access its full content.
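One way to make such preferences persistent across sessions is a small serializable settings record. The sketch below is purely illustrative; the field names are hypothetical and not drawn from any real XR SDK:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical accessibility-preference record; every field name here
# is an illustrative assumption, not part of any actual platform API.
@dataclass
class AccessibilityPrefs:
    seated_mode: bool = False        # access from a seated/stationary position
    navigation_assist: bool = False  # app assists with interface navigation
    captions_enabled: bool = False
    caption_font_scale: float = 1.0
    mono_audio: bool = False
    app_speed: float = 1.0           # 1.0 = normal speed; < 1.0 slows the app

def save_prefs(prefs: AccessibilityPrefs, path: str) -> None:
    """Persist preferences so they survive between sessions."""
    with open(path, "w") as f:
        json.dump(asdict(prefs), f)

def load_prefs(path: str) -> AccessibilityPrefs:
    """Restore previously saved preferences."""
    with open(path) as f:
        return AccessibilityPrefs(**json.load(f))
```

The design point is simply that preferences are captured once, stored in a durable format, and reapplied automatically the next time the user launches the experience.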
Other recommendations for increasing the accessibility of XR platforms and apps include allowing users to slow down various aspects of a game or app, such as slower cursors to allow more precise movements to more accurately target menu options, objects, or other features in an interface. The ability to slow down camera movements and/or zoom options is also recommended.
For reference, these options are similar to software that has already been developed for use with a computer mouse that freezes the cursor during clicking to reduce errors for people with tremors or impaired fine motor precision. Other software has been developed to ignore multiple clicks or taps when they occur too close together.
If the XR design requires users to use multiple buttons or controllers to navigate through the program, there are several ways a developer can make it easier for those with mobility issues to experience the application:
- Allowing users to automate some actions to reduce the number of physical actions they must make within an app.
- Allowing users to map several actions to a single controller button or action to be able to complete complex multi-step actions or choices in a sequence.
- Allowing remapping of controls onto alternate controllers, sensors, or keyboards.
- Allowing remapping of controls on the standard controller to ensure the user can reach the necessary controls.
[Image: A woman in a wheelchair with one arm wears a virtual reality head-mounted display and selects a point on an abstract map of the virtual reality controls.]
DYNAMIC FOVEATED RENDERING AND EYE TRACKING
Some XR hardware developers are working to incorporate eye tracking and dynamic foveated rendering features into their products in order to improve the performance of the hardware as well as the user’s experience. Foveated rendering reduces the image quality in a user’s peripheral vision while providing clear and detailed images at the eyes’ focal point. Dynamic foveated rendering uses eye tracking to move the user’s field of vision as the user’s eyes move.
When developing apps or platforms for such hardware, software developers can use the built-in eye-tracking and foveated rendering features to create an option for users with significant mobility disabilities, such as paralysis or severe tremors, to select eye-tracking as their primary way of manipulating the interface and progressing through the app.
Eye tracking and foveated rendering techniques developers can use to increase accessibility include:
- Interface Navigation
- Input Selection
- Automatic Scrolling
- Aim Assistance
- Object Selection
- Text and Fine Details Rendering Quality
- Analytics and User Research
CONTROLLER-FREE HAND-TRACKING
With the advent of controller-free hand-tracking hardware, developers have the opportunity to design software to match the technology. This important accessibility feature can help address the difficulty in handling controllers that users with impairments to fine motor skills or the ability to grasp and press buttons may have.
A key design component of hand tracking software will be in allowing the user to have both absolute and relative interactions with the app to ensure that the user can both directly "touch" an object nearby (absolute) and control or manipulate objects farther away (relative). Some hardware developers with hand-tracking functionality have published guides for software developers to use when designing apps for such hardware.
COGNITIVE AND INTELLECTUAL DISABILITIES
Cognitive and intellectual disabilities encompass a broad spectrum of conditions, including autism, learning disabilities such as attention deficit disorder and dyslexia, Down Syndrome, brain injury, and dementia, among others.
INTERSECTIONALITY OF SOLUTIONS
Many times a single solution will positively impact users with different disabilities. For example, people who live with auditory issues may not choose to speak. So having alternatives for speech command-and-control systems and in-game speech communications will help this community but will also be impactful for people living with cognitive disabilities who may have significant speech impairments or are non-speaking.
Many of the suggestions already included in this guide for allowing users to adapt content displays, to opt for subtitling or audio commands, to turn off background audio, and to highlight important information in apps also would aid many cognitively impaired individuals. It is important to allow these users to control their experience with the content on the platform or app to prevent sensory or information overload.
Just as with mobility disabilities, those with cognitive disabilities may want to save their settings and preferences for future use of the platform. Additional settings that would aid users with cognitive disabilities include:
- Providing on-demand functions that allow the user to receive assistance in orienting themselves in the experience or to receive more context about their progression in the app. Such options should provide information to users about where they are in the virtual space, what they can or should do next, what their current progress in the app is, etc.
- Providing in-app prompts, such as reminders, help topics, introductions to new features, among other things, to assist the user in progressing through the experience.
- Providing training opportunities for users to experiment with the interface and control configurations so they can learn the potential challenges they may face and choose their settings accordingly.
- If an app includes challenges or tasks that must be completed, allowing users to review their objectives — both completed and future — to reorient them in the application and ensure they can progress in the app effectively.
- Allowing users to hide distracting or non-critical interface components, including visual, audio and/or animated components, to ensure they are able to focus on the most essential information being communicated to them.
- If the design requires users to use separate controllers to accomplish tasks, allowing users to review the controls for the interface to help them navigate the controllers more accurately, and allowing users to reduce the number of controls in order to limit the number of actions they must take to accurately complete any objectives contained in the program.
### EXPLORE WORLD OPTIONS
Allowing users to familiarize themselves with the app and its various interfaces and input needs may help users with cognitive disabilities feel more comfortable taking an active part in the program. It helps them understand and experiment with the interface and the environment prior to utilizing the app, and to set their preferences. For apps that include challenge, puzzle or gaming features, this option also allows users to simply experience the app and its virtual world without having to take on challenges that may prove difficult or frustrating.
The XR Association wishes to extend its sincere thanks to the following individuals and organizations for their assistance, contributions, and leadership in the development of the “Accessibility & Inclusive Design in Immersive Experiences” chapter for the XR Association’s Developers Guide: An Industry-Wide Collaboration for Better XR.
Elaine Dai
Facebook Reality Labs
Andrew Eiche
Owlchemy Labs, Co-Chair of XRA’s Accessibility Working Group
Debbie Girolamo
Facebook Reality Labs, Member, XRA Board of Directors
John Kim
Sony Interactive Entertainment — PlayStation, Co-Chair of XRA’s Accessibility Working Group
Elka Looks
Facebook Reality Labs
Christopher Patnoe
Google, Co-Chair of XRA’s Accessibility Working Group
Ben Rickert
Microsoft Corporation
Mike Shebanek
Facebook
And other member company representatives on XRA’s Accessibility Working Group
Jesse Anderson
IllegallySighted and XR Access Initiative Volunteer
Dr. Shiri Azenkot
Assistant Professor, Information Science, Director, Connective Media Program, Jacobs Technion-Cornell Institute and Co-Founder, XR Access Initiative
Mark Barlet
Founder and Executive Director, The AbleGamers Charity
Bill Curtis-Davidson
Senior Consultant, Emerging Tech Accessibility, Partnership on Employment & Accessible Technology (PEAT) and Leader, XR Access Initiative
Wendy Dannels
Research Associate Professor, Director, XR Accessibility Solutions Laboratory, National Technical Institute for the Deaf, Rochester Institute Technology and XR Access Initiative Volunteer
Triskal deHaven
User Experience Researcher, The AbleGamers Charity
Larry Goldberg
Senior Director and Head of Accessibility, Verizon Media and Co-Founder, XR Access Initiative
Greg Haynes
Lead Games User Researcher, The AbleGamers Charity
Emily Pierce
Freelancer
The XR Association is interested in your feedback about the “Accessibility & Inclusive Design in Immersive Experiences” chapter for the XR Association’s Developers Guide: An Industry-Wide Collaboration for Better XR. Please share your thoughts with XRA by emailing info@xra.org.
The XR Association promotes the dynamic global growth of the XR industry, which includes virtual reality, augmented reality, mixed-reality, and future immersive technology. XRA is leading the way for the responsible development and adoption of XR by convening stakeholders, developing best practices and research, and advocating on behalf of our members and the greater XR industry.
Association members represent the headset and technology manufacturers across the broad XR industry, including Google, HTC Vive, Facebook and Oculus, Microsoft, and Sony Interactive Entertainment.
**XR ACCESSIBILITY AND INCLUSIVE DESIGN QUICK REFERENCE GUIDE**
XR technologies are still new and will continue to rapidly advance. New thinking and new solutions to meet the needs of all XR users will be required. The XR Association is committed to keeping this chapter and corresponding quick reference guide up-to-date as XR technologies and capabilities evolve.
<table>
<thead>
<tr>
<th>ACCESSIBILITY TECHNIQUES</th>
<th>Sight Disabilities</th>
<th>Auditory Disabilities</th>
<th>Non-Speaking/ Speech Impairments</th>
<th>Mobility Disabilities</th>
<th>Cognitive Disabilities</th>
</tr>
</thead>
<tbody>
<tr>
<td>Removing or Reducing Background Details and Audio</td>
<td>⬤</td>
<td></td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Undo/Redo Functions</td>
<td>⬤</td>
<td>⬤</td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Reducing Speed and Setting Up Action Sequences</td>
<td>⬤</td>
<td></td>
<td></td>
<td>⬤</td>
<td>⬤</td>
</tr>
<tr>
<td>Bypass Functions</td>
<td>⬤</td>
<td>⬤</td>
<td></td>
<td>⬤</td>
<td>⬤</td>
</tr>
<tr>
<td>Save Progress</td>
<td>⬤</td>
<td></td>
<td></td>
<td>⬤</td>
<td>⬤</td>
</tr>
<tr>
<td>Altering the Size of Objects, Elements and Text</td>
<td>⬤</td>
<td></td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Audio Augmentation and Text-to-Speech</td>
<td>⬤</td>
<td>⬤</td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Color Filters and Symbols</td>
<td>⬤</td>
<td></td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Scrim or Scrim-Like Overlays</td>
<td>⬤</td>
<td>⬤</td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Captioning Audio Features</td>
<td>⬤</td>
<td></td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Using Icons to Identify Audio Features</td>
<td>⬤</td>
<td></td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Sign Language</td>
<td>⬤</td>
<td></td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Mono Audio</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Settings and Menu Options</td>
<td>⬤</td>
<td></td>
<td></td>
<td>⬤</td>
<td>⬤</td>
</tr>
<tr>
<td>Dynamic Foveated Rendering and Eye Tracking</td>
<td>⬤</td>
<td></td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Controller-Free Hand-Tracking</td>
<td>⬤</td>
<td>⬤</td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
<tr>
<td>Explore World Options</td>
<td>⬤</td>
<td></td>
<td></td>
<td></td>
<td>⬤</td>
</tr>
</tbody>
</table>
A C compiler for Large Data Sequential Processing using Remote Memory
Shiyo Yoshimura, Hiroko Midorikawa
Graduate School of Science and Technology, Seikei University, Tokyo, Japan
E-mail: dm106231@cc.seikei.ac.jp, midori@st.seikei.ac.jp
Abstract
Prevailing 64bit-OS enables us to use a large memory address space in computer programming in general. However, the actual physical memory limits utilizing it fully. When a program requires more memory than the available physical memory in a computer, a traditional virtual memory system performs page swaps between a local hard disk and physical memory. With the recent development of high-speed networks, remote-memory access via networks has become faster than accessing a local hard disk. We built the Distributed Large Memory System (DLM) to access vast remote memories over networks. The DLM is designed as user-level software for high portability. It provides a very large virtual memory using remote memories distributed over cluster nodes. This paper proposes a newly designed C compiler for the DLM. It provides an easy programming interface to use the abundant memory of the DLM with existing sequential programs, instead of developing parallel programs.
1. Introduction
In recent years, programs running on a 64bit-OS can use a large memory address space. However, the physical memory of a computer limits such memory use. Ordinarily, when a program requires more memory than the physical memory of a computer, memory pages are swapped in/out of a hard disk. However, accessing remote memory in network-connected computers has become faster than accessing local hard disks, because of the recent development of high-speed networks.
Researchers who simulate a scientific numerical problem usually develop a sequential program first and validate it with small-scale problems. Then they simulate large-scale problems. To deal with large-scale problems, the programs sometimes require more memory than the local memory of one computer. In such cases, the sequential programs have to be converted to parallel programs that utilize the large memories on multiple nodes of a cluster. However, developing a parallel version of a program is not an easy task for people who are not familiar with parallel programming. Moreover, it imposes extra costs for debugging and validating the parallel version. Additionally, not all sequential program models can be converted to parallel ones, because of the nature of the original simulation models. In these cases, some users prefer to run existing sequential programs using large memory distributed over cluster nodes, even if the execution time becomes slower than that of a parallel version of the program.
Because of these reasons, the Distributed Large Memory (DLM) System [1], which was a virtual large memory system distributed over cluster nodes, was developed. The DLM system is designed for sequential programs that need a large amount of data beyond the local physical memory. It was reported that the DLM system using remote memory achieved higher performance of program executions compared to the kernel swap system with a local hard disk. In this paper, we propose a DLM compiler, which enables us to use rich memory distributed over multiple nodes of a cluster with existing sequential programs. It also eliminates the extra cost for developing parallel programs.
To use remote memory for a sequential program, there are two ways: kernel-level implementations and user-level implementations.
Kernel-level implementations have limited portability because they ordinarily require special hardware and/or kernel modifications. They usually replace the traditional swap device, a hard disk, with remote memory. It was reported that changing the swap device to remote memory often caused performance degradation in page swapping [1]. One reason is that the swap system of a traditional OS is usually tuned for a hard disk. Another is unstable behavior in remote communication under a lack of memory when the swap daemon is initiated. However, a kernel implementation gives complete transparency to the user: there is no need to change programs to use remote memory.
User-level implementations are designed independently of the OS kernel and swap daemon, and run as user-level programs. They have high portability, but they require users to adapt their programs to the APIs provided by the implementations. User-level implementations generally achieve higher communication performance than kernel-level implementations, because they are executed without initiating the swap daemon.
JumboMem [2], one of the user-level implementations, improved user transparency. This was achieved by providing a dynamically linkable shared object library and replacing memory-related functions, such as malloc, with newly implemented JumboMem functions, which utilize remote memory in the JumboMem address space. It realizes perfect user transparency: there is no need to modify user programs, nor to recompile existing binary programs. However, there are two problems. The first is that JumboMem only supports dynamic memory allocation functions; it does not support static array declarations, which are commonly used in many numerical programs. The second is that all malloc calls are replaced with JumboMem functions that use remote memory. This sometimes causes significant problems, e.g. I/O buffer memory for file access, which must always be allocated in local memory, might be allocated in remote memory.
The DLM system is user-level software designed for high portability and performance. It resolves the two problems that occur in JumboMem. First, the DLM provides an API that supports both dynamic memory allocation and static array data. Second, users can distinguish two types of data: data that may be allocated in either remote or local memory, and data that always resides in local memory. To improve the low user transparency of user-level implementations, the DLM compiler is proposed.
2. The DLM System
2.1. The DLM System Overview
Fig.1 shows an overview of the DLM system. The DLM system runs a sequential user program on the Cal Host node. It automatically allocates data in the remote memory of a Mem Server node when the user program needs more memory than the size of local memory. When the user program accesses data residing in remote memory, the DLM system swaps in the page containing the data to the Cal Host node, and swaps out other pages to the Mem Server node. The unit of swapping is the DLM page size, a multiple of the OS page size. A general protocol, TCP/IP or MPI, is used in the DLM system, so it can run on a wide variety of high-speed communication media such as 10Gbps Ethernet, InfiniBand, and Myri-10G. To users it looks like a sequential program execution, but it actually runs as a user-level parallel program using memory distributed over a cluster.
2.2. Program Interface of the DLM System
The proposed interface is designed to alleviate a user’s load of rewriting programs. The knowledge of parallel programming is unnecessary to use the DLM system.
As previously mentioned, the DLM supports two types of memory allocations, both a static array declaration and a dynamic memory allocation for remote memory. Users can specify large data, called DLM data, which are allocated not only in local memory but also in remote memory when the amount of local memory is not sufficient for the data.
Generally in C programs, global variables and static variables are allocated in the static data area, local variables are allocated in the stack memory area, and data dynamically allocated by the malloc function are placed in the heap memory area. In the DLM system, however, the DLM data are always allocated in the heap memory area of local/remote memory, regardless of whether they are declared as global static arrays or local variables. This enables us to use a large amount of data, not limited by the local memory size or by the compiler.
The DLM programs are identical to ordinary C programs, except for attaching dlm before the DLM data declarations. The dlm is introduced as a storage specifier in the C grammar, like extern and static. The DLM API has two features, as follows:
A user can distinguish the DLM data from ordinary data using dlm specifier. The first line in Fig.2 represents an ordinary data declaration, which allocates data in local memory only. The second line represents the DLM data declaration, which allocates data in a local memory and/or a remote memory on memory servers.
A user can specify 2 types of the DLM data. The first line in Fig.3 represents a static array declaration of the DLM data. The second line represents a dynamic memory allocation of the DLM data.
```
int a[100][10];        // allocated in a local memory only
dlm int b[1000][1000]; // allocated in local memory and/or remote memory
```
Fig. 2 Ordinary data and the DLM data declarations
```
dlm double c[1000][1000];                         // static array declaration
c = (double*)dlm_alloc(1000*1000*sizeof(double)); // dynamic memory allocation
```
Fig. 3 Static and dynamic DLM data allocation
Fig.4 shows a DLM sample program, which calculates the median values of multiple array data. The array a (①) at the beginning of the program includes 10 integer arrays, each of which has 10G elements. The main function randomly assigns integer numbers to the array a. In the median function, the local array variable b (②) is created, and one of the integer arrays of a is copied to b. Then the qsort function sorts the array b, and the median function returns the median of the array b. The essence of the program is that the original order of the array a is preserved by copying one of the arrays of a to the array b every time median is called.
The size of local array variable b at ② in Fig.4 is 40GB. In an ordinary C program, it is allocated in a stack memory, so the program execution is usually restricted by the size of local memory and the size of stack memory area. On the other hand, the DLM data are always allocated to heap memory area in local memory and/or remote memory, even if they are declared as local variables in programs. So the program using large data specified as the DLM data is hardly limited in an actual execution by the available local memory size or a kernel memory layout.
```
#include <stdio.h>
#include <stdlib.h>
#include <dlm.h>
#define NUM 10
#define LENGTH (10*(1L << 30)) // 10GB
dlm int a[NUM][LENGTH]; // 400GB ①
int median(long int num) {
dlm int b[LENGTH]; // 40GB ②
long int j, ans;
for ( j = 0; j < LENGTH; j++)
③ b[j] = a[num][j]; ④
qsort(b,LENGTH,sizeof(int),compare_int); ⑤
ans = b[LENGTH/2]; ⑥
return ans;
}
int main ( int argc, char *argv[])
{
long int i, j;
for ( i = 0; i < NUM; i++)
for ( j = 0; j < LENGTH; j++)
⑦ a[i][j] = rand();
for ( i = 0; i < NUM; i++)
printf("median[%ld] = %d\n", i, median(i));
return 0;
}
```
Fig. 4 A sample of the DLM program
3. Structure of the DLM Compiler
The DLM compiler is designed to have 2 parts for high portability. The first part includes a general C preprocessor and a DLM translator, dlmpp. The second part is a general C compiler.
Fig.5 shows procedures in the DLM compiler. First, the DLM compiler converts a DLM program including dlm specifiers to an ordinary C program by dlmpp translator. It performs 3 tasks: (1) insert DLM library functions, (2) transform the DLM data declarations to ordinary C pointers, and (3) rename the variables accessing to the DLM data by considering their scopes.
The next, the DLM compiler creates an execution program from the C program by a gcc compiler with the dlm library. Fig.6 shows an example of compile command, dlmc, which makes an execution program (prg1) from a DLM program (prg.c) with dlm library.
The DLM system is also available by manually rewriting programs using dlm functions and executing them with an ordinary C compiler. However, the DLM compiler gives a great benefit to users by saving the time otherwise spent rewriting the programs.
- Insert dlm functions
- Change DLM array data to pointer access
- Rename DLM data with data scope check
- Insert the dlm_startup function after the variable declarations of the main function. dlm_startup activates the DLM system. It creates memory server processes at the memory server nodes and sets up communication between the memory server processes and the calculation process. (See Fig.7 - ⑨)
- Insert the dlm_shutdown function before all return statements in the main function. dlm_shutdown terminates the DLM system. It finishes the communication with the memory servers and shuts down the memory server processes. (See Fig.7 - ⑪)
- Change DLM array data declarations to pointer-based declarations (See Fig.4 - ①,② → Fig.7 - ①,②), and insert dlm_alloc calls for the DLM data after the dlm_startup function (See Fig.7 - ③,⑩). Rename all of the DLM data variables to __dim_<variable name>_<block number>, where the block number identifies the block and/or function in which each variable is declared (See Fig.4 - ③,④,⑤,⑥,⑦ → Fig.7 - ④,⑤,⑥,⑦,⑧).
```c
int (*__dim_a_0)[10*(1L<<30)]; ①
int median(long int num) {
int *__dim_b_1; ②
long int j, ans;
__dim_b_1 = (int*)dlm_alloc((10*(1L<<30))*sizeof(int)); ③
for (j = 0; j < (10*(1L << 30)); j++)
④ __dim_b_1[j] = __dim_a_0[num][j]; ⑤
qsort(__dim_b_1,(10*(1L << 30)),sizeof(int),compare_int); ⑥
ans = __dim_b_1[(10*(1L << 30))/2]; ⑦
dlm_free(__dim_b_1); ⑧
return ans;
}
int main ( int argc, char *argv[])
{
long int i, j;
dlm_startup(&argc, &argv); ⑨
__dim_a_0 = (int (*)[10*(1L<<30)])dlm_alloc(10*(10*(1L<<30))*sizeof(int)); ⑩
for (i = 0; i < 10; i++)
for (j = 0; j < (10*(1L << 30)); j++)
__dim_a_0[i][j] = rand();
for (i = 0; i < 10; i++)
printf("median[%ld] = %d\n", i, median(i));
dlm_shutdown(); ⑪
return 0;
}
```
Fig.\textit{7} A C program converted by \texttt{dlmpp} translator
4. DLM programs for DLM System
In this section, we show some examples of rewriting actual programs for the DLM system. Basically, all users have to do to convert existing sequential programs to DLM programs is two or three things: (1) insert dlm.h at the top of the program, (2) attach dlm before large data declarations, and (3) replace malloc with dlm_alloc.
Fig. 8 shows all the modified parts for the Himeno Benchmark [3]. In the original program, large data are declared as global static variables. Since there is only one source file, static is unnecessary. Only 4 modifications are required (Fig.8).

Fig.9 shows the case of STREAM Benchmark [4], where only one modification is required.

Fig.10 shows the modified parts of IS in the NAS Parallel Benchmark [5]. In the IS, the first 3 arrays are declared as DLM data, while the last one is declared as normal because its size is very small.

```
INT_TYPE key_array[SIZE_OF_BUFFERS],
         key_buff1[SIZE_OF_BUFFERS],
         key_buff2[SIZE_OF_BUFFERS],
         partial_verify_vals[TEST_ARRAY_SIZE];
```
```
dlm INT_TYPE key_array[SIZE_OF_BUFFERS],
             key_buff1[SIZE_OF_BUFFERS],
             key_buff2[SIZE_OF_BUFFERS];
INT_TYPE partial_verify_vals[TEST_ARRAY_SIZE];
```

In numerical simulations, static array declarations are often used for large data. Without the DLM compiler, users have to translate the original static declarations of data arrays into dynamic data allocations (dlm_alloc) and convert all the original data-array accesses into pointer-based accesses. The DLM compiler reduces users' program-rewriting costs to a minimum. Merely attaching dlm to the existing sequential programs is sufficient for using remote memory for large data.
5. The DLM performance
This section shows one of the benchmark performances using the DLM system. The experiments are conducted on a public open cluster, the T2K Open Supercomputer HA8000, with the MPI batch queuing system [6] (Table 1).

<table>
<thead>
<tr>
<th>Environment of Experiment</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>Machine</td>
<td>HITACHI HA8000-kc/RSA25</td>
</tr>
<tr>
<td>CPU</td>
<td>AMD QuadCore Opteron 8356(2.3GHz)</td>
</tr>
<tr>
<td>Memory</td>
<td>32GB/node (936 nodes), 128GB/node (16nodes)</td>
</tr>
<tr>
<td>Cache</td>
<td>L2 : 2MB/CPUS (512KB/Core), L3 : 2MB/CPUS</td>
</tr>
<tr>
<td>Network</td>
<td>Myrinet-10G x 4, (40Gbps) bonding4</td>
</tr>
<tr>
<td>OS</td>
<td>Linux kernel 2.6.18-53.1.19.e15 x86_64</td>
</tr>
<tr>
<td>Compiler</td>
<td>gcc version 4.1.2 20070626, Hitachi Optimizing C</td>
</tr>
<tr>
<td>MPI Lib</td>
<td>MPICH-MX (MPI 1.2)</td>
</tr>
</tbody>
</table>
The benchmark used here is Himeno Benchmark [3], which measures the speed of major loops for solving Poisson’s equation. It uses multiple loops of iterations and is known as a heavy memory access program.
Fig. 11 shows the performance of the Himeno Benchmark of ELARGE size, 513x513x1025 float array (15GiB). The benchmark outputs the performance in MFLOPS, but the values here are translated into the relative execution time.
The horizontal axis of Fig. 11 represents the ratio of local data size to total data size (local memory ratio L) used in the benchmark program. Note that (1-L)% of the total data resides in remote memory, while the remaining L% resides in local memory. The performance with the DLM system is measured by limiting the size of available local memory on the Cal Host node. The vertical axis of Fig.11 represents the execution time with the DLM system relative to an ordinary execution time without the DLM. In an ordinary execution, a program uses only local memory, i.e., the local memory ratio is 100%. The performance of ELARGE in an ordinary execution is 415 MFLOPS.
Fig.11 shows the case of using a 1MB DLM page size on the bonding=4 (40Gbps) network. According to Fig. 11, even when the local memory ratio becomes 6.9%, which means 93.1% of the total data resides in remote memory, the benchmark execution time is only 2.3 times longer than the ordinary execution time.
We also evaluate a newly created larger-data version of the Himeno Benchmark, XLARGE, a 1025x1025x2049 float array (112GiB). 6 nodes and a 40Gbps network are used in this experiment, with 20GiB of local memory used for execution on each node. In this benchmark, a normal execution using only local memory is impossible because the physical memory size of one node is limited to 32GiB. Here the absolute performance is 179.4 MFLOPS, with a local memory ratio of 17.4%.
Moreover, an even larger data version, XLARGE-d, with a 1025x1025x2049 double array (2.5GiB), is created for evaluation. This experiment uses 12 nodes. The size of local memory for each node and the network bandwidth are the same as in the previous experiment. The absolute performance becomes 88.8 MFLOPS, with a local memory ratio of 8.1%.
These experiments show that existing sequential programs can be used for large-scale problems beyond the limitation of local physical memory size.
6. Conclusion
The DLM system makes it easy to use remote memory in a cluster without parallel programming. In the previous DLM system, users had to rewrite existing programs by inserting dlm functions, such as dlm_start, dlm_alloc and dlm_shutdown, and by converting array data to pointer-based data. The proposed DLM compiler relieves users of these tedious program rewrites. With this compiler, users are only required to attach dlm before the DLM data.
The proposed compiler will let people process large data easily by running sequential programs on a cluster, and it will considerably reduce the time spent rewriting sequential programs.
References
Using RT-CORBA Scheduling Service And Prioritized Network Traffic to Achieve End-to-End Predictability
Tarek GUESMI
Laboratoire SYSCOM
Ecole Nationale d’ingénieurs de Tunis
1002 Tunis, Tunisia
Tarek.guesmi@isecs.rnu.tn
Salem HASNAOUI
Laboratoire SYSCOM
Ecole Nationale d’ingénieurs de Tunis
1002 Tunis, Tunisia
Salem.hasnaoui@enit.rnu.tn
Houria REZIG
Laboratoire SYSCOM
Ecole Nationale d’ingénieurs de Tunis
1002 Tunis, Tunisia
Houria.rezig@enit.rnu.tn
Abstract—Computing systems are increasingly distributed, real-time, and embedded (DRE) and must operate under highly unpredictable and changeable conditions. To provide predictable mission-critical quality of service (QoS) end-to-end, QoS-enabled middleware services and mechanisms have begun to emerge. It is also widely known that Controller Area Networks (CAN) are used in real-time, distributed and parallel processing covering manufacturing plants, humanoid robots, and networking fields. We show how prioritization of messages over the physical CAN network can be achieved when adopting the RT-CORBA distributed scheduling service, which implements a dynamic scheduling policy to achieve end-to-end predictability and performance.
Keywords: Real-time scheduling, Earliest Deadline First, CAN Bus, Intermediate Deadline, Network Priority Mapping.
I. Introduction
In recent years, there has been a growth in a category of performance-critical distributed systems executing in open and unpredictable environments [10]. Examples range from next generation military avionics and ship computing systems to current open systems. In these systems, intelligent sensors, actuators and distributed control structures replace the centralized computer. This leads to a modular system architecture in which smart autonomous objects cooperate to control a physical process. As theory and practice in distributed computing and in real-time computing matures, there is an increasing demand for automated solutions for dynamic distributed real-time middleware to support scheduling end-to-end timing constraints. The latest version of Real-Time CORBA (RTC), known as RTC1.2 (formerly known as RTC 2.0) [1], defines the Distributable Thread (DT) primitive to support real-time computing in dynamic distributed middleware systems. RTC1.2 provides a flexible means for expressing and propagating scheduling information across node boundaries in a distributed system. However, Real-Time CORBA is not immediately applicable to embedded real-time control systems for several reasons:
- Real-Time CORBA implementations have excessive resource demands. A first step to solve this problem is the minimumCORBA [12] specification, which is a cut-down version of CORBA specified by the OMG.
- Often Real-Time CORBA implementations are built on the top of unpredictable off-the-shelf soft- and hardware and do not support typical real-time communication systems.
A real-time communication system (RTCS) constitutes the backbone for distributed control applications. RTCS substantially differ in many respects from general purpose communication systems. In general, while the goals of general purpose communication systems center around throughput, RTCS focus on predictability of communication. Predictability means that the system exhibits an anticipated behaviour in the functional and the temporal domain. Controller Area Network (CAN) bus [15] provides advanced built-in features, which make it suitable for complex real-time applications. Some of these features are priority-based, multiparty bus access control using carrier sense / multiple access with collision avoidance (CSMA/CA), bounded message length, efficient implementation of positive/negative acknowledgement, and automatic fail-silence enforcement with different fault levels. These characteristics make it very challenging to run Real-Time CORBA applications on a CAN-based distributed platform. To exploit the advantages of CAN for RT-CORBA, we designed an inter-ORB protocol within the context of Data Acquisition from Industrial Systems (DAIS) use [2]. RT-CORBA preserves end-to-end priorities by the mapping of the importance of the Distributable Thread to the corresponding operating system priorities and propagating these priorities across the network as the DT spans multiple hosts; however RT-CORBA specification is less explicit about the communication transport and the
underlying network. A promising approach is QoS enhancement by preserving the priority of the client when sending a request and accessing the communication support by giving an efficient mechanism to map the DT global priority, assigned by the distributed scheduling service and the CAN network-based priority. Contributions of this paper are as follows:
- Describing the interaction between the Distributable Thread (DT) and distributed scheduling service (DSS) when sending a request or a reply.
- Developing an efficient model using tasks and subtasks to describe the execution of the DT over the distributed system and especially the communication over the CAN bus.
- Implementing a new technique for calculating the deadline of the CAN message, and hence its priority.
The remainder of this paper is organized as follows: section 2 describes the related work; section 3 summarizes the technical backgrounds of this work and describes basic principles of Real-Time CORBA, Controller Area Network and some related real-time basic knowledge. Section 4 describes the proposed architecture and mechanism for supporting the network priority mapping. In section 5, we evaluate the latency time introduced by the CAN network transmission. Some general assessments of the lessons learned are provided and some conclusions are drawn in section 6.
II. Related work
Developing Distributed Real-Time Embedded (DRE) platforms based on RT-CORBA middleware running over real-time networks is a very challenging research topic, and over the last years several teams have prominently worked on such platforms. One of these is the team of the Software Architecture Lab at Seoul National University headed by Kimoon Kim [6] [7]. In their paper [6], Kim and others present the design of CAN-CORBA, an environment-specific CORBA for CAN-based distributed control systems. Their ORB core supports the classical connection-oriented point-to-point communication of CORBA and, additionally, subscription-based group communication. They implement a new inter-ORB protocol customized for the CAN bus, called the Embedded Inter-ORB Protocol (EIOP). Although EIOP was the first inter-ORB protocol designed for embedded systems, it provides no support for the Real-Time CORBA specification and therefore no translation between the priority handling of CAN and that of Real-Time CORBA.
In their paper [4], S. Lankes, A. Jabs and T. Bemmerl describe the implementation of a CAN-based connection-oriented point-to-point communication model and its integration into Real-Time CORBA in the context of the ROFES [13] platform. The main idea presented in their work is to make efficient use of the advantages of CAN by means of smaller message headers and by mapping the CAN priorities to a band of RT-CORBA priorities. Although the idea of mapping priorities at both the application and network levels is interesting and fundamental to enforcing end-to-end predictability, the way this mapping is done in ROFES is too simple, for several reasons. First, the protocol is based on CAN 2.0A and thus only uses 11-bit CAN identifiers, and the priority field is encoded in 2 bits. This choice imposes severe restrictions on the number of network priorities compared to the number of CORBA priorities. Second, the technique used for priority mapping maps each individual CAN priority to a contiguous range of CORBA priorities and thus badly expresses the real-time requirements of each DT when communicating over the CAN bus.
Recently, Douglas C. Schmidt et al. [8] presented an interesting approach that describes how priority and network QoS management mechanisms can be coupled with standards-based, off-the-shelf distributed object computing (DOC) middleware to better support dynamic DRE applications with stringent end-to-end real-time requirements. This work is very interesting since it provides TAO [5] extensions with mechanisms to map RT-CORBA priorities to DiffServ network priorities. This enhancement allows RT-CORBA middleware to manage network resources using DiffServ, which is priority based, but it provides no support for other prioritized network traffic such as CAN-based networks.
The main idea discussed in this work is how to use a task model to calculate the CAN message priority from the global CORBA priority, by defining a well-adapted architecture around the distributed RT-CORBA Scheduling Service.
III. Technical Backgrounds
A. Basic CAN Features
The Controller Area Network (CAN) is an ISO defined serial communication bus. It was originally developed during the 80’s by the Robert Bosch GmbH for the automotive industry. The CAN bus works according to the Producer-Consumer-Principle: messages are not sent to a specific destination address, but rather as a broadcast (aimed at all receivers) or a multicast (aimed at a group of receivers). A CAN message has a unique identifier, which is used by devices connected to the CAN bus to decide whether to process or ignore the incoming message. Two variants of the CAN protocol exist. The main difference between the first (CAN 2.0A) and second variant (CAN 2.0B) is that the former uses 11 bits to uniquely identify each message, while the latter uses 29 bit identifiers. For correct operation of the CAN bus, the identifiers of two messages sent at the same time must never be the same, consequently CAN 2.0B offers a greater variety and scope for concurrent message Id’s.
The CAN bus is based on the arbitration scheme Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA). During arbitration process, any node willing to send a CAN message starts sending bit by bit the 11 or (in case of CAN 2.0B) 29 identifier bits. Each time a bit is applied to the bus, the sending node checks whether the bus really is at the corresponding voltage level—high for an applied logical 1 and low for an applied logical 0. As a
common resource, the CAN bus has to be shared by all computing nodes. Access to the bus has to be scheduled in a way that distributed computations meet their deadlines in spite of competition for the communication line. Since the scheduling of the bus cannot be based on local decisions, a distributed consensus about bus access has to be achieved. The CSMA/CA protocol is comparable to a priority-based dispatcher. Due to this analogy, it is possible to express scheduling decisions for the CAN-bus resource by dynamic priority orders. The presented approach combines the built-in CSMA/CA access protocol of the CAN bus with the Real-Time CORBA dynamic scheduling service to realize EDF access regulation.
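The bitwise arbitration described above explains why the CAN identifier acts as a priority: a dominant (0) bit overwrites a recessive (1) bit on the wire, so the numerically smallest identifier always wins the bus. A minimal illustrative model (not part of the paper):

```python
def arbitrate(identifiers, id_bits=11):
    """Model of CAN CSMA/CA bitwise arbitration (11-bit CAN 2.0A by default).

    Every contender transmits its identifier MSB-first; a node drops out the
    moment it sends a recessive (1) bit but reads the bus back as dominant (0).
    The survivor is therefore the numerically smallest identifier.
    """
    contenders = list(identifiers)
    for bit in reversed(range(id_bits)):
        bus = min((i >> bit) & 1 for i in contenders)  # dominant 0 wins the wire
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    return contenders[0]  # identifiers on a CAN bus are unique, so one survives
```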
B. Basic Real-Time CORBA 1.2 Features
To understand the model presented within the context of this paper, which associates the scheduling mechanisms of both CAN bus and Real-Time CORBA scheduling service, this section explains the necessary features of the Real-Time CORBA specification. A more detailed description of the Real-Time CORBA specification is given in [3] and [11]. Real-Time CORBA is a QoS enabled extension of CORBA middleware. The Real-Time 1.1 specification is designed for static distributed system where the number of tasks and their scheduling parameters are known a priori. The Real-Time CORBA 1.2 specification extends RTC1.1 to encompass both static and dynamic systems. In a dynamic system, tasks enter and leave the system at times that cannot be calculated a priori. In order to effectively manage the dynamic task set, RTC1.2 introduces the Distributable Thread scheduling primitive. A DT is an abstraction of a chain of method calls by multiple threads at multiple processors. According to its definition, a DT can span nodes boundary and carry scheduling parameters to each node in the chain. At each local node, the correspondent local scheduler will schedule that DT based on its scheduling information. Each distributable thread in RTC2 is identified by a unique system wide identifier called a Globally Unique Id (GUID). A distributable thread may have one or more execution scheduling parameters, e.g., priority, time-constraints (such as deadlines), and importance. These parameters specify the acceptable end-to-end timeliness for completing the sequential execution of operations in CORBA object instances that may reside on multiple physical endsystems. Below we describe the key interfaces and properties of distributable threads in the RTC2 specifications:
Scheduling segment. A distributable thread comprises one or more scheduling segments. A scheduling segment is a code sequence whose execution is scheduled according to a distinct set of scheduling parameters specified by the application. For example, the worst-case execution time, deadline, and criticality.
Scheduling points. An application and ORB interact with the RTC2 dynamic scheduler at pre-defined points to schedule distributable threads in a DRE system. These scheduling points allow an application and ORB to provide the RTC2 dynamic scheduler information about the competing tasks in the system, so it can make scheduling decisions in a consistent and predictable manner. Scheduling points 1-3 in Figure 1 are points where an application interacts with the RTC2 dynamic scheduler.
Scheduling points 4-7 are points where an ORB interacts with the RTC2 dynamic scheduler, i.e., when remote invocations are made between different hosts. The ORB interacts with the RTC2 dynamic scheduler at points where the remote operation invocations are sent and received. Client-side and server-side interceptors are therefore installed to allow interception requests as they are sent and received.
As depicted in Figure 1, the distributable thread interacts with the RTC1.2 dynamic scheduler, which is responsible for the allocation of CPU resources, to meet the QoS needs of the applications that share the ORB endsystem. The problem with such an architecture is that all scheduling decisions are assumed to be local – that is, a local scheduler on each endsystem uses the same propagated scheduling information to make local scheduling decisions. These local schedulers do not have a global view of the overall system. This can lead to local enforcement decisions that fail to achieve the maximum possible global system performance. Another problem is that the RTC1.2 dynamic scheduler is not able to regulate access to the communication support when sending a request or reply. These problems can be solved by using a Distributed Scheduling Service (DSS) framework that works with application-specified end-to-end scheduling parameters and with local scheduling mechanisms to make globally sound scheduling decisions for the system. To manage network access, an RTC1.2 framework must resolve a number of design challenges. Below we examine two of them:
- Designing interactions between DTs and DSS when sending a message on the CAN bus.
- Determining the message priority using a mapping from global CORBA priority of the DT to CAN priority.

IV. Proposed architecture
A. Task Model
In this section, we define how the distributable thread interacts with the distributed scheduling service when sending a request or a reply, in order to improve network behavior. The problem here is how the DSS can set the network priority of a message sent over the CAN bus using the scheduling parameters of the DT. To solve this problem, we develop a task-and-subtask model that describes the execution of the distributable thread over the distributed system. In our model the distributable thread DT is modeled as a real-time task (T); the scheduling parameter elements of the distributable thread represent the real-time attributes of the task. Two scheduling parameter elements are taken into consideration in the model: 1) the distributable thread deadline and 2) the distributable thread execution time. At any given instant, each distributable thread has only one execution point in the whole system, i.e., it executes a code sequence consisting of nested distributed and/or local operation invocations. The code sequence executing in one node of the distributed system is mapped to a subtask of the original task mapped to the distributable thread. We believe that this model encompasses the communication subsystem, i.e., the CAN bus: sending a message can be viewed as another type of subtask. For instance, let T be a task composed of two subtasks T1 (running on node N1) and T2 (running on node N2), and say that after T1 completes, it is necessary to send a message from N1 to N2 (send_request()) containing the inputs for T2. Then, after T2 finishes, it is necessary to ship the final result to some other site (send_reply()). The two transmissions can be seen as two additional subtasks, Tn and Tm, so that the global task is really T = T1, Tn, T2, Tm. Below we describe the basic formalism of our task model.

A distributed real-time system consists of several nodes representing system components.
Each node manages one or more resources, for example, a database, a cycle server, or a communication channel. At each node, there is a real-time scheduler prioritizing tasks according to some real time queueing discipline, e.g., earliest deadline first (EDF).
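Such a per-node EDF queueing discipline can be sketched with a priority queue keyed on absolute deadlines (class and method names are illustrative, not from the paper):

```python
import heapq

class EDFScheduler:
    """Minimal earliest-deadline-first ready queue for a single node."""

    def __init__(self):
        self._ready = []  # heap of (absolute_deadline, task_name)

    def submit(self, task_name, deadline):
        heapq.heappush(self._ready, (deadline, task_name))

    def dispatch(self):
        """Pop and return the ready task with the earliest absolute deadline."""
        _, task_name = heapq.heappop(self._ready)
        return task_name

# A task with the earlier deadline is dispatched first, regardless of
# submission order.
sched = EDFScheduler()
sched.submit("T2", deadline=40)
sched.submit("T1", deadline=25)
```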
We consider two categories of tasks: local and global. A task that visits only one node and is submitted to a scheduler only once is termed a local task. On the other hand, a global task is a set of several related subtasks or stages, to be executed in series. The deadline of a global task is the time by which the last subtask must complete. Each subtask is associated with an execution node to which it is submitted for execution. Associated with each task X (whether it is local, global, or a subtask) are five attributes denoted by the following functions:
- \( \text{ar}(X) = \text{arrival (or submission) time of X}, \)
- \( \text{sl}(X) = \text{slack of X}, \)
- \( \text{dl}(X) = \text{deadline of X}, \)
- \( \text{ex}(X) = \text{real execution time of X}, \)
- \( \text{pex}(X) = \text{predicted execution time of X}. \)
The first four attributes are related by the following equation:
\[
dl(X) = \text{ar}(X) + \text{ex}(X) + \text{sl}(X) \quad (1)
\]
We assume that the deadline and the execution time of the task are known, since they are members of the scheduling parameter elements introduced with the distributable thread mapped to that task. The slack can be computed using the above equation. The next section introduces the main components that interact when a message is sent on the CAN bus and shows how the above model can be used to evaluate the network priority of this message.
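Rearranging equation (1) gives the slack directly; a minimal sketch:

```python
def slack(dl, ar, ex):
    """Slack of a task X, from equation (1): dl(X) = ar(X) + ex(X) + sl(X)."""
    return dl - ar - ex

# A task arriving at t=10 with deadline 100 and execution time 30
# has 60 time units of slack.
```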
B. System Architecture
Figure 2 depicts our overall system architecture. There are five essential components in this framework: the Distributable Thread (DT), the Local Scheduler, the DSS, the Local Info Collection and the System Info Repository. These components are independent and coordinate with each other.

Figure 2. System Architecture
A Distributable Thread (DT) is the schedulable entity in our system architecture. When a DT is spawned by the application it carries its specified scheduling parameters, including its end-to-end deadline, along as it traverses the nodes in its path. The Local Scheduler is defined in RTC1.2 to manage the local portion of a DT. In our architecture, we extend the definition and allow the local scheduler to interact with the DT so that the local scheduler can obtain and use global information. When a DT is spawned by an application, the DT communicates with the DSS to determine if it is schedulable alongside the existing DTs in the system, and receives from the DSS its globally sound scheduling parameters. In RTC1.2, the interface specifies that the DT pass its scheduling parameters to the Local Scheduler. We have preserved this interface, and extended the Local Scheduler to allow it to send these parameters to the DSS, which returns the globally sound scheduling parameters to be returned to the DT. A DT first sends its scheduling information to a local scheduler whenever the DT makes a request to begin, update or end a scheduling segment. The parameters passed along are determined by the scheduling discipline chosen by both DT and the local scheduler such as RM, DM, EDF and
MUF. If the DT spans multiple nodes our design dictates that it must pass an end-to-end deadline and a sequence of subtasks to the local scheduler. On each endsystem, a local scheduling component schedules access to resources within that endsystem, and a local information collection component records a variety of status information such as CPU utilization, progress of application activities, and success or failure of tasks in meeting their deadlines. This local status information is distilled into higher-level information such as predictability of local tasks in meeting intermediate deadlines toward timely completion of end-to-end activities. The higher level information is sent to a distributed information collection service called the system information repository.
The components cited above cooperate to enforce the end-to-end task scheduling. The DSS sets intermediate deadlines for an EDF local scheduler; it uses the end-to-end deadline and subtask execution times to calculate an intermediate relative deadline for each subtask.

**Figure 3. RTC1.2 Scheduling Points and DSS**
In RTC1.2, scheduling points are the points in time and/or code at which the local scheduler is invoked by the application, which may result in a change in the current schedule. Figure 3 shows all seven scheduling points that may interact with our DSS. The seven scheduling points are Begin_Scheduling_Segment (BSS), Update_Scheduling_Segment (USS), End_Scheduling_Segment (ESS), send_request, receive_request, send_reply, and receive_reply. In our current implementation, we use two of the seven scheduling points, send_request and send_reply, to interact with the DSS, to calculate the request and reply CAN priority and thus to optimize the network behaviour. At the send_request scheduling point, the DT sends all of its scheduling parameters to the DSS. These scheduling parameters include a system-wide unique name of the DT, its execution time and its end-to-end deadline. The question here is how to set the network priority of the CAN message sent when calling send_request(). The DSS, using the task model introduced previously, calculates the intermediate deadline of the network message and then maps this deadline to a CAN priority.
The figure below depicts the way the distributable thread, the DSS and the system repository interact in order to set the intermediate deadline of the message and thus its priority.

**Figure 4. Scheduling Point – send_request()**
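The interaction at the send_request scheduling point can be sketched as follows; all class and method names below are illustrative stand-ins for the DSS and system information repository components (the repository loosely models the paper's SystemRepository::get_utilization()), not an actual API:

```python
class SystemRepository:
    """Aggregates per-node execution-time reports for each global task."""

    def __init__(self):
        self._executed = {}  # DT GUID -> execution time accumulated so far

    def record(self, guid, cpu_time):
        self._executed[guid] = self._executed.get(guid, 0.0) + cpu_time

    def get_utilization(self, guid):
        return self._executed.get(guid, 0.0)

class DSS:
    """On send_request, derives the message's intermediate deadline."""

    def __init__(self, repository):
        self.repository = repository

    def on_send_request(self, guid, now, dl_T, ex_T, pex_msg, remaining_subtasks):
        # Remaining work = total execution time minus what already ran,
        # which includes the message transmission subtask itself.
        remaining = ex_T - self.repository.get_utilization(guid)
        slack_share = (dl_T - now - remaining) / remaining_subtasks
        return now + pex_msg + slack_share  # intermediate deadline of the message

repo = SystemRepository()
repo.record("dt-1", 30.0)  # 30 time units already executed on earlier nodes
msg_deadline = DSS(repo).on_send_request(
    "dt-1", now=30.0, dl_T=120.0, ex_T=60.0, pex_msg=5.0, remaining_subtasks=3)
```

Here the DT with 30 of its 60 time units already executed and an end-to-end deadline of 120 gets an intermediate message deadline of 55: its transmission time of 5 plus an equal one-third share of the 60 remaining slack units.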
C. Setting the network priority
As we indicated above, the CSMA/CA access protocol used to regulate access to the CAN bus is comparable to a priority-based dispatcher. Due to this analogy, optimal scheduling of soft real-time communication can be achieved by the EDF scheduling strategy. The first step done by the DSS is to calculate the message transmission deadline using subtask deadline assignment [9]. The second is to map this deadline to the message priority.
1) Transmission deadline assignment
We consider the distributable thread as a global task \( T \) that consists of \( n \) subtasks \( T_1, \ldots, T_n \). The message transmission on the CAN bus can be seen as a subtask \( T_i \), and its deadline can be calculated using the Equal Slack strategy (EQS) [9]. Under this strategy, each subtask (including the message transmission) receives its fair share of its global task's slack, which is achieved by dividing the total remaining slack equally among the remaining subtasks:
\[
dl(T_i) = ar(T_i) + pex(T_i) + \frac{dl(T) - ar(T_i) - \sum_{j=i}^{n} pex(T_j)}{n-i+1} \quad (2)
\]
- \( ar(T_i) \): arrival time of \( T_i \), i.e., the point of time when the ORB makes the send_request() or send_reply().
- \( pex(T_i) \): predictable execution time of \( T_i \). Here it corresponds to the time taken by the message transmission over the CAN bus, so \( pex(T_i) \) can be taken as the longest time needed to transmit message \( m \), \( C_m \), obtained by bounding the number of bits sent on the bus for this message. For CAN networks we have the following expressions:
\[ C_m = \left( \left\lfloor \frac{34 + 8S_m}{4} \right\rfloor + 47 + 8S_m \right) \tau_{bit} \quad \text{for CAN 2.0A, and} \]
\[ C_m = \left( \left\lfloor \frac{54 + 8S_m}{4} \right\rfloor + 57 + 8S_m \right) \tau_{bit} \quad \text{for CAN 2.0B.} \]
The term \( S_m \) is the number of bytes in the payload field of the message and \( \tau_{bit} \) is the bit time of the bus (i.e., 1 \( \mu \)s at a bus speed of 1 Mbps). For CAN 2.0A, this delay includes the 47-bit overhead per message; 34 bits of that overhead, together with the message content, are subject to bit stuffing. Recall that stuffing consists of an additional bit of opposite value inserted after 5 successive bits of identical value. The same reasoning applies to CAN 2.0B.
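As a sketch, the frame-length bounds above can be evaluated directly; the following Python function assumes the constants given in the text and an integer floor for the stuffing term (the function name and defaults are ours):

```python
def can_tx_time(payload_bytes, tau_bit=1e-6, extended=False):
    """Worst-case transmission time C_m of a CAN message, stuffing included.

    payload_bytes: S_m, the number of bytes in the payload field (0..8)
    tau_bit      : bit time of the bus (1 us at a bus speed of 1 Mbps)
    extended     : False for CAN 2.0A frames, True for CAN 2.0B frames
    """
    s = 8 * payload_bytes                          # payload bits
    if extended:
        return ((54 + s) // 4 + 57 + s) * tau_bit  # CAN 2.0B constants
    return ((34 + s) // 4 + 47 + s) * tau_bit      # CAN 2.0A constants
```

For example, a full 8-byte CAN 2.0A frame at 1 Mbps takes at most \((\lfloor 98/4 \rfloor + 111)\,\mu s = 135\,\mu s\).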
- \( dl(T) \): Corresponds to the global deadline of the task, i.e., the distributable thread.
- \( \sum_{j=i}^{n} pex(T_j) \): the predictable remaining execution time of the task. This expression cannot be evaluated directly. To solve this problem, we deduce it from the current execution time of the global task and its total execution time. As indicated above, the local information collection component records the CPU utilization of the subtasks on each node; this information is sent to the system information repository, which aggregates the subtask execution times of each global task to obtain its current execution time at the moment \( T_i \) is submitted, \( curr\_ex(T, ar(T_i)) \).
We implement \texttt{SystemRepository::get_utilization()} to evaluate this variable. The predictable remaining execution time of the task can be written:
\[ \sum_{j=i}^{n} pex(T_j) = ex(T) - curr\_ex(T, ar(T_i)) \quad (5) \]
After evaluating all terms of equation (2), the DSS calculates the intermediate deadline of subtask \( T_i \), which corresponds to the message transmission deadline. Below we describe how this deadline is mapped to a CAN message priority.
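The EQS computation of equation (2) can be sketched as follows (Python; the function name and the illustrative numeric values are ours, not from the paper):

```python
def eqs_deadline(ar_i, pex_remaining, i, n, dl_T):
    """Equal Slack (EQS) intermediate deadline for subtask T_i, equation (2).

    ar_i          : arrival time of T_i (when the ORB calls send_request)
    pex_remaining : predictable execution times of T_i, ..., T_n
    i, n          : index of the current subtask and total number of subtasks
    dl_T          : global end-to-end deadline of the distributable thread
    """
    remaining = n - i + 1                       # subtasks still to run
    slack = dl_T - ar_i - sum(pex_remaining)    # total remaining slack
    # T_i gets its own execution time plus its fair share of the slack
    return ar_i + pex_remaining[0] + slack / remaining
```

For example, three remaining subtasks of 10 ms each, arriving at t = 10 with a global deadline of 100, yield an intermediate deadline of 40 for the first of them.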
2) Priority Mapping
In this section, we describe how the message transmission deadline is mapped to real-time message priority on the CAN network.
<table>
<thead>
<tr>
<th>2-bit Protocol Type</th>
<th>6-bit Priority Level</th>
<th>7-bit Physical Node_ID</th>
<th>6-bit TX_Port Number</th>
<th>8-bit TX_CAN Object_ID</th>
</tr>
</thead>
</table>
Figure 5. Partitioning of a CAN-message identifier
As depicted in figure 5, the CAN-ID is divided into five fields. In [14] we described the partitioning of the CAN-ID in detail; in this paper we are specifically interested in the priority level field. The priority field of a real-time message encodes the time remaining until its transmission deadline. The \textit{transmission deadline} is the point in time, specified by the sending application object, by which a message must be completely transmitted to the receiving nodes. While a sending node is waiting for the bus, its communication subsystem periodically checks and updates the transmission deadline of the ready message.
Each value of the transmission laxity is mapped to a portion of future time, a \textit{priority tick} \( \Delta t_p \). At the end of each priority tick, a pending transmitter increases the priority of its real-time message by decrementing its transmission deadline field. The priority ticks are time intervals of a fixed length, with the first one beginning at the present time. Since there are only a limited number of different priorities (2\textsuperscript{6} or 64 priority levels), only a limited number of priority ticks are visible. Now the question is: how to map the deadline transmission message, calculated before, to its CAN priority?
Having a range \( \{P_{min} .. P_{max}\} \) for the priority field, a deadline \( \Delta L \) is mapped to a priority \( P \), where
\[ P = \left\lfloor \Delta L / \Delta t_p \right\rfloor + P_{min} \quad \text{if } \Delta L < (P_{max} - P_{min}) \Delta t_p, \]
and
\[ P = P_{max} \quad \text{if } \Delta L \geq (P_{max} - P_{min}) \Delta t_p. \]
The period \( \Delta t_p \) is called the priority slot, where:
- \( P_{min} \) is the highest priority = lowest binary value for real-time priorities.
- \( P_{max} \) is the lowest priority = highest binary value for real-time priorities.
We call \( P \) the network priority of the message, and \textit{network priority mapping} the function that takes the global CORBA priority of the distributable thread as input and returns the network priority as output.
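A minimal sketch of this deadline-to-priority mapping (Python; \( P_{min} = 0 \) and \( P_{max} = 63 \) reflect the 6-bit priority field, with the smaller value being the more urgent priority):

```python
def network_priority(laxity, tick, p_min=0, p_max=63):
    """Map the remaining transmission laxity to a CAN priority level.

    laxity : time remaining until the transmission deadline (Delta L)
    tick   : length of one priority tick / priority slot (Delta t_p)
    """
    if laxity >= (p_max - p_min) * tick:
        return p_max                        # deadline beyond the horizon
    return int(laxity // tick) + p_min      # floor(Delta L / Delta t_p) + P_min
```

A message whose deadline is less than one tick away thus gets the highest priority \( P_{min} \), while far-off deadlines saturate at \( P_{max} \).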
v. The network Latency-time Evaluation Methodology
The performance evaluation of the latency time is based on the evaluation of both the \textit{worst-case queuing delay} in the CAN MAC and physical sublayers and the longest time needed to transmit a message. We ignore factors with low probability that affect the latency time, such as message retransmission by Upper Layer Protocols (ULP), operating system scheduling uncertainties, etc. We limit the study of the latency time to the CAN physical and MAC sublayers. Let \( J \) be the queuing jitter of a message from the ULP to the medium access control layer; we assume this jitter is constant. The analysis of the worst-case response time can be derived from task scheduling theory and the real-time scheduling algorithms that can be applied by the transmitter and by the receiver to guarantee a minimum latency time for the exchanged frames.
Figure 6. Definition of Latency time
The latency time is defined as the difference of time between the instant indicating the beginning of the transmission request and the real beginning of the action generated by this one.
Let $W_m$ be the time taken by a station to gain access to the medium, $C_m$ the longest time needed to transmit a CAN message successfully, and $Ar$ the time taken by a receiver station to analyze the transmitted MAC frame.
$$ T_{lat} = J + W_m + C_m + Ar \quad (6) $$
In what follows, we assume that the jitter $J$ and $Ar$ have constant values. Since $C_m$ is the longest time taken to transmit a CAN (2.0B) message successfully, we have:
$$ C_m = \left( \left\lfloor \frac{54 + 8S_m}{4} \right\rfloor + 57 + 8S_m \right) \tau_{bit} $$
$W_m$ is the worst-case queuing delay of message $m$ due to both higher priority messages pre-empting message $m$, and a lower priority message that has already acquired the bus [16].
$$ W_m = B_m + \sum_{\forall j \in hp(m)} \left\lceil \frac{W_m + J_j + \tau_{bit}}{T_j} \right\rceil C_j \quad (7) $$
where \( hp(m) \) is the set of messages with a priority higher than that of \( m \), \( T_j \) is the period of message \( j \), and the blocking factor is:
$$ B_m = \max_{k \in lp(m)}(C_k) \quad (8) $$
with \( lp(m) \) the set of messages with a priority lower than that of \( m \).
Finally, we obtain the latency time expression, introduced by the CAN physical and MAC layer as:
$$ T_{lat} = J + B_m + \sum_{\forall j \in hp(m)} \left\lceil \frac{W_m + J_j + \tau_{bit}}{T_j} \right\rceil C_j + \left( \left\lfloor \frac{54 + 8S_m}{4} \right\rfloor + 57 + 8S_m \right) \tau_{bit} + Ar \quad (9) $$
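Equation (7) is a fixed-point equation in \( W_m \) and, as in classical response-time analysis, is usually solved by iteration. A sketch (Python; `hp` is an assumed list of \((J_j, T_j, C_j)\) triples for the higher-priority messages):

```python
import math

def queuing_delay(B_m, hp, tau_bit, max_iter=1000):
    """Iterate the recurrence of equation (7) until W_m reaches a fixed point.

    B_m     : blocking time from equation (8)
    hp      : list of (J_j, T_j, C_j) triples for higher-priority messages
    tau_bit : bit time of the bus
    """
    w = B_m
    for _ in range(max_iter):
        w_next = B_m + sum(math.ceil((w + J_j + tau_bit) / T_j) * C_j
                           for J_j, T_j, C_j in hp)
        if w_next == w:          # fixed point: worst-case queuing delay
            return w
        w = w_next
    raise RuntimeError("recurrence did not converge (bus overloaded?)")
```

The iteration is guaranteed to terminate when the bus utilization of the higher-priority messages is below 100%.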
vi. Conclusion
With the work described herein, the CAN bus has been rendered more usable in the field of distributed real-time systems. We designed an interaction between the distributable thread and the distributed scheduling service to optimise network behaviour. This interaction aims to integrate network resource control into high-level middleware, enabling a new generation of flexible DRE applications that have more precise control over their end-to-end resources. The priority-based mechanism and the EDF scheduling strategy used within the context of this work are suited to soft real-time communication systems.
One promising research direction is to combine priority-based mechanisms with reservation mechanisms, and to combine this strategy with hybrid real-time bus scheduling mechanisms for CAN.
vii. References
PDF hosted at the Radboud Repository of the Radboud University Nijmegen
The following full text is a publisher's version.
For additional information about this publication click this link.
http://hdl.handle.net/2066/73145
Please be advised that this information was generated on 2019-08-28 and may be subject to change.
Evolving Fixed-parameter Tractable Algorithms
Stefan A. van der Meer $^a$ Iris van Rooij $^{a,b}$ Ida Sprinkhuizen-Kuyper $^{a,b}$
$^a$ Radboud University Nijmegen, Department of Artificial Intelligence
$^b$ Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour
Abstract
One effective means of computing NP-hard problems is provided by fixed-parameter tractable (fpt-) algorithms. An fpt-algorithm is an algorithm whose running time is polynomial in the input size and superpolynomial only as a function of an input parameter. Provided that the parameter is small enough, an fpt-algorithm runs fast even for large inputs. In this paper, we report on an investigation of the evolvability of fpt-algorithms via Genetic Programming (GP). The problem used in this investigation is the NP-hard 2D-Euclidean Traveling Salesman Problem (TSP), which is known to be fpt if the number of points not on the convex hull is taken as the parameter. The algorithm evolved in our GP study turns out to have clear characteristics of an fpt-algorithm. The results suggest GP can be utilized for generating fpt-algorithms for NP-hard problems in general, as well as for discovering input parameters that could be used to develop fpt-algorithms.
1 Introduction
Many computational problems, including those figuring in computational cognitive theories, are NP-hard. Traditionally, such NP-hard problems are considered intractable for all but small input sizes [3]. This has led applied computer scientists to focus attention on developing inexact (heuristic) methods for approaching NP-hard problems, and cognitive scientists to reject NP-hard problems as psychologically implausible models of human cognition [14]. However, it is known that certain NP-hard functions can be computed in time that is polynomial in the overall input size and superpolynomial in only a small aspect of the input, called the parameter. Problems for which this holds are called fixed-parameter tractable and are said to belong to the complexity class FPT [2]. As long as the parameter is small enough for the instances of interest, an NP-hard problem in FPT can be considered efficiently solvable.
How do we know if a given problem is in FPT for some parameter $k$? One way of finding out is by designing an algorithm that solves the problem and establish that its running time can be expressed as a polynomial function of the input size, $n$, and a superpolynomial function of $k$ (i.e., time $O(n^\alpha f(k))$, where $\alpha$ is some constant and $f(.)$ is a function depending only on $k$). Designing such an algorithm can be technically quite challenging, however, especially if the relevant parameter is yet to be discovered. It is for this reason that we investigate here the utility of genetic programming (GP) as a general method for developing or discovering fpt-algorithms for NP-hard problems.
Genetic programming (GP) is an evolutionary computation technique used to evolve computer programs [6, 13]. Populations of programs are evaluated and the fittest individuals are ‘bred’ to form new populations. Breeding is performed by applying genetic operations such as crossover, which creates new programs by recombining random parts of two selected programs, and mutation, where a random part of a program is randomly altered to form a new program. We used GP to evolve an algorithm that solves instances of the 2-dimensional Euclidean Traveling Salesman problem (TSP): given a set of points (‘cities’) in the plane, find the shortest tour visiting all points and returning to its starting point. This problem is known to be NP-hard and in FPT if the number of inner points is taken as a parameter [1]. The inner points of a TSP instance are the points that are in the interior of the convex hull. GP has often been applied to finding heuristic algorithms for TSP (see for example [12]), but to our knowledge no attempts to use GP to find fpt-algorithms exist to this date.
The aim of this research is to test whether or not an fpt-algorithm for TSP can be evolved using GP. Also of interest is whether GP can be used to discover potentially interesting input parameters for use in
developing new fpt-algorithms for NP-hard problems in general. The rest of this paper is organized as follows. Section 2 describes our method, Section 3 provides our results and Section 4 concludes.
2 Method
In GP, when evaluating an evolved program, called an individual, it is executed within a context of predefined supporting code, referred to as the environment. As this environment does not evolve, its functionality remains constant over all evaluations. The environment combined with the individual forms the algorithm. The primitive set is the set of functions, variables and constants available to the GP process for generating programs. As tree-based GP [13] was used, functions are referred to as function nodes (forming internal nodes in a program’s tree), while variables and constants are terminal nodes (forming the leaves). Lastly, where the primitive set and environment define the search space, it is the fitness function that defines the goal of the search process [13], with individuals assigned a higher fitness value having a higher chance of being selected for breeding.
2.1 The environment
For the environment a structure was chosen similar to the one used in [12]. In this environment, a tour is built step by step, with the evolved individual forming a function that determines for each step what the next city in the tour should be. Algorithm 1 contains a pseudocode version of the environment. For each TSP instance used to evaluate a given individual, the environment loops through all cities in the problem instance that have not yet been ‘visited’ (i.e., that are not yet part of the tour). For each city, the evolved function calculates a score. When all unvisited cities have been scored, the city with the lowest score is selected. This city is added to the tour, and is therefore considered ‘visited’. This process repeats itself until the tour includes all cities in the problem instance. In effect the algorithm ‘travels’ from city to city until it has visited them all and the tour is complete. If at each step the evolved function has given the best score to the correct city, the algorithm has found an optimal tour. In case of a Nearest Neighbor heuristic, it would score each city according to its distance from the ‘current’ city (i.e., the distance from the city last added to the tour to the city being evaluated), which will not yield an optimal tour for many problem instances.
Algorithm 1 The environment represented in pseudocode.
```plaintext
city-start = random city
city-current = city-start
while not visited all cities do
    selected = None
    bestscore = ∞
    for all unvisited cities do
        city-eval = next unvisited city to evaluate
        score = result of evolved function
        if score < bestscore then
            bestscore = score
            selected = city-eval
        end if
    end for
    add selected city to tour
    city-current = selected
end while
return length of tour
```
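A Python sketch of this environment with a pluggable scoring function (the random starting city is replaced by a fixed one for reproducibility; `nn_score` corresponds to the Nearest Neighbor individual of Program 2):

```python
import math

def build_tour(cities, score):
    """Greedy tour construction of Algorithm 1: repeatedly visit the
    unvisited city with the lowest score according to the evolved function."""
    start = cities[0]                  # fixed start in this sketch; the
    tour = [start]                     # paper uses a random starting city
    unvisited = set(cities) - {start}
    current = start
    while unvisited:
        best = min(unvisited, key=lambda c: score(current, c, unvisited))
        tour.append(best)
        unvisited.remove(best)
        current = best
    return tour

def nn_score(current, candidate, _unvisited):
    """Nearest Neighbor scoring (Program 2): plain Euclidean distance."""
    return math.dist(current, candidate)
```

An optimal solver in this framework only differs in the `score` function it plugs in, exactly as the evolved individuals do.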
This structure was chosen because it constrains the evolved function to the specific task of solving TSP. The structure allows for a wide range of solvers, from purely heuristic (e.g., Nearest Neighbor) variations, to optimal exhaustive searchers. Which exact algorithms can be constructed is constrained by the primitive set.
<table>
<thead>
<tr>
<th>Name</th>
<th>Return type</th>
<th>Child nodes</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>distance</td>
<td>Number</td>
<td>2 of type city</td>
<td>Returns the distance between the two given cities.</td>
</tr>
<tr>
<td>if-on-convex</td>
<td>Number</td>
<td>1 of type city, 2 of type number</td>
<td>If the given city is on the convex hull, returns the result of evaluating the first numeric child node. Else it returns the result of the second.</td>
</tr>
<tr>
<td>for-loop-X</td>
<td>Number</td>
<td>1 of type number</td>
<td>Loops through unvisited cities, evaluating the child node for each one and adding up the result to a running total. This total is returned. X is a number referring to the associated variable-node, see Table 2.</td>
</tr>
</tbody>
</table>
Table 1: Function set, domain-specific functions
2.2 The primitive set
Tree-based GP was used, so an evolved algorithm forms a tree structure consisting of any valid combination of function nodes and terminal nodes. Strong typing was used to enforce type requirements of certain nodes [11]. At its root, the tree returns a real number: the calculated score. All function nodes and many terminal nodes return this type, meaning they can form the root of the tree, allowing for a wide variety of possible programs.
The basic concept behind the primitive set was also inspired by the research of [12], in that the primary tool for scoring a city to be used in the evolved function was distance. However, unlike their research, in our experiment we required the primitive set to be sufficient for specific types of algorithms other than heuristics such as Nearest Neighbor. The first type is exhaustive algorithms, the second is fpt-algorithms.
2.2.1 Iteration and recursion
Some form of iteration or recursion is needed in order for an evolved program to implement something more complex than a heuristic of a complexity that is linear to the size of its tree. Therefore, both iteration and recursion were implemented and added to the primitive set.
Traditional implementations of iteration in GP [8, 9] do not allow for nested loops, nor do they allow for a loop counter or element reference to be used by other nodes in the tree when iterating over a vector. In this research, complex nested loop structures are of interest, as they allow for more advanced calculations, and more emergent computational complexity. Therefore, a simplified version of the iteration implementation described in [4, 5] was used. A for-loop node was implemented which iterates over all unvisited cities when called. On each iteration, it sets a variable to reference the current unvisited city in the iteration, and evaluates its child node. The result is added to a running total, which is returned at the end of the loop. The variable can be accessed through a special terminal node that can only be generated inside the subtree of a given for-loop node, as it will be linked to that specific loop node and only has a value inside its ‘scope’.
Recursion was implemented in the form of a terminal node. When this node is called while calculating the score for the city under evaluation, it adds the evaluated city to the tour and recursively calls the function holding the algorithm. By doing this, it causes the calculation of the rest of the tour as the algorithm would find it, if the given city were to be added to the tour. The length of this tour is returned, and the node returns this in turn as its result, after removing the evaluated city from the tour.
2.2.2 Primitive set contents
With the evolved program calculating a score using real numbers, it seemed useful to include basic mathematical operations for calculating and combining results from other nodes. The function set contained function nodes for addition, subtraction, multiplication and division, and for min and max operations. All these nodes require two children returning numbers, and return a single number themselves after performing their mathematical operation on the results of the child nodes. Besides these basic operations, certain
domain-specific functions were necessary, listed in Table 1. The distance node is used to find the distance between two cities. As arguments for this function, several terminal nodes exist that return a given city, listed in Table 2. The relevant nodes for iteration and recursion are also included, and are as defined earlier. Lastly, a node was added that represents the knowledge of the convex hull of a given TSP instance. This if-on-convex node checks if a given city is on the convex hull. If so, then it evaluates the first of its subtrees; if not, then it evaluates the other. Therefore, an evolved program using this node can alter its method of calculating a score (and its result) depending on whether the evaluated city is on the convex hull or not.
### 2.3 Fitness function
Traditionally, GP experiments use a single value to determine the fitness, such as the difference between the length of a shortest tour (i.e., the optimal solution for TSP) and the length of the tour that was found by an individual. In this experiment, however, there are two relevant values: speed and accuracy.
Speed was measured using the number of tree nodes evaluated in creating an individual’s tour, where a lower number of evaluations is faster and therefore better (on this measure). Individuals with many loops or with a recursion would evaluate a larger number of nodes, and score worse than a Nearest Neighbor-like individual. Equation 1 shows how the speed measure was calculated, where the number of instances refers to the instances used in evaluating the individual.
\[
\text{fitness}_{\text{speed}} = \frac{\text{number of instances}}{\text{nodes evaluated}} \quad (1)
\]
Accuracy was measured as the difference between the length of an optimal solution to the TSP instance the individual just solved, and the length of the tour the individual found. Equation 2 shows the exact calculation.
\[
\text{fitness}_{\text{accuracy}} = \frac{1}{1 + \text{tour length error}} \quad (2)
\]
Note that Equations 1 and 2 are chosen such that \( \text{fitness}_{\text{speed}} \) is decreasing in the number of evaluated nodes, and \( \text{fitness}_{\text{accuracy}} \) is decreasing in the length of the produced tour.
In exploratory runs it became clear that, if both \( \text{fitness}_{\text{speed}} \) and \( \text{fitness}_{\text{accuracy}} \) independently contributed to overall fitness, then the Nearest Neighbor (NN) individuals and exhaustive search individuals, consisting of only three and one nodes respectively, would always be selected for breeding. Apparently their good speed and good accuracy respectively would always ‘beat’ more complex individuals that were in their initial stages of development. This made it practically impossible for more complex individuals to exist for
longer than a single generation, and therefore difficult for such individuals to evolve into more ‘fit’ variants. To prevent the search process from fixating on the two extremes of exhaustive versus NN search, lower limits were set on both speed and accuracy. These limits would start at a high level in the beginning of the run, but would become lower with each generation until (i) the accuracy limit would make Nearest Neighbor search unfeasible and (ii) the speed limit would make exhaustive search unfeasible. It was our expectation that the introduction of such strict lower limits would enable the evolution process to go beyond the fastest heuristic approach and the intractable exact approach, and explore instead accurate yet tractable algorithms such as fpt-algorithms. The fitness functions with the additional lower limits are given in Equation 3.
\[
\begin{aligned}
&\text{if nodes evaluated} > \text{maximum nodes} \ \lor\ \text{tour length error} > \text{maximum error}: \\
&\qquad \text{fitness}_{\text{speed}} = \text{fitness}_{\text{accuracy}} = 0, \\
&\text{otherwise}: \\
&\qquad \text{fitness}_{\text{speed}} = \frac{\text{number of instances}}{\text{nodes evaluated}}, \quad \text{fitness}_{\text{accuracy}} = \frac{1}{1 + \text{tour length error}}
\end{aligned} \quad (3)
\]
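The limit-augmented fitness of Equation 3 can be sketched as follows (Python; the parameter names are ours):

```python
def fitness(nodes_evaluated, tour_length_error, n_instances,
            max_nodes, max_error):
    """Speed/accuracy fitness pair with the lower limits of Equation 3.

    Exceeding either limit zeroes both fitness values, which eventually
    rules out pure Nearest Neighbor and pure exhaustive-search individuals
    as the limits tighten over the generations.
    """
    if nodes_evaluated > max_nodes or tour_length_error > max_error:
        return 0.0, 0.0
    return (n_instances / nodes_evaluated,      # Equation 1: speed
            1.0 / (1.0 + tour_length_error))    # Equation 2: accuracy
```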
In comparing two individuals, it is very likely that neither is better in both speed and accuracy, particularly in the earlier generations of a run, and especially given the existence of the extreme individuals mentioned earlier. A simple criterion was introduced to counteract this effect: whenever individuals were compared during selection and neither was better on both speed and accuracy, there was a chance they were then compared on either speed or accuracy alone to find a winner. This probability started at a high level and decreased as the process advanced in generations.
### 2.4 Experiment details
The GP experiment was implemented using the evolutionary computation for Java toolkit, ECJ [10]. Many GP parameters were left at the defaults used by Koza [7], such as those involving the building of initial trees in the population. Experiment-specific parameters and their values are listed in Table 3.
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Generations</td>
<td>50</td>
</tr>
<tr>
<td>Population size</td>
<td>128</td>
</tr>
<tr>
<td>Crossover rate(^1)</td>
<td>0.80</td>
</tr>
<tr>
<td>Mutation rate(^1)</td>
<td>0.10</td>
</tr>
<tr>
<td>Reproduction rate(^1)</td>
<td>0.10</td>
</tr>
<tr>
<td>TSP instance size</td>
<td>7</td>
</tr>
<tr>
<td>TSP instances per evaluation</td>
<td>50</td>
</tr>
<tr>
<td>Total pool of random instances</td>
<td>500</td>
</tr>
</tbody>
</table>
\(^1\) The crossover, mutation and reproduction rates determine the probability of said genetic operation being used in breeding a new individual. See [7, 13] for details on these operations.
The TSP instances used in the experiment each consisted of 7 points. This number was kept deliberately low to ensure that evaluation progressed at a reasonable rate. Larger instances meant that individuals using (exhaustive) recursion would spend a large amount of time per instance, making the evaluation of a large number of individuals take an impractical amount of time. Population size was limited to 128 for the same reason. The instances were generated beforehand, as a set of 7 random coordinates in an area of 500 by 500 points. Every possible number of inner points (0 to 4, as a minimum of 3 points form the convex hull) was equally represented in the pool of instances. This was achieved by randomly generating point sets and disregarding instances that did not match the required number of inner points. For each generated instance both the optimal solution and the points on the convex hull were calculated. Each individual was evaluated on 50 TSP instances randomly selected from a pool of 500 available instances.
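Counting the inner points of a generated instance only requires its convex hull; a self-contained sketch using Andrew's monotone chain (our implementation, not the one used in the experiment):

```python
def _cross(o, a, b):
    """z-component of the cross product OA x OB (turn direction)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Hull vertices via Andrew's monotone chain (collinear points dropped)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inner_points(points):
    """The fpt parameter k: points not appearing as hull vertices."""
    hull = set(convex_hull(points))
    return [p for p in points if p not in hull]
```

Rejection sampling as described above then amounts to regenerating a point set until `len(inner_points(pts))` equals the desired k.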
3 Results
3.1 Best evolved individual
In the GP experiment, one type of individual was consistently selected as the best individual of a run. The individual’s code is shown in Program 1. This individual would generally develop fairly early in the run, between generations 10 and 25 (of 50), and due to its relatively high fitness it would immediately form the best individual of the generation and remain so until the end of the run. The early development is not surprising given the structure of the individual. It is a small tree consisting of only a few nodes, and substantial parts of the tree are formed by nodes that make up two common individuals in the population. The subtree \((\text{distance city-current city-eval})\) is equal to the tree of a Nearest Neighbor (NN) individual, shown in Program 2. Similarly, the \((\text{recursion})\) node would on its own form the entirety of an exhaustive search individual, shown in Program 3.
Program 1 The program of the best evolved individual.
\[
\text{(if-on-convex city-current}
\text{ (distance city-current city-eval)}
\text{ (recursion))}
\]
Program 2 The program of the common Nearest Neighbor individual.
\[
\text{(distance city-current city-eval)}
\]
Program 3 The program of the common exhaustive search individual.
\[
\text{(recursion)}
\]
The behavior of the best evolved individual (Program 1) is straightforward: If the current city is part of the convex hull, it travels to the nearest neighboring city. Otherwise, it recursively builds possible extensions of the tour resulting from travelling to any of the unvisited cities. This recursive process repeats until the program encounters again a point on the convex hull, in which case it will extend each of the partial tours constructed so far by traveling to the nearest unvisited neighbor. From all tours constructed in the process, the program determines which is the shortest, and travels to the unvisited city that has that tour as a result. Note that if the instance the algorithm is solving happens to have few inner points, say only 1, it will do much less recursion than an exhaustive solver. At the same time, it will give more accurate results than NN when solving more complex instances with multiple inner points, as the ‘look ahead’ in the recursion allows it to avoid certain bad choices that NN would make. If tours with such bad choices occur after a recursion has been entered, they will most likely be discarded due to their higher length.
3.2 Fpt-characteristics
Is the best evolved individual an fpt-algorithm for TSP? To address this question, we first consider the algorithm's time behavior: The worst-case\(^2\) time-complexity of the best individual is \(O(k! \ (n - k)^2) = O(k! \ n^2)\), where \(n\) is the total number of points and \(k\) is the number of inner points. For instances with zero inner points, the program behaves as an NN individual (with time-complexity \(O(n^2)\)), and for instances with a very large number of inner points, performance is nearer an exhaustive search (with time-complexity \(O(n!)\)). We also investigated the algorithm's average-case time-complexity by running the algorithm on the 500 random instances in the pool. The results, depicted in Figure 1a, show that the average time required for Program 1 to find a tour grows rapidly with the number of inner points, with its running time being close to that of the NN heuristic for few inner points and growing closer to that of the exhaustive algorithm as the number of inner points increases. In sum, the evolved program indeed exploits the number of inner points for
\(^2\)Due to the nature of both the environment and the evolved program itself, an individual’s time-complexity depends on the point selected as starting point (which is a random selection). If this point is an inner point, for example, more recursion will be performed than if that city is not visited until later on in the computation.
Figure 1: Speed (a) and accuracy (b) as a function of number of inner points, for the evolved program, NN, and exhaustive search, averaged over all 500 random instances and all possible starting cities.
the efficient computation of instances for which that parameter is small, and the running-time behavior is as one expects of an fpt-algorithm (i.e., the running time can be expressed as a polynomial function of input size, \(n\), and a superpolynomial function of only the parameter \(k\)).
As it turns out, the best evolved individual does not meet the second criterion for being an fpt-algorithm, viz., exactness. Due to its reliance on NN to select the optimal tour when travelling over the convex hull, it inherits some of NN’s flaws. Figure 2 shows an example of a trivial instance (i.e., one without any inner points) where, for a certain starting point, NN fails to find an optimal tour. Such instances are not rare: For only 4 of the 100 generated instances with no inner points, NN is able to find an optimal tour regardless of the starting point, with on average 3.32 out of 7 starting points per instance resulting in a less than optimal tour. Be that as it may, the performance of the best evolved individual is much better than NN for all instances with at least one inner point (see Figure 1b). Hence, even if the best evolved individual is not an exact algorithm for TSP, it clearly outperforms a polynomial-time heuristic, like NN.
Figure 2: Example instance with no inner points where NN does not find an optimal tour when starting from a certain city (starting city shown in black).
4 Conclusion
The program evolved in our GP experiment shows clear characteristics of an fpt-algorithm, even though strictly speaking it is not: The program does not solve all TSP instances optimally, though it is much more accurate than its polynomial-time competitor, the NN heuristic. Also, the program is characterized by an fpt running time of \(O(k! n^2)\). This result is promising with regard to the utility of GP for developing fpt-algorithms for NP-hard problems in general and discovering relevant parameters that can be used in such algorithms. We think that the fact that the best individual in our experiment was not an exact algorithm for TSP does not detract from this point, because an evolved fpt-heuristic can give a clear suggestion as to the direction in which an fpt-algorithm can be sought. After all, TSP is known to be in FPT if the parameter is the number of inner points [1], and the fpt-heuristic evolved in our experiment used this same parameter to bound its superpolynomial running time. Besides the important step of discovering the parameter, it is conceivable that fpt-like inexact individuals themselves can be transformed into fpt-algorithms; and even if they cannot, an evolved (inexact) fpt-heuristic may still strike a better balance between speed and accuracy for instances of practical interest, than do available polynomial-time heuristics.
References
Learning Relational Representations with Auto-encoding Logic Programs
Sebastijan Dumančić1*, Tias Guns2, Wannes Meert1 and Hendrik Blockeel1
1KU Leuven, Belgium
2VUB, Belgium
{sebastijan.dumancic, wannes.meert, hendrik.blockeel}@cs.kuleuven.be, tias.guns@vub.be
Abstract
Deep learning methods capable of handling relational data have proliferated over the last years. In contrast to traditional relational learning methods that leverage first-order logic for representing such data, these deep learning methods aim at representing symbolic relational data in Euclidean spaces. They offer better scalability, but can only numerically approximate relational structures and are less flexible in terms of reasoning tasks supported. This paper introduces a novel framework for relational representation learning that combines the best of both worlds. This framework, inspired by the auto-encoding principle, uses first-order logic as a data representation language, and the mapping between the original and latent representation is done by means of logic programs instead of neural networks. We show how learning can be cast as a constraint optimisation problem for which existing solvers can be used. The use of logic as a representation language makes the proposed framework more accurate (as the representation is exact, rather than approximate), more flexible, and more interpretable than deep learning methods. We experimentally show that these latent representations are indeed beneficial in relational learning tasks.\(^1\)
1 Introduction
Deep representation learning (DL) [Goodfellow et al., 2016] has proven itself to be an important tool for modern-day machine learning (ML): it simplifies the learning task through a series of data transformation steps that define a new feature space (a so-called latent representation), making data regularities more explicit. Yet, DL progress has mainly focused on learning representations for classifiers recognising patterns in sensory data, including computer vision and natural language processing, having a limited impact on representations aiding automated reasoning. Learning such reasoning systems falls under the scope of Statistical Relational Learning (SRL) [Getoor and Taskar, 2007], which combines the knowledge representation capabilities of first-order logic with probability theory and hence can express both complex relational structures and uncertainty in data. The main benefit of SRL models, one that most ML methods lack, is the ability to (1) operate on any kind of data (feature vectors, graphs, time series) using the same learning and reasoning principles, and (2) perform complex chains of reasoning and answer questions about any part of a domain (instead of one pre-defined concept).
Recent years have yielded various adaptations of standard neural DL models towards reasoning with relational data, namely Knowledge graph embeddings [Nickel et al., 2016] and Graph neural networks [Kipf and Welling, 2017; Hamilton et al., 2017]. These approaches aim to re-represent relational data in vectorised Euclidean spaces, on top of which feature-based machine learning methods can be used. Though this offers good learning capabilities, it sacrifices the flexibility of reasoning [Trouillon et al., 2019] and can only approximate relational data, but not capture it in its entirety.
This work proposes a framework that unites the benefits of both the SRL and the DL research directions. We start with the question:
Is it possible to learn latent representations of relational data that improve the performance of SRL models, such that the reasoning capabilities are preserved?
Retaining logic as a representation language for latent representations is crucial in achieving this goal, as the latent representation then inherits logic's reasoning capabilities. Moreover, it offers additional benefits. Logic is easy to understand and interpret (while DL is black-box), which is important for trust in AI systems. Furthermore, SRL methods allow for incorporation of expert knowledge and thus can easily build on previously gathered knowledge. Finally, SRL systems are capable of learning from a few examples only, which is in sharp contrast to typically data-hungry DL methods.
We revisit the basic principles of relational representation learning and introduce a novel framework to learn latent representations based on symbolic, rather than gradient-based computation. The proposed framework implements the auto-encoder principle [Hinton and Salakhutdinov, 2006] – one of the most versatile deep learning components – but uses logic programs as a computation engine instead of (deep) neural networks. For this reason, we name our approach Auto-encoding logic programs (Alps).
2 Auto-encoding Logic Programs
Auto-encoders learn new representations through the reconstruction principle: the goal is to learn an encoder, mapping the input data to its latent representation, and a decoder, mapping the latent representation back to the original space so that the input data can be faithfully reconstructed. For a latent representation to be useful, it is important to prevent it from learning an identity mapping – often done by limiting the dimensionality and/or enforcing sparsity.
In neural auto-encoders, data is represented with vectors and mapping functions are matrices. Our goal is, intuitively, to lift the framework of auto-encoders to use first-order logic as a data representation language, and logic programs as mapping functions of encoder and decoder (Figure 1). In the following paragraphs, we describe the basic components of Alps.
Data. To handle arbitrary relational data, Alps represent data as a set of logical statements, such as \( \text{father(vader,luke)} \) (Figure 1, Input). These statements consist of constants representing the entities in a domain (e.g., \( \text{vader, luke} \)) and predicates indicating the relationships between entities (e.g., \( \text{father} \)). A ground atom is a predicate symbol applied to constants (e.g., \( \text{father(vader,luke)} \)); if an atom evaluates to true, it represents a fact. Given a set of predicates \( P \) and a set of constants \( C \) (briefly, a vocabulary \( (P, C) \)), the Herbrand base \( HB(P, C) \) is the set of all atoms that can be constructed using \( P \) and \( C \). A knowledge base is a subset of the Herbrand base; it contains all the atoms that evaluate to true.
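For illustration, a vocabulary \((P, C)\) and its Herbrand base can be sketched in Python by representing each ground atom as a tuple (an editorial sketch, not part of the Alps implementation):

```python
from itertools import product

def herbrand_base(predicates, constants):
    """All ground atoms constructible over a vocabulary (P, C).

    `predicates` maps each predicate symbol to its arity; a ground atom
    is encoded as a tuple (predicate, constant_1, ..., constant_arity)."""
    return {(pred,) + args
            for pred, arity in predicates.items()
            for args in product(constants, repeat=arity)}
```

With \(P = \{\text{father}/2, \text{female}/1\}\) and \(C = \{\text{vader}, \text{luke}\}\), the Herbrand base contains \(2^2 + 2 = 6\) atoms, and a knowledge base such as \(\{\text{father(vader,luke)}\}\) is one of its subsets.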
Mapping functions. The mapping functions of both encoder and decoder are realised as logic programs. A logic program is a set of clauses – logical formulas of the form \( h :- b_1, \ldots, b_n \), where \( h \) is called the head literal and \( b_i \) are body literals (comma denotes conjunction). A literal is an atom or its negation. Literals can contain variables as arguments; these are by definition universally quantified. Given a vocabulary \( (P, C) \), we call a literal a \( (P, C) \)-literal if its predicate is in \( P \) and its argument are constants in \( C \) or variables. Clauses are read as logical implications; e.g., the clause \( \text{mother(X,Y)} :- \text{parent(X,Y)}, \text{female(X)} \) states that for all \( X \) and \( Y \), \( X \) is a mother of \( Y \) if \( X \) is a parent of \( Y \) and \( X \) is female.
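A clause with positive body literals can be evaluated against a set of ground facts by naive grounding, i.e., trying every assignment of body variables to constants. The helper below is an editorial sketch (it assumes a range-restricted clause, with every head variable appearing in the body):

```python
from itertools import product

def eval_clause(head, body, facts):
    """Derive head facts for the clause `head :- body` over ground `facts`.

    A literal is a tuple (predicate, (var_1, ..., var_n)); facts are
    tuples (predicate, const_1, ..., const_n). Naive grounding: every
    binding of body variables to constants is tried."""
    constants = {c for f in facts for c in f[1:]}
    variables = sorted({v for _, args in body for v in args})
    derived = set()
    for binding in product(constants, repeat=len(variables)):
        env = dict(zip(variables, binding))
        if all((p,) + tuple(env[v] for v in args) in facts for p, args in body):
            derived.add((head[0],) + tuple(env[v] for v in head[1]))
    return derived
```

Applied to the example clause mother(X,Y) :- parent(X,Y), female(X), this derives exactly the mother facts entailed by the parent and female facts.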
Encoding program. Given an input vocabulary \( (P, C) \), an encoding logic program \( E \) (Fig. 1 middle left) is a set of clauses with \( (P, C) \)-literals in the body and a positive \( (L, C) \)-literal in the head, where \( L \) is a set of predicates that is disjoint with \( P \) and is extended by the learner as needed. \( E \) takes as input a knowledge base \( KB \subseteq HB(P, C) \) and produces as output a latent representation \( KB' \subseteq HB(L, C) \), more specifically the set of all facts that are implied by \( E \) and \( KB \).
Decoding program. A decoding logic program \( D \) similarly maps a subset of \( HB(L, C) \) back to a subset of \( HB(P, C) \). Its clauses are termed decoder clauses; they contain \( (L, C) \) literals in the body and a positive \( (P, C) \)-literal in the head.
Alps. Given encoding and decoding logic programs \( E \) and \( D \), their composition \( D \circ E \) is called an auto-encoding logic program (Alp). An Alp is lossless if for any \( KB \), \( D(E(KB)) = KB \). In this paper, we measure the quality of Alps using the following loss function:
**Definition 1** Knowledge base reconstruction loss. The knowledge base reconstruction loss (the disagreement between the input and the reconstruction), \( \text{loss}(E, D, KB) \), is defined as
\[
\text{loss}(E, D, KB) = |D(E(KB)) \Delta KB| \tag{1}
\]
where \( \Delta \) is the symmetric difference between two sets.
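Representing a knowledge base as a Python set of ground-atom tuples, and encoder and decoder as functions over such sets, the loss of Definition 1 is the size of a symmetric difference. The toy encoder and decoder below are deliberately lossy, for illustration only:

```python
def reconstruction_loss(encode, decode, kb):
    """|D(E(KB)) Δ KB|: facts lost plus facts wrongly invented."""
    return len(decode(encode(kb)) ^ kb)

# Toy Alp: the encoder collapses mother/father into latent1 ("parent"),
# and the decoder maps every latent1 fact back to mother -- so father
# facts are lost and spurious mother facts are invented.
def encode(kb):
    return {("latent1",) + f[1:] for f in kb if f[0] in ("mother", "father")}

def decode(latent):
    return {("mother",) + f[1:] for f in latent if f[0] == "latent1"}
```

For `kb = {mother(padme,leia), father(vader,leia)}` the reconstruction is `{mother(padme,leia), mother(vader,leia)}`, giving a loss of 2 (one missing fact, one false reconstruction).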
3 Learning as Constraint Optimisation
With the main components of Alps defined in the previous section, we define the learning task as follows:
**Definition 2** Given a knowledge base \( KB \) and constraints on the latent representation, find \( E \) and \( D \) that minimise \( \text{loss}(E, D, KB) \) and \( E(KB) \) fulfills the constraints.
The constraints on the latent representation prevent it from learning an identity mapping. For example, enforcing sparsity by requiring that the $E(KB)$ has at most $N$ facts. We formally define these constraints later.
Intuitively, learning Alps corresponds to a search for a well-performing combination of encoder and decoder clauses. That is, out of a set of possible encoder and decoder clauses, select a subset that minimises the reconstruction loss. To find this subset, we introduce a learning method inspired by the enumerative and constraint solving techniques from program induction [Gulwani et al., 2017] (illustrated in Figure 2). Given a $KB$ and predicates $P$, we first enumerate possible encoder clauses. These clauses define a set of candidate latent predicates $L$ which are subsequently used to generate candidate decoder clauses. The obtained sets, which define the space of candidate clauses to choose from, are then pruned and used to formulate the learning task as a generic constraint optimisation problem (COP) [Rossi et al., 2006]. Such a COP formulation allows us to tackle problems with an extremely large search space and leverage existing efficient solvers. The COP is solved using the Oscar solver. The resulting solution is a subset of the candidate encoder and decoder clauses that constitute an Alp.
A COP consists of three components: decision variables whose values have to be assigned, constraints on decision variables, and an objective function over the decision variables that expresses the quality of the assignment. A solution consists of a value assignment to the decision variables such that all constraints are satisfied. In the following sections, we describe each of these components for learning Alps.
### 3.1 Decision Variables: Candidate Clauses
The COP will have one Boolean decision variable $ec_i$ for each generated candidate encoder clause, and a Boolean decision variable $dc_i$ for each generated candidate decoder clause, indicating whether a clause is selected (having the value 1) or not (having value 0).
To generate the candidate encoder clauses, we start from the predicates in the input data and generate all possible bodies (conjunctions or disjunctions of input predicates with logical variables as entities) up to a given maximum length $l$. Furthermore, we enforce that the predicates share at least one logic variable, e.g. $p_1(X,Y), p_2(Y,Z)$ is allowed while $p_1(X,Y), p_2(Z,W)$ is not. For each possible body, we then define a new latent predicate that will form the head of the clause. This requires deciding which variables from the body to use in the head. We generate all heads that use a subset of variables, with the maximal size of the subset equal to the maximum number of arguments of predicates $P$. Candidate decoder clauses are generated in the same way, but starting from the predicates $L$.
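Candidate-body enumeration can be sketched as follows. This is a simplification of the generation step described above: only positive conjunctive literals are produced, literals with repeated variables (e.g. p(X,X)) are skipped, connectedness is checked between consecutive literals only, and head generation is omitted:

```python
from itertools import combinations, product

def candidate_bodies(predicates, max_len=2, variables=("X", "Y", "Z")):
    """Enumerate connected conjunctions of positive literals.

    `predicates` maps a predicate name to its arity. A body of length
    >= 2 qualifies only if consecutive literals share a variable."""
    literals = [(p,) + args
                for p, arity in predicates.items()
                for args in product(variables, repeat=arity)
                if len(set(args)) == arity]       # no repeated variables
    bodies = []
    for n in range(1, max_len + 1):
        for combo in combinations(literals, n):
            varsets = [set(lit[1:]) for lit in combo]
            if all(a & b for a, b in zip(varsets, varsets[1:])):
                bodies.append(combo)
    return bodies
```

With two binary predicates and two variables this yields 4 single-literal bodies plus 6 connected pairs; bodies such as p1(X,Y), p2(Z,W) are correctly rejected once more variables are available.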
### 3.2 Constraints
#### Bottleneck Constraint
The primary role of constraints in Alps is to impose a bottleneck on the capacity of the latent representation; this is the key ingredient in preventing the auto-encoder from learning the identity mapping as $E$ and $D$. This is often done by enforcing compression in the latent representation, sparsity or both.
The straightforward way of imposing compression in Alps is to limit the number of facts in the latent representation. Preliminary experiments showed this to be a very restrictive setting. In Alps we impose the bottleneck by limiting the average number of facts per latent predicate through the following constraint
$$\sum_{i=1}^{N} w_i \, ec_i \leq \gamma G$$

where $ec_i$ are the decision variables corresponding to the encoder clauses, $w_i$ is the number of latent facts the encoder clause $ec_i$ entails, $G$ is the average number of facts per predicate in the original data representation and $\gamma$ is the compression parameter specified by the user. For example, in Figure 1, $G = 9/5$ and $w_i = 4$ for $latent1(X,Y) :- mother(X,Y); father(X,Y)$.
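As a toy check of this capacity constraint, assuming a list `w` of per-clause latent-fact counts and a 0/1 selection vector `ec` (the variable names are illustrative, not from the Alps implementation):

```python
def bottleneck_ok(w, ec, gamma, G):
    """Check the capacity constraint sum_i w_i * ec_i <= gamma * G."""
    return sum(wi * ei for wi, ei in zip(w, ec)) <= gamma * G
```

With the figure's numbers (G = 9/5, a single selected clause entailing 4 latent facts), the constraint holds for gamma = 3 but is violated for gamma = 2.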
#### Semantic Constraints
The secondary role of constraints is to impose additional structure to the search space, which can substantially speed up the search. The following set of constraints reduces the search space by removing undesirable and redundant solutions\(^3\). These constraints are automatically generated and do not require input from the user.
**Connecting encoder and decoder.** A large part of the search space can be cut out by noticing that the encoder clauses deterministically depend on the decoder clauses. For instance, if a decoder clause $mother(X,Y) :- latent1(X,Y), latent2(X)$ is
\(^2\)https://bitbucket.org/oscarlib/oscar/wiki/Home
\(^3\)Exact constraint formulations are in the supplementary material.
selected in the solution, then the encoder clauses defining the latent predicates \textit{latent1} and \textit{latent2} have to be selected as well. Consequently, encoder clauses are implied by decoder clauses and search only has to happen over candidate decoder clauses. The implication is modelled with a constraint ensuring that the final solution must contain an encoder clause defining a latent predicate \(l\) if the solution contains at least one of the decoder clauses that use \(l\) in the body.
\textbf{Generality.} Given the limited capacity of the latent representation, it is desirable to prevent the solver from ever exploring regions where clauses are too similar and thus yielding a marginal gain. One way to establish the similarity of clauses is to analyse the ground atoms the clauses cover; a clause \(c_1\) is said to be more \textit{general} than a clause \(c_2\) if all examples entailed by \(c_2\) are also entailed by \(c_1\). As \(c_2\) cannot bring new information if \(c_1\) is already a part of the solution, we introduce constraints ensuring that \textit{if a clause \(c_1\) is more general than a clause \(c_2\), at most one of them can be selected.}
\textbf{Reconstruct one of each input predicates.} If \(KB\) contains a predicate with a substantially larger number of facts than the other predicates in \(KB\), a trivial but undesirable solution is one that focuses on reconstructing the predicate and its facts while ignoring the predicates with a smaller number of facts. To prevent this, we impose the constraints ensuring that among all decoder clauses with the same input predicate in the head, at least one has to be a part of the solution. This, of course, does not mean all facts of each input predicate will be reconstructed. We did notice that this constraint allows the solver to find a good solution substantially faster.
3.3 Objective Function: The Reconstruction Loss
We wish to formulate the objective over all \textit{missing} (in \(KB\) but not being reconstructed) and \textit{false reconstructions} (produced by the decoder, but not in \(KB\)). To do so, we first obtain a union of \textit{latent facts} generated by each of the candidate encoder clauses; these are a subset of \(HB(L,C)\). These latent facts allow us to obtain a union of all ground atoms generated by the candidate decoder clauses; these form a \textit{reconstruction} and are a subset of \(HB(P,C)\). Additionally, for each ground atom in the reconstruction, we remember which candidate decoder clause reconstructed it.
We hence use the above correspondence between the candidate decoder clauses and the reconstructions to create an auxiliary Boolean decision variable \(rf_i\) for each possible ground atom in \(HB(P,C)\) that can be reconstructed. Whether it is reconstructed or not depends on the decoder clauses that are in the solution.
For example, assume that \(mother(padme,leia)\) can be reconstructed with either of the following decoder clauses:
\[
mother(X,Y) :- \text{latent1}(X,Y), \text{latent2}(X).
\]
\[
mother(X,Y) :- \text{latent3}(X,Y).
\]
Let the two decoder clauses correspond to the decision variables \(dc_1\) and \(dc_2\). We introduce \(rf_i\) to represent the reconstruction of fact \(mother(padme,leia)\) and add a constraint
\[
rf_i \leftrightarrow dc_1 \lor dc_2.
\]
Associating such a Boolean variable \(rf_e\) with every \(e \in HB(P,C)\), we can formulate the objective as
\[
\text{minimize } \sum_{i \in KB} \lnot rf_i + \sum_{j \in HB(P,C) \setminus KB} rf_j.
\]
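This objective can be made concrete with a toy exhaustive search over decoder-clause selections. The paper delegates the optimisation to a COP solver; the brute-force version below is only viable for a handful of candidates and is an editorial sketch:

```python
from itertools import product

def best_decoder_subset(clause_recons, kb):
    """Exhaustively minimise missing facts (in KB, not reconstructed)
    plus false reconstructions (reconstructed, not in KB).

    clause_recons[i] is the set of ground atoms that candidate decoder
    clause i reconstructs; returns (cost, 0/1 selection tuple)."""
    best = (None, None)
    for sel in product((0, 1), repeat=len(clause_recons)):
        recon = set()
        for r, s in zip(clause_recons, sel):
            if s:
                recon |= r
        cost = len(kb ^ recon)          # |KB Δ reconstruction|
        if best[0] is None or cost < best[0]:
            best = (cost, sel)
    return best
```

A clause that reconstructs a false fact (c1 below) is left out of the optimal selection even though it also reconstructs true facts.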
3.4 Search
Given the combinatorial nature of Alps, finding the optimal solution exactly is intractable in all but the smallest problem instances. Therefore, we resort to the more scalable technique of \textit{large neighbourhood search} (LNS) [Ahuja et al., 2002]. LNS is an iterative search procedure that, in each iteration, performs the exact search over a subset of decision variables. This subset of variables is called the \textit{neighbourhood} and it is constructed around the best solution found in the previous iterations.
A key design choice in LNS is the construction of the neighbourhood. The key insight of our strategy is that the solution is necessarily sparse – only a tiny proportion of candidate decoder clauses will constitute the solution at any time. Therefore, it is important to preserve at least some of the selected decoder clauses between the iterations. Let a variable be \textit{active} if it is part of the best solution found so far, and \textit{inactive} otherwise. We construct the neighbourhood by remembering the value assignment of \(\alpha\) \% active variables (corresponding to decoder clauses), and \(\beta\) \% inactive variables corresponding to encoder clauses. For the individual search runs, we use \textit{last conflict search} [Gay et al., 2015] and the \textit{max degree} ordering of decision variables.
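The neighbourhood-construction step can be sketched as follows. This is a simplified, editorial rendering (it only selects which variables stay fixed at their current value; the exact search over the released variables is abstracted away):

```python
import random

def lns_neighbourhood(best_sel, alpha, beta, rng):
    """Pick the variables to keep fixed in the next LNS iteration.

    Fixes a fraction `alpha` of active variables (value 1 in the best
    solution so far) and a fraction `beta` of inactive ones (value 0);
    all remaining variables are released and re-optimised exactly."""
    active = [i for i, v in enumerate(best_sel) if v]
    inactive = [i for i, v in enumerate(best_sel) if not v]
    fixed = rng.sample(active, int(alpha * len(active)))
    fixed += rng.sample(inactive, int(beta * len(inactive)))
    return set(fixed)
```

Because the solution is sparse, keeping a share of the currently active decoder clauses fixed preserves most of the incumbent's quality while still letting the exact search explore the released variables.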
3.5 Pruning the Candidates
As the candidate clauses are generated naively, many candidates will be uninformative and introduce mostly false reconstructions. It is therefore important to help the search by pruning the set of candidates in an insightful and non-trivial way. We introduce the following three strategies that leverage the specific properties of the problem at hand.
\textbf{Naming variants.} Two encoder clauses are \textit{naming variants} if and only if they entail the same set of ground atoms, apart from the predicate name of these ground atoms. As such clauses contain the same information w.r.t. the constants they contain, we detect all naming variants and keep only one instance as a candidate.
\textbf{Signature variants.} Two decoder clauses are \textit{signature variants} if and only if they reconstruct the same set of ground atoms and their bodies contain the same predicates. As signature variants are redundant w.r.t. the optimisation problem, we keep only one of the clauses detected to be signature variants and remove the rest.
\textbf{Corruption level.} We define the \textit{corruption level} of a decoder clause as a proportion of the false reconstructions in the ground atoms reconstructed by the decoder clause. This turns out to be an important notion: if the corruption level of a decoder clause is greater than 0.5 then the decoder clause cannot improve the objective function as it introduces more \textit{false} than \textit{true} reconstructions. We remove the candidate clauses that have a corruption level \(\geq 0.5\).
These strategies are very effective: applying all three of them during the experiments has cut out more than 50 \% of candidate clauses.
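The corruption-level filter in particular is easy to state in code. The sketch below assumes each candidate decoder clause is summarised by the set of ground atoms it reconstructs (an editorial simplification of the pruning step):

```python
def prune_by_corruption(clause_recons, kb, threshold=0.5):
    """Drop decoder clauses whose reconstructions are mostly false.

    The corruption level of a clause is the fraction of its reconstructed
    atoms that are not in KB; at or above the threshold the clause adds
    more false than true reconstructions and cannot improve the objective."""
    kept = []
    for recon in clause_recons:
        corruption = len(recon - kb) / len(recon) if recon else 1.0
        if corruption < threshold:
            kept.append(recon)
    return kept
```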
4 Experiments and Results
The experiments aim at answering the following question:
**Q:** Does learning from latent representations created by Alps improve the performance of an SRL model?
We focus on learning generative SRL models, specifically generative Markov Logic Networks (MLN) [Richardson and Domingos, 2006]. The task of generative learning consists of learning a single model capable of answering queries about any part of a domain (i.e., any predicate). Learning an MLN model consists of searching for a set of logical formulas that will be used to answer the queries. Therefore, we are interested in whether learning the structure of a generative model in latent space, and decoding it back to the original space, is more effective than learning the model in the original data space.
We focus on this task primarily because no other representation learning method can address this task. For instance, embeddings vectorise the relational data and thus cannot capture the generative process behind it, nor do they support conditioning on evidence.
The deterministic logical mapping of Alps might seem in contrast with the probabilistic relational approaches of SRL. However, that is not the case as the majority of SRL approaches consider data to be deterministic and express the uncertainty through the probabilistic model.
**Procedure.** We divide the data in training, validation and test sets respecting the originally provided splits. The models are learned on the training set, their hyper-parameters tuned on the validation set (in the case of Alps) and tested on the test set. This evaluation procedure is standard in DL, as full cross-validation is infeasible. We report both AUC-PR and AUC-ROC results for completeness; note, however, that the AUC-PR is the more relevant measure as it is less sensitive to class imbalance [Davis and Goadrich, 2006], which is the case with the datasets we use in the experiments. We evaluate the MLNs in a standard way: we query facts regarding one specific predicate given everything else as evidence and repeat it for each predicate in the test interpretation.
**Models.** We are interested in whether we can obtain better SRL models by learning from the latent data representation. Therefore, we compare the performance of an MLN learned on the original representation (the baseline MLN) and an MLN learned on the latent representation (the latent MLN) resulting from Alps. To allow the comparison between the latent and the baseline MLNs, once the latent MLN is learned we add the corresponding decoder clauses as deterministic rules. This ensures that the baseline and latent MLNs operate in the same space when being evaluated.
**Learner.** Both the baseline and the latent MLNs are obtained by the BUSL learner [Mihalkova and Mooney, 2007]. We have experimented with more recent MLN learner LSM [Kok and Domingos, 2010], but tuning its hyper-parameters proved challenging and we could not get reliable results. Note that our main contribution is a method for learning Alps and subsequently the latent representation of data, not the structure of an MLN; MLNs are learned on the latent representation created by Alps. Therefore, the exact choice of an MLN learner is not important, but whether latent representation enables the learner to learn a better model is.
**Practical considerations.** We limit the expressivity of MLN models to formulas of length 3 with at most 3 variables (also known as a liftable class of MLNs). This does not sacrifice the predictive performance of MLNs, as shown by Van Haaren et al. [2016]. Imposing this restriction allows us to better quantify the contribution of latent representations: given a restricted language of the same complexity, if the latent MLN performs better, that is clear evidence of the benefit of latent representations. The important difference when performing inference with a latent MLN is that each latent predicate that could have been affected by the removal of the test predicate (i.e., the test predicate is present in the body of the encoder clause defining that latent predicate) has to be declared open-world; otherwise, MLNs will assume that all atoms not present in the database are false.
**Alps hyper-parameters.** As with standard auto-encoders, the hyper-parameters of Alps allow a user to tune the latent representation to its needs. To this end, the hyper-parameters pose a trade-off between expressivity and efficiency. When learning latent representations, we vary the length of the encoder and decoder clauses separately in {2, 3} and the compression level (the γ parameter) in {0.3, 0.5, 0.7}.
**Data.** We use standard SRL benchmark datasets often used with MLN learners: Cora-ER, WebKB, UWCSE and IMDB. The descriptions of the datasets are available in [Mihalkova and Mooney, 2007; Kok and Domingos, 2010], while the datasets are available on the Alchemy website\(^4\).
4.1 Results
The results (Figure 3) indicate that BUSL is able to learn better models from the latent representations. We observe an improved performance, in terms of the AUC-PR score, of the latent MLN on all datasets. The biggest improvement is observed on the Cora-ER dataset: the latent MLN achieves a score of 0.68, whereas the baseline MLN achieves a score of 0.18. The IMDB and WebKB datasets experience smaller but still considerable improvements: the latent MLNs improve the AUC-PR scores by approximately 0.18 points. Finally, a more moderate improvement is observed on the UWCSE dataset: the latent MLN improves the performance by 0.09 points.
These results indicate that latent representations are a useful tool for relational learning. The latent predicates capture the data dependencies more explicitly than the original data representation and thus can, potentially greatly, improve the performance. This is most evident on the Cora-ER dataset. To successfully solve the task, a learner has to identify complex dependencies such as: two publications that have a similar title, the same authors and are published at the same venue are identical. Such complex clauses are impossible to express with only three predicates; consequently, the baseline MLN achieves a score of 0.18. However, the latent representation makes these patterns more explicit and the latent MLN performs much better, achieving a score of 0.68.
Neural representation learning methods are sensitive to the hyper-parameter setup, which tends to be domain dependent. We have noticed similar behaviour with Alps by inspecting the performance on the validation set (details in the supplement). The optimal parameters can be selected, as we have shown, on a validation set with a rather small grid, as Alps have only three hyper-parameters.
---
4http://alchemy.cs.washington.edu/
**Runtime.** Figure 4 summarises the time needed for learning a latent representation. These timings show that, despite their combinatorial nature, Alps are quite efficient: the majority of latent representations are learned within an hour, with very few taking more than 10 hours (this excludes the time needed for encoding the problem to COP, as we did not optimise that step). In contrast, inference with MLNs takes substantially longer and was the most time-consuming part of the experiments. Moreover, the best result on each dataset (Figure 3) is rarely achieved with the latent representation from the most expressive Alp, which are the runs that take the longest.
5 Related Work
The most prominent paradigm in merging SRL and DL are (knowledge) graph embeddings [Nickel et al., 2016; Hamilton et al., 2017]. In contrast to Alps, these methods do not retain full relational data representation but approximate it by vectorisation. Several works [Minervini et al., 2017; Demeester et al., 2016] impose logical constraints on embeddings but do not retain the relational representation.
Kazemi and Poole [2017] and Sourek et al. [2016] introduce symbolic variants of neural networks for relational data. Evans and Grefenstette [2018] introduce a differentiable way to learn predictive logic programs. These are likewise capable of discovering latent concepts (predicates), but focus on predictive learning, often with a pre-specified architecture.
Several works integrate neural and symbolic components but do not explore learning new symbolic representation. Rocktäschel and Riedel [2017] introduce a differentiable version of Prolog’s theorem proving procedure, which Campero et al. [2018] leverage to acquire logical theories from data. Manhaeve et al. [2018] combine symbolic and neural reasoning into a joint framework, but only consider the problem of parameter learning not the (generative) structure learning.
Inventing a new relational vocabulary defined in terms of the provided one is known as predicate invention in SRL [Kramer, 1995; Cropper and Muggleton, 2018]. In contrast to Alps, these methods create latent concepts in a weakly supervised manner – there is no direct supervision for the latent predicate, but there is indirect supervision provided by the accuracy of the predictions. An exception to this is the work by Kok and Domingos [2007]; however, it does not provide novel language constructs to an SRL model, but only compresses the existing data by identifying entities that are identical.
We draw inspiration from program induction and synthesis [Gulwani et al., 2017], in particular, unsupervised methods for program induction [Ellis et al., 2015; Lake et al., 2015]. These methods encode program induction as a constraint satisfaction problem similar to Alps, however, they do not create new latent concepts.
6 Conclusion
This work introduces Auto-encoding Logic Programs (Alps) – a novel logic-based representation learning framework for relational data. The novelty of the proposed framework is that it learns a latent representation in a symbolic, instead of a gradient-based, way. It achieves this by relying on first-order logic as the data representation language, which has the benefit of exactly representing rich relational data without the need to approximate it in an embedding space like many of the related works. We further show that learning Alps can be cast as a constraint optimisation problem, which can be solved efficiently in many cases. We experimentally evaluate our approach and show that learning generative models from the relational latent representations created by Alps results in substantially improved AUC-PR scores compared to learning from the original data representation.
This work shows the potential of latent representations for the SRL community and opens challenges for bringing these ideas to their maturity; in particular, the understanding of the desirable properties of relational representations and the development of scalable methods to create them.
PUMPING LEMMAS FOR CFL AND RL
These are Only Necessary Conditions:
- The Pumping Lemma for CFL (PL-CFL) is a necessary condition for CFLs, i.e., if \( L \) is a CFL then it satisfies PL-CFL.
- Similarly, for the Pumping Lemma for RL (PL-RL), i.e., if \( L \) is an RL, then it satisfies PL-RL.
PL-RL is a more restrictive (special) form of PL-CFL:
- Since each RL is also a CFL, each RL also satisfies PL-CFL.
- Since a CFL may not be a RL, a CFL may not satisfy PL-RL.
Main Uses:
- Show that a language \( L \) is not regular by showing that it does not satisfy PL-RL.
- \( L_{a^n b^n} \) does not satisfy PL-RL (and hence not an RL).
- \( L_{has-11} \) satisfies PL-RL (and hence satisfies PL-CFL).
- Show that a language \( L \) is not context-free by showing that it does not satisfy PL-CFL.
- \( L_{a^n b^n c^n} \) does not satisfy PL-CFL and hence not a CFL.
- \( L_{a^n b^n} \) satisfies PL-CFL.
Question:
- Which pumping-lemmas will be satisfied by \( L_{sym} \)?
- Which pumping-lemmas will be satisfied by the language of special binary multiplications \( \{10^m \times 10^n = 10^{m+n} : m, n \geq 0\} \)?
- How about \( \{x \times y = z: \text{where } x, y, z \in 1(0+1)^* \text{ and binaryNum}(z) \text{ equals the product of binaryNum}(x) \text{ and binaryNum}(y)\} \)?
PUMPING LEMMA FOR CFL
Observations on CFG:
- We can eliminate all rules of the form \( A \rightarrow B \) from the grammar.
- A parse-tree of depth \( d \) can derive a string of length \( \leq m^d \), where \( m \) = max. length of the right side of a rule.
- If \( L = L(G) \) is infinite, then there are arbitrarily long strings in \( L \) and hence parse-trees of arbitrarily large depth.
- If \( |V(G)| = n \), then a parse-tree of depth > \( n \) will have some variable \( A \) repeating on a path from the root.
- This means we can derive from \( A \) a string of the form \( uAw \), where \( uw \in T^+ \). Such an \( A \) may be called a recursive variable.
Some Important Consequences:
- Replacing the upper \( A \)-subtree by the lower \( A \)-subtree gives \( xvy \in L \).
- Replacing the lower \( A \)-subtree by the upper \( A \)-subtree gives \( xuuvwwy \in L \). Likewise, \( xu^kvw^k y \in L \) for \( k \geq 1 \).
- No recursion anywhere in the lower \( A \)-subtree means \( |v| \leq m^n \).
- No recursion in the upper \( A \)-subtree, save the one shown, means \( |uw| \leq m^n \).
PUMPING LEMMA FOR CFL
Pumping Lemma (PL-CFL).
- For each CFL \( L \), there exists an integer \( N > 0 \) (which may depend on \( L \)) such that every \( s \in L \) of length \( |s| \geq N \) can be written as \( s = xuvwy \) with the following properties:
1. \( 0 < |uw| < |uvw| \leq N \) (\( v \neq \lambda \) and at least one of \( u \) and \( w \neq \lambda \)).
2. For all \( k \geq 0 \), \( xu^kvw^k y \in L \).
3. Either or both of \( x \), \( y \) may be \( \lambda \).
- The decomposition \( s = xuvwy \) may depend on \( L \).
- The location of \( uvw \) in \( s \) may depend on \( s \) and \( L \), and cannot be chosen arbitrarily.
- The pair \( \langle u, w \rangle \) is called the pump; a pump is two sided if \( u \neq \lambda \neq w \).
- Finding a pump includes finding the part \( v \), the context of the pump.
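The conditions of PL-CFL for a *fixed* decomposition can be checked mechanically. A small Python sketch (helper names are illustrative; the bound \( |uvw| \leq N \) is not checked, since \( N \) is a property of \( L \)):

```python
def pumps(x, u, v, w, y, in_language, ks=range(5)):
    """Check PL-CFL for one fixed decomposition s = x.u.v.w.y:
    condition (1) needs |uw| > 0 and v != lambda, and condition (2)
    needs x u^k v w^k y in L for every tested k."""
    if len(u) + len(w) == 0 or len(v) == 0:
        return False
    return all(in_language(x + u * k + v + w * k + y) for k in ks)

def in_anbn(s):
    """Membership in {a^n b^n : n >= 1}."""
    n = len(s) // 2
    return n >= 1 and s == "a" * n + "b" * n

# The decomposition from Example 1 below, s = a^{n-2}.a.ab.b.b^{n-2}:
n = 5
assert pumps("a" * (n - 2), "a", "ab", "b", "b" * (n - 2), in_anbn)
# The bad decomposition of aabb with v = lambda is rejected:
assert not pumps("", "aa", "", "bb", "", in_anbn)
```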
Example 1. \( N = 4 \) works for PL-CFL for \( L = \{a^n b^n : n \geq 1\} \).
- The smallest string \( s \) of length \( \geq 4 \) is \( s = aabb \). Any pump \( \langle u, w \rangle \) must satisfy the following conditions in order for \( xu^kvw^k y \in L \).
(i) \( \#(a, uw) = \#(b, uw) \).
(ii) Each of \( u \) and \( w \) should consist of only \( a \)'s or only \( b \)'s, in order to avoid mixing \( a \)'s and \( b \)'s in \( xu^kvw^k y \) for \( k > 1 \).
- From (i)-(ii), we get \( u = a^m \) and \( w = b^m \) for some \( m \geq 1 \).
- \( u = a^2 \) and \( w = b^2 \) does not work because \( s = aabb = \lambda . u . \lambda . w . \lambda \) is the only possible decomposition and it is bad (because \( v = \lambda \)); also, \( xvy = \lambda \notin L \).
- \( u = a \) and \( w = b \) works. For any \( s = a^n b^n \), \( n \geq 2 \), the decomposition \( s = a^{n-2} . a . ab . b . b^{n-2} \) satisfies the conditions in PL-CFL.
- \( N = 2 \) does not work; there is no pump in \( s = ab \in L \).
MORE EXAMPLES OF PUMP IN CFL
- For $L_{a^n b^n}$, $N = 3$ also works, with a slightly different decomposition.
\[ a^n b^n = a^{n-1} \cdot a \cdot b \cdot b \cdot b^{n-2}, \quad \text{with } u = a \text{ and } v = w = b. \]
This decomposition is related to the following CFG for $L_{a^n b^n}$:
\[ S \rightarrow aB, \quad B \rightarrow aBb | b. \]
Another similar decomposition is $a^n b^n = a^{n-2} \cdot a \cdot a \cdot b \cdot b^{n-1}$, with $u = a = v$ and $w = b$.
- For $L_{a^m b^n} = \{ a^m b^n : m \geq n \geq 1 \}$, the smallest string in the language is $ab$ and $N = 4$ works.
\[ a^m b = a^{m-1} \cdot a \cdot b \cdot \lambda \cdot \lambda \text{ for } m > 1 \]
\[ a^m b^n = a^{m-1} \cdot a \cdot b \cdot \lambda \cdot b^{n-1}, \text{ when } m > n \]
\[ a^m b^m = a^{m-2} \cdot a \cdot a \cdot b \cdot b^{m-1}, \text{ for } m \geq 2 \]
This corresponds to the following CFG for $L_{a^m b^n}$:
\[ S \rightarrow ab | aSb | aAb, \quad A \rightarrow aA | a \]
- For $L_{a^m b^n c^{m+n}}$, the smallest string in the language is $abcc$ and $N = 6$ works (there is no string of length 5 in the language).
\[ a^m b c^{m+1} = a^{m-1} \cdot a \cdot b \cdot c \cdot c^{m}, \text{ for } m > 1 \]
\[ a^m b^n c^{m+n} = a^m b^{n-2} \cdot b \cdot bc \cdot c \cdot c^{m+n-2}, \text{ for } n > 1 \]
NON-CFL LANGUAGE
- If a language $L$ does not satisfy PL-CFL, i.e., there is no $N$ for which the pumping conditions (1)-(3) hold for all strings $s \in L$ with $|s| \geq N$, then $L$ is not a CFL (hence not a regular language either).
Example 2. $L = \{a^n b^n c^n : n \geq 1\}$ is not a CFL.
- We first show that $N = 6$ does not work; the same argument shows that no $N$ works, i.e., $L$ does not satisfy PL-CFL and hence $L$ is not a CFL.
- Let $s = aabbcc$ $\in$ $L$, $|s| \geq 6$. If possible, let $s = xuvwy$ be a proper decomposition that satisfies the conditions in PL-CFL. Then,
(i) The number of $a$‘s, $b$‘s, and $c$‘s are the same in $uw$.
(ii) Each of $u$ and $w$ is made of just one symbol from $\{a, b, c\}$.
- The condition (ii) means that $u$ and $w$ together contain at most two of the three symbols $a$, $b$, $c$, and then (i) cannot be satisfied.
- Thus, there is no decomposition $s = xuvwy$ as desired.
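The argument above can also be checked mechanically for a concrete string: a brute-force search over all decompositions of $s = aabbcc$ finds no valid pump (a sketch; testing $k \in \{0, 2\}$ already suffices here, and function names are illustrative):

```python
from itertools import combinations_with_replacement

def in_anbncn(s):
    """Membership in {a^n b^n c^n : n >= 1}."""
    n = len(s) // 3
    return n >= 1 and s == "a" * n + "b" * n + "c" * n

def has_cfl_pump(s, in_language, N, ks=(0, 2)):
    """Search every decomposition s = x.u.v.w.y satisfying condition (1)
    of PL-CFL (0 < |uw| < |uvw| <= N) for one that keeps
    x u^k v w^k y in the language for every tested k."""
    for i, j, p, q in combinations_with_replacement(range(len(s) + 1), 4):
        x, u, v, w, y = s[:i], s[i:j], s[j:p], s[p:q], s[q:]
        if not (0 < len(u) + len(w) < len(u) + len(v) + len(w) <= N):
            continue
        if all(in_language(x + u * k + v + w * k + y) for k in ks):
            return True
    return False

# No decomposition of a^2 b^2 c^2 survives pumping, so L is not a CFL:
assert not has_cfl_pump("aabbcc", in_anbncn, N=6)

# By contrast, a^3 b^3 in {a^n b^n : n >= 1} does have a pump:
def in_anbn(s):
    n = len(s) // 2
    return n >= 1 and s == "a" * n + "b" * n
assert has_cfl_pump("aaabbb", in_anbn, N=6)
```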
Question:
- Show that the language of binary multiplications of the form $2^m \times 2^n = 2^{m+n}$, i.e., the language $\{10^m \times 10^n = 10^{m+n} : m, n \geq 0\}$, satisfies PL-CFL. Does this mean this language is a CFL?
- Show that $\{x \times y = z :$ where $x, y, z \in 1(0 + 1)^*$ and binaryNum($z$) equals the product of binaryNum($x$) and binaryNum($y$)$\}$ does not satisfy PL-CFL. What does that say about this language? (Hint: consider multiplication of numbers of the form $2^m$ and $2^{2^m - 2^m}$.)
PUMPING LEMMA FOR REGULAR LANGUAGES
Pumping Lemma (PL-RL).
- For each regular language $L$, there exists an integer $N > 0$ (which may depend on $L$) such that every $s \in L$ of length $|s| \geq N$ can be written as $s = xuy$ with the following properties:
(1) $0 < |u| \leq N$ (actually, one can say that $0 < |u| \leq |xu| \leq N$)
(2) For all $k \geq 0$, $xu^k y \in L$.
- The pump $u$ can depend on $s$ and on $L$. The pump $u$ relates to a cycle (loop) in the FSA or NFSA for $L$. Thus, $N$ can be taken to be the minimum number of states in (N)FSA for $L$.
Notes:
- The conditions (1)-(2) above are obtained by putting $w = \lambda$ in the conditions (1)-(2) for the pumping lemma for CFL.
- Unlike CFL, we can assure that the pump $u$ is not far from the beginning of the string $s$.
- Since the reverse of a regular language is also regular, we also get a pump close to the end of $s$. Thus, for $|s| \geq 2N$, there will be a pump which is towards the beginning of $s$ and a disjoint pump (without any overlap with the pump on the left) towards the end of $s$.
- One can actually get a regular pump on any part of a large string $s$ in a regular language in the following sense. For any string $s = xyz \in L$, where $|s| \geq |y| \geq N$, we can write $y = uvw$ such that $0 < |v| \leq N$ and $xuv^k wz \in L$ for all $k \geq 0$.
Similarities between PL-CFL and PL-RL:
- If $N = N_0$ works for the PL-CFL for an $L$, then any $N > N_0$ also works for that $L$. The same is true for PL-RL.
EXAMPLE OF PUMPS IN AN RL
- Let \( L = a^+b^+ = \{ab, aab, abb, aaab, aabb, abbb, \ldots\} \).
- Here, \( N = 3 \) works and there are two kinds of pumps depending on \( s \in L \) as shown below. (\( N \) must be larger than the length of the smallest string in \( L \).)
- For \( s = ab^n \) and \( n \geq 2 \), \( s = a \cdot b \cdot b^{n-1} \) is a valid decomposition.
- For \( s = a^m b^n \) and \( m \geq 2 \), \( s = \lambda \cdot a \cdot a^{m-1} b^n \) is a valid decomposition.
Each pump corresponds to a cycle or loop in this NFSA for \( a^+b^+ \).
- The valid decompositions look slightly different in terms of the (min-state) FSA for \( a^+b^+ \).
For \( s = ab^n \) and \( n \geq 2 \): \( s = ab \cdot b \cdot b^{n-2} \).
For \( s = a^m b^n \) and \( m \geq 2 \): \( s = a \cdot a \cdot a^{m-2} b^n \).
Each pump corresponds to a cycle or loop in this FSA for \( a^+b^+ \).
- There are many other valid decompositions of the form \( s = xuy \), with \( |u| \leq N \), if we do not insist on \( |xu| \leq N \).
- It is easy to see that \( a^+b^+ \) satisfies PL-CFL, and that \( L_{a^nb^n} \) does not satisfy PL-RL.
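Both observations can be checked by a brute-force search over PL-RL decompositions (a sketch with illustrative helper names; PL-RL requires \( xu^ky \in L \) for all \( k \geq 0 \), of which we test a few):

```python
import re

def in_aplus_bplus(t):
    """Membership in a^+ b^+."""
    return re.fullmatch(r"a+b+", t) is not None

def in_anbn(t):
    """Membership in {a^n b^n : n >= 1}."""
    n = len(t) // 2
    return n >= 1 and t == "a" * n + "b" * n

def has_rl_pump(s, in_language, N, ks=(0, 2, 3)):
    """Search for a decomposition s = x.u.y with 0 < |u| and |xu| <= N
    (the conditions of PL-RL above) such that x u^k y stays in the
    language for every tested k."""
    for j in range(1, min(len(s), N) + 1):   # j = |xu| <= N
        for i in range(j):                   # i = |x|, so u = s[i:j] is non-empty
            x, u, y = s[:i], s[i:j], s[j:]
            if all(in_language(x + u * k + y) for k in ks):
                return True
    return False

# N = 3 works for a^+ b^+, as in the example above ...
assert has_rl_pump("aabbb", in_aplus_bplus, N=3)
# ... but a^n b^n has no regular pump: only a's fit within the first N symbols.
assert not has_rl_pump("aaabbb", in_anbn, N=3)
```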
**EXERCISE.**
1. Find the smallest $N$ which satisfies PL-CFL for $L_{bal-par}$. Repeat the exercise for $L_{sym}$.
2. Find the smallest $N$ which satisfies PL-CFL for the following language $L_{m \geq n} = \{ a^m b^n : m \geq n \geq 1 \}$. Note that the pumps look different for different $s \in L_{m \geq n}$. Repeat the exercise for $L_{m \neq n} = \{ a^m b^n : m \neq n, m \geq 1 \text{ and } n \geq 1 \}$. (Do you notice anything special about how the pumps change depending on whether $m > n$ or $m < n$?)
3. Show that the language $L_{m,n,m+n} = \{ a^m b^n c^{m+n} : m, n \geq 1 \}$ satisfies PL-CFL. (You will need different pumps depending on whether $n$ is large or small; you need to describe the nature of the pump in each situation.)
4. Consider the languages $L_{m,m,n} = \{ a^m b^m c^n : m \geq 1, n \geq 1 \}$ and $L_{m,n,n} = \{ a^m b^n c^n : m \geq 1, n \geq 1 \}$. For $s = a^2 b^2 c^2 \in L_{m,m,n} \cap L_{m,n,n}$, compare the pumps for $s$ computed with respect to $L_{m,m,n}$ and $L_{m,n,n}$, respectively. After generalizing the observation to $a^j b^j c^j$ (why do we need to generalize it to $j > 2$?), argue that $L_{m,m,n} \cap L_{m,n,n} = L_{n,n,n} = \{ a^n b^n c^n : n \geq 1 \}$ is not context-free.
5. Show that the binary additions presented as a language over the alphabet $\{0, 1, +, =\}$ is not a CFL.
6. Do the strings of the form $10^n + 0^n 1 = 10^{n-1} 1$ satisfy the CFL pumping lemma? How about the strings of the form $10^n + 1 = 10^{n-1} 1$?
7. Show that the binary multiplication language over the alphabet of binary triplets $\{ t_0, t_1, \ldots, t_7 \}$ does not satisfy CFL-pumping lemma. (Hint: exploit the special role of $t_6$ which cannot be part of any pump.)
8. What is wrong with the following statement for the pumping lemma for CFL:
There exists an integer $N \geq 1$ such that every string of the form $xzy \in L$, with $0 < |z| \leq N$, one can decompose $z$ as $z = uvw$ such that $|uw| > 0$, $|v| > 0$, and $xu^k v w^k y \in L$ for all $k$
Give an example of CFL that does not satisfy the above statement.
9. What is wrong with the following statement for the condition that \( L \) does not satisfy the Pumping Lemma for CFL?
\( L \) has strings \( s = xuvwy \) with \(|s| \geq N\), \( N \geq 1 \), such that \( uw \neq \lambda \neq v \) and \(|uvw| \leq N\), and \( xu^kvw^ky \notin L \) for all \( k \neq 1 \).
Give a correct form of the above.
10. Show that \( L_{bal-sym} \), the balanced parenthetical strings which are symmetric, do not form a context-free language; \( L_{bal-sym} = \{ab, aabb, abab, aaabbb, aababb, ababab, \cdots\} = L_{bal} \cap L_{sym} \).
11. Show that none of the languages \( \{a^k b^m c^n : k \geq m \geq n \geq 1\} \) and \( \{a^m b^n c^{m+n} : m \geq n \geq 1\} \) satisfies the pumping lemma for CFL.
SEMI-LINEAR SETS
Semi-linear Set on the Line: more general than an arithmetic progression.
• Simple form: \( \{m + k \cdot n: k \geq 0\} \), where \( m, n \) are fixed integers \( \geq 0 \).
• More general: \( \{m + k_1 \cdot n_1 + k_2 \cdot n_2 + \cdots + k_p \cdot n_p: \text{each } k_i \geq 0\} \), where \( m \) and \( n_i \)'s are fixed integers \( \geq 0 \).
Example. \( m = 2, n = 3, \) and \( p = 1 \) give the set \( \{2, 5, 8, 11, 14, \ldots\} \).
Semi-linear Set on the Plane:
• \( \{m + k_1 \cdot n_1 + k_2 \cdot n_2 + \cdots + k_p \cdot n_p: \text{each } k_i \geq 0\} \), where \( m = (m_1, m_2) \) and the \( n_i = (n_{i1}, n_{i2}) \) are fixed integer vectors with coordinates \( \geq 0 \).
Example. For \( m = (2,1), n_1 = (3,0), n_2 = (1,1), n_3 = (0,1) \), and \( p = 3 \) give the set shown below.
Generalization to Dimensions \( \geq 3 \): Similar.
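A one-dimensional semi-linear set can be enumerated up to a bound by nesting the choices of the \( k_i \). A sketch (the function name is illustrative; the vector case would compare component-wise):

```python
def semilinear(m, periods, bound):
    """Elements of {m + k1*n1 + ... + kp*np : ki >= 0} that are <= bound."""
    out = set()

    def rec(total, rest):
        if total > bound:
            return
        if not rest:
            out.add(total)
            return
        n, tail = rest[0], rest[1:]
        k = 0
        while total + k * n <= bound:
            rec(total + k * n, tail)  # fix one choice of k for this period
            if n == 0:
                break
            k += 1

    rec(m, list(periods))
    return sorted(out)

# The example from the text: m = 2, n = 3, p = 1 gives {2, 5, 8, 11, 14, ...}
print(semilinear(2, [3], 14))  # [2, 5, 8, 11, 14]
```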
**SEMI-LINEAR SETS AND CFLs**
**CountSet(L):** Let $\Sigma = \{a_1, a_2, \ldots, a_n\}$, the alphabet of $L$.
- CountVector($x$) = $(\#(a_1, x), \#(a_2, x), \ldots, \#(a_n, x))$, for $x \in L$.
- CountSet($L$) = \{CountVector($x$): $x \in L$\}.
**Example.** Each of the following is a semi-linear set.
- For $L = L_{a^n b^n}$, CountSet($L$) = \{(1,1), (2,2), (3,3), \ldots\}.
- For $L = L_{bal}$, CountSet($L$) = \{(1,1), (2,2), (3,3), \ldots\}.
- For $L = L_{#a=#b}$, CountSet($L$) = \{(1,1), (2,2), (3,3), \ldots\}.
- For $L = L_{a^{n+1} b^n}$, CountSet($L$) = \{(2,1), (3,2), (4,3), \ldots\}.
**Parikh’s Mapping:**
- $x$ $\to$ CountVector($x$), a many-to-one mapping from strings to non-negative integer-vectors.
- $L$ $\to$ CountSet($L$), a many-to-one mapping from languages to sets of non-negative integer-vectors.
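Parikh's mapping is easy to compute for any finite sample of a language. A sketch (function names are illustrative):

```python
from collections import Counter

def count_vector(x, alphabet):
    """Parikh's mapping of one string: the vector of symbol counts."""
    c = Counter(x)
    return tuple(c[a] for a in alphabet)

def count_set(language, alphabet):
    """CountSet(L) = { CountVector(x) : x in L }, for a finite sample of L."""
    return {count_vector(x, alphabet) for x in language}

# The a^n b^n example: CountSet = {(1,1), (2,2), (3,3), ...}
sample = ["a" * n + "b" * n for n in range(1, 4)]
print(sorted(count_set(sample, "ab")))  # [(1, 1), (2, 2), (3, 3)]
```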
**Theorem** (Parikh, 1966):
- For each CFL $L$, CountSet($L$) is a finite union of semi-linear sets.
**Question:**
- Why do we need "union" in the above theorem?
- If $L_1$ and $L_2$ are two languages with the same alphabet and both CountSet($L_1$) and CountSet($L_2$) are semi-linear, then is CountSet($L_1 L_2$) also semi-linear? How about CountSet($L_1 \cup L_2$) and CountSet($L_1^*$)? How about CountSet($L$) if $L$ is a finite language?
1 Introduction
Today, it is common for users to own more than tens of gigabytes of digital pictures, videos, experimental traces, etc. Although many users already back up such data on a cheap second disk, it is desirable to also seek off-site redundancies so that important data can survive threats such as natural disasters and operator mistakes. Commercial online backup service is expensive [1, 11]. An alternative solution is to use a peer-to-peer storage system. However, existing cooperative backup systems are plagued by two long-standing problems [3, 4, 9, 19, 27]: enforcing minimal availability from participating nodes, and ensuring that nodes storing others’ backup data will not deny restore service in times of need.
This paper presents Friendstore, a cooperative backup system that differs from previous proposals in one key aspect: each node only stores its backup data on a subset of peer nodes chosen by its user. In practice, each user trusts nodes belonging to her friends or colleagues. By storing data on trusted nodes only, Friendstore offers a non-technical solution to both the availability and denial-of-service problems: users enter “storage contracts” with their friends via real world negotiations. Such contracts are reliable because social relationships are at stake. Each user only stores data with her friends instead of friends-of-friends because we do not believe non-direct social relationships can enforce such contracts reliably.
Although Friendstore’s architecture is conceptually simple, a number of technical challenges remain in order to provide reliable long term storage with the highest possible capacity. The capacity of Friendstore is limited by two types of resources: wide area bandwidth and the available disk space contributed by participating nodes. Bandwidth is a limiting resource because nodes must re-copy backup data lost due to failed disks. To prevent a node from storing more data than it can reliably maintain, we propose to let each node calculate its maintainable capacity based on its upload bandwidth and limit the amount of backup data it stores in the system accordingly.
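The text does not give the capacity formula, but the idea admits a back-of-the-envelope sketch: a helper can reliably maintain at most the data it could re-upload between expected disk failures. The formula, parameter names, and numbers below are all illustrative assumptions, not Friendstore's actual method:

```python
def maintainable_capacity_gb(upload_mbps, mean_time_to_failure_days, utilization=0.5):
    """Hypothetical estimate of maintainable capacity: the amount of data a
    helper could re-upload within the mean time between disk failures, using
    a fraction `utilization` of its upload bandwidth for repair traffic."""
    seconds = mean_time_to_failure_days * 24 * 3600
    bytes_total = upload_mbps * 1e6 / 8 * seconds * utilization
    return bytes_total / 1e9

# e.g. a 1 Mbit/s uplink, disks failing on average once a year, half the
# uplink reserved for repair traffic:
print(round(maintainable_capacity_gb(1.0, 365)))  # 1971 (GB)
```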
The system’s capacity may also be limited by the available disk space. We propose to trade off bandwidth for disk space by storing coded data in situations when disk space, instead of bandwidth, is the more limiting resource. Our scheme, XOR(1, 2), doubles the amount of backup information stored at a node at the cost of transferring twice the amount of data during restore in order to decode the original data.
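The paper gives no pseudocode for XOR(1, 2); the following sketch only illustrates the underlying XOR trick (block contents are hypothetical): a helper stores the XOR of two blocks in the space of one, and recovering either original requires fetching both the stored parity block and the other block, i.e., twice the data.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks; storing a ^ b keeps information about
    both blocks in the space of one."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

block_a = b"owner A's data.."
block_b = b"owner B's data.."
stored = xor_blocks(block_a, block_b)   # helper keeps one block, not two

# Restoring A needs *two* transfers: the parity block and the other block.
recovered_a = xor_blocks(stored, block_b)
assert recovered_a == block_a
```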
The technical challenges addressed in this paper, namely, calculating maintainable capacity and trading off bandwidth for storage, are not unique to Friendstore but present in all replicated storage systems. However, the targeted deployment environment of Friendstore makes addressing these challenges a pressing need. Friendstore runs on nodes with a wide range of bandwidth and available disk space. Some nodes may be limited by their upload bandwidth, hence they must refrain from storing more data than the maintainable capacity. Other nodes may be limited by the available disk space. As each Friendstore node only has a few choices to store data, it is attractive to store more information in the limited disk space using coding.
The paper is organized as follows: Section 2 discusses the underlying trust model that has inspired Friendstore’s architecture. We proceed to present Friendstore’s overall design (Section 3), how a node calculates maintainable capacity (Section 4) and how it trades off bandwidth for storage (Section 5). In Section 6, we evaluate the long term reliability of Friendstore using trace-driven simulations and share lessons from our early software deployment.
2 Trust Model
The viability of all cooperative backup systems depends on the majority of participants cooperating, hence the name. Unfortunately, no technical solution can ensure that nodes always cooperate. For example, a node storing others’ data can faithfully adhere to a system’s protocol for a long time but decide to maliciously deny service when it is asked to help others restore. Therefore, the best a system can do is to ensure that our assumptions about how “well behaved” nodes act are highly likely to hold in practice. Systems do so by pruning the set of trustworthy nodes to eliminate misfits and creating disincentives to violating assumptions in the first place. For example, a node can frequently check that others are faithfully storing its data and remove any node that fails periodic checks [3, 9, 19] from the system. A disincentive to misbehavior could be punishments in the form of deletion of
the offending node’s backup data [9] or expulsion from the system by a central authority [3]. Both of these approaches have drawbacks: pruning mechanisms based on system-level health probes can be imprecise (e.g. it is difficult to distinguish a node who has just suffered from a hard disk crash from one that purposefully deleted others’ data). Inflexible disincentives (e.g. deletion of an expelled node’s data) could cause the system to be unnecessarily fragile; it is easy to imagine an incident such as unexpected software crashes or temporary network congestion leading to an escalating spiral of punishments.
Friendstore leverages information in the social relationships among users to select trustworthy nodes and provide strong disincentives against non-cooperative behavior. Each user chooses a set of trusted nodes that she believes will participate in the system over the long term and have some minimal availability, based on her social relationships. A Friendstore node only stores backup data on those trusted nodes. By exploiting real-world relationships in this way, Friendstore is able to use simple and lightweight technical mechanisms to provide a reasonable expectation that nodes will behave as expected. Friendstore checks for the presence of remote backup data infrequently and uses a long timeout to mask transient node failures, knowing that the unresponsiveness of a trusted neighbor is more likely due to its user’s vacation than an act of malice. Disincentives in this system also carry more weight since they stem from possible disruption of the social relationships: a violation of trust hurts the sentiments of one’s friend, which we believe most users want to avoid. Friendstore defers punishments for a misbehaving node, such as deleting its backup data, to individual users, who are free to apply their own retribution policies based on more complete and accurate information.
3 Design Overview
Friendstore consists of a collection of nodes administered by different users. Each node runs an identical copy of the software and communicates with a subset of other nodes over the wide area network. The software running on a node has two roles: backing up a node’s local data and helping others store their backups. We refer to a node as an owner when it is performing activities involving its own data and a helper when it is acting to help others. Each node is named and authenticated by its public key and a user chooses a subset of helpers for storing data by configuring her node with the public keys of her friends’ nodes.
An online backup system undertakes a number of activities: to store local data on remote helpers (backup), to periodically check that remote copies of its backup data are still intact and create new ones if not (verify and repair), and to retrieve remote backups following a disk crash (restore). We describe how Friendstore performs each task in turn.
Backup An owner prepares a collection of files for backup in a sequence of steps shown in Figure 1. The owner processes the files by chunking large files into smaller pieces, then compressing and encrypting individual pieces using symmetric encryption. Finally, it uploads r copies of its encrypted chunks to r distinct helpers. We use the term replica to refer to these encrypted chunks stored at helpers. In our prototype, r is set to two. We prefer replication over erasure coding because efficient erasure codes require more helpers than are typically available to many owners. Owners do not modify replicas at helpers once created, but can explicitly delete them. To garbage collect data, helpers also expire replicas after a default three-month period.
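The backup pipeline above can be sketched as follows. This is an illustration, not Friendstore's actual implementation (which is in Java): the chunk size, the keyed-XOR "cipher" (a stand-in for real symmetric encryption such as AES), and the round-robin placement policy are all assumptions.

```python
import hashlib
import zlib

CHUNK_SIZE = 4 * 1024 * 1024  # assumed chunk size; the paper does not specify one
R = 2                         # replication factor used by the prototype

def prepare_chunks(data: bytes, key: bytes):
    """Chunk, compress, and 'encrypt' a byte string for backup.

    The XOR keystream below is only a placeholder for the symmetric
    encryption the paper mentions; a real implementation would use an
    authenticated cipher.
    """
    replicas = []
    for off in range(0, len(data), CHUNK_SIZE):
        chunk = zlib.compress(data[off:off + CHUNK_SIZE])
        # derive a per-chunk keystream from the key and chunk offset
        stream = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        keystream = (stream * (len(chunk) // len(stream) + 1))[:len(chunk)]
        replicas.append(bytes(a ^ b for a, b in zip(chunk, keystream)))
    return replicas

def place_replicas(replicas, helpers):
    """Assign R copies of each replica to R distinct helpers (round-robin)."""
    placement = {h: [] for h in helpers}
    for i, rep in enumerate(replicas):
        for j in range(R):
            placement[helpers[(i + j) % len(helpers)]].append(rep)
    return placement
```

Because the XOR step is its own inverse, applying the same keystream again and decompressing recovers the original chunk during restore.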
Friendstore discourages one common form of free-riding, namely, selfish helpers attempting to store less data for others than required. To detect such selfish behavior, each node keeps track of how much backup data its owner has stored on each of the helpers vs. how much data that helper’s corresponding node has stored on itself. An owner always prefers storing data at the helper that owes it the most storage; if denied storage by such a helper, the owner reports the incident to its user for punishment or further investigation.
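A minimal sketch of this balance-based helper selection; the bookkeeping structure and function name are hypothetical, for illustration only:

```python
def pick_helper(balances):
    """Pick the helper that currently owes this owner the most storage.

    `balances` maps helper id -> (our_bytes_stored_on_helper,
    helper_bytes_stored_on_us).  A helper 'owes' us storage when it has
    placed more of its data on our node than we have placed on its node.
    """
    def owed(h):
        ours_on_helper, helpers_on_us = balances[h]
        return helpers_on_us - ours_on_helper
    return max(balances, key=owed)
```

For example, a helper storing 10 units of our data while keeping 50 units of its own data on our node owes us 40 units and would be chosen first.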
Verify and Repair Each owner periodically checks the health of its remote replicas by requesting its helpers to compute and return the hash of a randomly chosen replica starting at a random offset. By comparing a helper’s hash value with that computed from its local data, an owner can detect replica corruption and re-send the corrupted replica quickly. When an owner fails to contact a helper for verification after a timeout period, it re-sends all replicas stored on the unresponsive helper to another helper. Since users explicitly choose helpers that agree to participate over the long term, we believe most failures are due to transient node offline events as opposed to permanent departures. Thus, Friendstore uses a large timeout threshold of 200 hours to mask most transient failures.
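The challenge-response check can be sketched as below. SHA-256 and the hash-of-suffix convention are assumptions; the paper only specifies a hash of a randomly chosen replica starting at a random offset.

```python
import hashlib
import random

def helper_hash(replica: bytes, offset: int) -> str:
    """Helper side: hash the stored replica starting at the requested offset."""
    return hashlib.sha256(replica[offset:]).hexdigest()

def owner_verify(local_copy: bytes, remote_hash_fn) -> bool:
    """Owner side: challenge the helper at a random offset and compare
    against the hash computed from the owner's own copy of the data."""
    offset = random.randrange(len(local_copy))
    return remote_hash_fn(offset) == hashlib.sha256(local_copy[offset:]).hexdigest()
```

Randomizing the offset prevents a helper from precomputing one hash, discarding the replica, and still answering every future challenge.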
Restore Restoring data after an owner’s disk crash is straightforward. However, since the owner might lose all persistent data after a disk crash, it must store the private key used to encrypt its replicas offline. Friendstore uses a separate service to help an owner remember the identities of its helpers. During restore, a helper uploads the owner’s data as well as its own lost replicas previously stored on that node. We must point out that a helper has no real incentive to help an owner restore. The reason we believe it is likely to do so in practice comes from the real-world social relationship between users.
4 Calculate the maintainable capacity
If backup data is to be stored reliably, it must be re-copied by owners as disks fail. The rate at which an owner can upload data determines the amount of data it can reliably store on remote helpers. This amount could be much less than the disk space available at helpers. To ensure the reliability of backup, we do not want to store more data than can be maintained over the long term. Therefore, we propose to let each owner calculate its maintainable storage capacity (\(s_{\text{max}}\)) based on its upload bandwidth and use this estimate to limit how much data it attempts to store on helpers. Similarly, we calculate the maintainable capacity for each helper (\(d_{\text{max}}\)) and use the estimate to limit the amount of data it contributes to other owners. For simplicity, we assume that a node’s download bandwidth is larger than its upload bandwidth. Similar arguments apply when a node’s download bandwidth is the more limiting resource.
Intuitively, the reliability of replicated data is affected by the amount of bandwidth required to recover from permanent disk failures relative to the amount of available bandwidth at each node [8]. When an owner stores \(s\) units of backup data on remote helpers, it must consume \(\lambda_f \cdot 2 \cdot s\) units of bandwidth to re-copy 2 replicas per unit of data when disks fail at rate \(\lambda_f\). Likewise, when a helper stores \(d\) units of replicas for others, it needs to upload one out of every two copies to help owners restore, consuming \(\lambda_f \cdot \frac{d}{2}\) units of bandwidth. Since a node acts both as an owner and a helper, the total bandwidth required to recover from permanent failures is: \(\lambda_f \cdot 2s + \lambda_f \cdot \frac{d}{2}\). Our simulation results show that when the required recovery bandwidth does not exceed one tenth of the actual available bandwidth, there is little data loss (< 0.15%) over a five year period. Therefore, we can calculate the maintainable storage capacity (\(s_{\text{max}}, d_{\text{max}}\)) as follows:
\[
\lambda_f \cdot 2 \cdot s_{\text{max}} + \lambda_f \cdot \frac{d_{\text{max}}}{2} = \frac{1}{10} \cdot B \cdot A
\] (1)
In Equation(1), a node’s available upload bandwidth is measured by its upload link speed \(B\) scaled by the fraction of time it remains online \(A\).
As each unit of backup data is replicated twice, helpers must store twice the amount of total backup data. Substituting \(d_{\text{max}} = 2s_{\text{max}}\) into (1), we obtain \(s_{\text{max}} = \frac{B \cdot A}{30 \cdot \lambda_f}\). To calculate the actual value of \(s_{\text{max}}\), an owner uses its measured upload bandwidth, node availability and an approximation of the permanent disk failure rate (e.g. \(\lambda_f \approx 1/3\) years if we approximate the average disk lifetime to be 3 years). If a user only allocates a fraction of her uplink capacity for use by Friendstore, \(s_{\text{max}}\) will be calculated using the throttled bandwidth. Each owner refrains from storing more than \(s_{\text{max}}\) units of backup data on remote helpers and each helper does not store more than \(d_{\text{max}} = 2s_{\text{max}}\) units of data for other owners.
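Plugging in the numbers used later in the evaluation (150 Kbps upload, 81% availability, 3-year average disk lifetime) reproduces the ≈48 GB figure. The decimal Kbps/GB conventions and the 365-day year are assumptions that only affect rounding:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def s_max_gb(upload_bps: float, availability: float,
             disk_lifetime_years: float = 3.0) -> float:
    """Maintainable capacity s_max = B*A / (30 * lambda_f).

    B*A is expressed in bytes per year so that dividing by the
    per-year failure rate lambda_f yields bytes of maintainable data.
    """
    lam = 1.0 / disk_lifetime_years                       # failures per year
    bytes_per_year = upload_bps / 8 * availability * SECONDS_PER_YEAR
    return bytes_per_year / (30 * lam) / 1e9              # GB (10^9 bytes)

print(s_max_gb(150_000, 0.81))  # ≈ 47.9 GB, matching the paper's ~48 GB
```

Since the formula is linear in B, raising the uplink to 750 Kbps scales the result by five, matching the 240 GB figure quoted in the evaluation.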
5 Store more information with coding
Disk space, instead of upload bandwidth, can also be the limiting resource in many circumstances. For example, when operating on a college campus network, a helper can reliably store up to \(d_{\text{max}} = 754\)GB data for other owners with 1Mbps upload bandwidth. But in reality, its idle disk space can be far less than \(d_{\text{max}}\). Since each Friendstore owner only has a few trusted helpers to store data, it is important to utilize helpers’ available disk space efficiently. We introduce a coding scheme, called XOR(1,2), to let a helper simultaneously provide redundancy for multiple owners by storing coded replicas. The actual coding mechanism is not new and has been explored in RAID systems [23]. However, Friendstore presents a novel application for coding to enable a helper to trade off bandwidth for storage when the available disk space is the limiting resource.
Figure 2 illustrates an example in which helper \(A\) must store replicas for owners \(B, C, D\). Instead of storing backup data from them separately \((B_1, B_2, C_1, D_1)\), helper \(A\) can store \(B_1 \oplus C_1\) and \(B_2 \oplus D_1\), consuming two units of space as opposed to 4. To restore \(B_1\), helper \(A\) needs to fetch the original replica \((C_1)\) from owner \(C\) in order to decode \(B_1\), i.e. \(B_1 = C_1 \oplus (B_1 \oplus C_1)\). As this example shows, XOR(1,2) allows helper \(A\) to utilize the original information stored at owner \(C\) to recover data belonging to a different owner \((B)\) but it also consumes additional bandwidth during restore.
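The Figure 2 example can be reproduced directly with bytewise XOR; the replica contents below are arbitrary placeholder bytes:

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length replicas."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

# Helper A stores coded replicas instead of B1, B2, C1, D1 separately:
B1, B2, C1, D1 = b"\x01" * 4, b"\x02" * 4, b"\x0f" * 4, b"\xf0" * 4
stored = [xor(B1, C1), xor(B2, D1)]   # 2 units of space instead of 4

# To restore B1, helper A fetches the original C1 from owner C and decodes:
assert xor(stored[0], C1) == B1
```

The storage saving is exactly the bandwidth cost: every decode requires re-fetching the paired original replica from its owner.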
Since coding trades off bandwidth for storage, we must be careful not to apply XOR(1,2) in situations when the
capacity is limited by bandwidth. As Figure 2 shows, with coding, an owner $C$ must upload its replica again in response to owner $B$’s failure as well as helper $A$’s failure. Therefore, we must update Equation (1) to reflect the additional replica transfer by an owner when $\text{XOR}(1,2)$ is in use:
$$\lambda_f \cdot (2 + 1) \cdot s'_{\text{max}} + \lambda_f \cdot \frac{d'_{\text{max}}}{2} = \frac{1}{10} \cdot B \cdot A,$$

resulting in (with $d'_{\text{max}} = 2 s'_{\text{max}}$)

$$s'_{\text{max}} = \frac{B \cdot A}{40 \cdot \lambda_f}.$$ Each owner uses the new estimate ($s'_{\text{max}}$) to constrain the amount of data it attempts to store on remote helpers while allowing $\text{XOR}(1,2)$. Similarly, a helper uses $d'_{\text{max}} = 2 s'_{\text{max}}$ to limit the amount of information it stores as coded replicas for other owners. Furthermore, a helper does not code replicas whose owners have not explicitly permitted coding. By allowing coding, an owner agrees to undertake extra work to upload the original encrypted replicas to the helper again during data restore by another owner. Therefore, an owner should permit coding only for replicas that correspond to immutable data such as media files. We rely on the normal verification process to detect replica loss due to unexpected changes in original files. An owner has an incentive to allow coding because doing so enables it to store more data at a helper than otherwise possible. Unfortunately, coding also causes an owner’s restore process to depend on another owner that might not be directly trusted by its user. This is a tradeoff that Friendstore leaves to individual users.
Storing coded blocks complicates the normal verification process because a helper is unable to calculate the requested hash value of the original replica. To enable verification, we use a homomorphic collision resistant hash function with the property:
$$h_G(x + y) = h_G(x)h_G(y)$$
where $G(p, q, g)$ specifies the hash function parameters [17]. To apply this homomorphic hash function to verify coded replicas, we change the XOR operator in $\text{XOR}(1,2)$ to addition over $\mathbb{Z}_q$ ($q$ is a large prime). We illustrate the new verification protocol using Figure 2 as an example. When helper $A$ is asked by owner $B$ to produce the hash for replica $B1$, it first requests the hash $h_G(\overline{C1})$ from owner $C$, where $\overline{C1}$ is the complement of $C1$ in $\mathbb{Z}_q$, and returns to owner $B$ the requested hash value by computing

$$h_G(B1) = h_G(B1 + C1)\,h_G(\overline{C1}).$$
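The construction can be demonstrated with toy parameters. The scalar form $h_G(x) = g^x \bmod p$ used below is a simplification of the vector scheme in [17], and the tiny $p$, $q$, $g$ are purely illustrative; real parameters would be cryptographically large.

```python
# Toy parameters: q divides p-1 and g has order q modulo p.
p, q, g = 23, 11, 2   # 2^11 = 2048 ≡ 1 (mod 23), so ord(2) = 11

def h_G(x: int) -> int:
    """Homomorphic hash h_G(x) = g^x mod p, giving
    h_G(x + y) = h_G(x) * h_G(y) mod p, with exponents taken mod q."""
    return pow(g, x % q, p)

B1, C1 = 7, 5                  # replica contents as elements of Z_q
coded = (B1 + C1) % q          # what helper A actually stores
C1_bar = (q - C1) % q          # complement of C1 in Z_q
# Helper A answers owner B's challenge without holding B1 itself:
assert h_G(B1) == (h_G(coded) * h_G(C1_bar)) % p
```

The identity works because $C1 + \overline{C1} \equiv 0 \pmod q$, so multiplying the two hashes cancels $C1$ out of the exponent.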
6 Evaluation
This section examines Friendstore’s performance in terms of storage utilization and long term reliability using trace-driven simulations. In addition, we also present statistics from a pilot prototype deployment on 21 nodes over a period of two months and share our lessons learnt from running Friendstore in practice.
Storage utilization One concern with Friendstore is that its storage utilization might be low: a node might find that all of its helpers are full even though available disk space exists elsewhere in the system. We show that Friendstore achieves good utilization when operating under typical social relationships. We use a crawled 2363-node Orkut network as the simulated social graph, so each owner stores data on helpers belonging to its Orkut neighbors. Since each helper contributes the same amount of disk space as its corresponding owner tries to consume, a homogeneous storage system (e.g. DHash [10], Pastry [28], OpenDHT [26]) would be able to achieve perfect utilization. In comparison, Friendstore’s utilization is lower (87%). We find that most “wasted” storage space resides on nodes with low node degrees. As our crawled topology is only a subgraph of the Orkut network, 23% of nodes have fewer than 3 crawled neighbors, while less than 5% of Orkut nodes have degrees smaller than five in the full graph. We vary the minimum node degree by adding new links to the subgraph while preserving the original subgraph’s clustering coefficient of 0.23, using the method proposed in [32]. Figure 3 shows that space utilization increases quickly to reach more than 95% when each owner has more than 5 helpers, suggesting Friendstore’s utilization is likely to be high in practice.
Long term data durability We use the FARSITE trace [7] which monitors the availability of corporate desktop machines to simulate transient node offline events. The median FARSITE node availability is 81%. Since the trace only covers 840 hours, we sample one random node’s up-down sequence from the trace every 840 hours over five simulated years. We generate disk failures using a Weibull distribution to approximate an average disk lifetime of 3 years [30]. Whenever a node suffers a disk failure, we delete all its data. The failed node rejoins the system six days later and its owner attempts to restore from helpers immediately.
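The disk-failure process can be sketched with Python's `random.weibullvariate`. The shape parameter below (1.0, which reduces the Weibull to an exponential with the given mean) is an assumption, since the paper does not state the shape it used:

```python
import random

def sample_disk_failures(n_disks: int, years: float = 5.0,
                         mean_lifetime: float = 3.0, shape: float = 1.0,
                         seed: int = 0) -> int:
    """Count disks whose first failure falls within the simulated window.

    random.weibullvariate(scale, shape) has mean scale * Gamma(1 + 1/shape);
    with shape = 1.0 the scale equals the mean lifetime directly.
    """
    rng = random.Random(seed)
    scale = mean_lifetime  # exact only for shape == 1.0
    return sum(rng.weibullvariate(scale, shape) < years
               for _ in range(n_disks))
```

With a 3-year mean lifetime, roughly 81% of disks fail at least once within 5 years under this model, which is why unreplicated data fares so poorly in Figure 4.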
Figure 4 shows the fraction of data lost at the end of 5 years as a function of the amount of data each owner stores in Friendstore at the beginning of the experiments. If nodes do not back up at all, 81.2% of the data will be lost after 5 years. The loss rate increases as each owner stores more backup data at helpers, because it cannot promptly re-copy replicas lost to disk failures with limited upload bandwidth. According to Equation (1), \( s_{\text{max}} = \frac{B \cdot A}{30 \cdot \lambda_f} \approx 48\) GB for owners with 150Kbps upload bandwidth and 81% availability, and \( s'_{\text{max}} \approx 36\) GB if coding is used. As we can see from Figure 4, when each owner stores no more than \( s_{\text{max}} \), the probability of it losing data after five years is very low (< 0.15%). When an owner’s upload bandwidth increases to 750Kbps, \( s_{\text{max}} \) increases to 240 GB. We have also simulated cases where an owner does not limit the amount of data it backs up according to \( s_{\text{max}} \), but simply stores more backup data whenever its upload link becomes idle. With such a greedy strategy, we found that 7.98% of the data initially backed up five years earlier was lost by the end of the experiments.
**Deployment Lessons** Friendstore is fully implemented in Java and runs on a variety of OSes. We deployed the first version of the software from August to October 2007 in a small-scale deployment involving 17 users and 21 nodes. The 21 nodes are a mixture of university desktops, home desktops and laptops running Windows, Mac and Linux. Table 1 summarizes various statistics from the deployment. Users have configured a wide range of upload bandwidth throttles for Friendstore.
<table>
  <tbody>
    <tr><td>Number of users</td><td>17</td></tr>
    <tr><td>Number of nodes</td><td>21</td></tr>
    <tr><td>Maximum nodes per user</td><td>3</td></tr>
    <tr><td>Fraction of time online</td><td>75.3% (28.6%, 98.6%)</td></tr>
    <tr><td>Max consecutive hours online</td><td>175 hours (53, 692)</td></tr>
    <tr><td>Max consecutive hours offline</td><td>53 hours (13, 120)</td></tr>
    <tr><td>Upload link bandwidth</td><td>624 kbps (211, 3744)</td></tr>
    <tr><td>Number of neighbors per node</td><td>3 (1, 7)</td></tr>
    <tr><td>Total amount of data backed up</td><td>578MB (275, 3077)</td></tr>
  </tbody>
</table>
Table 1: Two months deployment statistics of Friendstore from 08/01/2007 to 10/01/2007. All statistics are shown by the median number followed by 20- and 80-percentile in parenthesis.
We are encouraged to find that the median upload bandwidth usable by Friendstore is quite high (624Kbps) and that nodes are fairly available (median availability is 75%). This suggests that Friendstore’s maintainable storage capacity is likely to be high in practice.

The pilot deployment has revealed a number of practical issues for which our early design and prototype lacked good solutions:

- The deployed software displays a warning sign whenever a helper could not be reached during the past five days. We intended for a user to contact her friend to fix the problem upon noticing these warnings. Instead, our users often just ignored the warnings altogether. The software would be more useful if it could automatically identify the source of the problem and email the responsible user to suggest a fix.

- Our deployed software used existing social relationships collected by Google Talk and Facebook to help users configure trusted nodes. We were surprised to find that many users do not have accounts with either of these popular services. This suggests that we will have to provide our own social relationship registration service for future deployments.

- Some users own a few machines and would like to express separate backup policies for each of them. For example, a user might want Friendstore to back up her laptop’s data on her desktop but not the other way around. Furthermore, a number of users administer a large pool of machines. Since the deployed software lacks the notion of a “group”, it is difficult for these users to configure and administer Friendstore on a large collection of machines.

- Many users prefer storing certain subsets of files without encryption at trusted nodes so their friends can browse and view the stored files. This suggests that there is potential synergy between backup and file sharing, since both might be able to use Friendstore as a generic replicated storage infrastructure.

The Friendstore software is currently undergoing its second major revision to address pitfalls observed in the deployment and to support users behind NATs.

7 Related work

Many researchers have exploited the use of social relationships for a variety of applications: for example, digital preservation (LOCKSS [20]), file sharing (Maze [33] and Turtle [24]), email (Re: [13]), web search (Peerspective [22]). Many online reputation systems also use social networks to improve their accuracy [15, 21, 29]. Friendstore offers a novel use of social relationships, namely, to help users choose a set of trusted nodes for reliable storage. Such user-specified trust relationships resemble those in SPKI/SDSI [12] and the PGP certification chain. However, Friendstore’s notion of trust is different from that in certification systems. CrashPlan [2] is recently released commercial software that allows users to back up data on friends’ machines. Friendstore shares a similar structure but addresses two technical challenges: ensuring a node does not store more data than can be reliably maintained and trading off bandwidth for storage when disk space is the more limiting resource. These challenges are not addressed in our earlier design [18].
There is a vast body of previous work in building reliable replicated storage systems [5, 8, 14, 16]. Many researchers have also recognized that bandwidth can often be the limiting resource when running over wide-area nodes [6]. Our calculation of a node’s maintainable capacity in Section 4 is directly inspired by [8] and similar in spirit to [25, 31]. Many storage systems use coding indiscriminately to store more information in the same amount of disk space. In contrast, Friendstore uses coding to trade off bandwidth for storage and hence only applies coding when disk space is the more limiting resource.
8 Conclusion
This paper presents Friendstore, a cooperative backup system that gives users the choice to store backup data only on nodes they trust. Using trust based on social relationships allows Friendstore to provide a high assurance for reliable backup. Friendstore limits how much data a node stores according to its maintainable capacity and uses coding to store more information when disk space is the more limiting resource. Our initial deployment suggests that Friendstore is a viable solution for online backups. Friendstore is available publicly at http://www.news.cs.nyu.edu/friendstore.
Acknowledgments
We thank Frank Dabek who helped us greatly improve this paper. We are grateful to Robert Morris, Frans Kaashoek, Jinyuan Li and Friendstore’s early users for their encouragement and insightful comments. This project was partially supported by the NSF award CNS-0747052.
References
[28] Rowstron, A., and Druschel, P. Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility. In 18th ACM Symposium on Operating Systems Principles (SOSP) (2001).
Leverage the power of information management for Service-Oriented Architecture (SOA)-based modeling, architecture, design, and implementation. See the various services that information management offers arranged into a stack view and get a detailed description of each. The authors start with metadata management and the importance of metadata integration, move into an examination of services that information management offers, and then present a SOA case study. Finally, the authors list some tools for the services discussed.
Introduction
In this article you'll learn about specific services, such as the following:
- Metadata management
- Extract Transformation Load (ETL)
- Federation
- Data placement (such as replication and caching)
- Data modeling
- Search
- Analytics
We then present a case study that uses SOA to validate data quality, and end with a list of tools for various services. After reading this paper, you should be better able to unleash the power of information management to help build a robust and balanced SOA, enabling information and business integration and avoiding common mistakes such as isolated data silos, data inconsistencies, and untapped information assets.
SOA is more than Web services
Figure 1 shows a logical view categorizing services that information management offers based on their following value propositions:
- Security
- Collaboration
- Availability
- Manageability
- Information consumption
While no single product offers all of these services, taken together these services create a complete information management framework under SOA. Notably, while some articles might place metadata management at the bottom of the information management stack, we depict it in a way that shows that metadata management is pervasive and intertwined with the rest of the services. In fact, SOA is a metadata-driven architecture (see "Metadata Evolution Management in Your SOA" in the Resources section). Therefore, we begin with metadata management in the second half of this paper.
**Figure 1: Information management in SOA**
**Metadata management**
**Metadata, metamodel, and meta-metamodels**
The most common definition of metadata is data about data -- which doesn't really say much. Depending on the discipline, metadata can mean different things. In essence, metadata is information about data's structure (syntax) and meaning (semantics). Examples of structural approaches to metadata are Relational Database Management Systems (RDBMS) catalogs, Java library catalogs, and XML DTDs and schemas. Each of these defines how data looks and how it is used. From the semantic point of view, metadata provides meaning for data. Examples include descriptions in data dictionaries, annotations, or ontologies.
Furthermore, there are instance and class metadata in the content management arena. Instance metadata simply is data stored in a content management metadata repository and refers to objects stored somewhere else, such as documents, Web pages, audio, and video files. Entries in taxonomy and indexes are also considered to be instance metadata. Class metadata is, in some respects, equivalent to RDBMS catalogs and XML schemas, which describe the structure of instance metadata.
Metamodels (also known as meta-metadata) define the structure and semantics of metadata. Examples of standardized metamodels include Unified Modeling Language (UML) and Common Warehouse Meta-model (CWM). The meta-metamodel layer is comprised of the description of the structure and semantics of meta-metadata. It is an attempt to provide a common language that describes all other models of information. Meta Object Facility (MOF) is a standard for meta-metamodels (see Resources).
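The four-layer structure described above can be caricatured in a few lines; the names and dict encoding below are illustrative only, not MOF syntax:

```python
# A toy rendering of the MOF-style layering: each layer describes the
# structure of the layer below it.

meta_metamodel = {"Class": {"has": "attributes"}}           # M3: MOF
metamodel = {"Table": {"attributes": ["name", "columns"]}}  # M2: e.g. a relational metamodel
metadata = {"Table": {"name": "customer",
                      "columns": ["id", "email"]}}          # M1: a schema (metadata)
data = {"id": 42, "email": "a@example.com"}                 # M0: an actual row

def conforms(instance: dict, model_columns: list) -> bool:
    """Check that a data row conforms to the metadata describing it."""
    return set(instance) == set(model_columns)

assert conforms(data, metadata["Table"]["columns"])
```

The same conformance relation repeats at every level: a row conforms to its schema just as a schema conforms to its metamodel.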
**Figure 2: MOF metadata architecture**
It is vital for metadata producers to adhere to the standards in metamodels, metadata interfaces, meta-metamodels, and query languages to achieve maximum interoperability and reach a wider range of metadata consumers, such as data warehouses, analytics, and modeling tools. SOA relies on such cohesive standards in order to dynamically match service producers and consumers, monitor BPEL flows, and improve the traceability of IT resources and business processes.
**Considerations for metadata management**
When we reengineer metadata management, XML obviously is a default data format for metadata because of its ubiquity. Within a single vendor or an organization, centralized approaches are often preferred in order to encourage metadata asset reuse and to reduce development effort and confusion. Also, standardization is the preferred approach. For instance, IBM® uses the open source Eclipse Modeling Framework (EMF) as a common metadata integration technology. EMF provides metadata integration for tools and run-time, so that all the software developed on top of EMF shares a common understanding of other applications. In an ideal situation (though it might be difficult in the short-term), one metadata repository stores all metadata artifacts. Services offered by information management, such as SSO, ETL, federation, quality, search, versioning, and workflow can be invoked for data, content, and metadata management when they are needed.
Regarding the XML repository, there are two popular storage mechanisms for storing XML metadata. They are RDBMS and native XML repositories. Each has its advantages and disadvantages. Some of the determining factors are performance, flexibility, bandwidth, interoperability, support of user-defined data types, and data quality assurance.
Across vendors, enterprises or industries level, the federated approach is a more practical method for metadata management. A virtual metadata repository allows applications to access and
aggregate heterogeneous metadata sources through a single API. Physical metadata artifacts can be stored either in their original locations or using an ETL/replication/cache method to improve performance and metadata placement. Automatic discovery, mapping, and transformation among diverse metadata sources are critical to improve metadata manageability.
**Relationships among data, content, and metadata management**
On one hand, metadata provides the glue that enables programs to talk to each other (in fact, one vendor calls its metadata repository *SuperGlue*). On the other hand, requirements for metadata management are very similar to data and content management. Metadata management needs to offer the same types of services on security, collaboration, QoS, and manageability as data and content management. Metadata management also needs to incorporate SSO, ETL, federation, quality, search, versioning, workflow, and storage persistence. The automation and orchestration requirements for metadata management tend to be even greater than for data and content management, because the primary consumers of metadata are computer programs.
Nevertheless, the good news is that asset reuse and service orchestration can be achieved by building metadata management on top of well-architected, SOA-based information management. This illustrates the importance of reengineering information management into SOA-based and reusable components.
**Challenges of metadata integration**
As we stated earlier, integrating metadata is more challenging than integrating data and content. Many factors contribute to the difficulty of metadata integration. They include the following:
- Metadata is pervasive and, in many cases, invisible to users.
- Metadata and metamodels, in many products, have their own proprietary format. This is especially true for content management.
- In content management, adding metadata to content is typically facilitated by manual workflows. A great deal of content lacks good metadata to enable integration and search.
- Metadata integration requires higher levels of automation and orchestration than data and content integration. This, in turn, requires higher levels of automated discovery, transformation, mapping, and semantic understanding.
- Vendors might choose to keep their proprietary metadata format for fear of losing current customers.
- It takes time and effort to transform to metadata standards such as MOF.
**Business value of metadata integration**
SOA is largely a metadata-driven architecture. To understand the high-level business value of metadata integration, let’s begin by taking a bird’s eye view. Figure 3 illustrates the importance of metadata integration within the context of On Demand Business. Based on information standards, metadata enables seamless information exchange. Given well-integrated metadata, information can freely flow from one place to another across boundaries imposed by operating systems, programming languages, locations, and data formats. Thus metadata can be thought of as the "brain" in information integration. Furthermore, information integration enables business
integration, either across departments within an enterprise or across enterprise boundaries. It provides the following:
- It provides a single and complete view of customers, partners, products, and business through data warehouses or federation.
- It facilitates business performance management using analytical services.
- It enhances business applications with broad information access.
- It enables business process transformation with continuous information services.
Lastly, business integration is one of the cornerstones of an on demand business. Business integration differentiates itself from previous Enterprise Application Integration (EAI) by using IT technology to serve business objectives, rather than the reverse. Therefore, it is not an overstatement to say that metadata integration is the brain of an On Demand Business.
**Figure 3: Metadata integration is the brain of On Demand Business**

Examples of high-level metadata integration values include:
- Facilitating data/content integration from heterogeneous sources.
- Improving time to market for new applications and allowing faster application integration.
- Smoothing the process of inter-/intra-enterprise business integration.
- Providing new insight by enabling analysis of fully integrated information.
- Enabling impact analysis through change management and predictive analysis.
**Data and content federation: A decentralized approach**
Federation is the concept that a collection of resources can be viewed and manipulated as if they were a single resource, while retaining their autonomy (with little or no impact to existing applications or systems) and integrity (not corrupting data or content in existing applications or systems). Needless to say, autonomy and integrity are two important prerequisites for federation.
Since the late 1990s, data federation has emerged as a distinct approach from the centralized approach that data marts and warehouses had been using. Data federation strives to leave data in its original location and to create what can be thought of as a virtual database. Similarly, content federation has emerged in recent years to enable access to and aggregation of heterogeneous content sources. These decentralized approaches reduce data and content redundancies, bandwidth, storage, ongoing synchronization, and the additional administrative costs associated with
a centralized approach. Real-time access to distributed information sources also brings new capabilities to business intelligence, one example being compliance with legal and regulatory requirements. For developers, data federation reduces the need to write and maintain custom APIs for various data sources and to acquire highly specialized skills.
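The virtual-database idea can be sketched as a thin layer that answers one query by dispatching it to several autonomous sources and merging the results. The source names and fields below are hypothetical stand-ins for real relational and content systems, not drawn from any product API:

```python
# Minimal data-federation sketch: one query API over several autonomous
# sources, each left in place (plain in-memory lists stand in for a
# relational database and a content repository).

class FederatedServer:
    def __init__(self):
        self.sources = {}

    def register(self, name, rows):
        # Each source keeps its own data; the server only holds a reference.
        self.sources[name] = rows

    def query(self, predicate):
        # Dispatch the predicate to every source and merge the results,
        # tagging each row with its origin.
        results = []
        for name, rows in self.sources.items():
            for row in rows:
                if predicate(row):
                    results.append({**row, "_source": name})
        return results

server = FederatedServer()
server.register("crm_db", [{"customer": "Acme", "region": "EU"}])
server.register("sales_db", [{"customer": "Acme", "order": 17},
                             {"customer": "Globex", "order": 18}])

acme_rows = server.query(lambda r: r.get("customer") == "Acme")
```

A real federated server would, of course, push predicates down to the sources and optimize the distributed query rather than scan everything at the federation layer.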
The top concern with data federation is performance. To improve performance, federation frequently uses caching, materialized query tables (MQTs), and distributed query optimization and execution. Caching and MQTs create and manage tables at the federated server, which can be a full or subset of rows from target federated data sources. As a cutting-edge tool, IBM WebSphere® Information Integrator takes into account the following:
- Standard statistics from source data (such as cardinality or indexes)
- Data server capability (such as join features or built-in functions)
- Data server capacity
- I/O capacity
- Network speed (please refer to the IBM Redbook, "DB2II: Performance Monitoring, Tuning and Capacity Planning Guide" in the Resources section)
**ETL: A centralized approach**
Extract-transform-load (ETL) is one of the oldest technologies for data integration and is closely allied with data warehousing and business intelligence. It enables data consolidation, migration, and propagation. ETL tools extract, transform, and load data from one or more data sources to one or more targets. ETL was, for some time, the backbone of information integration and still is very popular today. Unlike straightforward extract and load operations, transformation is the most complicated piece, as there is a need to understand, convert, aggregate, and calculate data. The benefits of ETL and data warehousing can be diminished by high costs, slow turn-around time, and incomplete sets of information in data sources.
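The three stages can be sketched in a few lines. The field names and the transformation rule (normalizing amounts to one currency while cleansing names) are illustrative assumptions, not drawn from any particular ETL product:

```python
# Minimal extract-transform-load sketch. Extraction and loading are
# straightforward; transformation carries the real logic (cleansing
# and converting every amount to a single currency).

RATES = {"USD": 1.0, "EUR": 1.1}  # hypothetical fixed exchange rates

def extract(source):
    # Read all rows from a source system.
    return list(source)

def transform(rows):
    out = []
    for row in rows:
        out.append({
            "customer": row["customer"].strip().title(),  # cleanse names
            "amount_usd": round(row["amount"] * RATES[row["currency"]], 2),
        })
    return out

def load(rows, warehouse):
    # Append the consolidated rows to the target store.
    warehouse.extend(rows)

warehouse = []
source = [{"customer": " acme ", "amount": 100.0, "currency": "EUR"},
          {"customer": "globex", "amount": 50.0, "currency": "USD"}]
load(transform(extract(source)), warehouse)
```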
Centralized and decentralized approaches complement each other, and there are major benefits when both approaches are combined.
The centralized approach involves some of these elements:
- Access performance or availability requirements demand centralized data.
- Currency requirements demand point-in-time consistency, such as close of business.
- Complex transformation is required to achieve semantically consistent data.
- The centralized approach is typically used for production applications, data warehouses and operational data stores.
- The centralized approach is typically managed by ETL or replication technologies.
The decentralized approach involves the following considerations:
- Access performance and load on source systems can be traded for a lower overall implementation cost.
- Currency requirements demand a fresh copy of the data.
- Data security, licensing restrictions, or industry regulations restrict data movement.
- The decentralized approach can combine mixed-format data, such as a customer ODS with related contract documents or images.
- Queries require real-time data, such as stock quotes or on-hand inventory.
**Data replication and event publishing**
Data replication moves copies of data from one location to another. The target location could be either a centralized location, such as a data warehouse, or another distributed place on the network. In a grid environment, replication and cache services are used to create the Placement Management Service to meet Quality of Service (QoS) goals. Depending on the access patterns and location of the consuming applications, a Placement Management Service can improve response time and information availability by creating caches or replicas (see "Towards an information infrastructure for the grid" in Resources). In a Web application environment, data and content replication are often used to move data or content from the staging server (usually only for administrators) to the production server when data or content are ready to be published for public consumption. The staged data governance gives organizations greater control over the flow and life cycle of information. For example, suppose a Web site supports multiple national languages. When a data or content element needs to be translated before it can be published on the site, it is populated to the staging server first. Only after it gets translated and optionally approved by administrators is it replicated to the production server and subsequently made available to the public.
Replication can be used in conjunction with either centralized or decentralized approaches. The major differences between ETL and data replication are that ETL usually moves data to a centralized location after applying rigorous data cleansing and transformation rules, takes much longer, and moves larger amounts of data. This is in contrast to data replication, which moves much smaller sets of data to central or distributed locations in a more automated fashion. Data replication can access data in real-time or near-real-time. The primary goal of ETL is to analyze or monitor data and produce business intelligence, whereas the goals of data replication are mostly related to performance, data governance, and data availability. Lastly, ETL and data replication can complement each other nicely: the data replication function can move data faster to data marts and warehouses, and the data transformation function in ETL can deliver greater flexibility and higher data quality in the data replication arena. In order to reuse the logic in different tools, easily callable and loosely-coupled information services need to be in place.
Unlike ETL and data replication, event publishing does not know where the data is going and how it will be used. Changes from source tables are published in XML format or other data formats to a message queue. It is the responsibility of the applications to retrieve published events and take proper actions, such as triggering a business process or transforming data before applying it to a target data source. The loosely coupled architecture separates service providers and consumers, and allows data events to be independent from applications.
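That loose coupling can be sketched with a simple queue: the publisher emits change events in XML without knowing who consumes them, and each consumer retrieves events and decides what to do. The element and attribute names below are illustrative assumptions:

```python
# Event-publishing sketch: source-table changes go onto a queue as XML;
# consumers pull events and act on them independently of the publisher.
import queue
import xml.etree.ElementTree as ET

events = queue.Queue()

def publish_change(table, op, key):
    # The publisher knows nothing about downstream consumers.
    root = ET.Element("change", {"table": table, "op": op, "key": key})
    events.put(ET.tostring(root, encoding="unicode"))

def consume_all():
    # A consumer retrieves published events and chooses its own action
    # (here it simply records what it saw).
    actions = []
    while not events.empty():
        elem = ET.fromstring(events.get())
        actions.append((elem.get("table"), elem.get("op"), elem.get("key")))
    return actions

publish_change("CUSTOMER", "update", "42")
publish_change("ORDERS", "insert", "1001")
handled = consume_all()
```

In a production system the in-process queue would be a durable message queue, but the separation of producer and consumer is the same.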
**Logical data and semantic information modeling**
Logical data modeling is one of software development's best practices and also one of the most easily neglected areas when a development organization is under time and budget pressure. While
logical data modeling is often skipped during in-house development, organizations frequently buy or acquire Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), or other sorts of packages. The result is that there are many versions of data models referring to the same thing within an organization, and each data source has its own data model and meta-model. For example, it is not unusual to have different terms referring to customers -- CRM calls it a *customer*, the accounting system calls it a *client*, and the sales system calls it a *buyer*. The textbooks and theorists tend to begin with the logical enterprise data model, then move on to a physical data model (such as an Entity Relationship Diagram), code generation, and development, but the order is often reversed in reality.
In practice, organizations often build, buy, or acquire databases in pieces, and data remains in isolated islands. On occasion these organizations recognize a need to integrate the data. What do they do next? Often they dive into piles of documents, millions of lines of code, and terabytes of data to discover what types of information they produce and consume, not to mention that they need to discover and document after the fact the relationships among various data models and business processes. On the bright side, certain automatic data discovering and profiling tools can speed up processes and relieve the pain of performing these tasks. Many organizations might eventually derive a logical enterprise data model so that individual systems can be mapped to the common logical model. Transformation is required in certain cases, such as transforming one currency to another. In the end, physical data models are mapped to an Enterprise Data Model -- a common logical data model shared by an enterprise. An Enterprise Data Model provides maximum benefits if it is designed at the beginning as a part of Model Driven Architecture. Nevertheless, it is still invaluable as a result of the above reverse engineering steps. The main benefits of an Enterprise Data Model are:
- Provides an overview of enterprise information assets.
- Reinforces the practice of using IT technologies to support business processes.
- Reduces the cost and risks of Enterprise Information Integration (EII), Enterprise Application Integration (EAI), and data warehousing.
- Enables asset-based reuse of data, metadata, and meta-models.
- Improves data and metadata quality.
- Facilitates communication among business analysts, data modelers, developers, and database administrators.
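The customer/client/buyer example above can be sketched as a mapping from each physical model's vocabulary onto one logical entity, so consumers only ever see the common term. The system names and term mappings are hypothetical:

```python
# Mapping hypothetical per-system terms onto one Enterprise Data Model
# entity ("party"), so every consumer sees the same logical name.

TERM_MAP = {
    "crm": {"customer": "party"},
    "accounting": {"client": "party"},
    "sales": {"buyer": "party"},
}

def to_enterprise_model(system, record):
    # Rename each field according to the system's mapping; fields with
    # no mapping pass through unchanged.
    mapping = TERM_MAP[system]
    return {mapping.get(field, field): value for field, value in record.items()}

unified = [
    to_enterprise_model("crm", {"customer": "Acme"}),
    to_enterprise_model("accounting", {"client": "Acme"}),
    to_enterprise_model("sales", {"buyer": "Globex"}),
]
```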
Semantic information modeling (ontology) moves beyond structural, logical data modeling in the sense that it models semantics (meaning) and relationships of data. It unifies vocabularies (terms and concepts) across multiple knowledge domains. There are a number of problems ontology solves particularly well, such as problems with the following (see also "Semantics FAQs" in the Resources section):
- Information integration
- Model transformation
- Translation
- Data cleansing
- Search
- Navigation
- Text understanding
- Document preparation
- Speech understanding
- Question-and-answer issues
**Data profiling**
Data profiling is a process to discover the following:
- Data formats
- Patterns
- Characteristics
- Rules
- Hidden relationships
Data profiling also provides numerous benefits, including the following:
- Improves organizations' understanding of their data.
- Helps Electronic Data Management (EDM).
- Facilitates data mapping and transformation.
- Improves data quality.
- Builds baselines for performance tuning.
- Assists semantic modeling.
Data profiling aims to understand information better and create additional metadata about objects.
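A tiny profiling pass over sample values illustrates the discovery step: infer a format pattern and a few characteristics for a column. The pattern convention (digits become `9`, letters become `A`) is a common profiling idiom, used here as an assumption:

```python
# Minimal data-profiling sketch: derive a format pattern and basic
# characteristics for the values in one column.
import re

def pattern(value):
    # Digits become '9', letters become 'A'; other characters stay as-is.
    return re.sub(r"[A-Za-z]", "A", re.sub(r"\d", "9", value))

def profile(values):
    return {
        "count": len(values),
        "distinct": len(set(values)),
        "patterns": {pattern(v) for v in values},
        "max_length": max(len(v) for v in values),
    }

# The outlier "55x-0102" produces a second pattern, exposing a likely
# data-quality problem in the column.
phone_profile = profile(["555-0100", "555-0101", "55x-0102"])
```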
**Data, content, and metadata quality**
Data quality can make or break an enterprise information management strategy, which in turn determines the success of its business integration strategy. Data quality issues are reported to be one of the main reasons data warehousing projects miss deadlines. Poor data quality can cause misinformed decisions, ineffective operations, missed opportunities, and on occasion punishment by the organization or marketplace. Data quality no longer sits on the shelf as a luxury, nice-to-have item; instead, it has become a key operational element for businesses.
Examples of data quality problems are:
- Missing data for required fields
- Inconsistent data entries
- Incorrect or inaccurate data entries
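The three problem classes above can be sketched as rule checks over a record; the required fields, the case convention, and the reference set of country codes are all illustrative assumptions:

```python
# Checking one record against the three data-quality problem classes:
# missing required fields, inconsistent entries, and invalid values.

REQUIRED = ("name", "country")
COUNTRY_CODES = {"US", "DE", "JP"}  # hypothetical reference set

def quality_issues(record):
    issues = []
    for field in REQUIRED:
        if not record.get(field):
            issues.append(f"missing:{field}")          # missing data
    country = record.get("country")
    if country and country != country.upper():
        issues.append("inconsistent:country-case")     # inconsistent entry
    if country and country.upper() not in COUNTRY_CODES:
        issues.append("invalid:country")               # incorrect entry
    return issues

issues = quality_issues({"name": "", "country": "de"})
```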
Due to the inherent complexity of data quality work, some organizations opt to out-source such work to third-party service providers. We will take a look at a case study later in this paper.
Content quality is often neglected, partially because evaluating content quality is a much harder task than evaluating data quality. Content, after all, is unstructured, and quality standards are thought to be more subjective or arbitrary. Content quality is typically not in the scope of
technology projects, and it is not well-regarded from an organizational perspective. However, in a SOA environment, content quality becomes more important due to SOA's fluid nature. If data errors or poor-quality content are not caught early, they propagate everywhere. Content quality criteria differ by the type of content, but there are some common criteria to evaluate content quality, such as the following:
- Relevancy
- Timeliness
- Expiration
- Content validation
- Rating
- Duplication
- Link checking
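Some of these criteria lend themselves to automated checks. A sketch for two of them, expiration and duplication, where the item fields are assumptions:

```python
# Automated checks for two of the content-quality criteria above:
# expiration and (near-verbatim) duplication.
from datetime import date

def content_issues(items, today):
    seen = {}
    issues = []
    for item in items:
        if item["expires"] < today:
            issues.append((item["id"], "expired"))
        # Normalize the body so trivially differing copies are caught.
        body = item["body"].strip().lower()
        if body in seen:
            issues.append((item["id"], f"duplicate-of:{seen[body]}"))
        else:
            seen[body] = item["id"]
    return issues

items = [
    {"id": "a", "body": "Press release", "expires": date(2030, 1, 1)},
    {"id": "b", "body": "press release ", "expires": date(2030, 1, 1)},
    {"id": "c", "body": "Old notice", "expires": date(2020, 1, 1)},
]
issues = content_issues(items, date(2025, 6, 1))
```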
Metadata quality has received increased attention lately due to the increasing demand for metadata management capabilities. Techniques that are used to improve data quality, such as standardization, profiling, inspection, cleansing, transformation, and validation also apply to metadata quality improvement.
Strong data typing is the key to ensuring consistent interpretation of XML data values across diverse programming languages and hardware. However, current XML technology only allows schema validation for a single document; an effective way to validate data types (including user-defined data types) and enforce semantic strong typing across different schemas or data sources (such as between relational databases and an OO data type facility) is missing. Standardizing on XML Document Type Definitions (DTDs) or schemas, which many industries are attempting as a solution to this problem, is insufficient, as issues with XML DTD or schema validation, semantic consistency, and compatibility still exist when you need to integrate data across multiple industries, which is a basic requirement for On Demand Business.
**Search and query**
Within enterprise search, there are many different types of searches: keyword, Boolean, range, faceted metadata, semantic, natural languages and parameterized. No matter which type of search, the purpose is to provide a consolidated, correlated, and ranked result set that enables quick and easy access to information. To facilitate search, indexing (not to be confused with indexes in relational databases) is used to index key words, concepts and instance metadata of unstructured content, such as Web pages, e-mail database, or file systems, so they can be searched and retrieved. Relational databases can also be indexed for faster and more flexible search.
Although many organizations realize the importance of integrating structured and unstructured information, today's search results are still unrelated to each other. What users get is a list of links that point to *potentially related information*. Users have to crawl through the search results to find the information they need and to correlate it with the original intent of the query. This is largely a manual process. We think there is a strong need for research on using search and query to achieve *one query, one result set* across data and content.
Databases generally have their own search functions. The most generic search function is through query language, such as SQL and XQuery. Database search is great to retrieve structured and exactly matched data, but it requires highly specialized knowledge on query construction and data model familiarity. The users of database search are typically developers or database administrators. Besides, database search is not designed for relevance ranking, fuzzy search or multiple keywords. Therefore, database search is limited in scope. To achieve high performance, flexibility, relevance ranking, and so on, some search engines connect to databases directly, extract data, and generate indexes from databases. One example is IBM WebSphere OmniFind.
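Indexing for keyword search, as distinguished above from relational indexes, can be sketched as an inverted index: each term maps to the set of documents containing it, and a multi-term query intersects those sets. The documents below are illustrative:

```python
# Minimal inverted-index sketch for keyword search over unstructured
# content: each term maps to the set of documents that contain it.

def build_index(docs):
    index = {}
    for doc_id, text in docs.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(doc_id)
    return index

def search(index, *terms):
    # Return documents containing every query term (AND semantics).
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

docs = {
    "page1": "metadata drives information integration",
    "page2": "information management under SOA",
    "mail1": "metadata management standards",
}
hits = search(build_index(docs), "metadata", "management")
```

Real engines add tokenization, stemming, and relevance ranking on top of this core structure; the sketch shows only the indexing and retrieval step.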
**Analytics**
As we illustrated in the previous ETL section, data warehouses consolidate data into a central location to enable better decision-making, cross-departmental reporting, and data mining. The traditional analytics include reporting, data mining, dashboards, scorecards, and business performance management. As competition increases, operations become more complex, and regulations become more restrictive over time. Organizations want to access heterogeneous data sources in real-time in order to make the following improvements:
- Employ integrated information to predict market trends.
- Understand customers better.
- Increase operation efficiency.
- Ensure compliance to regulations.
- Derive new knowledge.
All of these trends drive the increased demand for analytical capabilities in information management. Analytics have moved from the back office to the front line. For example, if a salesperson knows an existing client's contract, service experience, its industry trends, its competitors and customers, he or she will be in a much better position to form a customized sales proposal specific to that client. Lastly, analytics frequently necessitates information integration across heterogeneous information sources. For instance, to evaluate quality, a car manufacturer needs to correlate accident reports (stored in a document management system), dealers' repair records (stored in a relational database), drivers' risk factors, and environmental factors (stored in knowledge management system). The future of analytics will build increased intelligence to access and correlate information from heterogeneous information sources in order to allow new insights and business decisions.
**Related services**
The following services are described as related not because they are not important to information management, but because they are common to business processes and application integration as well.
**SSO, access control, and audit**
Single-Sign-On (SSO) to heterogeneous information sources, access control, and auditing the viewing and modification of information all build a foundation for a secure environment for information management. SSO asks users *who are you*, access control asks *what can you do*,
and audit keeps track of *what you have done*. The benefits of SSO are many; it reduces user frustration, lowers development effort, and increases productivity. Access control ensures that only people with the correct rights can access data and content. Some businesses require highly sophisticated access rights management, such as Digital Rights Management. An audit service adds additional security to data and content. Viewing, inserting, modifying, and deleting information can all be audited and easily reported. With increasing demands on security and regulatory compliance, the combination of SSO, access control, and audit services builds a solid foundation for enterprise information management.
**Workflow and version control**
Both workflow and version control are designed to foster collaboration in a team environment. Data, content, and metadata management, application code development, and processes all need workflow to allow people to collaborate while establishing consistent points through version control so they can refer back to them later. Workflow links people, processes, and information into one ecosystem. Each part of the system -- people, processes, and information -- is very interactive, and the interactions among them are even more dynamic. For example, a company sets up a program through which every employee can submit ideas on any topic. Depending on the domain of the ideas (information), they will be routed, reviewed, and worked on by different people (processes, people). Thus, a highly robust and adaptive workflow is needed to be able to handle unanticipated situations. Once you develop such a workflow service, it can be called by different applications, such as document management, an HR system, or knowledge management.
**Portal**
Industry analysts predict that enterprise portals combined with Web services will take off within the next twelve months. Portals integrate applications and information and present them to the end users as one unified view. Since EII provides an abstraction layer, developers are able to access and aggregate various information sources, maintain the code, and achieve performance and security requirements without writing customized adapters. As a result, application development requires less time, cost, and skill, and portal users can access a wide variety of information effortlessly. Most importantly, end-to-end business processes can be integrated easily and quickly.
**Case study: An example of data quality service**
Services such as enterprise search, data quality and validation, and analytics in the information management stack are often good candidates for outsourcing. The framework of information management under SOA opens up a new and increasingly popular business model. Let's take a look at a case study of offering data validation services, a subset of data quality services, through SOA.
Many e-commerce companies need to verify addresses, telephone numbers, and social security numbers, as well as other identifying information in real-time in order to prevent mistakes and fraud or to comply with laws and regulations, such as Sarbanes-Oxley. Because of the complexity of data quality validation, some companies subscribe to data validation services from third-party
providers instead of developing in-house solutions. Some companies offer data validation and quality services and provide real-time address and telephone number validation over the Internet. Typically, after the customers fill out e-commerce applications and submit them online, e-commerce companies wrap customers' information into XML documents and send it to data validation companies through Web services, Simple Object Access Protocol (SOAP), and Web Services Description Language (WSDL). The receiving companies verify the data in real-time within the same customer transaction. For the customers, they get instant feedback and are able to correct or cancel the transaction.
In the past, if data errors occurred during the process, e-commerce companies received undeliverable addresses or e-mails days or even months later; meanwhile, customers wondered what happened to their accounts. As a result of data validation services through SOA, e-commerce companies are relieved of the burden of maintaining and updating gigabytes of database information containing millions of people's names, phone numbers, and deliverable addresses, including information from other countries and territories.
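The round trip described in this case study can be sketched end to end: the merchant wraps customer fields in XML, and the provider parses the payload, applies validation rules, and returns a verdict within the same transaction. The element names and the validation rules below are hypothetical simplifications of what a real provider would check:

```python
# Sketch of a real-time data-validation exchange: the e-commerce side
# wraps customer data in XML; the provider side parses it, applies
# validation rules, and returns a verdict.
import re
import xml.etree.ElementTree as ET

def wrap_request(fields):
    # Merchant side: wrap submitted fields in an XML document.
    root = ET.Element("customer")
    for name, value in fields.items():
        ET.SubElement(root, name).text = value
    return ET.tostring(root, encoding="unicode")

def validate(xml_payload):
    # Provider side: parse the payload and apply simple format rules.
    elem = ET.fromstring(xml_payload)
    phone = elem.findtext("phone", "")
    zipcode = elem.findtext("zip", "")
    errors = []
    if not re.fullmatch(r"\d{3}-\d{4}", phone):
        errors.append("phone")
    if not re.fullmatch(r"\d{5}", zipcode):
        errors.append("zip")
    return {"valid": not errors, "errors": errors}

verdict = validate(wrap_request({"phone": "555-0100", "zip": "1234"}))
```

In production, the XML document would travel over SOAP as described above; the sketch keeps both halves in one process to show the data flow.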
**Conclusion**
The authors examined each of the services that information management offers and gave special attention to metadata management and integration. Although there are many types of services, and these might initially seem overwhelming, you can see the main point of information management if you remember the following value proposition:
- Security
- Collaboration
- Quality of Service
- Manageability
- Consumption
Hopefully, this paper makes you aware of the great importance and broad scope of information management. Armed with knowledge of the individual pieces and their interactions, you are able to unleash the power of information management and build a robust and balanced SOA.
**Acknowledgement**
The authors would like to thank Susan Malaika and Norbert Bieberstein for their excellent feedback and Robert D. Johnson for his support.
**IBM information management products**
The following table shows you the information management services and the IBM products available to implement these services.
**Table 1. IBM information management products**
<table>
<thead>
<tr>
<th>Information management services</th>
<th>IBM products</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Analytics</strong></td>
<td>DB2® Data Warehouse Edition; DB2 Cube Views; DB2 Alphablox; DB2 Entity Analytics</td>
</tr>
<tr>
<td><strong>Content federation</strong></td>
<td>WebSphere® Information Integrator, Content Edition</td>
</tr>
<tr>
<td><strong>Data federation</strong></td>
<td>WebSphere Information Integrator</td>
</tr>
<tr>
<td><strong>Data modeling</strong></td>
<td>Rational® XDE; alphaWorks Data Architect for DB2 Document Management</td>
</tr>
<tr>
<td><strong>Data profiling</strong></td>
<td>WebSphere ProfileStage</td>
</tr>
<tr>
<td><strong>Data quality</strong></td>
<td>WebSphere QualityStage</td>
</tr>
<tr>
<td><strong>ETL</strong></td>
<td>WebSphere DataStage; DB2 Warehouse Manager</td>
</tr>
<tr>
<td><strong>Logical and semantic information modeling</strong></td>
<td>IBM Research Ontology management system (Snobase)</td>
</tr>
<tr>
<td><strong>Metadata repository</strong></td>
<td>WebSphere MetaStage; alphaWorks XML Registry</td>
</tr>
<tr>
<td><strong>Search</strong></td>
<td>WebSphere Information Integrator OmniFind Edition</td>
</tr>
</tbody>
</table>
\textsuperscript{a} Department of Industrial and Manufacturing Engineering, Ciudad Juárez Autonomous University, Ave. del Charro 450 Norte, C.P. 32310. Cd. Juárez, Chihuahua, México
\textsuperscript{b} Graduate Studies and Research Division, Ciudad Juárez Institute of Technology, Ave. Tecnológico No. 4090. Cd. Juárez, Chihuahua, México
Abstract. Advanced Manufacturing Technology (AMT) is one of the most relevant resources companies have to achieve competitiveness and best performance. The selection of AMT is a complex problem that involves a significant amount of information and uncertainty when multiple aspects must be taken into consideration. Current models for the selection of AMT largely lack the Human Factors and Ergonomics perspective that can lead to a more complete and reliable decision. This paper presents the development of software that supports the application of an Ergonomic Compatibility Evaluation Model, aiding decision-making processes that take the ergonomic attributes of designs into consideration. Ergonomic Compatibility is a construct used in this model; it is based mainly on the concept of human-artifact compatibility in human-compatible systems. In addition, an Axiomatic Design approach using the Information Axiom was extended under a fuzzy environment to obtain the Ergonomic Incompatibility Content. The extension of this axiom to the evaluation of ergonomic compatibility requirements forms the theoretical framework of this research. An incremental methodology of four stages was used to design and develop software that enables the comparison of AMT alternatives through the evaluation of Ergonomic Compatibility Attributes.
Keywords: advanced manufacturing technology, ergonomic compatibility evaluation, ergonomic incompatibility content, software development, fuzzy axiomatic design approach
1 Corresponding author. E-mail: amaldona@uacj.mx, araande72@yahoo.com
1. Introduction
Advanced Manufacturing Technology (AMT) is recognized as one of the most valuable resources for companies in their quest for competitiveness in a globalized market. AMT is generally associated with the use of computers during all stages of the manufacturing process of a product, including design, manufacturing and management activities. It typically includes computer numerically controlled (CNC) machines, Computer Aided Design (CAD) and Manufacturing (CAM), computer-mediated material storage systems, and Flexible Manufacturing Systems (FMS), among others [9]. This technology undergoes continuous, gradual but also radical changes in industry, so tools and strategies for the proper selection of materials, processes, equipment and machines are required [15]. Commonly, decision makers (DMs) face situations in which it is necessary to plan, evaluate and select equipment from a variety of available alternatives. Moreover, decision making regarding the evaluation and selection of AMT involves a large variety of aspects that are difficult to consider in their entirety.
This work presents the development of software following an incremental design methodology. The general objective is to provide a simpler, more understandable, effective and efficient system for potential users, and to support the application of the Ergonomic Compatibility Evaluation Model (ECEM) for the selection of AMT proposed by Maldonado-Macías [5]. This model addresses the need to integrate ergonomic attributes into the evaluation and selection of AMT. This document is organized into six parts: the first is introductory; the second presents the theoretical framework for this work, including the basis of the ergonomic evaluation model and the structure for software development; the third describes the methodology for the software development; the fourth presents the results, describing the functions and parts of the software; finally, the conclusions and recommendations of the work are discussed and references are presented.
2. Literature Review
For Karwowski [21], ergonomics is a unique and independent scientific discipline that focuses on the nature of human-artifact interactions; it promotes the design and management of human-compatible systems. This author also proposed a subdiscipline called Simvatology, which studies the compatibility of human-artifact systems; the name comes from two Greek words (Simvatotis = compatibility and Logos = reasoning about something). This subdiscipline aims to discover the laws of artifact-human compatibility and to develop a measurement for it. Ergonomic Compatibility (EC) is a construct used for the purposes of the ECEM, and it is based on the concepts of human-system and human-artifact compatibility proposed by Karwowski [19-21].
In addition, the Axiomatic Design Theory (ADT) developed by Suh [13], and particularly the Information Axiom (IA), was included in the model. This axiom, adapted and extended by Hélander [10,11] and adopted by Karwowski [20-23] to address ergonomic aspects of technology, was used to obtain the Ergonomic Incompatibility Content (EIC): a measurement of the probability that an AMT design meets Ergonomic Functional Requirements. The approach is based mainly on the extension and adaptation of the IA, which proposes selecting the alternative with the minimum information content. Based on this axiom in a fuzzy environment, the ECEM achieves the effective integration of Ergonomic Compatibility Attributes into a multi-attribute decision-making schema [2-4]. The model thus has its theoretical foundation in the construct of Ergonomic Compatibility within a Fuzzy Axiomatic Design (FAD) approach.
According to Suh [13], in order to evaluate a given design it is necessary to define its Functional Requirements (FRs), as well as the Design Range (DR) for each FR, which represents the desirability of a system or product as established by the designer or expert, and the System Range (SR), which represents what the system or product can actually deliver against that DR. The overlap between these two ranges defines a region called the Common Area, which represents the probability that a given system or product can meet the established requirements. In a fuzzy approach, data can be linguistic terms, fuzzy sets, or fuzzy numbers. If the data are linguistic terms, they are first transformed into fuzzy numbers; all fuzzy numbers (or fuzzy sets) are then assigned crisp scores. Numerical approximation systems have been proposed to systematically convert linguistic terms to their corresponding fuzzy numbers through the conversion scales proposed by Chen and Hwang [16]. In this way, the ECEM proposes to settle the Design Range (DR), denoted by the triangular fuzzy number \((\alpha, \beta, \theta)\),
and to determine the System Range (SR), denoted by the triangular fuzzy number \( (a, b, c) \), for each Ergonomic Compatibility Attribute (ECA). Figure 1 illustrates these ranges using triangular fuzzy numbers; the Common Area is shown shaded. The Ergonomic Incompatibility Content for alternative \( i \) and attribute \( j \) is obtained by Equation 1. Equation 2 defines the total Ergonomic Incompatibility Content of an alternative, affected by the importance weights assigned by the experts to the attributes.
\[
EIC_{ij} = \log_2 \frac{\text{Area of Ergonomic System Design (Triangular Fuzzy Number)}}{\text{Common Area}}
\]
\[
EIC_i = \sum_{j=1}^{n} w_j \, EIC_{ij}
\]
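The two equations above can be sketched numerically. The following Python sketch is an illustration, not the authors' C#/Matlab implementation: it represents DR and SR as triangular fuzzy numbers, approximates the Common Area by integrating the minimum of the two membership functions, computes the per-attribute EIC as in Equation 1, and combines attributes with importance weights as in Equation 2.

```python
import math

def tri_mu(x, a, b, c):
    """Membership degree of x in the triangular fuzzy number (a, b, c)."""
    if a < x <= b:
        return (x - a) / (b - a)
    if b < x < c:
        return (c - x) / (c - b)
    return 0.0

def _integrate(f, lo, hi, n=20000):
    """Trapezoidal approximation of the area under f over [lo, hi]."""
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        s += f(lo + i * h)
    return s * h

def eic(dr, sr):
    """EIC = log2(SR area / Common Area) for one attribute.

    dr and sr are (a, b, c) triangular fuzzy numbers for the Design Range
    and the System Range; math.inf is returned when the ranges do not
    overlap (the design cannot meet the requirement at all)."""
    lo, hi = min(dr[0], sr[0]), max(dr[2], sr[2])
    sr_area = (sr[2] - sr[0]) / 2.0  # area of a unit-height triangle
    common = _integrate(lambda x: min(tri_mu(x, *dr), tri_mu(x, *sr)), lo, hi)
    if common <= 0.0:
        return math.inf
    return math.log2(sr_area / common)

def weighted_eic(weights, eics):
    """Total EIC of one alternative: importance-weighted sum (Equation 2)."""
    return sum(w * e for w, e in zip(weights, eics))
```

When DR and SR coincide, the Common Area equals the SR area and the EIC is zero; the further the ranges drift apart, the larger the EIC, matching the axiom's preference for the alternative with minimum information content.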
2.1 Overview of the model (ECEM)
Ergonomic Compatibility Attributes (ECAs) of the proposed model were determined through an extensive literature review, based mainly on the works of Corlett and Clark [6] and Endsley [12] and the usability attributes proposed by Bruseberg [1]; these were established as Ergonomic Functional Requirements (EFRs). They were divided into five main attributes and twenty sub-attributes. The main attributes are: compatibility with human skills and training (A11), physical work space compatibility (A12), usability (A13), equipment's emissions requirements (A14) and compatibility with organizational requirements (A15). The main attribute A11 includes two sub-attributes: compatibility with human skill level (A111) and training compatibility (A112). The main attribute A12 includes five sub-attributes: access to machine and clearances (A121), horizontal and vertical reaches (A122), adjustability of design (A123), postural comfort of design (A124), and physical work and strength related to design (A125). The main attribute A13 includes seven sub-attributes: compatibility with controls' design (A131), compatibility with controls' physical distribution (A132), compatibility of the visual work space (A133), information load (A134), error tolerance of design (A135), functional allocation (A136), and design for maintainability (A137). The main attribute A14 includes four sub-attributes: temperature (A141), vibration (A142), noise (A143), and residual materials (A144). Finally, the main attribute A15 includes two sub-attributes: compatibility with pace of work (A151) and compatibility with total work content (A152).
The Ergonomic Compatibility Evaluation Survey (ECES) proposed by Maldonado-Macías [3,4] was used to collect the data for the evaluation of these attributes on AMT and to obtain their relative importance from the participation of experts. The evaluation applies to the selection among AMT alternatives with a very similar or identical manufacturing purpose (e.g., alternatives of CNC milling machines). The importance weight of each attribute is obtained with the Analytic Hierarchy Process (AHP) methodology proposed by Saaty [17]. The model is intended for companies that face AMT selection processes and are interested in including ergonomic attributes in their evaluation. The software enables companies to create a database of AMT alternatives, perform systematic ergonomic evaluations of them, and compare them to select the choice that best satisfies ergonomic requirements.
2.2 Software quality requirements
According to Pressman [14], to ensure the efficiency of computer systems it is necessary to take certain quality aspects into account from the beginning of the process. Some of them are listed below:
• Maintainable: It must be possible for the software to evolve while continuing to meet its specifications.
• Reliable: Software should not cause physical or economic damage in the case of failures.
• Effective: Software should not waste the resources of the system.
• Usable: The software must have an appropriate user interface and documentation.
During all the stages of this model these requirements were carefully pursued.
3. Methodology
The methodology developed in this work comprises the four stages of an incremental model: analysis, design, coding and testing. As shown in Figure 2, all the information needs of the client and the purpose of the system are defined in the analysis phase, which ensures that the system meets the client's needs. In the design stage the system architecture is produced, defining how the system will be structured. In the coding stage, the entire design is encoded using programming languages. In the test phase, the results obtained from the system are compared against those requested by the client. At the end of this stage the process can start again from the first stage, iterating successively until the improved system design can be delivered.
Commonly, it is not possible to design software as a whole at the outset, because changes arise during the process; in research projects in particular, adaptations and changes requested by the client may occur. For this reason, the incremental model is suitable for cases where several iterations can be executed until the desired system is obtained.
In this case the software development followed the stages of the incremental model proposed by Pressman [14], shown in Figure 2.
The previous stages and how they were used for the development of the software are described below:
• Analysis: This stage defines what the software should do according to the client's needs and specifications. It describes the user requirements for the system and explains the functionality and the interaction between the user and the software application. In this case, the analysis phase consisted of analyzing the client's requirements. One of them was the creation of a database for capturing and processing information from companies interested in using the software to assist their decision-making processes for the selection of AMT. The screens and menus needed to enter the data generated by the experts' evaluations, and the way results would be shown to users, were also established. The most important part of this stage was to achieve a correct representation of the mathematical model proposed by Maldonado [5] for the ECEM for the selection of AMT.
• Design: This stage produces the structure of the system. It includes the database design, the screens and menus for input data, and the application of the "use case" model for designing all the user-system interfaces.
• Code: This stage encodes the designed system. Code was developed in C# within a Visual Studio® 2010 environment. The works of Jiarratano, Sharp and Ross [7,8,18] were useful at this point of the investigation. The fuzzy logic computations were implemented with Matlab® 2010. This combination allowed handling the user interfaces and the database at the same time while performing the complex aggregation and compilation calculations of the Ergonomic Compatibility Evaluation Model (ECEM).
• Testing: At this stage, verification of the software took place: the system must comply with the client's requests. Comparisons between the system's results and previous results obtained by other methods helped validate the generated functions. The functions that obtain the System Range area, Design Range area and Common Area from triangular fuzzy numbers and membership functions were also tested against results produced with AutoCAD®. Results obtained with Expert Choice® for the attributes' relative importance weights using the Analytic Hierarchy Process (AHP) were compared with the results obtained from the system. The values were consistently correct.
4. Results
Results of each stage are explained in the following sections.
During the analysis stage, and after several meetings with the user, the ergonomic evaluation model for the selection of AMT was clarified in order to design the system. The analysis was divided into three parts to allow effective feedback at each stage: data acquisition, data processing and display of results. The system was also represented through the conceptual diagram shown in Figure 3.
Fig. 3. Conceptual diagram of the AMT system
Fig. 4. System Flowchart
4.1 Basic functions flowchart
For this project, the model, the user interfaces, the database structure and the screenshots were designed. The flowchart of the basic functions, a typical tool at this stage, is shown in Figure 4.
In the first phase of design, the main areas of the system are specified; this includes providing access through a main menu from which the user can select the items needed. At this phase, the information routes to be followed after running a command and the general operation of the system are designed.
This gives rise to several groups of menus with functions and procedures that are listed below:
• Main Menu: Serves as the connection among all system functions.
• Add Enterprise Menu: Captures information of a general nature, such as personnel, infrastructure, and process data acquisition equipment.
• Add Expert Menu: Captures information about the experts who will evaluate the alternatives.
• Add AMT Menu: This area of the system allows a new alternative to be included for comparison and provides detailed information about the characteristics of the alternatives (equipment) to be evaluated and compared. This menu also provides access to the evaluation process, which enables experts to capture the scores and rankings of each attribute and sub-attribute for each alternative. In this part, the Design Range and System Range are captured via the experts' evaluations.
• Results Screenshot: This is the critical part of the system, where the information about the evaluations is presented. The calculations of the Ergonomic Incompatibility Content (EIC) and the "spider" chart for the comparison are also shown here. As a result, the system indicates which alternative best satisfies the established Ergonomic Functional Requirements.
This section displays the general comparison of alternatives and the detailed evaluation of attributes and sub-attributes separately, which helps to analyze the final decision.
4.2 Data base design
The database design contains the general information of the company, experts, alternatives, etc. It consists of six tables, each properly declared with a name, its field names and the types of data they may contain. In addition, the length assigned to each field was determined with proper tolerance for changes. The structure of the database allows capturing the data and defines the tables, fields, data types and the way the information contained in the tables can be linked, recorded and retrieved. Once the coding and testing of this part were completed, the next step was the creation of a digital version of the Ergonomic Compatibility Survey (ECS) proposed by Maldonado-Macías [3,4], required for data acquisition. This version acquires general information about the company, the experts' identification, and equipment specifications and identification. It also contains the section where the evaluation process is carried out, explained below.
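As a rough illustration of such a relational layout, the sketch below links companies, experts, alternatives and evaluations; the table and column names are hypothetical stand-ins, not the paper's actual six-table schema (shown here with Python's built-in sqlite3 rather than the system's real database engine).

```python
import sqlite3

# Hypothetical schema sketch: names, fields and the number of tables are
# illustrative only; the paper's actual design uses six tables.
SCHEMA = """
CREATE TABLE company     (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE expert      (id INTEGER PRIMARY KEY, name TEXT NOT NULL,
                          company_id INTEGER REFERENCES company(id));
CREATE TABLE alternative (id INTEGER PRIMARY KEY, description TEXT NOT NULL,
                          company_id INTEGER REFERENCES company(id));
CREATE TABLE evaluation  (id INTEGER PRIMARY KEY,
                          expert_id      INTEGER REFERENCES expert(id),
                          alternative_id INTEGER REFERENCES alternative(id),
                          attribute TEXT NOT NULL,           -- e.g. 'A124'
                          dr_a REAL, dr_b REAL, dr_c REAL,   -- Design Range
                          sr_a REAL, sr_b REAL, sr_c REAL);  -- System Range
"""

def open_db(path=":memory:"):
    """Open (or create) the evaluation database and install the schema."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```

Storing each Design Range and System Range as three REAL columns keeps the triangular fuzzy numbers queryable per attribute, per expert and per alternative, which is what the aggregation and EIC steps described later need.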
4.3 Ergonomic Compatibility Evaluation Process
In this part, the evaluations made by the experts are supported by the software. This includes the evaluation and determination of Design Ranges, System Ranges and the relative importance of each attribute and sub-attribute.
The ergonomic evaluation of every Ergonomic Compatibility Attribute (ECA) is made for each alternative. Experts determine the Design Range (DR) and assess how well each alternative satisfies this range (System Range, SR) for each attribute and sub-attribute, according to the ECS and the software menus, using linguistic terms. The software systematically converts these terms into fuzzy numbers using appropriate scales.
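A minimal sketch of this conversion step follows, assuming an illustrative five-term scale; the numeric values are stand-ins, not the actual Chen and Hwang conversion scales the model uses.

```python
# Illustrative five-term scale; the values are NOT Chen and Hwang's actual
# conversion scales, only stand-ins with the same triangular shape.
LINGUISTIC_SCALE = {
    "very low":  (0.00, 0.00, 0.25),
    "low":       (0.00, 0.25, 0.50),
    "medium":    (0.25, 0.50, 0.75),
    "high":      (0.50, 0.75, 1.00),
    "very high": (0.75, 1.00, 1.00),
}

def to_fuzzy(term):
    """Convert one linguistic rating to a triangular fuzzy number."""
    return LINGUISTIC_SCALE[term.lower()]

def aggregate(terms):
    """Aggregate several experts' ratings of one attribute by averaging the
    triangular fuzzy numbers component-wise (one common aggregation choice;
    the paper does not specify its exact aggregation operator)."""
    tris = [to_fuzzy(t) for t in terms]
    n = len(tris)
    return tuple(sum(t[i] for t in tris) / n for i in range(3))
```

For example, two experts rating an attribute "low" and "high" aggregate to a single triangle midway between the two.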
The importance weight of every ECA is determined via pairwise comparisons according to the AHP methodology; the software provides the corresponding menu.
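A compact sketch of that weight derivation is given below, using the row geometric mean approximation to Saaty's principal eigenvector (one common way to extract AHP priorities; the paper does not state which variant its software implements).

```python
import math

def ahp_weights(matrix):
    """Priority weights from a reciprocal pairwise comparison matrix,
    via the row geometric mean method (approximates Saaty's eigenvector)."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_index(matrix, weights):
    """CI = (lambda_max - n) / (n - 1); zero for a perfectly consistent
    matrix, larger when the expert's judgments contradict each other."""
    n = len(matrix)
    lam = sum(
        sum(matrix[i][j] * weights[j] for j in range(n)) / weights[i]
        for i in range(n)
    ) / n
    return (lam - n) / (n - 1)
```

For a perfectly consistent matrix such as [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]], the weights come out proportional to 4:2:1 and the consistency index is zero.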
After the experts' assessment is made with the support of the software, the Design Range \((\alpha, \beta, \theta)\) and the System Range \((a, b, c)\) are obtained for each attribute, sub-attribute and alternative. This is done by appropriate aggregation processes integrated into the coding of the system. These data are stored in the database after being transformed from the linguistic terms of the responses given by the experts who evaluated the AMT alternatives into numerical data using fuzzy numbers. Once these ranges are obtained, it is necessary to create a function that finds the Common Area, which is needed to compute the value of the Ergonomic Incompatibility Content (EIC) for each attribute and sub-attribute.
Figure 5 shows the information workflow of the system through four screen menus (the text is presented in Spanish). The upper left screen is used for capturing a new company; at this point, information about personnel, the experts, the equipment (alternatives) and the company in general is entered. The upper right screen is used to capture the experts' evaluation of the ergonomic compatibility attributes and sub-attributes. The lower left screen displays the menu to capture the rankings made by the experts according to the AHP methodology. The data is then stored in a table to obtain the relative importance or weight \( w_i \) of each attribute and sub-attribute; this data is stored for each alternative and for each particular expert. These values affect the calculation of the EIC by attribute and the total EIC for every alternative. Finally, the software delivers the final results numerically and through a series of "spider" charts, making it easier for the user to examine the EIC for each alternative and attribute separately. The alternative with the lowest total EIC is chosen as the best among the evaluated machines.
5. Conclusions
This proposal has proven effective for the implementation of the ECEM, which allows collecting and processing information about the ergonomic compatibility evaluation of AMT. The model proposes a multi-attribute structure and a fuzzy axiomatic design methodology that delivers the final result expected by the user. The proposed software is a technological innovation to assist decision makers in selecting the best AMT alternative, taking into consideration ergonomic attributes that have been overlooked and underestimated by current AMT selection models. The application of the model using the system may contribute to better decisions about AMT in its interaction with humans.
Acknowledgements
The researchers are grateful to the program for postgraduate improvement (PROMEP) and the Ciudad Juárez Autonomous University for the facilities provided for this research and for the support and funding of the project.
References
Official sites for acquisition of the aforementioned software applications:
http://www.mathworks.com/products/matlab/
http://www.expertchoice.com/
http://www.microsoft.com/spain/visualstudio
http://mexico.autodesk.com/adsk/servlet/home?siteID=1002155&id=7659874
The Use of Application Scanners in Software Product Quality Assessment
Stefan Wagner
Institute of Software Technology
University of Stuttgart
Stuttgart, Germany
stefan.wagner@informatik.uni-stuttgart.de
ABSTRACT
Software development needs continuous quality control for the timely detection and removal of quality problems. This includes frequent quality assessments, which need to be automated as far as possible to be feasible. One way to automate the security assessment of software is to use application scanners, which test an executing application for vulnerabilities. At present, common quality assessments do not integrate such scanners into an overall quality statement. This paper presents an integration of application scanners into a general quality assessment method based on explicit quality models and Bayesian nets. Its applicability and the detection capabilities of common scanners are investigated in a case study with two open-source web shops.
Categories and Subject Descriptors
D.2.9 [Software Engineering]: Management—Software Quality Assurance
General Terms
Security, measurement
Keywords
Application scanner, quality assessment, Bayesian net, quality model
1. INTRODUCTION
Continuous quality control means to assess and improve software quality almost continuously, i.e., on an hourly or daily basis. This allows the developers to detect quality defects and to remove them early after their introduction into the system, which avoids a general quality decay and far higher costs in later phases in the software’s life cycle. These benefits, however, come at the cost that quality assessments need to be done often and hence are elaborate. Therefore, automation and good tool support is necessary to employ continuous quality control in practice [10].
1.1 Problem Statement
Software product quality assessments need to cover a large variety of topics, including security. The assessment of product security is, as with all quality analyses, elaborate. Hence, for security too, automation is necessary for practical adoption. In quality assessments, automation relies to a large degree on automated static analysis. Static analysis, however, can only assess security partially; dynamic analyses are needed to complement the static ones. Most existing automatic dynamic security analyses are not integrated into product quality assessment methods. Instead, dynamic analysis tools, mostly so-called application scanners, are used solely for analysing the security of networks and hosts.
1.2 Research Objective
Similar to static analysis tools, there is a plethora of tools for automatic dynamic security analysis. Especially application scanners are available commercially as well as open source. Those tools scan the executing application automatically for vulnerabilities and hence are a promising addition to static analysis. Our overall objective is to investigate the available tools and the kinds of vulnerabilities they detect to define how these tools should be integrated in a general quality assessment.
1.3 Contribution
We employ an existing quality assessment method based on explicit quality models and Bayesian nets and extend it by defining how application scanners can be used in the assessment. This extended method is performed using three well-known open-source application scanners (w3af, Wapiti, and Grendel-Scan) on two open-source web shops (PHP Shop and Zen Cart). We show the principal applicability of the method to these real-world applications and also find first indications that the scanners find different vulnerabilities and hence should be used in combination. This paper is therefore only a first step towards the research objective.
1.4 Context
The approach is applicable in principle to any kind of software, although most application scanners currently focus on web applications. The application scanners and study objects used are open source but in use in commercial contexts.
2. APPLICATION SECURITY SCANNERS
We first give a general introduction into what application security scanners are and present three scanners that are also used in the case study in Section 4.
2.1 General
In general, an application scanner is a software that performs automatic penetration testing. Most scanners use a set of common patterns of inputs that they send to the application and decide, based on the output, whether there is a vulnerability that might be exploited. In addition, they have many possibilities to configure the penetration tests so that they fit to the system under analysis. Most application scanners concentrate on web applications as these are most exposed to attacks. Black et al. define in [7] a Web application security scanner as an "automated program that searches for software security vulnerabilities within web applications".
There are several groups that work on specific application scanners (e.g., [4, 19, 20]) in order to either find new vulnerabilities or improve the detection of vulnerabilities. There are also specialised tools that dynamically and (partly) statically detect specific vulnerabilities [3]. However, these different tools have not been compared and analysed w.r.t. their usage in product quality assessment. We discuss three common open-source scanners in the following.
2.2 W3af
The Web Application Attack and Audit Framework (w3af) provides a framework as well as a complete graphical and command-line interface to run application scans and view results. The framework provides simple wrappers for HTTP communication, web services, sessions, and HTML parsing. It also contains many plugins that implement scanning and testing an application. It is written in Python and is available at http://w3af.sourceforge.net/.
2.3 Wapiti
The Web application vulnerability scanner / security auditor (Wapiti) is a command-line tool that scans the web pages of an application and identifies scripts and forms to inject data. Using these scripts and forms it acts like a fuzzer and injects payloads to see if a script is vulnerable. Wapiti is developed in Python. It is available at http://wapiti.sourceforge.net/.
2.4 Grendel-Scan
Grendel-Scan is a web application security testing tool that also provides a graphical user interface. It contains an automatic application scanner that detects common web application vulnerabilities. It is written in Java and is available at http://www.grendel-scan.com/.
3. QUALITY ASSESSMENT METHOD
Quality assessment is the part of quality control that compares the actual state of an application with its requirements. It evaluates if and how well the software fits what was intended. There are various ways to perform this assessment, and the major difficulty lies in combining the various quality assurance results and measures into a common quality statement. In the project Quamoco¹, we developed such a quality assessment method. One specific instance of this method uses Bayesian nets to describe the uncertainties in the results and measures as well as to calculate a quality statement. In the following, we propose how application scanners can be integrated in the method for a substantial security assessment.
3.1 Quamoco
In the project Quamoco, we develop a quality model with a corresponding quality assessment method that aims to facilitate continuous improvement based on objective, quantitative feedback [21]. It has its origins in the Quality Improvement Paradigm [5] and the Goal/Question/Metric (GQM) approach [6]. We built one specific instance using Bayesian nets as a means for analysing assessment results [26] that was specifically aimed at using activity-based quality models [12].
We give a brief overview on the quality models developed in Quamoco and describe the assessment method from [26] adapted to the Quamoco quality models.
3.2 Quamoco Quality Models
In general, there are two main uses of quality models in a software project: (1) as a basis for defining quality requirements and (2) for defining quality assurance techniques and measurements for the quality requirements. The quality models developed in Quamoco advance existing quality models as they combine the practically shown advantages of different models [12, 17, 25, 28]. The idea is to use not only high-level “-ilities” for defining quality but instead to break it down into detailed factors and their influence on quality attributes. The quality attributes we use in this paper are the activities performed on and with the system, which are derived from activity-based quality models [12]. In the area of security, we use a hierarchy of attacks as activities [27]; in this case activities that should be prevented.
We developed an explicit meta-model in Quamoco that defines the quality model elements and their relationships. Five elements of the meta-model are most important in the context of this paper: entity, property, measure, impact, and activity. An entity can be any thing, animate or inanimate, that has an influence on the software’s quality, e.g., the source code of a PHP function or an HTML form. These entities are characterised by properties such as structuredness or conformity. The combination of an entity and a property is called a factor. These factors are measurable either by automatic measurement or by manual review. This is specified in the measures for a factor.
Entities as well as activities are organised in hierarchies. An influence of a factor is specified by an impact. We concentrate on the influences on attack activities, for example, SQL injection or password brute forcing. The impact on an activity can be positive or negative.
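These meta-model elements can be sketched as plain data types. The concrete fields below are our assumption for illustration; they are not the actual Quamoco meta-model definition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:            # e.g. the source code of a PHP function
    name: str

@dataclass(frozen=True)
class Property:          # e.g. structuredness, conformity
    name: str

@dataclass(frozen=True)
class Factor:            # an entity characterised by a property
    entity: Entity
    property: Property

@dataclass(frozen=True)
class Measure:           # how a factor is quantified (tool or review)
    name: str
    factor: Factor

@dataclass(frozen=True)
class Impact:            # influence of a factor on an (attack) activity
    factor: Factor
    activity: str        # e.g. "Forced Integer Overflow"
    positive: bool

buffer_factor = Factor(Entity("Buffer"), Property("Confinement"))
impact = Impact(buffer_factor, "Forced Integer Overflow", positive=False)
```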
¹http://www.quamoco.de/
We need four steps to derive these nodes from the information of the quality model.
1. We identify the relevant activities with measures based on the assessment goal. We use GQM [6] to structure that derivation. We first define the assessment goal, for example, optimisation of security assurance, which leads to relevant activities, such as attack. This is refined by stating questions that need to be answered to reach that goal.
2. Influences by sub-activities and factors are identified. This step is repeated recursively for sub-activities. The resulting factors together with their impacts are modelled.
3. Suitable measures for the factors are added.
4. The node probability tables (NPT) are defined to reflect the quantitative relationships. This includes defining node states as well as filling the NPT for each node. The activity and factor nodes are usually modelled as ranked nodes, i.e., in an ordinal scale. Having that, the Bayesian net can be used for simulation by setting values for any of the nodes.
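As a toy illustration of step 4, consider a single factor node with one measure node: the NPT of the measure is conditioned on the factor, and Bayes' rule yields the factor's posterior once the measure is observed. All probabilities below are invented for the example, not taken from the study.

```python
# One factor node ("Confinement of Buffer": high/low) with a prior, and
# one measure node ("Buffer Overflow Error Page": yes/no) whose NPT is
# conditioned on the factor. All probabilities are invented.
prior = {"high": 0.7, "low": 0.3}
npt = {"high": {"yes": 0.1, "no": 0.9},     # rows sum to 1
       "low":  {"yes": 0.8, "no": 0.2}}

def posterior(observation):
    """P(factor | measure = observation), by enumeration (Bayes' rule)."""
    joint = {s: prior[s] * npt[s][observation] for s in prior}
    z = sum(joint.values())
    return {s: p / z for s, p in joint.items()}

post = posterior("yes")   # observing the finding lowers belief in "high"
```

Tools such as AgenaRisk perform this propagation over the whole net; the two-node case only shows the mechanics.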
The definition of NPTs is the most complicated part in building Bayesian nets. The approach by Fenton, Neil and Galan Caballero [14] simplifies that by approximating the specific values in an NPT by general distributions or expressions. They formalise the behaviour observed with experts that have to estimate NPTs, who usually estimate the central tendency or some extreme values based on the influencing nodes. The remaining cells of the table are then filled accordingly. For example, it renders it possible to model the NPT of a node by a weighted mean over the influencing nodes.
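The weighted-mean idea can be sketched as follows; the parent names, weights, and ordinal-to-numeric level mapping are invented for the example.

```python
# A child ranked node's central tendency as a weighted mean of its
# parents' ordinal levels, mapped to [0, 1]. Parent names, weights, and
# the level mapping are invented for the example.
LEVELS = {"low": 0.0, "medium": 0.5, "high": 1.0}

def weighted_mean(parent_states, weights):
    total = sum(weights.values())
    return sum(LEVELS[parent_states[p]] * w
               for p, w in weights.items()) / total

mean = weighted_mean({"sanitation": "high", "authenticity": "low"},
                     {"sanitation": 2.0, "authenticity": 1.0})
# mean == 2/3: the child tends towards the upper ranks.
```

An NPT column is then filled by placing a distribution (e.g. a truncated normal) around this central tendency instead of estimating every cell individually.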
In general, the NPTs of the measure nodes are defined using either common industry distributions or information from company-internal measurements. The influence of the activity or factor node it belongs to can be modelled in at least two ways: (1) partitioned expressions and (2) arithmetic expressions. The latter describes a direct arithmetical relationship from the level in the activity or factor node to the measure. Using a partitioned expression, the additional uncertainty can be expressed by defining probability distributions for each level of the activity or factor node.
3.5 Integration of Application Scanners
Application scanners provide findings of probable vulnerabilities for the analysed software. We can use them as measures for factors. Hence, the integration of application scanners affects the steps 3 and 4 of the assessment method. We define measure nodes that correspond to scanner findings. All scanners classify the found vulnerabilities into different types. Each vulnerability type forms a measure. These measures are matched to existing factors or new factors are generated enriching the knowledge about how to develop secure software applications.
For example, an application scanner might detect buffer overflows if the software is configured to return error pages. The assessment method user would create a measure node Buffer Overflow Error Page that represents the findings of the scanner. The quality model already contains a factor Confinement of Buffer, which specifies that the limits of buffers are respected. This factor is represented in the Bayesian net as a factor node and the assessor adds an influence to the measure node.
The factors that are measured by application scanners can have an impact on a very specific attack or in general ease attacking. This is reflected by the hierarchy level of the attack that has the impact. A general impact goes to a more generic attack in the activity hierarchy. For the example of the buffer overflow, the impact might be on the attack Forced Integer Overflow that represents the setting of a controllable integer value to an unexpected value.
For measures from static analysis, we calculate densities to reflect how large the problems are in relation to the software size. As each found vulnerability can potentially corrupt the complete application, we use a simpler yes/no voting. If there is at least one vulnerability of a type, the measure has the value yes. For example, if the scanner detects at least one buffer overflow error page, the assessor sets the observation of the measure node to yes. The NPT in the measure node is modelled by a partitioned expression. In the buffer overflow example, if Confinement of Buffer is in the state high, Buffer Overflow Error Page is in the state no and vice-versa. The expression should also add an uncertainty range depending on how well the measure indicates the factor.
If we employ more than one scanner, we can run into the problem that the scanners do not agree on the detection of specific vulnerabilities. We prefer a pessimistic assessment – possibly worse than it actually is – and hence vote yes if at least one scanner reports a vulnerability.
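The yes/no observation per vulnerability type, combined pessimistically over several scanners, can be sketched as follows; all findings data here is invented for the example.

```python
# One reported finding of a type by any scanner sets the measure to
# "yes"; only types no scanner reports stay "no". Data is invented.
def observations(measured_types, per_scanner_findings):
    """per_scanner_findings maps a scanner name to the set of types it found."""
    return {t: "yes" if any(t in found
                            for found in per_scanner_findings.values())
            else "no"
            for t in measured_types}

findings = {"w3af":    {"sql_injection", "csrf"},
            "wapiti":  set(),
            "grendel": {"csrf", "code_comments"}}
obs = observations({"sql_injection", "csrf", "code_comments", "session_id"},
                   findings)
# obs["sql_injection"] is "yes" although only one scanner reported it;
# obs["session_id"] is "no" because no scanner reported it.
```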
4. CASE STUDY
The case study shows the applicability of the method and to a smaller degree the detection capabilities of application scanners. We define the study design, describe the used study objects, and show and discuss the results.
4.1 Study Design & Procedure
The aim of this case study is a proof-of-concept that analyses the method’s applicability to real-world software. In particular, we are interested in the effort needed to incorporate and use the scanners as well as if they give useful results. Furthermore, the execution time for analysis should be short enough to be able to run the scanners often, e.g., on a daily basis. This leads to our first research question:
RQ 1. Is the assessment method applicable to realistic software systems?
Moreover, we investigate if common scanners are comparable in terms of the vulnerabilities they detect. The experience with static analysis has shown that different tools detect partly different classes of defects. If this is not the case, we could resort to just one tool in quality control, which would reduce our effort considerably. Hence, our second research question asks for the differences in vulnerability detection:
RQ 2. Are there differences between the detection capabilities of different application scanners?
We analyse both questions by applying 3 widely known open-source application scanners (see Section 2) to 2 open-source web shops. We install both web shops with their standard installation and run each scanner on each web shop. The scanners are configured to reasonable settings w.r.t. the study objects. For example, attacks specifically for Microsoft SQL Server make no sense as a MySQL database system is used by the study objects.
The vulnerabilities found by all scanners are partitioned into classes that stem from the types of vulnerabilities found by the scanners. The classes are used in the quality assessment method to make a quality statement about the study objects. The Bayesian net for that is built using the tool AgenaRisk. We analyse this application of the method qualitatively to answer RQ 1. Then we compare the results of all three scanners separately and compare their findings for answering RQ 2. The comparison analyses to what degree there are overlaps in the found classes of vulnerabilities. The vulnerabilities are not checked for false positives.
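The overlap comparison for RQ 2 can be sketched as pairwise set intersections of the vulnerability classes each scanner reported; the findings data below is invented for the example.

```python
from itertools import combinations

def overlaps(per_scanner):
    """Pairwise intersection of the vulnerability classes per scanner."""
    return {(a, b): per_scanner[a] & per_scanner[b]
            for a, b in combinations(sorted(per_scanner), 2)}

per_scanner = {"w3af":    {"csrf", "sql_injection"},
               "wapiti":  set(),
               "grendel": {"csrf", "sql_injection", "code_comments"}}
ov = overlaps(per_scanner)
# ov[("grendel", "w3af")] == {"csrf", "sql_injection"}; every pair
# involving "wapiti" is empty.
```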
4.2 Study Objects
The study objects are two different web shops. One, Zen Cart, is a large application and also the most popular of its kind on SourceForge. The other, PHP Shop, is simple and small in comparison to Zen Cart. Hence, in the case selection, triangulation is used as far as possible. Detailed descriptive information about both study objects is given in Table 1.
<table>
<thead>
<tr>
<th>Study Object</th>
<th>Language</th>
<th>Version</th>
<th>Database</th>
<th>SLOC</th>
<th>Downloads</th>
</tr>
</thead>
<tbody>
<tr>
<td>PHP Shop</td>
<td>PHP</td>
<td>0.8.1</td>
<td>MySQL</td>
<td>8,052</td>
<td>53,000</td>
</tr>
<tr>
<td>Zen Cart</td>
<td>PHP/Perl</td>
<td>1.3.8</td>
<td>MySQL</td>
<td>73,001</td>
<td>625,000</td>
</tr>
</tbody>
</table>
Both applications were installed in the standard Apache web server available in Mac OS X and connected to a local MySQL installation as the database management system. As far as possible, all configuration options were left at their default values.
4.3 Results
As a result for the applicability of the approach, we describe the concrete application together with our experiences. We start with the first step of our assessment approach and identify the relevant activities and corresponding measures. We analyse security, in particular the risk of vulnerabilities in the system. This risk can be the basis for deciding whether further security improvements need to be employed. Therefore, the goal is “Planning of further security improvements”. For security improvements, attacks on the system need to be thwarted. Hence, the activity Attack needs to be analysed. We derive the question “How many vulnerabilities are there in relation to the software size?”. For security improvement planning, it is not only important how many vulnerabilities there are but also whether this number is in a reasonable relation to the system size; it might be economically inadvisable to invest in removing all vulnerabilities. The corresponding metric, vulnerability density, which measures the number of vulnerabilities per source code size in KSLOC, can be directly derived from the question.
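The derived metric can be computed directly, as the following minimal sketch shows; the counts used are invented for the example and are not the study's results.

```python
# Vulnerability density: number of vulnerabilities per thousand source
# lines of code (KSLOC). The example counts are invented.
def vulnerability_density(num_vulnerabilities, sloc):
    """Vulnerabilities per KSLOC."""
    return num_vulnerabilities / (sloc / 1000.0)

d = vulnerability_density(4, 8052)   # 4 findings in ~8 KSLOC -> ~0.5/KSLOC
```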
In the second step of the assessment method, we build the Bayesian net. The selection of the nodes in the study
is driven by the detection possibilities of the used scanners. There is the top-level activity Attack that we measure by the above derived vulnerability density. It has a direct impact from Visibility of Public Code Comment that describes that it is easier to attack if there are code comments visible to the public. Then, we break Attack down to Probabilistic Techniques, Injection, and Exploitation of Trusted Credentials. These are further refined into Password Brute Forcing, Script Injection, SQL Injection, Cross site Request Forgery, and Session Credential Falsification Through Prediction. Figure 2 shows in the top left the activity tree as represented in the Bayesian Network.
We include 6 impacts on these activities. The impacts are chosen so that their corresponding factors can be measured by the investigated application scanners. The factors used are:
- Completeness of Password Change: Any implementation of changing user passwords is also responsible for the quality of that password to avoid password brute forcing. If such a check is missing, we consider the implementation to be incomplete.
- Sanitation of Dynamic Web Page: If a web application does not sufficiently sanitise the data it is using in output, arbitrary content, including scripts, can be included by attackers.
- Sanitation of SQL Statement: Analogously to dynamic web pages, the used SQL statements need to be sanitised to avoid unwanted changes or reads to the database.
- Visibility of Public Code Comment: Comments in HTML or Java Script code visible to the public may give attackers information they can exploit.
- Authenticity of Request: The application needs to be able to undoubtedly decide on the authenticity of a request. If this is not the case, Cross Site Request Forgery is possible.
- Uniqueness of Session ID: Each session needs a unique ID that cannot easily be guessed. Otherwise an attacker may predict an ID and gain access to the application.
In the fourth step of the approach, measures are defined for all impacts. The measures here are derived from the vulnerabilities identifiable by the scanners and are attributable to the application – as opposed to the environment. The final topology of the Bayesian net is shown in Figure 2. Overall, building the Bayesian net took less than a day.
The execution of the scanners took between several minutes (PHP Shop) and several hours (Zen Cart) on a current MacBook Pro that ran both the web server and the scanners. The found vulnerabilities are shown in Table 2. Wapiti did not find any vulnerabilities in either case. We analysed its execution in detail to rule out misconfiguration, but it seems that it is not able to detect problems in the analysed software. Most vulnerabilities were detected by Grendel Scan; 3 vulnerabilities were reported by w3af. This information was then used in the Bayesian net to assess the quality of the two applications. Two vulnerability classes from Table 2, the input/output flows and the unidentified vulnerabilities, were not used further because they cannot be attributed to a specific product entity. The predicted vulnerability densities (vulnerabilities/KSLOC) of both applications are very close: the net calculated a mean of 0.0064 for PHP Shop (standard deviation 0.0028) and a mean of 0.0066 for Zen Cart (standard deviation 0.0028).
4.4 Discussion & Threats to Validity
The assessment method including application scanners is applicable to the real-world systems we analysed. Yet, the results are close for both systems and the correctness of the results cannot be validated as we have no data about the real vulnerability density. Nevertheless, the effort for performing the assessment is reasonable. The setup of the scanners and a corresponding test environment is more demanding than the subsequent analysis using the Bayesian net. Altogether it took only a few days to set up the analysis. Also the time needed for running the scanners is promising and allows them to be included in continuous quality control.
An important decision in modelling application security is the border between the application that is analysed and its environment. For example, is the application responsible for passwords that are not prone to brute force attacks? In the case study we made subjective choices and for the example of passwords specified that the application has a partial responsibility.
The answer to RQ 2 is clearer, as the found vulnerabilities differ between the used scanners. Wapiti did not find a single vulnerability, Grendel Scan found 8, and w3af found 3 vulnerabilities. For the potential cross-site request forgery and the SQL injection in PHP Shop, both of these scanners had findings. However, only Grendel Scan detected a potential CSRF in Zen Cart. All these differences indicate that there are significant differences between the detection capabilities of different scanners.
As this is only a first, explorative study on the use of application scanners in quality assessment, there are various threats to the validity of the results. The internal validity is threatened because there were several subjective decisions in building the Bayesian net. We mitigated this threat by using comparable decisions as for static analysis. Furthermore, we did not check whether the found vulnerabilities are actual problems in the software. This especially affects RQ 2, because the results might be misleading. The external validity is also limited as we only analysed two applications and three scanners, which are all open source. For more reliable results, especially for the detection capabilities, we need to run larger studies that also involve commercial applications and scanners.
<table>
<thead>
<tr>
<th>Vulnerability</th>
<th>PHP Shop</th>
<th>Zen Cart</th>
</tr>
</thead>
<tbody>
<tr>
<td>Duplicate Session ID</td>
<td></td>
<td>C</td>
</tr>
<tr>
<td>Potential CSRF</td>
<td>A, C</td>
<td>C</td>
</tr>
<tr>
<td>SQL Injection</td>
<td>A, C</td>
<td></td>
</tr>
<tr>
<td>Code Comments</td>
<td>C</td>
<td></td>
</tr>
<tr>
<td>Unidentified Vuln.</td>
<td></td>
<td>A</td>
</tr>
<tr>
<td>Input/Output Flows</td>
<td></td>
<td>C</td>
</tr>
</tbody>
</table>
Table 2: The vulnerabilities found in the scans. The characters A–C denote which scanner found the vulnerability: A=w3af, B=Wapiti, C=Grendel Scan.
5. RELATED WORK
We discuss quality models, guidelines and measures and especially several security assessment approaches.
There is a wide variety of quality models. Deissenboeck et al. [11] differentiate between quality definition models and quality assessment models. The former is a specification of what constitutes quality in a software system, the latter describes how a software system’s quality can be assessed according to specific rules. In the area of software security, security pattern collections are an example of quality definition models, e.g., [18].
Quality definition models are either general but too abstract for a concrete use in assessing software quality (e.g., ISO 9126) or specialised for a specific quality attribute and hence difficult to integrate into general quality assessments [12]. In [12], Deissenboeck et al. propose a quality model (ABQM) that tackles this problem by breaking quality attributes into entities, their properties, and their influence on activities. In [27] we used the ABQM approach for modelling security but with a focus on security requirements.
Quality guidelines are developed by various companies and organisations and usually include technical aspects that have to be taken into account. For example, the Common Criteria catalog (CC) [9] and the German BSI IT-Grundschatz Manual [13] describe security requirements. Usually guidelines do not give rationales [12]. Hence, they do not guide through a structured process, are often read once and followed in a sporadic manner only [8]. Furthermore, it is often not checked whether guidelines are followed or not [12].
Common metric-based/stochastic approaches describe quality by measurable concepts that imply strong assumptions. While those assumptions are stable for some quality attributes, for others, such as security, they change fast [1]. Due to their single-value representation, metrics often do not explain how system properties influence the quality-related activities that are performed with the system [12]. Hence, metrics are not well established for security [2] and are unstable due to the fast variation of the “physics” underlying security (i.e., the IT system) [1].
Artsiom et al. [29] propose a security assessment method that has similarities to the method in this paper. It also defines metrics and aggregates them into quality attributes. This method, however, uses “-ilities” similar to ISO 9126, which have several well-known problems. Moreover, they concentrate on the architecture of the software (white-box view), whereas this paper focuses on testing by application scanners (black-box view).
Frigault et al. [16] use Dynamic Bayesian Networks to investigate the security of networked systems. Their focus is more on the combined effects of different vulnerabilities as opposed to a complete quality statement for the system incorporating scan results.
There are several so-called scoring systems that evaluate vulnerabilities in applications. The most advanced scoring system is the Common Vulnerability Scoring System (CVSS) [15, 22]. It provides a set of metrics and corresponding equations that combine these metrics with weights to provide a score for a vulnerability. It considers the constraints as well as the impacts of a vulnerability, but does not describe how to find vulnerabilities or how to relate the results to a general quality assessment.
Recently the Common Weakness Scoring System (CWSS) [23] was released, which analyses weaknesses in a software system and assigns scores to prioritise the weaknesses. One part of the scoring is the technical impact. Hence, there are similarities to the Quamoco quality model, which we should exploit in the future. By itself, however, the CWSS does not describe how it fits into an overall quality assessment.
The Open Web Application Security Project (OWASP) is a non-commercial initiative to develop guidelines and standards for the security of web applications. Their OWASP Application Security Verification Standard 2009 (ASVS) [24] defines 4 security verification levels that describe what has to be done to provide appropriate security for an application. The developer of an application decides on its criticality, and the standard gives the corresponding verification requirements that have to be met. These range from mostly automatic analysis to complete manual code reviews. A level is reached if all these requirements are checked. The standard does not contain more fine-grained evaluations and is also not set into the context of a general quality assessment.
6. CONCLUSIONS
We summarise the contribution of this paper and discuss directions for future research.
6.1 Summary
Application security scanners, as employed in the area of web applications, are one promising possibility to automate the assessment of software application security. This automation could then be used in product quality assessments in the context of continuous quality control. However, the usage of application scanners in this kind of quality assessment has not been investigated so far.
We provide a first step to incorporate application scanners into quality assessment by extending an existing method based on explicit quality models and Bayesian nets. In the Quamoco quality models, measures are defined to make use of the scanning results. It is also defined how these results can be further used for a general quality statement.
We show in a case study how three open source application scanners can be used in the quality assessment of open source web shop applications. We found that the method is applicable in principle and that the detection capabilities of the scanners differ. Moreover, the needed time for performing the scans is promising for their inclusion into continuous quality control.
6.2 Future Work
A threat to the validity of the case study is that only three scanners were used. We plan to evaluate more application scanners, especially commercially developed tools. For a more reliable result we also plan to investigate further cases involving software developed in industry, for which we will also analyse the false positive rate of the scanners.
The assessment method will be extended to be able to handle false positives explicitly. The found differences between scanners might also be an indication for false positives and then the method could mitigate that by a larger weight for vulnerabilities that are found by more than one scanner. Finally, a study involving the combination and comparison with static analysis would show the strength and weaknesses of both approaches.
Acknowledgements
I am grateful to Elmar Juergens for helpful suggestions on the manuscript.
7. REFERENCES
Falcon: A Practical Log-based Analysis Tool for Distributed Systems
Francisco Neves, Nuno Machado and José Pereira
HASlab, INESC TEC and University of Minho
Braga, Portugal
{francisco.t.neves, nuno.a.machado}@inesctec.pt, jop@di.uminho.pt
Abstract—Programmers and support engineers typically rely on log data to narrow down the root cause of unexpected behaviors in dependable distributed systems. Unfortunately, the inherently distributed nature and complexity of such distributed executions often leads to multiple independent logs, scattered across different physical machines, with thousands or millions entries poorly correlated in terms of event causality. This renders log-based debugging a tedious, time-consuming, and potentially inconclusive task.
We present Falcon, a tool aimed at making log-based analysis of distributed systems practical and effective. Falcon’s modular architecture, designed as an extensible pipeline, allows it to seamlessly combine several distinct logging sources and generate a coherent space-time diagram of distributed executions. To preserve event causality, even in the presence of logs collected from independent unsynchronized machines, Falcon introduces a novel happens-before symbolic formulation and relies on an off-the-shelf constraint solver to obtain a coherent event schedule.
Our case study with the popular distributed coordination service Apache Zookeeper shows that Falcon eases the log-based analysis of complex distributed protocols and is helpful in bridging the gap between protocol design and implementation.
I. INTRODUCTION
Developers of distributed systems cater for recording runtime behavior by judiciously adding log statements to source code [1], [2]. The number of log statements needed, and the detail of the information collected, depends on the complexity of the code. In systems that deal with concurrency and faults, such as fault-tolerant consensus protocols, the resulting effort is substantial. However, when an unexpected outcome is noticed, log files are often the only source of information that programmers can use to debug and fix the problem.
Unfortunately, log analysis in distributed systems still remains a daunting task, which has motivated programmers to ask for more practical ways to understand runtime behavior. First, besides the sheer number of entries, trace files are typically spread across several nodes and generated by distinct logging libraries with heterogeneous formats. Second, although timestamped, the interleaving of statements executed on different nodes leads to a wide set of possible execution flows and intermediate states that have to be considered. Third, the lack of context propagation between nodes hinders the ability to establish the causal relationship between events, i.e., the happens-before relationship typically denoted by “→” [3].
Causality is particularly helpful for debugging distributed executions, as it allows reasoning about the order of distributed events [4]. However, relying solely on log entry timestamps is not enough to establish causality. On the one hand, these timestamps are based on physical clocks and, even if clocks are synchronized on all relevant nodes, log messages are often produced asynchronously after the facts they describe. On the other hand, blindly considering that timestamps induce causality hides the true system logic by flattening history.
Several tracing systems have been proposed in the past to track causality and alleviate the burden of debugging distributed systems [4]–[8]. Nonetheless, they require careful program instrumentation and do not support the analysis of events stemming from distinct, heterogeneous log sources. In contrast, popular operating system utilities such as strace and ltrace are powerful assets for troubleshooting runtime behavior, as they are language-agnostic and capable of capturing the system calls and signals executed by a program, but fall short when it comes to inferring causality across processes.
In this paper, we aim to achieve the best of both worlds by enabling the inference of causally-related activity atop commodity monitoring tools. To this end, we propose Falcon, a practical and effective log-based analysis tool for distributed systems. Falcon does not require custom instrumentation and supports popular tracing and logging utilities (e.g., log4j, strace, tshark), thus being suitable for debugging real-world applications.
Falcon operates as a pipeline: first, it normalizes the events collected by the different logging sources; then, it resorts to symbolic constraint solving to generate a global execution trace that preserves causality; finally, it produces a space-time diagram that enables a visual analysis of the whole execution.
To ensure event causality, Falcon employs a novel approach that models a distributed execution by means of symbolic variables (representing the logical clocks of the events traced) and encodes the happens-before relationships as constraints over those variables. Solving the constraint system with an off-the-shelf solver yields a complete execution schedule that coherently relates the data from the various log files, thus providing a thorough and accurate view of the original production run. Due to its flexible design, Falcon’s pipeline can also be extended with additional log libraries and visualization tools.
We use event and log entry interchangeably in this paper.
Our case study with the popular coordination service Apache Zookeeper shows that Falcon is efficient and facilitates the understanding of complex distributed executions by relating low-level system calls with user-defined log messages.
The rest of this paper is structured as follows. Section II presents some background concepts and a motivating example. Section III describes the design of Falcon, while Section IV provides its implementation details. Section V presents the case study with Apache Zookeeper. Section VI overviews the most relevant related work and, finally, Section VII concludes the paper by summarizing its main points.
II. BACKGROUND AND MOTIVATION
Low-Level Tracing Overview. *NIX environments nowadays offer various kernel-level tracers that enable powerful troubleshooting capabilities. Moreover, by running at the operating system level, these tracers are programming-language-agnostic and even applicable to programs running on virtual machines, thus being extremely useful for program debugging. As these tracers capture events on a per-process basis, however, they fall short when it comes to inferring causality across processes.
Motivating Example. As an example of this limitation, consider a scenario with three participants of an online multiplayer game, represented by three processes on distinct machines connected through TCP sockets. In this scenario, player 2 (corresponding to process n2-782) tells his teammates to advance with a message “Go go go”. Player 3 (process n3-894) disagrees with the suggestion and asks the team to wait by replying “Wait guys”. Player 2 consents and writes “Ok”. Player 1 (process n1-675), in turn, simply receives the instructions given by the other players. The result of this interaction was that player 1 advanced alone, causing the team to later lose the game. Why did player 1 act against the instructions given by the rest of the team?
In Figure 1 we present a possible log obtained by merging the output of running strace on each process. The log contains the syscalls executed during a chat conversation between the three players, namely the reads and writes on each process’ socket. For the sake of readability, each entry is identified by the concatenation of the node and process ids, and the actual file descriptors were replaced by the node ids.
In order to correctly reason about the runtime behavior from the trace in Figure 1 one must first establish the happens-before relationship between the syscalls. As defined by Lamport [3], if an event a causally-precedes an event b in a program execution, then a happens-before b (denoted a → b). A more detailed definition of the happens-before relationship is given in Section III-C.
Causality in distributed systems is typically captured by logical clocks [3] or vector clocks [9]. However, low-level tracing tools such as strace are not able to record logical time. Let us then mimic the procedure of manually inferring the happens-before relations present in Figure 1.
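For reference, the logical-clock update rule from Lamport [3], which these low-level log sources cannot record, takes only a few lines. A minimal Python sketch (not part of Falcon):

```python
class LamportClock:
    """Minimal Lamport logical clock (illustrative sketch, not Falcon code)."""

    def __init__(self):
        self.time = 0

    def local_event(self):
        # Any local step advances the clock by one.
        self.time += 1
        return self.time

    def send(self):
        # A send is a local event; its timestamp travels with the message.
        self.time += 1
        return self.time

    def receive(self, msg_time):
        # On receipt, jump past the sender's clock to preserve happens-before.
        self.time = max(self.time, msg_time) + 1
        return self.time
```

For example, if a process with clock 5 receives a message stamped 1, its clock becomes 6, keeping the send strictly before the receive.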
The causal order of syscalls within each process’ trace is trivial to define because it respects the program order [3]. As such, the main challenge here is to infer the inter-process happens-before relationships.
Note that the first parameter on each write/read syscall denotes the process that sent/received a message. Considering that a read is always preceded by its corresponding write and that TCP ensures reliable and ordered delivery, one is then able to causally order the syscalls across the three processes.
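Under these assumptions (every read is preceded by its matching write, and TCP delivers the messages of a connection in FIFO order), pairing the syscalls reduces to matching the i-th write on a channel with the i-th read. A rough Python sketch, using hypothetical event tuples rather than Falcon's actual trace format:

```python
from collections import defaultdict, deque

def match_sends_to_reads(events):
    """Pair write/read syscalls per (sender, receiver) channel in FIFO order.

    `events` is a list of (proc, op, peer) tuples in trace order, where
    op is 'write' or 'read' and peer is the other endpoint's node id.
    Returns a list of (send_index, recv_index) happens-before edges.
    """
    pending = defaultdict(deque)  # channel -> unmatched write indices
    edges = []
    for i, (proc, op, peer) in enumerate(events):
        if op == 'write':
            pending[(proc, peer)].append(i)   # message from proc to peer
        elif op == 'read':
            chan = (peer, proc)               # the message was sent by peer
            if pending[chan]:
                edges.append((pending[chan].popleft(), i))
    return edges
```

On a trace like Figure 1, each returned edge is one inter-process happens-before relation; intra-process edges follow from program order.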
Fig. 2: Trace resulting from causally reordering the syscalls in Figure 1. [ni-pid] denotes the global identifier of a process pid running on node i.

Fig. 3: Space-time diagram of the trace in Figure 2. Each vertical line represents a node, whereas the circles within a line represent the syscalls executed by that node. Solid lines connecting circles indicate a happens-before relationship between the events. The dashed circular area highlights the problematic message race.
In Figure 2 we depict a possible trace resulting from causally reordering the syscalls according to the intra- and inter-node happens-before relationships. To further ease the analysis of the execution, we also convert the ordered trace into a space-time diagram, which is shown in Figure 3. In the diagram, vertical lines are the execution timelines of the processes indicated by the labels, and the circles are the events happening in each process. Each event is associated with a given logical clock “tick”. We added the message sizes on each event and the message content on read syscalls. Each pair of connected events indicates a happens-before relationship.
Displaying the messages received by each process, one obtains the following chat logs:
<table>
<thead>
<tr>
<th>n1-675</th>
<th>n2-782</th>
<th>n3-894</th>
</tr>
</thead>
<tbody>
<tr>
<td>n3: "Wait guys"</td>
<td>n2: "Go go go"</td>
<td>n2: "Go go go"</td>
</tr>
<tr>
<td>n2: "Go go go"</td>
<td>n3: "Wait guys"</td>
<td>n3: "Wait guys"</td>
</tr>
<tr>
<td>n2: "Ok"</td>
<td>n2: "Ok"</td>
<td>n2: "Ok"</td>
</tr>
</tbody>
</table>
Note that the chat log of process n1-675 exhibits an inconsistency (highlighted in red) with respect to the actual message history, which explains the reason behind the reckless move by player 1. The dashed circular area in Figure 3 pinpoints the root cause of this inconsistency: a delay in the arrival of the message “Go go go” sent by process n2-782 caused an inversion in the expected chat output. Since the inverted messages are (semantically) causally related, this means that there is a message race bug in the system implementation.
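The inconsistency spotted above has a simple operational signature: two messages delivered to the same process in an order that contradicts their send order. A hypothetical checker over (send clock, receive clock, receiver) triples, assuming causally ordered logical clocks are already available:

```python
def find_message_races(messages):
    """Flag message pairs delivered to the same process out of send order.

    `messages` is a list of (send_clock, recv_clock, receiver) tuples.
    Returns index pairs (i, j) whose send order and receive order disagree.
    """
    races = []
    for i in range(len(messages)):
        for j in range(i + 1, len(messages)):
            s1, r1, p1 = messages[i]
            s2, r2, p2 = messages[j]
            # A negative product means the two orders are inverted.
            if p1 == p2 and (s1 - s2) * (r1 - r2) < 0:
                races.append((i, j))
    return races
```

Applied to Figure 3, the pair of “Go go go” and “Wait guys” deliveries at n1-675 would be flagged.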
This example illustrates a game scenario that simply caused a team to lose one round. However, in complex distributed systems that require coordination, the consequences may be much more severe (e.g. data loss or corruption). It is thus of paramount importance to devise practical and effective tools to aid the analysis of execution logs.
### III. Design
We propose Falcon – a practical and effective log-analysis tool for distributed systems, capable of generating a global execution schedule from multiple independent log files while preserving event causality. This is achieved by means of a novel symbolic constraint model that encodes the happens-before relationship between events. Moreover, Falcon automatically generates a causal space-time diagram of the execution, which further eases the analysis of the logs and the understanding of distributed executions. This section describes Falcon’s design requirements, architecture, and happens-before model.
#### A. Design Requirements
Time spent at post-mortem software debugging is directly affected by the amount of useful information captured during production runs. Since logging is an expensive operation, a trade-off must be made between the log’s verbosity level and the performance and space overhead imposed at runtime. For that reason, different tracing tools opt for focusing on different aspects and provide distinct features (e.g. printing log statements, sniffing network packets, profiling performance, etc). Nevertheless, one should be able to leverage all those features in order to ease the burden of debugging complex distributed systems. A practical and effective log-analysis tool should thus meet the following design requirements:
- **Support several log sources.** The tool should be able to extract useful knowledge about the execution from multiple data sources, such as logging libraries, network sniffers (e.g. libpcap-based tools), and low-level tracing tools (e.g. ptrace-based tools).
- **Combine data in a causally consistent way.** The tool should be able to combine all logged events in a seamless
and coherent fashion, even if they were captured at different physical machines with unsynchronized clocks. In practice, this corresponds to ensuring that the happened-before relationship between events is established across all log files regardless of their source.
- **Provide a visual representation of the execution.** To obviate complexity due to long verbosity and further help developers reason about the execution, the tool should be able to display events in a “human-friendly” way. In the particular context of distributed systems, space-time diagrams depicting the inter-process causal dependencies have long been used to aid the understanding of distributed protocols over multiple processes [10].
In the next section, we describe how Falcon meets the aforementioned requirements.
### B. Architecture
Falcon is designed with a modular architecture, whose components operate together as a pipeline. In a nutshell, Falcon receives as input log files from multiple data sources and outputs a space-time diagram that preserves event causality. Figure 4 depicts the architecture of Falcon, composed of three main modules: the **trace processor**, the **happens-before model generator**, and the **visualizer**. Each module is described in detail as follows.
**Trace Processor.** Since the events logged by the different tools can vary in both format and content, Falcon needs to first **normalize** and **merge** the collected data into a global event trace with a common scheme. This procedure is done by the trace processor module. The trace processor is equipped with a dedicated driver for each type of log, responsible for translating the library-specific entries into events that can be processed by Falcon. As such, drivers may range from simple parsers for textual logs (e.g., for log4j) to packet unpackers for network sniffers (e.g., tshark). In some cases, the trace processor generates events that are the result of merging data from different logs. For example, an event representing the sending of a message can be built by augmenting the information of a write syscall with the message payload captured by a network sniffer. The events resulting from Falcon’s log normalization and merging are the following:
- **START**(process): a process starting event;
- **END**(process): a process finishing event;
- **FORK**(parent, child): a process creation event, where child denotes the process spawned by process parent;
- **JOIN**(parent, child): represents a join event, where process parent waits until the child process finishes;
- **CONNECT**(process, src, dst): represents a new connection, where src and dst denote the addresses (IP and port) of the local and remote processes, respectively;
- **ACCEPT**(process, src, dst): event indicating that a connection was established, where src and dst also denote the local and remote addresses, respectively;
- **RCV**(process, src, dst, msg): a message receiving event, where msg is the identifier of the message sent from the src address to the dst address;
- **SND**(process, src, dst, msg): a message sending event, where msg denotes the identifier of the message sent by the src address and received by the dst address;
- **LOG**(process, msg): a log entry event, where msg is the content of the message logged by process process.
The trace processor module also exposes a public API to ease the development of drivers and the integration of additional logging libraries into Falcon.
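As a rough illustration of what such a driver does (the line format and field names below are hypothetical, not Falcon's actual JSON scheme), a strace-style entry can be normalized into one of the SND/RCV events above:

```python
import re

# Hypothetical raw format, e.g.: [n2-782] write(n3, "Go go go", 9)
LINE = re.compile(
    r'\[(?P<node>\w+)-(?P<pid>\d+)\]\s+'
    r'(?P<call>write|read)\((?P<peer>\w+),\s*"(?P<msg>[^"]*)"'
)

def normalize(line):
    """Translate one raw trace line into a Falcon-style SND/RCV event dict."""
    m = LINE.match(line.strip())
    if not m:
        return None  # line not handled by this driver
    kind = "SND" if m.group("call") == "write" else "RCV"
    return {
        "type": kind,
        "process": f'{m.group("node")}-{m.group("pid")}',
        "src": m.group("node") if kind == "SND" else m.group("peer"),
        "dst": m.group("peer") if kind == "SND" else m.group("node"),
        "msg": m.group("msg"),
    }
```

A real driver would also fill in timestamps and port numbers, but the essence is the same: map heterogeneous entries onto the common event scheme.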
**Happens-Before Model Generator.** The complete, normalized event trace is then fed into the happens-before (HB) model generator. This module is responsible for combining all events into a single causally-consistent schedule. To this end, the HB model generator builds a symbolic constraint formulation encoding the happens-before relations between events. For instance, the model encodes a constraint stating that the send event of a message must happen-before the corresponding receive event. The HB constraints are further described in Section III-C.
Solving the model with an off-the-shelf constraint solver yields a causally-ordered event schedule.
**Visualizer.** The visualizer finishes Falcon’s pipeline by providing a graphical representation of the causal trace generated in the previous step. In detail, the visualizer generates a “space-time diagram”, as introduced by Lamport [3], depicting both the events executed by each process and the inter-process causal relationships between them.
### C. Happens-Before Constraint Model
As defined by Lamport [3], there exists a happens-before relationship between two events a and b, denoted a → b, if:
- a and b belong to the same process[^4] and a precedes b in the execution.
- a and b belong to different processes and a represents the sending of a message m and b represents the reception of m.
Distributed executions often comprise other causal relations that should be considered, namely a → b also holds if:
- a is the fork event of a process q by a process p and b is the first event of q.
- a is the last event of a process q and b the join event of q by a process p.
- a is the connect event issued by a process p to a process q and b is the accept event in q.
[^4]: We use the term process to denote both processes and threads.
Note that the happens-before relation is transitive, irreflexive and antisymmetric. Also, when neither \( a \rightarrow b \) nor \( b \rightarrow a \) holds, \( a \) and \( b \) are considered to be concurrent.
Falcon casts the problem of combining the events from independent logs into a global, causally-ordered execution schedule as a maximum satisfiability modulo theories (MaxSMT) problem. The MaxSMT problem can be seen as an optimization version of the satisfiability problem (for types of variables other than boolean ones) and has the goal of finding a total assignment to variables of a formula that maximizes the number of satisfied clauses. Among the variants of the MaxSMT problem, this paper assumes a partial MaxSMT problem where some clauses are considered as hard and others are considered as soft. The goal is thus to find an assignment to the variables such that all hard constraints are satisfied and the amount of satisfied soft constraints is maximized.
Falcon’s causality model comprises i) integer symbolic variables that represent the logical clocks of the events supported by Falcon (see Section III-B), and ii) hard constraints over those variables stating the causal relations between the events. A solution to this model thus assigns a value to each variable such that all happens-before rules are satisfied. In practice, this corresponds to inferring a causally-consistent execution schedule by computing a logical clock per event.
More formally, the constraint model, denoted \( \Phi_{HB} \), consists of a MaxSMT formulation defined as the following conjunction of sub-formulae:
\[
\Phi_{HB} = \phi_{inter} \land \phi_{intra} \land \text{GOAL} \tag{1}
\]
where \( \phi_{inter} \) encodes the inter-process causality constraints, \( \phi_{intra} \) encodes the intra-process happens-before rules due to program order, and \( \text{GOAL} \) states the soft constraints that allow steering the solving procedure towards a given goal. Falcon currently provides support for generating logical clocks that: i) follow the original timestamp order as much as possible (\( \phi_{ts} \)), and ii) expose concurrency issues by minimizing the logical time intervals between events (\( \phi_{min} \)). We now describe each sub-set of constraints in more detail.
a) Inter-process HB Constraints (\( \phi_{inter} \)): these constraints represent the causal dependencies due to message exchanges and inter-process synchronization. Following the happens-before rules presented at the beginning of this section, the inter-process HB constraints \( \phi_{inter} \) are written as follows:
\[
\begin{align*}
\text{fork}_{p,q} &< \text{start}_q \\
\text{end}_q &< \text{join}_{p,q} \\
\text{connect}_p &< \text{accept}_q \\
\text{snd}_{p,m} &< \text{rcv}_{q,m}
\end{align*}
\]
where \( p \) and \( q \) are distinct processes, \( m \) represents a given message, and the variable names correspond to the events described in Section III-B. For instance, for a message \( m \) sent by \( p \) to \( q \), the constraints encodes that the logical clock of the corresponding event \( \text{SND}(p, p, q, m) \) in the trace must be smaller than that of the event \( \text{RCV}(q, p, q, m) \).
b) Intra-process HB Constraints (\( \phi_{intra} \)): these constraints state that events in the same process execute sequentially according to the program order. Let \( \Gamma_p \) denote the event trace of a process \( p \), and let \( c_i \) and \( c_j \) be the symbolic variables representing the logical clocks of events \( i \) and \( j \). The intra-process HB constraints \( \phi_{intra} \) are given by:
\[
\forall i, j \in \Gamma_p : (i < j \implies c_i < c_j)
\]
c) Timestamp Constraints (\( \phi_{ts} \)): timestamp constraints are soft constraints (see \text{GOAL} in Equation 1) that aim at approximating the schedule produced by the solver to the actual event ordering observed during the production run. These constraints state that events should be given logical clocks that follow the order given by timestamps in the log files. However, since two causally-ordered events logged on different machines may exhibit physical timestamps conflicting with their HB relationship, timestamp constraints may be violated in order to satisfy causality.
d) Clock Minimization Constraint (\( \phi_{min} \)): clock minimization constraints are also encoded as \text{GOAL} soft clauses and strive to minimize the values assigned to the symbolic variables. The goal is to produce a compact schedule capable of exposing event concurrency. For instance, if two distinct \( \text{SND} \) events exhibit the same logical order in the schedule yielded by the solver and their corresponding \( \text{RCV} \) events belong to the same process, then there is a message race. Let \( e \in \Gamma \) be an event in the complete execution trace and \( c_e \) the symbolic variable representing its logical clock. The clock minimization constraint \( \phi_{min} \) is written as:
\[
\phi_{min} = \min \sum_{e \in \Gamma} c_e
\]
Solving the \( \Phi_{HB} \) model generated by Falcon using an off-the-shelf SMT solver yields an execution schedule in which events are guaranteed to be causally ordered.
IV. IMPLEMENTATION
This section discusses some relevant implementation details of our prototype of Falcon. The prototype is publicly available at [https://github.com/fntneves/falcon/](https://github.com/fntneves/falcon/).
The trace processor module is implemented as an extensible Python program that allows the integration of custom drivers for normalizing log files into a pre-defined JSON format. Currently, the trace processor provides three out-of-the-box drivers. The first is a ptrace-based tool that collects syscall traces. The second driver handles logs generated by log4j, a logging library for Java programs. The third uses tshark to extract message payloads from pcap files and add them to the corresponding send and receive events.
When tracing syscalls for pairs of events causally related, we intercept the syscall of the first event solely at its entry point and the syscall of the second event only at its exit point. Since ptrace-based tracing utilities do not guarantee that the two interception points of a syscall appear contiguously in the trace, this approach is crucial to correctly infer causality.
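The same non-contiguity shows up in plain strace -f output, where an interrupted syscall is split into an “<unfinished ...>” entry and a later “<... resumed>” entry. A sketch of rejoining such pairs per pid (illustrative only; Falcon's own tracer is ptrace-based):

```python
import re

UNFINISHED = re.compile(r'\[pid\s+(\d+)\]\s+(\w+)\(.*<unfinished \.\.\.>')
RESUMED = re.compile(r'\[pid\s+(\d+)\]\s+<\.\.\. (\w+) resumed>')

def pair_interleaved_syscalls(lines):
    """Join strace's split '<unfinished ...>'/'<... resumed>' entries per pid.

    Returns a list of (pid, syscall, entry_line_no, exit_line_no) tuples.
    """
    open_calls = {}  # pid -> (syscall name, line number of the entry point)
    pairs = []
    for i, line in enumerate(lines):
        m = UNFINISHED.search(line)
        if m:
            open_calls[m.group(1)] = (m.group(2), i)
            continue
        m = RESUMED.search(line)
        if m and m.group(1) in open_calls:
            call, start = open_calls.pop(m.group(1))
            if call == m.group(2):
                pairs.append((m.group(1), call, start, i))
    return pairs
```

Each joined pair gives the entry and exit points of one syscall, which is exactly the information needed to order causally related events correctly.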
The happens-before model generator is written in Java and uses the Z3 solver [11] to solve the model. The causal trace produced by the solver is then output in JSON format.
Falcon’s current visualizer is implemented as a JavaScript program that consumes the causal trace and generates a space-time diagram using the SVG.js library. We are currently extending Falcon to use ShiViz [10] as the visualization module, as it provides interactive analysis features.
V. CASE STUDY: APACHE ZOOKEEPER
Apache Zookeeper [12] is an open-source, scalable and reliable service that enables distributed coordination. Zookeeper poses a good case study for Falcon as it implements complex algorithms and protocols for leader election and atomic broadcast, which are hard to analyze and understand in detail.
In a distributed deployment, Zookeeper runs with several servers, of which one is leader and the others are followers. Both roles are distinguishable in the sense that read requests can be served by the followers while write requests are handled only by the leader. For this case study, using Zookeeper 3.5.0, we analyze a setup containing two Zookeeper nodes that communicate with each other to elect the leader. In particular, our execution scenario consisted of setting up a standalone Zookeeper server and, then, adding a new node to the server quorum.
During the execution, we collected Zookeeper’s built-in log file produced with the log4j logging library and used our ptrace-based tracer tool to record syscalls regarding thread synchronization events, connections, and message exchanges.
As the output layout generated by log4j is configurable, we set the layout parameter to a custom Java class. In order to correctly identify the thread responsible for logging a given message, we augment each log entry with a unique identifier consisting of the concatenation of both the thread and process ids. However, since the Java Virtual Machine does not allow accessing the native thread identifier from a high-level API, we rely on Java Native Access to execute the gettid system call and retrieve the thread id directly from the native operating system. The result of the syscall is thus introduced as a parameter in the output layout of log4j.
In the following, we show how Falcon can be used to analyze the execution of Zookeeper and evaluate the performance and scalability of the constraint solving procedure.
A. Falcon in Action
Figure 5 depicts the space-time diagram generated by Falcon for the logs collected during our Zookeeper execution scenario. The causal trace was obtained by solving the model with the timestamp soft constraints. The diagram shows that there are two main processes (5598 and 5670) that spawn several threads while running. For brevity, we include just the thread timelines relevant for this example. In other words, we discarded the threads that have only START, END and LOG events. However, they can be useful for conducting a more thorough behavior analysis.

The LOG events correspond to the timestamped entries logged by `log4j`, while the remaining events were collected by the syscall tracer. To ease the understanding of the diagram, we highlight certain events with a circled number and display their message content (for LOG events) or payload (for SND and RCV events).
Figure 5 shows that, after booting, the server joining the quorum starts by connecting to the existing peer. In particular, the LOG events identified by ① and ③ reveal that the process 5598 is the node with the server identifier (sid) 1 while process 5670 has sid = 2. Note that the events also reveal the lines of code at which the messages were logged, namely lines 365 and 644 of the *QuorumCnxManager* class.
The most interesting part of the diagram is arguably the leader election procedure though, since it is one of the major applications of fault-tolerant consensus protocols. When the two quorum peers are connected, server 1 triggers a new leader election (see event ④). In Zookeeper, each server can be in one of the following states: LOOKING, FOLLOWING, LEADING and OBSERVING. At the beginning of the execution, all servers are in the LOOKING state and vote in themselves to be the leader by sending notifications to the other servers with the leader field set to their sid (see messages ③, ⑤ and ⑥). If a server receives a notification with a sid higher than its own, then it updates its vote proposal to the higher sid and broadcasts the new vote proposal to the rest of the quorum. Note that, since there are no client requests during this execution, the last seen transaction identifier zxid remains unchanged for both servers. Otherwise the servers would vote for the peer with higher zxid.
In a nutshell, the diagram of Figure 5 reveals the following leader election protocol in Zookeeper:
- Server 1 sends the vote message ③ with payload \{"leader" : 1\} to server 2;
- Server 2 sends the vote message ⑤ with payload \{"leader" : 2\} to server 1;
- Server 1 receives the vote message sent by server 2, updates its vote proposal for the latter because it has higher sid, and sends back the updated vote – message ⑥ with new payload \{"leader" : 2\} – to server 2;
- Server 1 and server 2 update their state from LOOKING to FOLLOWING and LEADING respectively, as indicated by the log messages.
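The vote-update rule the diagram reveals (prefer the higher zxid, break ties with the higher sid) can be captured in a few lines. This is a simplification for illustration, not Zookeeper's actual FastLeaderElection code:

```python
def better_vote(current, proposed):
    """Return the winning vote between two (zxid, sid) proposals.

    Simplified Zookeeper-style rule: prefer the higher last-seen
    transaction id (zxid); break ties with the higher server id (sid).
    """
    return max(current, proposed)  # tuple order: zxid first, then sid

# Replaying the election above (both zxids are 0, so sid decides):
vote_s1 = (0, 1)                         # server 1 initially votes for itself
vote_s1 = better_vote(vote_s1, (0, 2))   # sees server 2's vote and updates
```

Since server 1 ends up voting (0, 2), server 2 gathers the quorum and becomes the leader, matching the state transitions logged in the diagram.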
A further analysis of this space-time diagram also allows drawing some conclusions regarding the Zookeeper’s behavior:
a) **Notification timeout**: The message ③ is sent twice due to a timeout that occurs when a server does not receive enough notifications within a given time frame.
b) **Message partitioning**: The sending of a message is actually partitioned into several SND events, i.e., into several write syscall executions. Inspecting Zookeeper’s source code we noticed that the messages sent during the leader election protocol are composed of an integer and a buffer. The integer is sent by executing the *write* syscall four times, while sending the buffer requires a single *write* invocation.
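A trace processor can undo this partitioning by coalescing consecutive SND events on the same channel into one logical message. A hypothetical sketch, not Falcon's implementation:

```python
def coalesce_sends(events):
    """Merge consecutive SND events on the same channel into one message.

    `events` is a list of (op, src, dst, payload) tuples in per-thread
    order. Useful when one application-level message spans several
    write syscalls, as observed in Zookeeper's leader election.
    """
    merged = []
    for op, src, dst, payload in events:
        if (merged and op == 'SND' and merged[-1][0] == 'SND'
                and merged[-1][1:3] == (src, dst)):
            # Extend the previous send on the same channel.
            merged[-1] = ('SND', src, dst, merged[-1][3] + payload)
        else:
            merged.append((op, src, dst, payload))
    return merged
```

With this, the four integer writes and the buffer write collapse into a single SND event carrying the full payload.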
c) **Causality**: The diagram shows that messages ① and ② are not causally related, because the sender and receiver threads execute concurrently. In contrast, there is a causal relationship between messages ③ and ⑤ based on a state change. Upon reception, message ③ is added to a queue by the receiver thread. Afterwards, the sender thread dequeues the message, processes it (i.e., updates the vote proposal of server 1), and sends message ⑤ to server 2. Therefore, ③ → receiver thread enqueues message → sender thread dequeues message → change of vote proposal → ⑤.
B. Performance Impact of Syscall-level Tracing
We developed a micro-benchmark to evaluate the performance overhead imposed on Zookeeper due to tracing syscalls with `strace`. The micro-benchmark consists of a client issuing requests to two Zookeeper servers. Concretely, the client performs 10K iterations of four operations: i) check whether a znode exists, ii) create a new znode, iii) check again whether the znode exists, and iv) delete the created znode.
Figure 6 compares the average duration of an iteration (in milliseconds) between a vanilla execution of Zookeeper and an execution with `strace`. As expected, enabling tracing negatively affects runtime performance, causing Zookeeper to be 1.7× slower compared to the baseline.
C. Scalability of Constraint Solving
Depending on the debugging level, message logs may contain hundreds or thousands of entries. In order to better understand how the constraint solving time varies with an increasing amount of log entries, we ran the same configuration of Zookeeper with the INFO and DEBUG logging levels. To further increase the log size, we duplicated the number of entries in both logs. The resulting logs contained 568 events for the DEBUG level and 342 events for the INFO level.
The time required by Z3 to solve the constraint models for both log files is depicted in Figure 7. The results show that adding the DEBUG-level log events to the model caused the solver to take 3.18× more time to find a solution than with an INFO-level log. Although a trade-off must be made between the duration of the constraint solving and the amount of information logged, we believe that Falcon is useful in practice and much more scalable than manual log analysis.
VII. CONCLUSION
In this paper we introduce Falcon, an extensible tool for combining and visualizing log data from several data sources. The key contribution of this tool is the ability to merge log data from multiple sources and establish causal relationships between events, providing a coherent diagram for a visual analysis.
The tool is applied to Zookeeper and demonstrated with a syscall-level log and log4j log files. The resulting diagram helps in understanding how remote threads interact among themselves and what messages were exchanged to execute a given task. Ordering the hundreds of combined events with the SMT solver to produce such a trace would clearly be infeasible manually. Additionally, we assess the performance impact of syscall-level tracing in Zookeeper, showing that the impact is tolerable and in line with what is expected from a more detailed log level configuration in log4j.
ACKNOWLEDGMENT
The authors would like to thank the anonymous reviewers for their valuable feedback. This work is financed by the ERDF – European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project POCI-01-0145-FEDER-006961, and by National Funds through the Portuguese funding agency, FCT - Fundação para a Ciência e a Tecnologia as a part of project UID/EEA/50014/2013. The research leading to these results has received funding from the European Union’s Horizon 2020 - The EU Framework Programme for Research and Innovation 2014-2020, under grant agreement No. 732051.
A novel approach for integrating security policy enforcement with dynamic network virtualization

Publisher: IEEE (postprint / Author's Accepted Manuscript)
DOI: 10.1109/NETSOFT.2015.7116152
Availability: this version is available at 11583/2592157 since 2021-01-28 (open access)

©2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
A novel approach for integrating security policy enforcement with dynamic network virtualization
Cataldo Basile, Antonio Lioy, Christian Pitscheider, Fulvio Valenza and Marco Vallini
Politecnico di Torino, Dip. Automatica e Informatica, Torino, Italy
(e-mail: {cataldo.basile, antonio.lioy, christian.pitscheider, fulvio.valenza, marco.vallini}@polito.it).
Abstract—Network function virtualization (NFV) is a new networking paradigm that virtualizes single network functions. NFV introduces several advantages compared to classical approaches, such as the dynamic provisioning of functionality or the implementation of scalable and reliable services (e.g., adding a new instance to meet demand). NFV also allows the deployment of security controls, like firewalls or VPN gateways, as virtualized network functions. However, there is currently no automatic way to select which security functions to enable and to configure them according to a set of user security requirements. This paper presents a first approach towards the integration of network and security policy management into the NFV framework. By adding a new software component, the Policy Manager, to the NFV architecture, we provide NFV with an easy and effective way for users to specify their security requirements, and a process that hides all the details of the correct deployment and configuration of security functions. To perform its tasks, the Policy Manager uses policy refinement techniques.
I. INTRODUCTION
Computer networks continue to grow in size and importance; therefore, reducing deployment and (re)configuration times has become critical. Nowadays, computing services are deployed on virtualized infrastructures, but network and security functions still run on dedicated hardware. Network Functions Virtualization (NFV) addresses this limitation by defining a virtualized infrastructure for network functions. These functions, named Virtual Network Functions (VNFs), are implemented as virtual machines and can therefore be dynamically added and removed on demand, reducing administration tasks, response times and costs. Recent works on NFV proposed allowing each tenant to customize its network infrastructure by inserting custom functions. This possibility enables the deployment of security functions as well (such as firewalls, logging, proxies, VPN concentrators).
Adopting this NFV-based approach also for managing security functions may drastically reduce deployment time and costs, but the increase in overall management complexity may strongly reduce its impact. Therefore, automating the support of security functions is fundamental. This paper modifies the NFV architecture to address the following challenges: (1) identify the security functions to provision, (2) decide where the selected security functions will be deployed, and (3) generate the configurations necessary to implement a set of user-defined security policies. Finally, these features must be offered both to end-users with low technical skills and to expert users that typically have specific needs (e.g., configuration tuning).
This paper adds a new software component, the Policy Manager, to the NFV architecture. Using a user-oriented approach to specify security requirements, the Policy Manager implements an automatic process for generating and deploying the related configurations. Although the proposed approach is suitable for every type of security function, this paper focuses on functions for filtering traffic (including stateful packet filtering, traffic inspection, etc.). These functions can be combined to offer an integrated solution, e.g., parental control.
The Policy Manager acts as an intermediary between users and NFV, offering two interfaces: the user interface supports the definition of the policies, and the NFV interface sends configuration commands to the NFV orchestrator. The Policy Manager uses policy refinement techniques to select the VNFs to use and to derive their configurations. Policy refinement is the process "to determine the resources needed to satisfy policy requirements, to translate high-level policies into operational policies that may be enforced by the system, to verify that the set of lower level policies actually meets the requirements of the high-level policy" [1]. Policy refinement allows the separation of policy specification and VNF configuration. By providing a high-level view of the desired behaviour of the system, the user is not distracted by the specific details of a function implementation and can focus on the desired outcome. Policy refinement is well studied in the literature for legacy systems to translate user policies into configuration commands but, to the best of our knowledge, it has never been proposed to configure NFV security functions.
The High-Level Policies language (HLP) is used to capture the user's security requirements. HLP is an authorization language whose policies can be represented as sentences close to natural language. Examples of HLP policies are "do not download malware" and "do not access blacklisted sites". As these examples show, HLP policies are technology, function and implementation independent.
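The subject-action-object structure of HLP sentences can be illustrated with a toy parser. The vocabulary and field names below are illustrative assumptions, not the actual HLP grammar.

```python
# Hypothetical sketch: mapping an HLP sentence close to natural language
# onto the action/object structure the paper describes. The action
# vocabulary is invented for illustration.
ACTIONS = {"do not download": "deny-download",
           "do not access": "deny-access",
           "allow": "permit"}

def parse_hlp(sentence):
    """Match the longest-known action phrase at the start of a sentence."""
    for phrase, action in ACTIONS.items():
        if sentence.startswith(phrase):
            rest = sentence[len(phrase):].strip()
            return {"action": action, "object": rest}
    raise ValueError(f"unknown HLP action in: {sentence!r}")

policy = parse_hlp("do not access blacklisted site")
```

A real implementation would come from the GUI editor's per-element lists rather than free-text parsing.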
The Policy Manager refines HLP policies into the concrete configuration of each VNF. In practice, it identifies the VNFs needed to enforce the user security requirements and derives their configurations. When several VNFs are available to satisfy the same requirement (e.g., different types of firewall), an optimization step chooses among them. Finally, the configurations are passed by the orchestrator to the correct security VNFs. Alternatively, the VNFs are selected by the user. To avoid errors, the Policy Manager performs a non-enforceability analysis to check compatibility between the user's requirements and the selected VNFs. This analysis informs the user, providing indications and remediation tips.
To make this complex refinement feasible, our approach divides the process into two steps by passing through an intermediate format, the medium-level policies (MLP). We define three policy abstraction layers (HLP, MLP and the concrete VNF configurations) and two translation modules. The former translator refines HLP into MLP and the latter translates MLP into concrete VNF configurations.
The rest of this paper is structured as follows: Section II presents the most important concepts used in this paper; Section III gives a general overview of the proposed approach; Section IV describes the refinement process in detail; Section V summarizes the paper.
II. BACKGROUND
This section gives a short introduction to NFV, policy refinement, and non-enforceability, which helps the reader better understand the proposed architecture. Furthermore, a general overview of the research results in these areas is presented.
A. Network Function Virtualization (NFV)
The new Network Function Virtualization (NFV) concept decouples the software implementation of network functions (e.g., router, firewall, NAT) from the compute, storage and networking resources through a virtualization layer. In this context, the ETSI standards organization is working on the definition of an architecture and the requirements for the deployment of Virtual Network Functions (VNFs). These VNFs can run on a range of industry-standard server hardware, and can be moved and instantiated at any location in the network, without the need for new equipment installation [2].
This new technology has recently brought a new kind of flexibility to both Network Service Providers (NSPs) and end-users, as user traffic can be processed by any NSP-provided VNF as well as by any third-party VNF. Unfortunately, even if a great number of research activities have analysed in depth the deployment of generic VNFs in the NSP network [3], [4], to the best of our knowledge, none seems to address the effects of virtualizing network security functions (i.e., firewall, IDS, etc.) and which aspects must be taken into account.
Other research focuses on function description and how to correctly integrate third-party VNFs in the NSP network: Koslovski et al. [5] propose a language to describe the storage and computing resources for a given function, whereas Spinoso et al. [6] propose that a VNF programmer must provide a functional description in order to correctly integrate and configure such a VNF. However, more detailed information is required to support third-party security functions in the provider network.
B. Refinement
Policy refinement has been well studied in literature and has been proven to be a mature and efficient process. Although policy refinement can be applied to all kind of policies, this paper only considers security policies.
Bartal et al. propose a solution named Firmato [7]; it was one of the first proposals in this area and supports only packet-filter firewalls. It is based on an entity-relationship model of the security policy and of the network topology. Verma et al. [8] use a similar approach: the authors present a firewall analysis and configuration engine named FACE. It takes as inputs the network topology and a global security policy written in a high-level language. Garcia-Alfaro et al. [9] propose MIRAGE, a management tool for the analysis and deployment of configuration policies. It is based on the same principles as Firmato [7] and FACE [8], but it is also capable of configuring intrusion detection systems (IDS) and VPN routers. MIRAGE can also perform conflict analysis on already deployed configurations. Basile et al. [10] propose to use ontologies for security policy translation. Network filtering rules are derived from security policies that refer to high-level concepts such as users and services. To map these high-level concepts to low-level network concepts such as IP address, port and protocol, an ontology is used. In [11], Guarnieri et al. propose a model-driven security approach for the design and generation of concrete security configurations for software architectures. In this approach the system architect models the architecture of the system by means of UML class diagrams, and then the security administrator adds security requirements to the model by means of Security4UML, a UML profile. From the model enriched with security requirements, the concrete security configuration is derived in a semi-automated way.
According to [12], a good security policy must be implementable through system administration procedures (e.g., publishing acceptable-use guidelines) and enforceable with security tools or controls, where appropriate, and with sanctions, where actual prevention is not technically feasible. However, in a real scenario, some policies may be enforceable less precisely on some systems than on others or, in the worst case, may be completely non-enforceable. Unfortunately, non-enforceability analysis has received little or no attention in the literature and has not been investigated in depth. For example, as suggested by [13], access control on traditional UNIX systems is much less granular than ACLs on modern implementations, and some policies are not fully supported. In particular, two situations can be detected: the high-level constraints require a set of functions that are not available (non-enforceable), or only a subset of them is available (partially enforceable). Therefore the policy should be accompanied by an indication of how to handle these situations, e.g., warning the user, suggesting a more relaxed policy, adding third-party software or installing a different VNF to compensate for the absent functionality. Verma et al. [8] propose an iterative process (that includes topological analysis) to identify an unimplementable policy and suggest how to make it implementable.
III. APPROACH
The proposed approach modifies the NFV architecture by adding a new component named Policy Manager. The integration of this component with the NFV architecture is sketched in Fig. 1, where the Policy Manager transparently enforces user security requirements in agreement with the other network requirements, providing an additional layer between the end-user and the NFV orchestrator. The Policy Manager performs the refinement of security requirements expressed as high-level policies (HLP) into the concrete configuration of each VNF. In practice, the Policy Manager first identifies the security VNFs that can be used to enforce the user security requirements and then derives the needed configurations. These configurations are later passed by the orchestrator to the correct security VNFs. By separating the security requirements from the actual required VNFs and their security configurations, the end-user does not need to consider the aspects related to VNF configuration and can focus on the overall impact of his policy. To make this refinement feasible, we split the process into two steps by using an intermediate format, the medium-level policies (MLP). Therefore, the Policy Manager adopts three policy abstraction layers (i.e., HLP, MLP and the concrete VNF configurations) and two translation modules (to refine HLP into MLP and to translate MLP into concrete VNF configurations). It is worth noting that our proposal follows the design principles proposed by Strassner for policy-based network management, where the HLP maps to the "Business/System View" layer, the MLP maps to the "Administrative View" layer and the concrete configurations map to the "Device/Instance View" layer [14]. By introducing HLP and MLP in the architecture, the refinement process becomes independent from VNF implementations.
A. Policy abstractions.
Although the full specification of the policy abstractions is out of the scope of this paper, this section provides a brief introduction to HLP and MLP. As introduced before, users specify their security requirements with the HLP. We designed the HLP (starting from our previous works [15], [16]) as an authorization language that follows the subject-action-object-attribute paradigm (also referred to as target-effect-condition) [17]. A security requirement is expressed as a set of sentences close to natural language, e.g., "do not download malware", "do not access gambling sites", "allow Internet traffic from 18:30 to 20:00 for Alice". The elements of a sentence (subject, object, etc.) are chosen by the user from a predefined set and implemented in a GUI editor as different lists, i.e., one list for each element (e.g., action, subject). This approach is transparent for users (avoiding the need to learn a new language) and makes it possible to map each element of a sentence to the related HLP component. Users can of course customize some elements of a sentence, for example to define timing constraints, particular URLs, etc. Again, to simplify the definition of a complex security policy, a template-based approach is provided. A template contains a set of HLPs that contribute to a common goal. For example, the template "enable parental control" implements simple HLPs such as "do not access blacklisted site", "log access to websites" and "permit access to Internet from 20:00 to 22:00". Elements such as "blacklisted site" contain a predefined set of URLs initially collected from a list managed by a trusted authority; however, the user can modify that list by adding or removing URLs. As a consequence, HLP policies are technology, function and implementation independent, and thus an HLP can be enforced with different VNFs of different vendors.
MLP has been designed to abstract the configurations of security VNFs. Unfortunately, defining this abstraction is not trivial because each security control has a specific language. To this purpose, MLP follows the approach of [18] and is organized by security functions. A security function is a basic feature offered by a VNF (e.g., channel protection, filtering, anti-virus, parental control). Therefore, MLP is composed of a general model that defines the high-level concepts (policies, rules, conditions, actions, etc.) and a set of sub-models to capture function-specific concepts such as attributes, condition types, methods (e.g., HTTP GET), etc. For instance, MLP supports the configuration of a packet filter, or the options related to the configuration of an anti-virus. Expert users may have specific needs (e.g., fine tuning) for the configuration of security features. To satisfy these requirements, users may directly use the statements offered by MLP to write abstract configurations, which are then passed as input to the Policy Manager as depicted in Fig. 1.
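The layering of MLP (a general model plus function-specific sub-models) could be sketched with a few data classes. All class and field names below are assumptions for illustration, not the actual MLP schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the MLP layering the paper describes: a general
# model (policy / rule / condition / action) organized by security
# function. Names are hypothetical, not the real MLP.
@dataclass
class Condition:
    field_name: str        # e.g. "dst_port", "url", "time"
    value: object

@dataclass
class Rule:
    action: str            # e.g. "allow", "deny", "log"
    conditions: list = field(default_factory=list)

@dataclass
class MlpPolicy:
    function: str          # security function, e.g. "packet-filter"
    rules: list = field(default_factory=list)

web = MlpPolicy("packet-filter",
                [Rule("allow", [Condition("dst_port", 80)])])
```

Sub-models would constrain which condition types are legal for each security function (e.g., `url` only for web filtering).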
B. Translation.
The refinement of HLP policies into MLP policies is a very complex task. First of all, it requires identifying the security functions (i.e., capabilities) needed to enforce the policy. Then, suitable VNFs to enforce the policy must be selected. However, several VNFs with the same security function are typically available, and the process must choose among them. Different implementations and combinations of VNFs may have different side-effects on the overall performance, throughput, latency and/or bandwidth. For example, a particular VNF implementation may require more processing resources than the others, or significantly reduce the network throughput. For this reason, the Policy Manager adopts a set of optimization techniques (as presented in Section IV) to choose among the alternatives. Once the set of optimal VNFs is identified, the HLPs are mapped into MLP statements. For example, "allow web traffic" is easily translated into a rule whose action is allow and whose condition selects all the IP traffic towards destination port 80. In the same way, other concepts are expanded with predefined values, like "gambling sites", which can be determined by a set of URLs (possibly maintained and obtained from third parties) or by a set of DNS servers that do not perform reverse translation of specific URLs (like OpenDNS).
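The first translation step described above might be sketched as a lookup from HLP sentences to abstract MLP rules. The mapping table is illustrative only; the real Policy Manager derives rules from the parsed sentence elements rather than whole sentences.

```python
# Hedged sketch of the HLP -> MLP step, using the paper's
# "allow web traffic" example. The table and rule shape are assumptions.
HLP_TO_MLP = {
    "allow web traffic": {"action": "allow",
                          "conditions": {"protocol": "tcp", "dst_port": 80}},
    "do not access gambling sites": {"action": "deny",
                                     "conditions": {"url_category": "gambling"}},
}

def refine(hlp_sentence):
    try:
        return HLP_TO_MLP[hlp_sentence]
    except KeyError:
        raise ValueError(f"no MLP mapping for: {hlp_sentence!r}")

rule = refine("allow web traffic")
```

Note how "gambling sites" expands to an abstract category rather than concrete URLs, matching the expansion with predefined values described above.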
On the other hand, the transformation of MLP policies into VNF configurations mainly involves a change of syntax, as MLP has been designed to share the same semantics as the VNFs. Each VNF implementation typically has a different configuration language, so a VNF-specific translation module is needed; this module actually maps MLP policies into a concrete configuration. For example, the refinement process requires a firewall VNF and generates the corresponding MLP. Then, the translation module transforms the MLP configuration into the firewall settings.

---

1 A VNF may implement more than a single security feature, e.g., a firewall can implement at the same time stateless packet filtering and stateful filtering. The proposed refinement process also supports this case. However, for simplicity, we do not explicitly distinguish these cases in this paper.
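The "change of syntax" nature of this second step can be sketched by rendering an abstract MLP rule into an iptables-like line. The output format and rule fields are illustrative assumptions; each real VNF would have its own translation module.

```python
# Sketch of an MLP -> concrete-configuration translation module for a
# hypothetical firewall VNF. The iptables-like syntax is illustrative.
def mlp_to_iptables(rule):
    target = {"allow": "ACCEPT", "deny": "DROP"}[rule["action"]]
    cond = rule["conditions"]
    parts = ["-A FORWARD"]
    if "protocol" in cond:
        parts.append(f"-p {cond['protocol']}")
    if "dst_port" in cond:
        parts.append(f"--dport {cond['dst_port']}")
    parts.append(f"-j {target}")
    return " ".join(parts)

line = mlp_to_iptables({"action": "allow",
                        "conditions": {"protocol": "tcp", "dst_port": 80}})
```

Because MLP already carries the VNF's semantics, this module only rewrites syntax; no further policy reasoning happens here.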
IV. HLP REFINEMENT
The HLP refinement approach is performed by the Policy Manager and is presented in Fig. 2. The VNFs can be selected manually by the user or automatically by an ad hoc process: the former is named function-driven VNF selection, the latter policy-driven VNF selection. As introduced before, different VNFs provide different security functions. Examples of such functions are "packet filtering", "deep packet inspection", "signature-based malware detection", "traffic anonymization", "traffic encryption" and "transparent proxying". For instance, the HLP policy "enable parental control" can be implemented either by using a single VNF (that contains all the required security functions) or by using a set of VNFs (e.g., a virtual packet filter, a VNF for logging traffic/sessions, a virtual web proxy). This choice may impact performance, cost and/or efficiency.
In the function-driven approach (surrounded by the dashed line in Fig. 2), the user specifies her/his HLP policy and selects the set of VNFs she/he wants to use. Moreover, the user has the opportunity to decide which policy should be enforced by a particular VNF. Before starting the generation of MLP, the Policy Manager checks whether the required VNFs support the security functions needed to enforce the user's policies. In practice, actions, objects and attributes are statically mapped to the set of security functions required to enforce an HLP, and each VNF supports a subset of those functions. Therefore, starting from an HLP it is possible to identify which security functions are required and which VNFs satisfy that policy. If the selected VNFs do not satisfy these requirements, the user is warned and a set of remediation strategies is proposed. This step is named early non-enforceability analysis.
The early non-enforceability analysis is performed in real time to identify only the macroscopic errors that would lead the refinement process to failure. For example, this analysis is useful when the user specifies a parental control policy but the selected VNFs do not support this security function; in this case, the refinement is aborted after the analysis. If the early non-enforceability analysis does not detect any lack of functions, the generation of MLP is performed automatically for each security control of the VNF(s) (the "abstract VNF configuration" phase in Fig. 2). During the generation of MLP, other cases of non-enforceability may appear. For example, when a policy requires inspecting the content of an HTTP protocol field but the selected VNF does not support this feature, the policy is not enforceable. Similarly, when the VNF does not support a particular option, the policy is only partially enforceable. Let us consider a parental control scenario that protects children's access to the Internet, where applications with different features are available. Two distinct VNFs, VNF1 and VNF2, are available, both capable of enforcing a parental control policy but with different functions. VNF1 includes an "application content inspection" function and a "URL filtering" function. VNF2 supports the functions of VNF1 plus a feature to specify time-based policies for "URL filtering". Hence, if a user wants to specify that access to the Facebook web site is permitted only after dinner, from 20:00 to 22:00, VNF1 cannot enforce that policy. Therefore, the abstract VNF configuration phase produces a complete non-enforceability report (CNE report) in which all enforceability errors/issues are shown to the user. The function-driven approach is recommended only for expert users, as it can lead to several issues, e.g., sub-optimal configurations, lack of performance, costs, non-enforceability.
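The early check described above amounts to set containment between the functions an HLP requires and the functions the selected VNFs offer. The function names below mirror the paper's parental-control example; the mapping table itself is an illustrative assumption.

```python
# Sketch of the early non-enforceability check: HLP elements map
# statically to required security functions, and each selected VNF
# supports a subset of them.
REQUIRED = {"parental control": {"application content inspection",
                                 "URL filtering", "time-based URL filtering"}}

VNF1 = {"application content inspection", "URL filtering"}
VNF2 = VNF1 | {"time-based URL filtering"}

def early_check(policy, selected_vnfs):
    """Return the sorted list of missing functions (empty = enforceable)."""
    needed = REQUIRED[policy]
    available = set().union(*selected_vnfs)
    return sorted(needed - available)

missing = early_check("parental control", [VNF1])
```

A non-empty result would trigger the warning and the remediation strategies mentioned above, before any MLP is generated.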
Each VNF specifies the set of available functions (e.g., "application content inspection", "URL filtering"), the supported features (e.g., user-defined sets of URLs, time-based policies) and other tuning options. An unskilled user could select a VNF that does not completely satisfy the required network throughput or, in the worst case, cannot satisfy the security policy at all. Therefore a wrong VNF selection leads to a non-enforceable security policy.
The policy-driven approach (surrounded by a continuous line in Fig. 2) selects the required VNFs automatically from a catalogue of available VNFs. Each VNF in the catalogue is associated with a set of security functions. Therefore, when the high-level policy requirements match the related functions, a set of candidate VNFs is selected. The selection can be straightforward (when only one VNF offers a required function) or may be based on various criteria (such as cost, performance, reliability or reputation) when multiple VNFs offer a required function. This may result in a trade-off among different criteria, and the user must specify his preferences. Examples of these criteria are:
adopting open-source VNFs; choosing applications with low network latency (e.g., to match QoS requirements); adopting applications that are resilient to faults or that have a better reputation (according to expert reviews). Since several VNFs may be identified to enforce the policies, a selection criterion (i.e., an optimization target function) must be defined. The user may choose among a set of Policy Manager-provided profiles that specify a predefined set of target functions (e.g., maximize performance, minimize costs). In particular, for performance several categories should be considered, e.g., CPU usage, RAM, network throughput. Once a profile or a criterion is selected by the user, the refinement process formulates a mixed integer linear programming (MILP) problem (considering the specific target functions and related constraints derived from the selected profile), invokes an external solver to perform the optimization, analyses the solver results and identifies the set of VNFs to adopt. For the sake of simplicity, we do not present full details on how the optimization problem is formulated.
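As a toy stand-in for the MILP formulation, the sketch below brute-forces the cheapest subset of catalogue VNFs that covers the required security functions. The catalogue entries and costs are invented for illustration; the actual Policy Manager delegates this optimization to an external solver.

```python
from itertools import combinations

# Hypothetical catalogue: name -> (offered security functions, cost).
CATALOGUE = {
    "pf-basic":  ({"packet filtering"}, 1),
    "proxy-web": ({"URL filtering", "transparent proxying"}, 3),
    "ngfw":      ({"packet filtering", "URL filtering",
                   "deep packet inspection"}, 5),
}

def select_vnfs(required):
    """Exhaustively find the minimum-cost subset covering `required`.
    A real deployment would express this as a MILP and call a solver."""
    best = None
    names = list(CATALOGUE)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            covered = set().union(*(CATALOGUE[n][0] for n in subset))
            if required <= covered:
                cost = sum(CATALOGUE[n][1] for n in subset)
                if best is None or cost < best[1]:
                    best = (subset, cost)
    return best  # (chosen VNF names, total cost), or None if uncoverable

choice = select_vnfs({"packet filtering", "URL filtering"})
```

Here the pair of cheap single-function VNFs beats the all-in-one "ngfw", illustrating why the choice among candidates is a genuine optimization problem.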
V. CONCLUSIONS
The innovations in NFV have made it possible to deploy complex network structures based on virtualized functions with reduced cost and time. Although the infrastructure is flexible enough to accommodate these improvements and to configure the interconnections between the VNFs, there is no efficient method to select and configure security functions within a dynamic virtualized network. This paper proposes a novel approach to solve this problem by defining an extension, named Policy Manager, of the existing NFV architecture. The Policy Manager introduces an additional layer between the user and the NFV orchestrator. The user defines his security policies in the High-Level Policy language (HLP) and the Policy Manager refines them into configurations for the required VNFs. The required VNFs are selected either manually by expert users (function-driven) or automatically by the Policy Manager (policy-driven) for end-users. The policy-driven approach uses selection criteria defined by the end-user to find the best possible combination of VNFs. In the function-driven approach the expert user selects his desired VNFs and also specifies which VNF enforces which policy. Both approaches perform an enforceability analysis and warn the user in case some policies cannot be enforced. The proposed extension has major advantages over the current architecture. First, it enables end-users with low technical skills to configure the network and related services. Second, the policy definition is independent from VNF implementations, and therefore one VNF can be substituted with another without reconfiguring the whole network.
Currently, our approach has been implemented only for a selected set of HLP policies, mainly related to filtering requirements, and only for a very limited set of VNFs (packet filters, stateful firewalls, L7 filters, basic content inspection). However, the proposed approach can be easily extended by adding new security features and/or VNFs. Therefore, as future work, we will extend the Policy Manager with other types of security functions (e.g., VPN, proxy, IPS/IDS) and support for more VNFs. Other improvements are expected in the optimization process used in the policy-driven approach, with support for more multi-objective target functions.
ACKNOWLEDGMENT
The research described in this paper is part of the SECURED project, co-funded by the European Commission (FP7 grant agreement no. 611458).
Schéma général auto-stabilisant et silencieux de constructions de type arbres couvrants
Stéphane Devismes, David Ilcinkas, Colette Johnen
To cite this version:
Stéphane Devismes, David Ilcinkas, Colette Johnen. Schéma général auto-stabilisant et silencieux de constructions de type arbres couvrants. ALGOTEL 2018 - 20èmes Rencontres Francophones sur les Aspects Algorithmiques des Télécommunications, May 2018, Roscoff, France. hal-01781338
HAL Id: hal-01781338
https://hal.archives-ouvertes.fr/hal-01781338
Submitted on 30 Apr 2018
Schéma général auto-stabilisant et silencieux de constructions de type arbres couvrants
Stéphane Devismes¹, David Ilcinkas² et Colette Johnen²
¹Université Grenoble Alpes, VERIMAG UMR 5104, Grenoble, France
²CNRS & Univ. Bordeaux, LaBRI, UMR 5800, F-33400 Talence, France
We propose a general scheme, called Scheme, that computes spanning-tree-like data structures in arbitrary networks. Scheme is self-stabilizing and silent and, despite its generality, is also efficient. It is written in the locally shared memory model with composite atomicity, assuming a distributed unfair daemon, the weakest scheduling assumption in this model. Its stabilization time is at most \(4n_{\text{maxCC}}\) rounds, where \(n_{\text{maxCC}}\) is the maximum number of processes in a connected component. We also show polynomial upper bounds on the stabilization time in steps and in moves for large classes of instances of Algorithm Scheme. We illustrate the flexibility of our approach by describing such instances solving classical problems such as leader election and spanning tree construction.
Keywords: distributed algorithms, self-stabilization, spanning tree, leader election, shortest paths.
1. Introduction
A self-stabilizing algorithm is able to recover a correct behavior (defined by a set of legitimate configurations) in finite time, regardless of the arbitrary initial configuration of the system, and therefore also after a finite number of transient faults. Among the vast self-stabilizing literature, many works (see [Gar03] for a survey) focus on spanning-tree-like constructions, i.e., constructions of specific distributed spanning-tree- or forest-shaped data structures. Most of these constructions achieve an additional property called silence: a silent self-stabilizing algorithm converges within finite time to a configuration from which the values of the communication registers used by the algorithm remain fixed. Such a configuration is called terminal. Silence is a desirable property, as it facilitates the composition of different algorithms and may use fewer communication operations and less communication bandwidth. We consider the locally shared memory model with composite atomicity, which is the most commonly used model in self-stabilization. In this model, \(n\) processes communicate according to a given communication network using a finite number of locally shared registers, called variables. Each process can read its own variables and those of its neighbors, but can write only to its own variables. In this model, executions proceed in atomic steps and the asynchrony of the system is captured by the notion of daemon. The weakest (i.e., the most general) daemon is the distributed unfair daemon: while the configuration is not terminal, the daemon only has to select at least one enabled process, maybe more. Hence, solutions stabilizing under such an assumption are highly desirable, because they work under any other daemon assumption.
Moreover, under an unfair daemon, the stabilization time can also be bounded in terms of steps (and moves, i.e., local state updates), which capture the execution time according to the fastest process, and not only in terms of rounds, which capture the execution time according to the slowest process. Note that if the number of moves (and thus steps) is unbounded, this means that there are processes whose moves do not make the system progress, hence wasting resources. There are many self-stabilizing algorithms proven under the distributed unfair daemon. However, analyses of the stabilization time in steps (or moves) are rather unusual, and this may be an important issue. Indeed, several self-stabilizing algorithms which work under a distributed unfair daemon have recently been shown to have an exponential stabilization time in steps in the worst case [DJ16, ACD+17].
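The execution model above can be mimicked by a toy simulator (an illustration of ours, not Algorithm Scheme itself): guards are evaluated on the current configuration, an unfair daemon activates an arbitrary nonempty subset of the enabled processes, and moves are counted as local state updates. The toy instance below, max-propagation on a line, is our own example:

```python
import random

def run(daemon_pick, guards, actions, state, max_moves=10_000):
    """Composite-atomicity execution loop: while some process is enabled,
    the daemon activates a nonempty subset of enabled processes; all chosen
    actions are applied atomically.  Returns the number of moves."""
    moves = 0
    while moves < max_moves:
        enabled = [p for p in range(len(state)) if guards[p](state)]
        if not enabled:            # terminal configuration: silence reached
            return moves
        chosen = daemon_pick(enabled)        # unfair: any nonempty subset
        new = list(state)
        for p in chosen:
            new[p] = actions[p](state)       # all read the old configuration
            moves += 1
        state[:] = new
    raise RuntimeError("did not stabilize")

# Toy instance: each process copies the max of its closed neighborhood on a
# line; the terminal configuration holds the global maximum everywhere.
n = 5
vals = [3, 1, 4, 1, 5]
def guard(p):
    return lambda s: s[p] != max(s[max(0, p - 1):p + 2])
def action(p):
    return lambda s: max(s[max(0, p - 1):p + 2])
moves = run(lambda en: random.sample(en, random.randint(1, len(en))),
            [guard(p) for p in range(n)],
            [action(p) for p in range(n)], vals)
```

Every move strictly increases some local value here, so the run terminates under any daemon choice; counting `moves` rather than iterations matches the step/move metric discussed above.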
†This study was partially supported by the ANR project DESCARTES: ANR-16-CE40-0023 and ANR project ESTATE: ANR-16-CE25-0009-03. A complete version of this work can be found in the technical report https://hal.archives-ouvertes.fr/hal-01667863.
Contribution. We propose a general scheme, called Algorithm Scheme, to compute spanning-tree-like data structures on bidirectional weighted networks of arbitrary (not necessarily connected) topology. Scheme is self-stabilizing and silent. It is written in the locally shared memory model with composite atomicity, assuming the distributed unfair daemon. Despite its versatility, Scheme is efficient. Indeed, its stabilization time is at most $4n_{\text{maxCC}}$ rounds, where $n_{\text{maxCC}}$ is the maximum number of processes in a connected component. Moreover, its stabilization time in moves is polynomial in usual cases. Precisely, we exhibit polynomial upper bounds on its stabilization time in moves that depend on the particular problems we consider. To illustrate the versatility of our approach, we propose instantiations of Scheme solving classical spanning-tree-like problems. Assuming an input set of roots but no identifiers, we propose two instantiations to compute a spanning forest of unconstrained, resp. shortest-path, trees, with non-rooted components detection.† The first instantiation stabilizes in $O(n_{\text{maxCC}}n)$ moves, which matches the best known step complexity for spanning tree construction [Cou09] with explicit parent pointers. The second instantiation stabilizes in $O(\omega_{\max}n_{\text{maxCC}}^2n)$ moves ($\omega_{\max}$ is the maximum weight of an edge). This move complexity also matches the best known move complexity for this problem [DJ16]. Then, assuming the network is identified (i.e., processes have distinct IDs), we propose two instantiations of Scheme for electing a leader in each connected component and building a spanning tree rooted at each leader. The first instantiation stabilizes in $O(n_{\text{maxCC}}^2n)$ moves, matching the best known step complexity for leader election [ACD+17]. The second instantiation stabilizes in $O(n_{\text{maxCC}}^2n)$ moves, but the built spanning tree is a breadth-first search tree.
From these various examples, one can easily derive other silent self-stabilizing spanning-tree-like constructions.
2. Algorithm Scheme
Algorithm 1: Algorithm Scheme, code for any process $u$
Inputs
- $\text{canBeRoot}_u$: a boolean value; it is true if $u$ can be a root
- $\text{pname}_u$: name of $u$, which belongs to $\text{IDs} = \mathbb{N} \cup \{\bot\}$
Variables
- $st_u \in \{I, C, EB, EF\}$: the status of $u$
- $\text{parent}_u \in \{\bot\} \cup \text{Lbl}$: a pointer to a neighbor, or $\bot$
- $d_u \in \text{DistSet}$: the distance value associated to $u$
Predicates
- $P_{\text{root}}(u) \equiv \text{canBeRoot}_u \land st_u = C \land \text{parent}_u = \bot \land d_u = \text{distRoot}(u)$
- $P_{\text{abnormalRoot}}(u) \equiv \neg P_{\text{root}}(u) \land st_u \neq I \land [\text{parent}_u \notin \Gamma(u) \lor st_{\text{parent}_u} = I \lor d_u \prec d_{\text{parent}_u} \oplus \omega_u(\text{parent}_u) \lor (st_{\text{parent}_u} \neq st_u \land st_{\text{parent}_u} \neq EB)]$
- $P_{\text{updateNode}}(u) \equiv \exists v \in \Gamma(u) \mid st_v = C \land d_v \oplus \omega_u(v) \prec d_u$
- $P_{\text{updateRoot}}(u) \equiv \text{canBeRoot}_u \land \text{distRoot}(u) \prec d_u$
- $P_{\text{reset}}(u) \equiv st_u = EF \land P_{\text{abnormalRoot}}(u)$
Functions
- $\text{beRoot}(u)$: $st_u := C$; $\text{parent}_u := \bot$; $d_u := \text{distRoot}(u)$
- $\text{computePath}(u)$: if $P_{\text{updateRoot}}(u)$ then $\text{beRoot}(u)$; else $st_u := C$; $\text{parent}_u := \arg\min_{v \in \Gamma(u) \mid st_v = C}(d_v \oplus \omega_u(v))$; $d_u := d_{\text{parent}_u} \oplus \omega_u(\text{parent}_u)$
- $\text{Children}(u) = \{v \in \Gamma(u) \mid st_v \neq I \land \text{parent}_v = u \land d_v \succeq d_u \oplus \omega_v(u) \land (st_v = st_u \lor st_u = EB)\}$
Rules
- $R_U(u)$: $st_u = C \land P_{\text{nodeImp}}(u) \rightarrow \text{computePath}(u)$
- $R_{EB}(u)$: $st_u = C \land (P_{\text{abnormalRoot}}(u) \lor st_{\text{parent}_u} = EB) \rightarrow st_u := EB$
- $R_{EF}(u)$: $st_u = EB \land (\forall v \in \text{Children}(u), st_v = EF) \rightarrow st_u := EF$
- $R_I(u)$: $P_{\text{reset}}(u) \land \neg\text{canBeRoot}_u \land (\forall v \in \Gamma(u), st_v \neq C) \rightarrow st_u := I$
- $R_R(u)$: $P_{\text{reset}}(u) \land (\text{canBeRoot}_u \lor \exists v \in \Gamma(u) \mid st_v = C) \rightarrow \text{computePath}(u)$
$\dagger$. By non-rooted components detection, we mean that every process in a connected component that does not contain the root should eventually take a special state notifying that it detects the absence of a root.
According to the specific problem we consider, we may want to minimize the weight of the trees using some kind of distance. So, we assume that each edge \( \{u, v\} \) has two weights: \( \omega_u(v) \) denotes the weight of the arc \( (u, v) \), and \( \omega_v(u) \) denotes the weight of the arc \( (v, u) \). Both values belong to the domain DistSet. Let \( (\text{DistSet}, \oplus, \prec) \) be an ordered magma, i.e., \( \oplus \) is a closed binary operation on \( \text{DistSet} \) and \( \prec \) is a total order on this set. The definition of \( (\text{DistSet}, \oplus, \prec) \) is problem dependent. We assume that, for every edge \( \{u, v\} \) of \( E \) and for every value \( d \) of \( \text{DistSet} \), we have \( d \prec d \oplus \omega_u(v) \) and \( d \prec d \oplus \omega_v(u) \).
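As an illustration (ours, not part of the paper), the ordered magma \((\text{DistSet}, \oplus, \prec)\) and the progress assumption \(d \prec d \oplus \omega\) can be sketched as:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class OrderedMagma:
    """(DistSet, ⊕, ≺): `plus` must be a closed binary operation on the
    domain and `less` a total order.  Field names are ours."""
    plus: Callable[[Any, Any], Any]
    less: Callable[[Any, Any], bool]

# Shortest-path instance: DistSet = ℕ, ⊕ = +, ≺ = <.
SP = OrderedMagma(plus=lambda a, b: a + b, less=lambda a, b: a < b)

def progress_ok(m: OrderedMagma, d, weight) -> bool:
    """Check the paper's assumption d ≺ d ⊕ ω for one arc weight."""
    return m.less(d, m.plus(d, weight))
```

With positive integer weights the assumption holds (`progress_ok(SP, 3, 1)` is true), whereas a zero weight would violate it, which is why the instantiations below require weights in \(\mathbb{N}^*\).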
The silent self-stabilizing algorithm Scheme (see Algorithm 1 for its code) converges to a terminal configuration where a specified spanning forest (maybe a single spanning tree) is distributedly defined. Each process \( u \) uses as input a name \( \text{pname}_u \) (with \( \text{pname}_u = \bot \) for every process \( u \) if the network is anonymous), a constant boolean value \( \text{canBeRoot}_u \), which is true if \( u \) is allowed to be the root of a tree, and in this latter case a problem-dependent distance \( \text{distRoot}(u) \), used when \( u \) is a root. Our scheme also uses a problem-dependent predicate \( P_{\text{nodeImp}}(u) \), with specific properties, that indicates to \( u \) whether its current estimated distance to the root (variable \( d_u \)) can be improved (decreased). A legitimate configuration is then defined as follows.
**Definition 1 (Legitimate configuration)** A legitimate configuration of Scheme is a configuration where every process \( u \) is in a legitimate state, i.e., \( u \) satisfies \( \neg P_{\text{nodeImp}}(u) \) and one of the following conditions:
1. \( P_{\text{root}}(u) \);
2. there is a process satisfying \( \text{canBeRoot} \) in the connected component \( V_u \) containing \( u \), \( st_u = C \) (for Correct), and \( u \in \text{Children}(\text{parent}_u) \);
3. there is no process satisfying \( \text{canBeRoot} \) in \( V_u \) and \( st_u = I \) (for Isolated).
In any given configuration, every process \( u \) satisfies exactly one of the following cases: (1) \( u \) is isolated, i.e., it has status \( I \); (2) \( u \) is a normal root, i.e., \( P_{\text{root}}(u) \) holds; (3) \( u \) points to some neighbor and the state of \( u \) is coherent w.r.t. the state of its parent, i.e., \( u \in \text{Children}(\text{parent}_u) \); (4) \( u \) is an abnormal root, i.e., \( P_{\text{abnormalRoot}}(u) \) holds. In this latter case, we want to correct the state of \( u \) while avoiding the following situation: \( u \) leaves its abnormal tree \( T \); this removal creates some new abnormal trees, each of those being rooted at a previous child of \( u \); and later \( u \) joins one of those (created) abnormal trees. (This issue is sometimes referred to as the count-to-infinity problem.) Hence, the idea is to freeze \( T \) before removing any node from it. This is done as in a “Propagation of Information with Feedback”: from an abnormal root, the status \( EB \), for Error Broadcast, is broadcast down in the tree using rule \( R_{EB} \). Then, once the \( EB \)-wave reaches a leaf, the leaf initiates a convergecast \( EF \)-wave (Error Feedback) using rule \( R_{EF} \). Once the abnormal root gets status \( EF \), the tree is frozen and can be safely deleted from its abnormal root toward its leaves. At this point, an abnormal root \( u \) can either become the root of a new normal tree or join another tree, via rule \( R_R(u) \), depending on which option gives it the smaller distance; otherwise, \( u \) becomes isolated via rule \( R_I(u) \), when \( \neg\text{canBeRoot}_u \) holds and \( u \) has no neighbor with status \( C \). In parallel, rules \( R_U \) are executed to reduce the weight of the trees when necessary, i.e., when \( P_{\text{nodeImp}}(u) \) holds. A detailed analysis of our algorithm allows us to prove the following result.
**Theorem 1** Any terminal configuration of Algorithm Scheme is legitimate, and vice versa. Moreover, Algorithm Scheme is silent self-stabilizing under the distributed unfair daemon, has a bounded move (and step) complexity, and stabilizes in at most \( 4n_{\text{maxCC}} \) rounds from any configuration.
Roughly speaking, we define a \( GC \)-segment, for any connected component \( GC \), as a part of the execution between two removals of non-frozen abnormal trees. A key property of our algorithm is that non-frozen abnormal trees are never created. Combined with other properties, this allows us to prove that there are at most \( n_{\text{maxCC}} + 1 \) \( GC \)-segments. The sequence of rules executed by a process \( u \) of \( GC \) during a \( GC \)-segment belongs to the following language: \( (R_I + \epsilon)(R_R + \epsilon)(R_U)^*(R_{EB} + \epsilon)(R_{EF} + \epsilon) \). This further leads to the two following key results.
**Theorem 2** If the number of \( R_U \) executions during a \( GC \)-segment by any process of \( GC \) is bounded by \( nb_U \), then the total number of moves (and steps) in any execution is bounded by \( (nb_U + 4)(n_{\text{maxCC}} + 1)n \).
**Theorem 3** When all weights are strictly positive integers bounded by \( \omega_{\max} \), and \( \oplus \) is the addition operator, the stabilization time of Scheme in moves (and steps) is at most \( (\omega_{\max}(n_{\text{maxCC}} - 1) + 5)(n_{\text{maxCC}} + 1)n \).
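For a quick numeric sanity check of the Theorem 2 bound (the \( nb_U \) value chosen below is an illustrative assumption of ours, not taken from the paper):

```python
def moves_bound_thm2(nb_u: int, n_maxcc: int, n: int) -> int:
    """Theorem 2 upper bound on the total number of moves (and steps):
    (nb_U + 4)(n_maxCC + 1) n."""
    return (nb_u + 4) * (n_maxcc + 1) * n

# E.g., one R_U execution per GC-segment (illustrative), 10 processes,
# all in a single connected component:
bound = moves_bound_thm2(1, 10, 10)
```

When \( nb_U \) is a constant, the bound is \( O(n_{\text{maxCC}} \cdot n) \), which is the flavour of the Forest complexity stated below.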
Unconstrained and Shortest-Path Spanning Forest. Given an input set of processes $\text{rootSet}$, and assuming (strictly) positive integer weights for each edge, Algorithms Forest and SPF are the instantiations of Scheme with the parameters given in Algorithm 2.
Algorithm 2: Parameters for any process $u$ in Algorithms Forest and SPF
Inputs:
1. $\text{canBeRoot}_u$ is true if and only if $u \in \text{rootSet}$.
2. $\text{name}_u$ is the identifier of $u$ (n.b., $\text{name}_u \in \mathbb{N}$).
3. $\omega_u(v) \in \mathbb{N}^*$, for every $v \in \Gamma(u)$.
Ordered Magma: (1) $\text{DistSet} = \mathbb{N}$, (2) $i_1 \oplus i_2 = i_1 + i_2$, (3) $i_1 \prec i_2 \equiv i_1 < i_2$, and (4) $\text{distRoot}(u) = 0$.
Predicate:
- Forest: $P_{\text{nodeImp}}(u) \equiv P_{\text{updateRoot}}(u)$
- SPF: $P_{\text{nodeImp}}(u) \equiv P_{\text{updateNode}}(u) \lor P_{\text{updateRoot}}(u)$
Algorithm Forest (resp. SPF) computes (in a self-stabilizing manner) an unconstrained (resp. shortest-path) spanning forest in each connected component of $G$ containing at least one process of $\text{rootSet}$. The forest consists of trees rooted at each process of $\text{rootSet}$. Moreover, in any component containing no process of $\text{rootSet}$, the processes eventually detect the absence of roots by taking the status $I$ (Isolated).
By Theorem 2 (resp. Theorem 3), Algorithms Forest and SPF self-stabilize to a terminal legitimate configuration in at most $O(n_{\text{maxCC}}n)$ (resp. $O(\omega_{\max}n_{\text{maxCC}}^2n)$) moves, where $\omega_{\max}$ is the largest edge weight.
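The terminal configuration that SPF converges to can be described by a centralized reference computation (a sketch of ours using multi-source Dijkstra; the distributed algorithm itself proceeds only by the local rules of Scheme):

```python
import heapq

def spf_reference(n, edges, root_set):
    """Centralized reference for SPF's terminal configuration: multi-source
    shortest paths from rootSet; nodes in rootless components end up
    Isolated (dist = parent = None)."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:             # positive integer weights, as assumed
        adj[u].append((v, w))
        adj[v].append((u, w))
    dist, parent = [None] * n, [None] * n
    pq = [(0, r, None) for r in root_set]
    heapq.heapify(pq)
    while pq:
        d, u, p = heapq.heappop(pq)
        if dist[u] is not None:       # already settled with a smaller dist
            continue
        dist[u], parent[u] = d, p
        for v, w in adj[u]:
            if dist[v] is None:
                heapq.heappush(pq, (d + w, v, u))
    return dist, parent

# Two components; component {3, 4} contains no root, so it is "Isolated".
dist, parent = spf_reference(5, [(0, 1, 2), (1, 2, 1), (0, 2, 5), (3, 4, 1)], {0})
```

The `parent` array plays the role of the \( \text{parent}_u \) pointers and `dist` the role of the \( d_u \) values in a legitimate configuration.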
Leader Election Algorithms. Assuming the network is identified (each node has a unique identifier), Algorithm LEM and LEM$_{BFS}$ are the instantiations of Scheme with the parameters given in Algorithm 3.
Algorithm 3: Parameters for any process $u$ in Algorithm LEM and LEM$_{BFS}$
Inputs:
1. $\text{canBeRoot}_u$ is true for any process.
2. $\text{name}_u$ is the identifier of $u$ (n.b., $\text{name}_u \in \mathbb{N}$).
3. $\omega_u(v) = (\omega(v), 1)$ for every $v \in \Gamma(u)$.
Ordered Magma: (1) $\text{DistSet} = \mathbb{N} \times \mathbb{N}$; for every $d = (a, b) \in \text{DistSet}$, we let $d.id = a$ and $d.h = b$;
2. $(id_1, i_1) \oplus (id_2, i_2) = (id_1, i_1 + i_2)$;
3. $(id_1, i_1) \prec (id_2, i_2) \equiv (id_1 < id_2) \lor [(id_1 = id_2) \land (i_1 < i_2)]$;
4. $\text{distRoot}(u) = (\text{name}_u, 0)$.
Predicate:
- LEM: $P_{\text{nodeImp}}(u) \equiv (\exists v \in \Gamma(u) \mid st_v = C \land d_v.id < d_u.id) \lor P_{\text{updateRoot}}(u)$
- LEM$_{BFS}$: $P_{\text{nodeImp}}(u) \equiv P_{\text{updateNode}}(u) \lor P_{\text{updateRoot}}(u)$
In each connected component, Algorithms LEM and LEM$_{BFS}$ elect the process $u$ of smallest identifier (i.e., $P_{\text{leader}}(u)$ holds) and build a tree rooted at $u$ that spans the whole connected component. Algorithm LEM builds a tree of arbitrary topology; Algorithm LEM$_{BFS}$ builds a breadth-first search tree.
By Theorem 3, Algorithm LEM$_{BFS}$ self-stabilizes to a terminal legitimate configuration in at most $O(n_{\text{maxCC}}^2n)$ moves. By Theorem 2, Algorithm LEM self-stabilizes to a terminal legitimate configuration in at most $(2n_{\text{maxCC}} + 4)(n_{\text{maxCC}} + 1)n$ moves (i.e., $O(n_{\text{maxCC}}^2n)$ moves), since during a $GC$-segment a process can only execute R$_U$ to improve its ID.
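The ordered magma of Algorithm 3 can be sketched as follows (our code; `le_plus` and `le_less` are illustrative names for \( \oplus \) and \( \prec \)):

```python
# Leader-election magma: DistSet = ℕ×ℕ, a pair (root id, hop count).
def le_plus(d, w):
    """(id1, i1) ⊕ (id2, i2) = (id1, i1 + i2): keep the id, add hops."""
    (rid, hops), (_, step) = d, w
    return (rid, hops + step)

def le_less(d1, d2):
    """Lexicographic ≺: smaller id first, then fewer hops.  Python tuples
    already compare lexicographically."""
    return d1 < d2
```

The smallest identifier wins regardless of distance (`le_less((2, 9), (5, 0))` holds), and ties on the identifier are broken by the hop count, which is what makes LEM$_{BFS}$ produce breadth-first search trees.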
References
On the Relationship between Technical Debt Management and Process Models
N. Rios
Federal University of Rio de Janeiro
S. Freire
Federal University of Bahia and Federal Institute of Ceará
B. Pérez
University of Los Andes and Francisco de Paula Santander University
C. Castellanos
University of Los Andes
D. Correal
University of Los Andes
M. Mendonça
Federal University of Bahia
D. Falessi
University of Rome “Tor Vergata”
C. Izurieta
Montana State University and Idaho National Laboratory
C. Seaman
University of Maryland Baltimore County
R. Spínola
Salvador University and State University of Bahia
Abstract—As technical debt (TD) potentially occurs as a result of poor decisions that affect software development tasks, one might expect that practitioners following different process models, such as agile, hybrid or traditional, will perceive and manage the effects of TD differently. This study investigates the potential relationship between development process models and TD effects and their management by surveying 432 practitioners from software organizations in Brazil, Chile, Colombia, and the United States. Results indicate that, although opinions about debt prevention and repayment are the same regardless of the process model, there are differences in how practitioners monitor and feel the effects of TD. Development teams can use our findings to have a clearer view of the possible effects and managerial influence of TD in their projects for each type of process model.
Technical debt (TD) contextualizes the problem of taking design shortcuts in pending development tasks as a type of debt that brings a short-term benefit to the project, usually in terms of increased development speed or shortened time to market, but which may have to be paid with interest later on in the software development process [1]. As TD is potentially a result of poor decisions related to development tasks, one might expect that the inherent context resulting from following different process models, such as agile, hybrid or traditional [2], leads to differences in how teams view the effects of TD, and thus how they manage TD.
The effects of TD can impact software projects in several ways [3]. Having information about potential TD effects aids in the prioritization of TD items to pay off, by supporting a more precise impact analysis and the identification of corrective actions to minimize possible negative consequences for the project. It is even more useful to know what the common TD
effects are in specific contexts, e.g. when following a particular process model, so that projects can choose TD repayment strategies that ameliorate effects that they are likely to experience, as opposed to effects that are less likely to apply to their project.
TD management encompasses activities related to TD prevention (applying good practices that minimize the occurrence of debt [4]), TD monitoring (of the costs and benefits of unresolved TD over time [5]), and TD repayment (the elimination of identified TD items [4]). One might expect that there are differences in how debt items are prevented, monitored or repaid under different process models. For example, due to agile’s focus on reacting to changes, practitioners following agile processes might be better able to prevent the occurrence of debt items. On the other hand, due to the high emphasis on documentation in traditional process models, debt repayment could be costlier (as multiple artifacts are likely to be impacted by any change).
Although the aforementioned expectations and many others could be valid, the fact is that despite current efforts to understand TD effects [3,9,10] and TD management [1,4,5], very little is known about the relationship between TD effects, TD management, and process models. We approach this topic by surveying 432 practitioners from different countries in the context of the InsighTD Project [3] (see Sidebar 1). Such investigation can provide guidance for practitioners in several ways:
• Understanding the impact of the process model on TD management. This could avoid, for example, “silver bullet” thinking: the belief that just by choosing a specific process model, TD management becomes trivial;
• Knowing the effects that are more prone to affect their processes would help the team narrow down the choices for TD management strategies, including repayment options, impact reduction, and risk management;
• Reducing the chances of making decisions based only on subjective opinions or assumptions about TD. By relying only on their personal beliefs, practitioners might face several risks in their projects [11].
(Sidebar 1) The InsighTD Project
The InsighTD project started in 2017 and is based on an industry survey covering six main areas of investigation: TD concept, causes, effects, prevention, monitoring, and repayment. Previous InsighTD publications have addressed the following topics:
• What are the effects of different types of TD?
• When, how, and why do practitioners pay off TD in their projects?
• What do teams do to prevent TD and how successful are they?
Until recently, the InsighTD data set has not been large enough to partition according to the context questions asked in the survey (such as process model). This paper represents the first opportunity to explore differences between different types of projects in the InsighTD data. See http://www.tdsurvey.com/publication-map/ to learn more about other InsighTD investigations.
THE SURVEY
The work discussed herein uses a subset of 14 questions from the InsighTD questionnaire. Q1 to Q7 characterize the survey participants and their work context. In Q8, we present the definitions of agile, hybrid, and traditional process models [2] and ask the participants which one is followed by their development team. Next, we ask participants to define TD in their own words in Q10. Then, we present a TD definition adapted from McConnell [8]. Q13 asks for an example of a TD item. Q22, Q24, and Q26 are yes/no questions about TD prevention, monitoring, and repayment, respectively. Finally, Q20 is an open-ended question about the effects of TD. The answers to this question were analyzed qualitatively using the open coding technique [3] by at least three researchers.
Only responses from participants who provided a valid definition of TD for Q10 and a valid example of a TD item in Q13 were considered for analysis [3]. Thus, the survey obtained 432 valid answers from developers in Brazil (107), Chile (92), Colombia (134), and the United States (99). Most respondents identified themselves as proficient (33%), followed by competent (29%), expert (26%), beginner (11%), and novice (1%), indicating that, in general, the questionnaire was answered by professionals with experience in their functions.
According to Q8, 44% of the respondents followed hybrid, 41% followed agile and 14% followed traditional process models. We first checked for associations between the process model type and possible confounding factors (company size, team size, participant’s role, experience level of participants, system size, and system age) using
Pearson’s Chi-squared test and Fisher’s exact test. This check did not find any statistically significant associations. Thus, we can conclude that our dataset does not have significantly different distributions of context factors across the three process models.
If the reader is interested in additional details about the demographics, analysis procedures, and their execution, please access the auxiliary material at http://bit.ly/3jgQJBO.
IS TD MANAGEMENT DIFFERENT IN AGILE, HYBRID, AND TRADITIONAL PROCESS MODELS?
TD management encompasses a number of activities [4,5], including prevention, monitoring, and repayment. The InsighTD survey addresses the TD management topic by asking participants about these three activities specifically through yes/no questions. We used descriptive statistics and hypothesis testing to investigate the association between process model and how practitioners dealt with TD prevention, monitoring, and repayment. First, we divided the dataset into three groups by process model (agile, hybrid or traditional) along two dimensions (yes/no) for each TD management question. Then, we calculated the number of yes and no answers in each group for each question Q22 (prevention), Q24 (monitoring), and Q26 (repayment). Lastly, we investigated whether there are significant differences between the groups using Pearson’s Chi-squared statistical test and the V-Cramer statistic.
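As a concrete sketch of this procedure, the chi-squared statistic and the V-Cramer (Cramér's V) value for a yes/no-by-process-model contingency table can be computed as follows. The counts below are hypothetical placeholders, not the survey data:

```python
def chi2_cramers_v(table):
    """Pearson's chi-squared statistic and Cramer's V for an r x c
    contingency table, given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    # Cramer's V normalizes chi-squared by n and the smaller dimension.
    k = min(len(table), len(table[0])) - 1
    return chi2, (chi2 / (n * k)) ** 0.5

# Hypothetical yes/no counts (rows: agile, hybrid, traditional).
counts = [[120, 57], [130, 60], [35, 27]]
chi2, v = chi2_cramers_v(counts)
```

The p-values reported in the article additionally require comparing the statistic against the chi-squared distribution with (r-1)(c-1) degrees of freedom, e.g. via `scipy.stats.chi2_contingency`.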
Figure 1a shows the percentage of each group indicating whether the TD item could be prevented. The cells of the table show the percentages of respondents, with actual totals per line and per column. Clearly, the distributions of answers are not different among the three process models, as confirmed by the V-Cramer statistic (0.0437) and the Pearson’s Chi-squared statistical test (p-value 0.663). No matter which process model the development team was following, most practitioners thought that the TD could have been prevented.
This result is interesting because of the perception that agile’s focus on reacting to changes would provide more opportunities to prevent debt. But these data show that practitioners following a traditional model still see opportunities for TD prevention. Clearly there is room for process improvement towards the prevention of TD, no matter which process model is being followed.
Figure 1b and Table 1 show the results for the question about TD monitoring, and indicate larger differences, especially between agile and traditional projects. Participants that follow agile process models are more inclined to monitor TD in comparison to those using traditional process models.
**Table 1. Pearson’s Chi-squared test and V-Cramer statistic for the TD monitoring question by process model.**
<table>
<thead>
<tr>
<th>Process Model</th>
<th>Pearson’s Chi-squared test p-value (significant at the 95% confidence level?)</th>
<th>V-Cramer</th>
</tr>
</thead>
<tbody>
<tr>
<td>Agile, Hybrid and Traditional</td>
<td>0.009421 (Yes)</td>
<td>0.1128245</td>
</tr>
<tr>
<td>Agile and Traditional</td>
<td>0.005949 (Yes)</td>
<td>0.1675978</td>
</tr>
<tr>
<td>Agile and Hybrid</td>
<td>0.022910 (Yes)</td>
<td>0.1128245</td>
</tr>
<tr>
<td>Hybrid and Traditional</td>
<td>0.254300 (No)</td>
<td>0.06247666</td>
</tr>
</tbody>
</table>
This result has two immediate implications. Firstly, the effects of TD are commonly related to internal quality issues and many of them (e.g. low internal quality, low maintainability, rework) are directly related to coding activities [3]. As coding occupies more attention and time in agile projects (as compared to traditional ones), practitioners on agile projects would also more immediately benefit from TD monitoring. Secondly, given the increased planning and monitoring burden in traditional processes (as compared to agile processes), TD monitoring might be seen as an unnecessary addition to that burden.
Figure 1c shows the percentage of each group (process model) indicating if the TD item was paid off. While there are some differences among groups, they are not large or significant (V-Cramer statistic 0.0718, Chi-squared p-value of 0.329). TD repayment did not happen in most cases regardless of the process model they followed.
In conclusion, although the overall tendency is not to repay the existing TD, regardless of the process model, we found a substantial percentage of items that were repaid (a mean of 40% of the cited items). Specifically, participants that follow agile process models seem to be slightly more prone to eliminate TD. This result can also be explained by the fact that debt items are more commonly found or reported in coding-related activities. Those activities are the subject of much focus in agile processes and, thus, the benefits of repaying the debt would be felt more immediately in agile processes. On the other hand, 63% of participants from traditional processes indicated that the debt items were not repaid. This high percentage indicates that traditional processes tend to suffer more from the negative effects of the presence of TD, adding to the well-known problems that traditional processes already face.
Lastly, we noticed advances in terms of TD monitoring, which was performed in most of the cases (54% overall). However, TD monitoring is significantly influenced by the process model being followed: agilists are more likely to monitor TD than hybridists and traditionalists.
ARE THE EFFECTS OF TD DIFFERENT IN AGILE, HYBRID, AND TRADITIONAL SOFTWARE PROJECTS?
Through qualitative analysis of the responses to Q20 [3], we identified 79 effects of TD in software projects. We then divided the cited effects into agile, hybrid, and traditional subsets based on Q8. The effects in each subset were ordered according to their frequency, i.e. the number of participants who said that an effect was an impact of the TD example they were describing.
To better understand the relationship between TD effects and process models, we quantitatively measured the similarity of the effect ranking lists for each process model using rank-biased overlap (RBO). This analysis indicates whether, in general, the process models suffer from the same types of TD effects, and with the same frequency. RBO supports top-weighted ranked lists, thus, the first elements in a list have more impact on the similarity index than the latter ones [6].
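For illustration, a truncated version of RBO (after Webber et al. [6]) can be sketched as follows; the persistence parameter p controls how strongly the comparison is weighted toward the top of the lists:

```python
def rbo(list_a, list_b, p=0.9):
    """Truncated rank-biased overlap of two ranked lists.

    At each depth d, the overlap of the two prefixes (divided by d) is
    weighted by p**(d-1), so earlier ranks count more toward the score.
    """
    depth = min(len(list_a), len(list_b))
    score = 0.0
    for d in range(1, depth + 1):
        overlap = len(set(list_a[:d]) & set(list_b[:d]))
        score += (p ** (d - 1)) * (overlap / d)
    return (1 - p) * score
```

Identical lists score highest and disjoint lists score 0; note that this truncated sum does not reach 1 for short lists unless a residual term is added, as in the full formulation.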
Figure 2 shows the RBO comparison between the ranked lists of effects identified for each process model. Each line of the figure represents a comparison between two of the process models. As the persistence parameter p increases, more elements of the lists are included in the comparison. An RBO value closer to 1 indicates that the lists overlap more strongly. As one might expect, the largest differences are found between the traditional and agile process models. The hybrid model had more similarity to both the agile and traditional models than they did to each other, although hybrid was much more similar to the traditional model than to the agile one. Thus, we conclude from Figure 2 that agile software development projects experience the effects of TD in different ways than non-agile projects.

Still analyzing Figure 2, we can also observe that as RBO includes more effects in the comparison (as p increases), the overlap between the process models decreases, revealing that the most commonly cited effects on each list are very similar. Table 2 presents the five most commonly cited effects by process model, which are indeed very similar. Thus, just looking at the most common effects is not enough to really understand the differences between the process models. The real difference is not in which effects are high on the list, but in the balance, or relative probabilities, of the different effects. For instance, although delivery delay is the most common effect in all process models, it is more commonly faced in traditional models (25.8% of the cases) than in agile models (20.1%).
These findings can assist in decision making, incorporating a wider range of potential consequences into decision models that attempt to capture the long-term cost of TD. Although the overlap among the agile, hybrid, and traditional process models reveals that they share the majority (65%) of identified effects, these effects can impact projects slightly differently depending on which process model they are following. Thus, new approaches to mitigate TD effects should consider how frequently each effect impacts the process model under use. A way to incorporate this probability information into decision making is with the probabilistic diagrams of effects of TD, which are discussed in the next subsection.
For those interested in the definitions of the most commonly cited effects, and the full list of effects, please access the auxiliary material.
**Probabilistic Diagrams of Effects of TD**
The probabilistic diagrams of effects of TD, proposed in [7], highlight the most common effects that occur as a result of a problem, helping practitioners identify effects that they would not have identified otherwise. Such diagrams can support TD effect analysis meetings, in which the effects of TD items can be analyzed to support the definition of action plans to deal with them.
We can specialize the diagrams of the effects of TD to show only the effects related to a specific development process model. This is useful because there are differences in how frequently each effect impacts software projects following specific process models. For example, instead of having a general effect diagram indicating that increased cost is an effect of TD that can impact the project, by using specialized diagrams an agile development team could see that it is less likely (5%) to experience this effect than a team following a traditional process (9.7%). Figure 3 presents such a specialized diagram for an agile process model (see the other diagrams in the auxiliary material). The diagram is created from the ranked list of effects for agile process models. It shows the probability of each possible effect impacting development teams and represents the effects using grey tones, where effects with higher probability are shown closer to the center and in darker tones.
Suppose that a team is using the diagram in Figure 3 to plan steps to mitigate the impacts of TD. One strategy would be to examine Figure 3, beginning with the Planning and Management category and with the effect delivery delay, and then work their way up to reduction in scope. For each effect, the team would decide (i) whether that effect applies to their project, and (ii) whether it is possible to work on actions that reduce this effect. After going through all the Planning and Management effects, the team could then move on to the Internal Quality Issues category, following the same process. Continuing in this way, the team could reflect on each category, until they have compiled a sufficient list of potential actions to minimize the impact of the presence of TD, or until they run out of time. In this way, a team can prioritize their time and improve the use of resources to focus on those effects most likely felt by the project, and whose elimination would be most likely to have a positive impact on it.
**Using a probabilistic diagram that is specialized to practitioners’ context, in particular the process model they are following, makes this process more efficient and effective.**
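The walk-through above can also be mimicked programmatically. The sketch below orders the effects of one diagram category by probability, using the agile percentages from Table 2; the category assignments, other than delivery delay's, are illustrative assumptions rather than the paper's actual diagram structure:

```python
# Agile effect probabilities taken from Table 2. Categories are
# assumptions for illustration, except delivery delay, which the text
# places under Planning and Management.
AGILE_EFFECTS = {
    "delivery delay":       (0.201, "Planning and Management"),
    "low external quality": (0.173, "External Quality Issues"),
    "low maintainability":  (0.168, "Internal Quality Issues"),
    "rework":               (0.162, "Internal Quality Issues"),
    "increased effort":     (0.089, "Planning and Management"),
}

def effects_in_category(effects, category):
    """Effects of one diagram category, most probable first, mirroring
    the suggested walk-through order for the probabilistic diagrams."""
    ranked = [(name, prob) for name, (prob, cat) in effects.items()
              if cat == category]
    return sorted(ranked, key=lambda item: -item[1])
```

A team would iterate over `effects_in_category(...)` for each category, deciding per effect whether it applies and whether mitigating actions exist.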
---
**Table 2. Top 5 TD effects cited frequency by the process model.**
<table>
<thead>
<tr>
<th>Ranking</th>
<th>Agile</th>
<th>Hybrid</th>
<th>Traditional</th>
</tr>
</thead>
<tbody>
<tr>
<td>1st</td>
<td>Delivery delay (20.1%)*</td>
<td>Delivery delay (24.6%)</td>
<td>Delivery delay (25.8%)</td>
</tr>
<tr>
<td>2nd</td>
<td>Low external quality (17.3%)</td>
<td>Low maintainability (18.3%)</td>
<td>Low maintainability (21%)</td>
</tr>
<tr>
<td>3rd</td>
<td>Low maintainability (16.8%)</td>
<td>Rework (16.2%)</td>
<td>Rework (16.1%)</td>
</tr>
<tr>
<td>4th</td>
<td>Rework (16.2%)</td>
<td>Low external quality (15.2%)</td>
<td>Low external quality (12.9%)</td>
</tr>
<tr>
<td>5th</td>
<td>Increased effort (8.9%)</td>
<td>Increased cost (6.8%)</td>
<td>Increased cost (9.7%)</td>
</tr>
</tbody>
</table>
* The number in parentheses represents the percentage of the number of citations of each effect by process.
After using the diagrams, development teams will have a list of effects to be monitored. That list is the starting point to plan actions to minimize the impact or eliminate the debt. To this end, practitioners can adopt TD repayment practices. For example, code and design refactoring are commonly used practices to eliminate debt items for the category internal quality issues in agile process models. A comprehensive list of repayment practices related to each of the category of effects of the diagram is presented in [1]. Interested readers can find a detailed description on the use of the diagrams and their benefits in [7].
**THREATS TO VALIDITY**
We sought to reduce conditions that limit our ability to generalize the results by achieving a diversity of surveyed participants. The number of participants also reduces the chances of biased results due to specific groups of participants.
Also, participants may act differently than they usually do because they are part of a study. To prevent it, we clearly explain the purpose of the study and ask the participants to answer questions based on their own experience. We also explicitly stated that the questionnaire is anonymous and that the collected data is analyzed without taking into consideration the participants’ identities.
The interested reader can find more details on the threats to validity in the auxiliary material.
**TAKEAWAYS FOR PRACTITIONERS**
InsighTD data indicate that the current practice in TD monitoring and repayment is still far from ideal. However, practitioners working in agile projects are more likely to invest in TD monitoring and repayment activities, signaling that TD management might be improved by investing in some agile practices, such as iterative development and tightly knit teams. There is also a common perception, cutting across process models, that TD can be prevented. Thus, investing in prevention initiatives is likely to be supported by practitioners in all types of projects.
InsighTD results also show that the most common TD effects are felt to different degrees in agile vs. non-agile projects, so prioritization strategies for TD mitigation activities should also differ. We have presented a mechanism for using this information, probabilistic diagrams of TD effects, to make better decisions about anticipating and avoiding the effects of TD. To illustrate the benefit of using diagrams specialized by process models, consider the following scenario. Suppose that an agile project team is using a probabilistic diagram to plan TD repayment and other improvement activities. Suppose also that the characteristics of their customer base dictate the need to deliver very high external quality. If this project team were using a generic probabilistic diagram based on data from traditional projects, then according to Table 2, the diagram would indicate that the main benefits of repaying the debt would be to mitigate the effects delivery delay, low maintainability, and rework. Thus, the team would be led to believe that TD repayment would not contribute towards their primary goal, high external quality, and they would make decisions that de-prioritize TD repayment. The impact on the project would be to ignore TD that leads to low external quality, raising the risk of affecting their top priority issue negatively. Thus, practitioners should use techniques based on historical data (such as probabilistic diagrams) that are specialized to the attributes of the current project, in particular the development process.
REFERENCES
Nicolli Rios (nicolli@cos.ufrj.br) is a Post-Doc researcher at COPPE at Federal University of Rio de Janeiro. She is also a researcher at the Technical Debt Research Team.
Sávio Freire (savio.freire@ifce.edu.br) is a PhD student in the Department of Computer Science at the Federal University of Bahia and an Assistant Professor at the Federal Institute of Ceará. He is a researcher at the Technical Debt Research Team.
Boris Pérez (borisperezg@ufps.edu.co) is a Ph.D. candidate at the Department of Systems and Computing Engineering, Universidad de Los Andes, Colombia.
Camilo Castellanos (cc.castellanos87@uniandes.edu.co) is a Ph.D. candidate at the Department of Systems and Computing Engineering, Universidad de Los Andes, Colombia.
Dario Correal (dcorreal@uniandes.edu.co) is an Associate Professor of the Department of Systems and Computing at the University of Los Andes.
Manoel Mendonça (manoel.mendonca@ufba.br) is a Professor of computer science at the Federal University of Bahia (UFBA). He acted as the Founding Director of the Fraunhofer Center for Software and Systems Engineering at UFBA.
Davide Falessi is an Assistant Professor (RTDb) at the University of Rome “Tor Vergata”, Italy.
Clemente Izurieta (clemente.izurieta@montana.edu) is an Associate Professor of Computer Science in the Gianforte School of Computing at Montana State University. He is also the CTO of Authors A.I. (authors.ai).
Carolyn Seaman (cseaman@umbc.edu) is a Professor of Information Systems at the University of Maryland Baltimore County (UMBC). She is also the Director of the Center for Women in Technology, also at UMBC.
Rodrigo Spinola (rodrigo.spinola@unifacs.br) is a Professor of Software Engineering at the Salvador University where he leads the Technical Debt Research Team. He is also a Visiting Professor at State University of Bahia.
Abstract
The recognition of the goal a user is pursing when interacting with a software application is a crucial task for an interface agent as it serves as a context for making opportune interventions to provide assistance to the user. The prediction of the user goal must be fast and a goal recognizer must be able to make early predictions with few observations of the user actions. In this work we propose an approach to automatically build an intention model from a plan corpus using Variable Order Markov models. We claim that following our approach, an interface agent will be capable of accurately ranking the most probable user goals in a time linear to the number of goals modeled.
1 Introduction
Interface Agents [Maes, 1994] are an assistive technology that emerged to provide proactive and reactive assistance to human users in their computer-based tasks in a personalized manner. To accomplish their task of assisting users, interface agents are designed to learn the interests, preferences, priorities and needs of their user, following the metaphor of a personal assistant. However, interface agents not only have to learn the user's preferences and habits regarding the use of the application itself, but should also consider what the user's current goal is before initiating an interaction with him or her. Considering the status of the user's attention (i.e. the goal he is pursuing) and the uncertainty about the user's intentions are critical factors for the effective integration of automated services with direct manipulation interfaces [Horvitz et al., 1998]. For these reasons, it is desirable to build agents capable of detecting the user's goal as soon as possible, so that the agent can predict opportune moments for gaining the user's attention.
A correct and early detection of the user's goal will keep the agent from interrupting the user at an inopportune moment. Users generally do not want to be interrupted while working on a specific task, unless the interruption is strongly related to the task they are performing [Whitworth, 2005]. By considering the user's intention, the agent will be able to respond to his requirements always within the realm of his current intention. For example, if the agent observes that the user is scheduling a work meeting for the following day, the agent can offer to automatically complete the required information and to send an email to each participant of the meeting, provided that it knows the user's preferences about the kind of meeting he is scheduling.
With this purpose, plan recognition aims at identifying the goal of a subject based on the actions he or she performs in an environment. The goal usually has one or more associated plans that can predict the user's subsequent behavior. Inputs to a plan recognizer are generally a set of goals the agent expects the user to carry out in the domain, a set of plans describing the way in which the user can reach each goal, and an action observed by the agent. The plan recognition process itself consists of predicting the user's goal and determining how the observed action contributes to reaching it.
Goal recognition is a special case of plan recognition in which only the goal is recognized. Goal recognition is used in domains in which a fast detection of just the user's goal is preferable to a more precise but time-consuming detection of the complete user plan.
In this work we tackle the problem of automatically acquiring a model of a user’s goals and the subsequent detection of the user’s current goal by means of variable order Markov models. The algorithm for learning such models is based on the observation of the actions a user performs on a software application and the goal that motivates the execution of those actions. The learning algorithm builds a model of the actions the user performs to achieve each goal. This model will then enable an interface agent both to detect the user's intention at any given moment and to predict the actions or sequences of actions that he will probably perform next to achieve his goal. The agent can use this information as a context for future assistance, so as not to bother the user with interventions that are not related to his main goal.
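As a rough sketch of goal ranking (not the variable order Markov model itself), candidate goals can be scored by the likelihood of the observed actions under a simple per-goal action-frequency model learned from traces. All goal and action names below are hypothetical:

```python
import math
from collections import Counter

def rank_goals(goal_traces, observed):
    """Rank candidate goals by the log-likelihood of the observed
    actions under a per-goal unigram model with add-one smoothing.
    Scoring runs in time linear in the number of goals."""
    scores = {}
    for goal, traces in goal_traces.items():
        counts = Counter(action for trace in traces for action in trace)
        total = sum(counts.values())
        vocab = len(counts) + 1  # +1 slot for unseen actions
        scores[goal] = sum(
            math.log((counts[a] + 1) / (total + vocab)) for a in observed)
    return sorted(scores, key=scores.get, reverse=True)
```

A unigram model ignores action order; the Markov models discussed next capture exactly that sequential structure.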
2 Background and Related Work
Plan and goal recognition systems can be roughly grouped in two main categories: Consistency and Probabilistic approaches.
---
1 How the information about the current user goal is used by the interface agent is outside the scope of this work.
Consistency approaches face the problem by determining which of an input set of goals is consistent with the observed tasks. A goal $G$ is consistent with an observed task sequence $A$ if $A$ might have been executed in service of $G$. Kautz [Kautz, 1991] provided the first formal theory of plan recognition in which plans and goals are represented as an event hierarchy which describes all the behavior that the user might exhibit in some particular domain. Every observed action is part of one or more top-level plans, and the plan recognition task is to minimize the set of top-level plans sufficient to explain the observed actions. The plan recognizer presented in [Lesh, 1998] uses a consistency graph that represents the relations between the actions and possible goals of the domain. The system iteratively applies pruning rules which remove from the graph the goals that are not in any consistent plan, given the observed actions. COLLAGEN [Rich et al., 2001], on the other hand, uses a set of tasks distinguishing between primitive and high level tasks. Then, it uses recipes to break down non primitive tasks in sub-goals. Recipes are presented as functions that map a task to a plan to perform that task.
Probabilistic approaches to plan recognition, on the other hand, mainly make use of Bayesian networks [Charniak and Goldman, 1993], [Horvitz et al., 1998], [Huber and Simpson, 2004] and Markov models [Davison and Hirsh, 1998], [Gorniak and Poole, 2000], [Bui, 2003], [Blaylock and Allen, 2005].
Bayesian networks are a popular representation for reasoning under uncertainty because they combine a graphical representation of causal relationships with a sound Bayesian foundation. The directed acyclic graph structure of the network contains representations of both the conditional dependencies and independences between elements of the problem domain. The knowledge is represented by nodes called random variables and arcs representing the causal relationships between variables. The strengths of the relationships are described using parameters encoded in conditional probability tables (CPTs).
The structure of Markov models, on the other hand, is in the form of a state transition graph. This simple structure is due to its dependence in the Markov assumption to represent sequences of events, which states that the occurrence of the next event depends only on a fixed number of previous events. Given some previously observed events, the next event is predicted from the probability distribution of events that followed those observed in the past.
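A minimal first-order instance of this idea can be sketched as follows (the variable order models used in this work generalize it by conditioning on contexts of varying length); the training sequence is hypothetical:

```python
from collections import defaultdict

class FirstOrderMarkov:
    """Predicts the next action from transition counts observed so far."""

    def __init__(self):
        # counts[prev][next] = number of times `next` followed `prev`
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, action):
        """Possible next actions with probabilities, most probable first."""
        followers = self.counts.get(action, {})
        total = sum(followers.values())
        if total == 0:
            return []
        return sorted(((a, c / total) for a, c in followers.items()),
                      key=lambda item: -item[1])
```

Under the Markov assumption, only the most recent action (here, one) determines the distribution over next actions.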
Both kinds of approaches (consistency and probabilistic) can lead to accurate predictions providing that the plan library is complete and correct. However, probabilistic approaches can find the most probable intention when the observations so far enable more than one possible intention; consistency approaches cannot select between more than one possible intention and have to wait for a single consistent explanation, since they do not consider a priori likelihood of different plans. There are some problems with Bayesian networks, however, that complicate the task of plan recognition. These problems include that the result of the inference process is not sensitive to the order in which evidence is entered in the network, that cycles are not allowed by definition and that learning the structure and parameters of the network from data is a very complex and time consuming task.
Regarding the learning of the plan libraries, most of the previous approaches to the problem centered their efforts on the recognition process itself, based on a predefined hand-coded plan library [Kautz, 1991] [Charniak and Goldman, 1993] [Horvitz et al., 1998] [Lesh, 1998] [Goldman et al., 1999]. However, the success of a plan recognizer relies first on the correctness and completeness of the plan library. For this reason, in recent years researchers have paid special attention to the acquisition of plan libraries by learning regularities in the user behavior. Nevertheless, most of this research was devoted to learning the parameters of the model, such as probabilities, while the structure of the model itself remained fixed [Oliver et al., 2002] [Bui, 2003] [Liao et al., 2007] [Philipose et al., 2004] [Nguyen et al., 2005] [Duong et al., 2006]. On the other hand, few efforts were put into the task of learning plan libraries from the interaction history of a user with a software application, and the proposed approaches are limited in the kind of plan structures that they are able to model [Davison and Hirsh, 1998] [Bauer, 1999] [Gorniak and Poole, 2000] [Garland et al., 2001].
2 Variable Order Markov Intention Model
Markov models are a natural way of modeling sequences of actions observed along time. In its simplest form, a Markov chain is a stochastic process with the Markov property. Having the Markov property means that, given the present state, future states are independent of the past states. In other words, the description of the present state fully captures all the information that could influence the future evolution of the process. At each step the system may change its state from the current state to another state, or remain in the same state, according to a certain probability distribution. The changes of state are called transitions, and the probabilities associated with various state changes are called transition probabilities.
Markov chains of fixed order are a natural extension in which the future state depends on a fixed number $m$ of previous states. Although this extension is beneficial for many domains, these models have two main drawbacks. First, only models of very small order are of practical value, since the number of states of a Markov chain grows exponentially as its order is increased. Second, for sequences of actions performed by a user to achieve a given goal, the probability of the next performed action is not always determined by the same fixed number of previous actions. There is usually a previous context of variable length that determines the probability distribution of what the user may perform next.
Variable Order Markov (VOM) models arose as a solution to capture longer regularities while avoiding the size explosion caused by increasing the order of the model. In contrast to the Markov chain models, where each random variable in a sequence with a Markov property depends on a
fixed number of random variables, in VOM models this number of conditioning random variables may vary based on the specific observed realization, known as context. These models consider that in realistic settings, there are certain realizations of states (represented by contexts) in which some past states are independent from the future states leading to a great reduction in the number of model parameters [Rissanen, 1983].
Algorithms for learning VOM models over a finite alphabet \( \Sigma \) attempt to learn a probabilistic finite state automaton (PFA) which can model sequential data of considerable complexity. In contrast to M-order Markov models, which attempt to estimate conditional distributions of the form \( P(r|s) \), with \( s \in \Sigma^N \) and \( r \in \Sigma \), VOM algorithms learn such conditional distributions where context lengths \( |s| \) vary in response to the available statistics in the training data. Thus, VOM models provide the means for capturing both large and small order Markov dependencies based on the observed data.
Ron et al. introduced an algorithm for learning VOM models from data [Ron et al., 1996] and Armentano [Armentano, 2008] extended this algorithm to work incrementally as new data is available. This model is described using a subclass of PFA, which they called Probabilistic Suffix Automata (PSA). For the construction of the PSA, a construction called Prediction Suffix Tree (PST) is used. PST preserves the minimal sub-sequences of variable length that are necessary for precise modeling of the given statistical source.
Transition probabilities are computed as follows. Let \( e^1, e^2, \ldots, e^m \) be the set of \( m \) training examples over the alphabet \( \Sigma \). The length of the \( i \)-th training example is given by \( l_i \), that is \( e^i = e^i_1, e^i_2, \ldots, e^i_{l_i} \) \( \forall e^i_j \in \Sigma \). The empirical probability of a sequence \( s \) of length \( l \) is computed as shown in Equation 1.
\[
\tilde{P}(s) = \frac{\sum_{i=1}^{m} \sum_{j=1}^{l_i - |s| + 1} \chi_{i,j}(s)}{\sum_{\{i \,:\, l_i \geq |s|\}} \left( l_i - (|s| - 1) \right)}, \quad (1)
\]
where \( \chi_{i,j}(s) = \begin{cases} 1 & \text{if } e^i_j, e^i_{j+1}, \ldots, e^i_{j+|s|-1} = s_1, s_2, \ldots, s_{|s|} \\ 0 & \text{otherwise} \end{cases} \)
The numerator is the number of times the sequence \( s \) was observed in the sample and the denominator is an estimation of the maximal number of possible overlapping occurrences a pattern of the same length could have had.
The conditional empirical probability of observing an action \( \sigma \) right after a given sequence \( s \) is given by Equation 2.
\[
\tilde{P}(\sigma|s) = \frac{\tilde{P}(s \cdot \sigma)}{\tilde{P}(s)} \quad (2)
\]
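As a sketch (the helper names are ours), the empirical probabilities of Equations 1 and 2 can be computed directly from the training sequences:

```python
def empirical_prob(s, examples):
    """Empirical probability of a subsequence s (Eq. 1): occurrences of s
    divided by the maximal number of overlapping positions a pattern of
    length |s| could occupy across all sufficiently long examples."""
    s = list(s)
    occurrences = sum(
        1
        for e in examples
        for j in range(len(e) - len(s) + 1)
        if e[j:j + len(s)] == s
    )
    positions = sum(len(e) - (len(s) - 1) for e in examples if len(e) >= len(s))
    return occurrences / positions if positions else 0.0

def conditional_prob(sigma, s, examples):
    """Empirical probability of observing sigma right after context s (Eq. 2)."""
    p_s = empirical_prob(s, examples)
    return empirical_prob(list(s) + [sigma], examples) / p_s if p_s else 0.0
```

For example, over the training sequences `abab` and `abc`, the context `ab` occurs 3 times out of 5 possible positions, giving an empirical probability of 0.6.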
The training algorithm [Armentano, 2008] involves building a suffix tree where each node is labeled with a string up to a predetermined length \( L \). It keeps track of the number of times each symbol \( \sigma \) is observed in each context \( s \). Transition probabilities are then computed using Equation 2 and a pruning procedure is applied to build a prediction suffix tree. The pruning procedure eliminates those nodes with similar prediction capabilities to other nodes corresponding to shorter contexts. It also eliminates nodes corresponding to rarely observed sub-sequences and nodes that do not predict any action with a significant probability value. All these pruning schemes are controlled by a set of parameters of the learning algorithm. Finally, a smoothing technique is applied to face the fact that an action that was not observed in a given context in the training examples might be observed when the PST is used for prediction. If \( \gamma_{\min} \) is the minimum probability we allow the PST to assign to any action in a given context (which corresponds to the execution of an unobserved task in the context), the algorithm collects a probability mass of \( |\Sigma| \gamma_{\min} \) from the observed tasks in the given context and then redistributes it among all the tasks in that context. To prevent negative numbers we must assure that \( \gamma_{\min} < 1/|\Sigma| \).
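The smoothing step can be sketched as follows, assuming a dictionary of conditional probabilities for one context (a simplification of the procedure described above):

```python
def smooth(probs, gamma_min):
    """Guarantee every task at least gamma_min probability mass in a context.

    probs: dict mapping each task in the alphabet to P(task | context).
    Requires gamma_min < 1 / |alphabet| so no probability goes negative.
    """
    n = len(probs)
    assert gamma_min < 1.0 / n
    # Collect a mass of |Sigma| * gamma_min from the observed tasks by
    # scaling them down, then hand gamma_min back to every task.
    return {task: (1 - n * gamma_min) * p + gamma_min for task, p in probs.items()}
```

Note that the smoothed values still sum to one: scaling the observed mass by \( 1 - |\Sigma|\gamma_{\min} \) and adding \( \gamma_{\min} \) to each of the \( |\Sigma| \) tasks exactly restores the collected mass.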
The PST structure is then converted to an equivalent Probabilistic Suffix Automata that is able to assign a probabilistic value to a sequence of observed actions in a time linear to the length of the sequence.
3 Goal Recognition with a Variable Order Markov Intention Model
Once the agent owns a model of a user’s intentions, it should be able to make use of it to recognize the user’s intention while he/she interacts with the application. When checking a given sequence \( r \) against a PSA, we initialize the automata in its unique initial state (that corresponding to the empty context) and for each task in the sequence we simply follow the states transitions and compute the probability of the sequence by multiplying the probability of the corresponding task at each state. This process is linear to the length of the sequence.
To perform goal recognition, the agent will have a PSA model for each goal. By having a separate model for each goal, the agent will be able to track several goals that are being pursued simultaneously by the user. The goal recognition process itself will consist in classifying any given sequence of tasks performed by the user into one of the possible user goals that is into one of the PSA models. For doing so, as the user performs tasks in the application the conventional use of PSAs (or PSTs) for classification will make the corresponding state transitions and compute the probability that each PSA \( k \) generated the given sequence of tasks as shown in Equation 3, where \( \gamma(s_{i-1}, \sigma_i) \) indicates the transition probability in state \( s_{i-1} \) for the observation \( \sigma_i \) (\( s_0 \) is the state corresponding to the empty sequence).
\[
PSA_k(r = \sigma_1, \cdots, \sigma_l) = \prod_{i=1}^{l} \gamma(s_{i-1}, \sigma_i) \quad (3)
\]
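Equation 3 can be sketched in code as follows; `next_state` and `trans_prob` stand in for the PSA's transition structure, and the product is accumulated in log space since multiplying many values in \((0,1]\) quickly underflows:

```python
import math

def sequence_prob(seq, next_state, trans_prob, initial_state):
    """Probability a PSA assigns to an observed task sequence (Eq. 3).

    next_state(state, task) -> the PSA state reached after observing task
    trans_prob(state, task) -> gamma(state, task), assumed nonzero thanks
                               to the smoothing applied at training time
    """
    state, log_p = initial_state, 0.0
    for task in seq:
        log_p += math.log(trans_prob(state, task))
        state = next_state(state, task)
    return math.exp(log_p)
```

The loop performs one transition per observed task, so the computation is linear in the length of the sequence, as stated above.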
Then, the goal recognizer would select the PSA that assigns the maximum probability as the PSA corresponding to the user’s intention. However, as the user continues performing tasks, the total cumulative probability value assigned by each PSA will become smaller and smaller as we are multiplying values in the range \((0,1]\). Furthermore, we are interested in giving more importance to recent observations than to older observations in a way that we can better detect the underlying trend in the process.
Furthermore, the problem we are facing is not a classical problem of classification as we do not predict a “class” (goal) after observing a complete sequence of performed actions. In our domain, the interface agent should be able to predict the most probable goal after each performed action, and the limit between sequences of actions corresponding to different goals is often fuzzy.
To tackle this problem we choose to use an exponential moving average of the prediction probability \( \gamma(s,\sigma) \) at each step in each PSA as the predicted value for each corresponding user intention. Moving averages are one of the most popular and easy to use tools to smooth a data series and make it easier to spot trends. An exponential moving average (EMA) [Hunter, 1986] is a statistic for monitoring a process that averages the data in a way that gives less and less weight to data as time passes. This is done by applying weighting factors which decrease exponentially, giving much more importance to recent observations while still not discarding older observations entirely.
By the choice of a weighting factor \( 0 \leq \lambda \leq 1 \), the EMA control procedure can be made sensitive to a small or gradual drift in the process. \( \lambda \) may be expressed as a percentage, so a smoothing factor of 10% is equivalent to \( \lambda = 0.1 \).
Alternatively, \( \lambda \) may be expressed in terms of \( N \) time periods, where \( \lambda = 2/(N + 1) \).
\[ EMA_t = \lambda \gamma(s_{t-1},\sigma_t) + (1 - \lambda) EMA_{t-1} \] (4)
The parameter \( \lambda \) determines the rate at which older probabilities enter into the calculation of the EMA statistic. A value of \( \lambda = 1.0 \) implies that only the most recent measurement influences the EMA. Thus, a large value of \( \lambda \) gives more weight to recent probabilities and less weight to older probabilities; a small value of \( \lambda \) gives more weight to older probabilities. The value of \( \lambda \) is usually set between 0.2 and 0.3 [Hunter, 1986], although this choice is somewhat arbitrary and should be determined empirically.
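The EMA update of Equation 4 is straightforward to sketch (the function name and the initial value `ema0` are illustrative):

```python
def ema_series(gammas, lam=0.3, ema0=0.0):
    """Apply Eq. 4 step by step: EMA_t = lam * gamma_t + (1 - lam) * EMA_{t-1}.

    gammas: transition probabilities observed after each performed task.
    Returns the EMA value after every step.
    """
    out, ema = [], ema0
    for g in gammas:
        ema = lam * g + (1 - lam) * ema
        out.append(ema)
    return out
```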
After each observation, the goal recognizer computes the EMA value for each PSA model and builds a ranking corresponding to the most probable goals the user may be pursuing at each moment. Then, if the most probable goal has an EMA value over a given confidence threshold \( \tau \) it makes a prediction. The goal recognizer is also able to make predictions of the N-best goals in the ranking, instead of only the most probable goal. This is useful to make further analysis on this reduced set of the N most probable goals.
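The prediction step just described can be sketched as follows (the names are illustrative; `emas` holds the current EMA value of each goal's PSA model):

```python
def recognize(emas, tau, n_best=1):
    """Rank goals by their current EMA score and predict the N best,
    but only when the top score clears the confidence threshold tau.

    emas: dict mapping goal name -> current EMA value of its PSA model.
    Returns the list of predicted goals, or [] if confidence is too low.
    """
    ranking = sorted(emas, key=emas.get, reverse=True)
    if emas[ranking[0]] < tau:
        return []  # not confident enough to commit to a prediction
    return ranking[:n_best]
```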
4 Experimental evaluation
4.1 The plan corpus
Plan corpus is the term used to describe a set of plan sessions; it consists of a list of goals and the actions a user executed to achieve them. Although many corpora are available for testing machine learning algorithms in other domains, corpora for training plan recognizers are hard to obtain. There are a few plan recognizers [Lesh, 1998; Bauer, 1999; Garland and Lesh, 2002; Stumpf et al., 2005; Blaylock and Allen, 2005] that make use of corpora to build the plan library, and a few others for which the collected sequences of actions are not labeled with the user goal [Davison and Hirsh, 1998; Gorniak and Poole, 2000]; these are used not for plan recognition but for next action/command prediction. Moreover, each of the plan recognizers using plan corpora to learn the plan library uses its own data, making comparison between them difficult.
For our experiments we chose the Linux Plan Corpus, kindly provided by Nate Blaylock. This plan corpus [Blaylock and Allen, 2005] is modeled after Lesh's Unix Plan Corpus [Lesh, 1998] and was collected from 56 human Linux users at the University of Rochester's Department of Computer Science. Users included volunteer students, faculty and staff with different levels of expertise in the use of Linux, categorized from 1 (lowest) to 5 (highest).
Each user was given English descriptions of a set of 19 goals and was instructed to solve them using any Linux commands, with some restrictions, such as not using pipes or certain special commands that simplify achieving the desired goal. Users were given the possibility of performing more than one session for each goal.
All sessions, consisting of the sequence of commands performed by a user to achieve a given goal, were automatically recorded. At the end of the session the user was asked to indicate whether he/she succeeded or not in achieving the goal. Other information was also recorded in the session, such as the time the session was initiated, the directory structure and the system response after each command, among others. For more details about how the data was collected, refer to [Blaylock and Allen, 2005].
The first step in our experiments was to pre-process the raw user sessions. From the data recorded for each user session, we were only interested in the user goal and in the sequence of commands he/she performed. We automatically removed commands with typos from each session, as well as sessions consisting of only one command. Attributes, flags and parameters for the remaining commands were removed, so that we only kept the name of the command as an action schema. There is an exception for two commands, \( ls \) and \( find \), for which some flags change the command functionality. For these commands we created more than one action schema, as detailed below.
The command \( find \), which searches the directory tree rooted at each given file name by evaluating a given expression, was split in four action schemas: \( find\text{-ctime} \), representing the command \( "find \{path...\} -ctime n" \) which searches for files whose status was last changed \( n \) hours ago; \( find\text{-name} \), representing the command \( "find \{path...\} -name pattern" \) which searches for files whose name matches the given pattern; \( find\text{-size} \), representing the use of the command \( "find \{path...\} -size s" \) which searches for files using \( s \) units of storage space; and the action schema \( find \) grouping all other uses of the command.
In a similar fashion, the command \( ls \), which by default lists information about files in the current directory, was split into two action schemas: `ls-R`, representing the command `ls -R`, which lists information about files in the base directory and recursively in its subdirectories; and `ls`, grouping all other uses of the command.
After preprocessing, the plan corpus yielded 19 goal schemas and 48 action schemas.
4.2 Evaluation metrics
In the experiments shown in this section we evaluate three different metrics. The error for a model $Q$ given an observed task sequence $Seq = a_1, ..., a_n$ is computed as the sum of the absolute differences between the value assigned by the model after observing each task, $Q(a_i)$, and the highest value assigned by any of the PSAs after observing that task, $Q_{best}(a_i)$, as shown in Equation 5.
$$\text{error}_Q(Seq = a_1, a_2, ..., a_n) = \sum_{i=1}^{n} |Q(a_i) - Q_{best}(a_i)| \quad (5)$$
On the other hand, precision measures the number of times a model $Q$ was among the N-best predicted models, $\text{best}_Q(a_i)$, divided by the number of predictions made, $m$, as shown in Equation 6. We consider that the goal recognizer makes a prediction every time the highest value assigned by any PSA after some observations is over a threshold $\tau$.
$$\text{precision}_Q(Seq = a_1, a_2, ..., a_n) = \frac{\sum_{i=1}^{n} \text{best}_Q(a_i)}{m} \quad (6)$$
On the other hand, convergence is a metric that indicates how long the recognizer took to converge on what the current user goal was. If, from the time step $t$ to the time step corresponding to the last performed action for the current goal, the algorithm correctly predicted the actual user goal, the convergence is computed as shown in Equation 7. The time step $t$ is called the convergence point.
$$\text{convergence}_Q(Seq = a_1, a_2, ..., a_n) = \frac{m - t + 1}{m}, \quad (7)$$
with $\neg\,\text{best}_Q(a_{t-1})$ and $\text{best}_Q(a_j) \; \forall \, t \leq j \leq n$,
where $\text{best}_Q(a_i) = \begin{cases} 1 & \text{if } Q(a_i) = Q_{best}(a_i) \\ 0 & \text{otherwise} \end{cases}$
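As an illustration (the names are ours), the convergence point can be located by scanning backwards through the per-prediction correctness flags until the final unbroken run of correct predictions ends:

```python
def convergence(best_flags):
    """Convergence metric (Eq. 7): fraction of the prediction sequence
    from the point after which the recognizer stayed correct.

    best_flags[i] is True when the actual goal was correctly predicted
    at step i. With t the 0-based start of the final correct run, the
    result (len - t) / len equals (m - t + 1) / m in 1-based notation.
    """
    n = len(best_flags)
    t = n  # walk back over the trailing run of correct predictions
    while t > 0 and best_flags[t - 1]:
        t -= 1
    if t == n:
        return 0.0  # never converged on the actual goal
    return (n - t) / n
```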
4.3 Goal schema recognition
For our experiments, we trained 19 different PSA models, one for each goal schema in the plan corpus, using the sequences of action schemas performed by the different users that took part in the experiments. Next, we performed leave-one-out cross validation to evaluate the performance of our goal recognizer.
We evaluated different values of the smoothing factor $\lambda$, varying from 0.1 to 1.0 in intervals of 0.1. We also evaluated different thresholds $\tau$ for making predictions.
Across all the experiments, values of $\lambda = 0.3$ and $\tau = 0.2$ led to the best results. Table 1 shows the recognition results for the Linux corpus.
We obtained an error of 11.5%. Notice that the error metric is independent of the number of models considered for prediction, because it measures the distance between the prediction of the current user goal and the model predicted with highest probability by the plan recognizer.
<table>
<thead>
<tr>
<th>N-best</th>
<th>1-best</th>
<th>2-best</th>
<th>3-best</th>
</tr>
</thead>
<tbody>
<tr>
<td>Error</td>
<td>0.115</td>
<td>0.115</td>
<td>0.115</td>
</tr>
<tr>
<td>Precision</td>
<td>0.646</td>
<td>0.765</td>
<td>0.808</td>
</tr>
<tr>
<td>Convergence</td>
<td>0.589</td>
<td>0.706</td>
<td>0.745</td>
</tr>
</tbody>
</table>
Table 1: Goal schema recognition results
On the other hand, we obtained a precision of 64.6%, which increases to 80.8% in the case of 3-best prediction, with a convergence of 58.9% for 1-best prediction. Convergence can be increased to 74.5% by considering 3-best prediction.
These values improve on the results presented in Blaylock's research [Blaylock and Allen, 2005] for a bigram model of the user goals. By using variable order Markov models with an exponential moving average, we obtained a convergence improvement of 21.5% for 1-best prediction, 14.1% for 2-best prediction and 14.8% for 3-best prediction. Precision, on the other hand, was improved by 26.8% for 1-best prediction, 12.4% for 2-best prediction and 7.1% for 3-best prediction. Since our goal recognizer has the same complexity, $O(|G|)$, as Blaylock's goal recognizer, where $G$ is the set of goal schemas in the domain, we believe that our improvements are significant.
5 Conclusions and future work
We have presented a goal schema recognition approach based on variable order Markov models that extends existing approaches based on fixed order Markov models (especially unigram and bigram models). We also make use of a smoothing technique, namely an exponential moving average, to better detect the underlying trend in the predictions.
Our plan recognizer is able to make partial predictions after each observed task and runs in time linear in the number of goals modeled.
There are several areas of future work that we are exploring, such as hierarchical goal recognition and handling parameterized goals and actions. A simple solution might be to extend the set of actions, creating one action for every pair `<action schema, parameter>`. However, this approach produces an explosion in the number of actions and aggravates data sparseness, because the likelihood of observing a symbol in a given context will be very low.
Finally, the initial results presented in this work are based on a relatively small dataset for one particular domain. Exploration of other domains with different characteristics will be required to ensure that the performance of the goal recognizer is consistent with the results we obtained in this work.
References
Installing and configuring Apache Kafka
Date of Publish: 2018-08-13
Installing Kafka
Although you can install Kafka on a cluster not managed by Ambari, this chapter describes how to install Kafka on an Ambari-managed cluster.
Prerequisites
Before installing Kafka, ZooKeeper must be installed and running on your cluster.
Note that the following underlying file systems are supported for use with Kafka:
- EXT4: supported and recommended
- EXT3: supported
Caution:
Encrypted file systems such as SafenetFS are not supported for Kafka. Index file corruption can occur.
Installing Kafka Using Ambari
Procedure
1. Click the Ambari "Services" tab.
2. In the Ambari "Actions" menu, select "Add Service." This starts the Add Service wizard, displaying the Choose Services page. Some of the services are enabled by default.
3. Scroll through the alphabetic list of components on the Choose Services page, and select "Kafka".
4. Click **Next** to continue.
5. On the Assign Masters page, review the node assignments for Kafka nodes.
The following screen shows node assignment for a single-node Kafka cluster:
6. If you want Kafka to run with high availability, you must assign more than one node for Kafka brokers, resulting in Kafka brokers running on multiple nodes.
Click the "+" symbol to add more broker nodes to the cluster:
The following screen shows node assignment for a multi-node Kafka cluster:
7. Click **Next** to continue.
8. On the Assign Slaves and Clients page, choose the nodes that you want to run ZooKeeper clients:

9. Click **Next** to continue.
10. Ambari displays the Customize Services page, which lists a series of services:

For your initial configuration you should use the default values set by Ambari. If Ambari prompts you with the message "Some configurations need your attention before you can proceed," review the list of properties and provide the required information.
For information about optional settings that are useful in production environments, see Configuring Apache Kafka for a Production Environment.
11. Click **Next** to continue.
12. When the wizard displays the Review page, ensure that all HDP components correspond to HDP 2.5 or later:
13. Click **Deploy** to begin installation.
14. Ambari displays the Install, Start and Test page. Monitor the status bar and messages for progress updates:
15. When the wizard presents a summary of results, click "Complete" to finish installing Kafka:
What to do next
After Kafka is deployed and running, validate the installation. You can use the command-line interface to create a Kafka topic, send test messages, and consume the messages. For more information, see Validate Kafka in the Non-Ambari Cluster Installation Guide.
Configuring Kafka for a Production Environment
This chapter covers topics related to Kafka configuration, including:
- Preparing the environment
- Customizing settings for brokers, producers, and consumers
- Configuring ZooKeeper for use with Kafka
- Enabling audit to HDFS when running Kafka on a secure cluster
Preparing the Environment
The following factors can affect Kafka performance:
- Operating system settings
- File system selection
- Disk drive configuration
- Java version
- Ethernet bandwidth
Operating System Settings
Consider the following when configuring Kafka:
- Kafka uses page cache memory as a buffer for active writers and readers, so after you specify JVM size (using -Xmx and -Xms Java options), leave the remaining RAM available to the operating system for page caching.
- Kafka needs open file descriptors for files and network connections. You should set the file descriptor limit to at least 128000.
- You can increase the maximum socket buffer size to enable high-performance data transfer.
File System Selection
Kafka uses regular Linux disk files for storage. We recommend using the EXT4 or XFS file system. Recent improvements to the XFS file system have shown better performance characteristics for Kafka workloads without compromising stability.
Caution:
- Do not use mounted shared drives or any network file systems with Kafka, due to the risk of index failures and (in the case of network file systems) issues related to the use of MemoryMapped files to store the offset index.
- Encrypted file systems such as SafenetFS are not supported for Kafka. Index file corruption can occur.
Disk Drive Considerations
For throughput, we recommend dedicating multiple drives to Kafka data. More drives typically perform better with Kafka than fewer. Do not share these Kafka drives with any other application or use them for Kafka application logs.
You can configure multiple drives by specifying a comma-separated list of directories for the log.dirs property in the server.properties file. Kafka uses a round-robin approach to assign partitions to directories specified in log.dirs; the default value is /tmp/kafka-logs.
The num.io.threads property should be set to a value equal to or greater than the number of disks dedicated for Kafka. Recommendation: start by setting this property equal to the number of disks.
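For example, a `server.properties` fragment for a broker with three dedicated data drives might look like this (the mount points are illustrative):

```
log.dirs=/data/disk1/kafka-logs,/data/disk2/kafka-logs,/data/disk3/kafka-logs
num.io.threads=3
```

With this configuration, Kafka assigns partitions round-robin across the three directories, and the number of I/O threads matches the number of disks, per the recommendation above.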
Depending on how you configure flush behavior (see "Log Flush Management"), a faster disk drive is beneficial if the log.flush.interval.messages property is set to flush the log file after every 100,000 messages (approximately).
Kafka performs best when data access loads are balanced among partitions, leading to balanced loads across disk drives. In addition, data distribution across disks is important. If one disk becomes full and other disks have available space, this can cause performance issues. To avoid slowdowns or interruptions to Kafka services, you should create usage alerts that notify you when available disk space is low.
RAID can potentially improve load balancing among the disks, but it can also cause a performance bottleneck due to slower writes, and it reduces available disk space. Although RAID can tolerate disk failures, rebuilding a RAID array is I/O-intensive and effectively disables the server, so RAID does not provide substantial improvements in availability.
Java Version
With Apache Kafka on HDP 2.5, you should use the latest update for Java version 1.8 and make sure that G1 garbage collection support is enabled. (G1 support is enabled by default in recent versions of Java.) If you prefer to use Java 1.7, make sure that you use update u51 or later.
Here are several recommended settings for the JVM:
```
-XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35
-XX:G1HeapRegionSize=16M
-XX:MinMetaspaceFreeRatio=50
-XX:MaxMetaspaceFreeRatio=80
```
To set JVM heap size for the Kafka broker, export KAFKA_HEAP_OPTS; for example:
```bash
export KAFKA_HEAP_OPTS="-Xmx2g -Xms2g"
./kafka-server-start.sh
```
Ethernet Bandwidth
Ethernet bandwidth can have an impact on Kafka performance; make sure it is sufficient for your throughput requirements.
Customizing Kafka Settings on an Ambari-Managed Cluster
To customize configuration settings during the Ambari installation process, click the "Kafka" tab on the Customize Services page.
If you want to access configuration settings after installing Kafka using Ambari:
1. Click Kafka on the Ambari dashboard.
2. Choose Configs.
To view and modify settings, either scroll through categories and expand a category (such as "Kafka Broker"), or use the "Filter" box to search for a property.
Settings in the Advanced kafka-env category are configured by Ambari; you should not modify these settings.
To add configuration properties that are not listed by default in Ambari, navigate to the Custom kafka-broker category.
Kafka Broker Settings
The following subsections describe configuration settings that influence the performance of Kafka brokers.
Connection Settings
Review the following connection setting in the Advanced kafka-broker category, and modify as needed:
**zookeeper.session.timeout.ms**
Specifies ZooKeeper session timeout, in milliseconds. The default value is 30000 ms.
If the server fails to send a heartbeat to ZooKeeper within this period of time, the server is considered dead. If you set this value too low, a live server might be falsely considered dead; if you set it too high, it might take too long to recognize a truly dead server.
If you see frequent disconnection from the ZooKeeper server, review this setting. If long garbage collection pauses cause Kafka to lose its ZooKeeper session, you might need to configure longer timeout values.
**advertised.listeners**
If you have manually set listeners to advertised.listeners=PLAINTEXT://$HOSTNAME:$PORT, after enabling Kerberos, change the listener configuration to advertised.listeners=SASL_PLAINTEXT://$HOSTNAME:$PORT.
**Important:**
Do not change the following connection settings:
**zookeeper.connect**
A comma-separated list of ZooKeeper hostname:port pairs. Ambari sets this value. Do not change this setting.
---
**Topic Settings**
For each topic, Kafka maintains a structured commit log with one or more partitions. These topic partitions form the basic unit of parallelism in Kafka. In general, the more partitions there are in a Kafka cluster, the more parallel consumers can be added, resulting in higher throughput.
You can calculate the number of partitions based on your throughput requirements. If throughput from a producer to a single partition is \( P \) and throughput from a single partition to a consumer is \( C \), and if your target throughput is \( T \), the minimum number of required partitions is
\[
\max\left( \frac{T}{P}, \frac{T}{C} \right).
\]
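As a quick sanity check of this formula, here is a small Python sketch; the function name and the throughput figures are made up for illustration:

```python
import math

def min_partitions(target_mb_s: float, producer_mb_s: float, consumer_mb_s: float) -> int:
    """Minimum partition count so that neither the producer side nor the
    consumer side becomes the throughput bottleneck: max(T/P, T/C),
    rounded up to whole partitions."""
    return max(math.ceil(target_mb_s / producer_mb_s),
               math.ceil(target_mb_s / consumer_mb_s))

# Hypothetical numbers: target T = 100 MB/s, P = 10 MB/s per producer->partition,
# C = 20 MB/s per partition->consumer
print(min_partitions(100, 10, 20))  # -> 10
```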
Note also that more partitions can increase latency:
- End-to-end latency in Kafka is defined as the difference in time from when a message is published by the producer to when the message is read by the consumer.
- Kafka only exposes a message to a consumer after it has been committed; that is, after the message has been replicated to all in-sync replicas.
- Replication of one thousand partitions from one broker to another can take up to 20 ms. This can be too long for some real-time applications.
- In the new Kafka producer, messages are accumulated on the producer side; producers buffer the message per partition. This approach allows users to set an upper bound on the amount of memory used for buffering incoming messages. After enough data is accumulated or enough time has passed, accumulated messages are removed and sent to the broker. If you define more partitions, messages are accumulated for more partitions on the producer side.
- Similarly, the consumer fetches batches of messages per partition. Consumer memory requirements are proportional to the number of partitions that the consumer subscribes to.
**Important Topic Properties**
Review the following settings in the Advanced kafka-broker category, and modify as needed:
- **auto.create.topics.enable**
Enable automatic creation of topics on the server. If this property is set to true, then attempts to produce, consume, or fetch metadata for a nonexistent topic automatically create the topic with the default replication factor and number of partitions. The default is enabled.
- **default.replication.factor**
Specifies the default replication factor for automatically created topics. For high availability production systems, you should set this value to at least 3.
- **num.partitions**
Specifies the default number of log partitions per topic, for automatically created topics. The default value is 1. Change this setting based on the requirements related to your topic and partition design.
- **delete.topic.enable**
Allows users to delete a topic from Kafka using the admin tool, for Kafka versions 0.9 and later. Deleting a topic through the admin tool will have no effect if this setting is turned off.
By default this feature is turned off (set to false).
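Taken together, the topic-related properties above might be set for a high-availability production cluster with a server.properties fragment like this (the partition count is only a placeholder; derive it from your own throughput requirements):

```properties
auto.create.topics.enable=true
default.replication.factor=3
num.partitions=8
delete.topic.enable=true
```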
Log Settings
Review the following settings in the Kafka Broker category, and modify as needed:
**log.roll.hours**
The maximum time, in hours, before a new log segment is rolled out. The default value is 168 hours (seven days).
This setting controls the period of time after which Kafka will force the log to roll, even if the segment file is not full. This ensures that the retention process is able to delete or compact old data.
**log.retention.hours**
The number of hours to keep a log file before deleting it. The default value is 168 hours (seven days).
When setting this value, take into account your disk space and how long you would like messages to be available. An active consumer can read messages quickly and deliver them to their destination.
The higher the retention setting, the longer the data will be preserved. Higher settings generate larger log files, so increasing this setting might reduce your overall storage capacity.
**log.dirs**
A comma-separated list of directories in which log data is kept. If you have multiple disks, list all directories under each disk.
Review the following setting in the Advanced kafka-broker category, and modify as needed:
**log.retention.bytes**
The amount of data to retain in the log for each topic partition. By default, log size is unlimited.
Note that this is the limit for each partition, so multiply this value by the number of partitions to calculate the total data retained for the topic.
If log.retention.hours and log.retention.bytes are both set, Kafka deletes a segment when either limit is exceeded.
**log.segment.bytes**
The log for a topic partition is stored as a directory of segment files. This setting controls the maximum size of a segment file before a new segment is rolled over in the log. The default is 1 GB.
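The log settings above combine into a server.properties fragment such as the following (retention and segment sizes are illustrative, not recommendations):

```properties
# Keep data for at most 7 days OR ~50 GB per partition, whichever is hit first
log.retention.hours=168
log.retention.bytes=53687091200
# Roll a new segment at 1 GB, or weekly, whichever comes first
log.segment.bytes=1073741824
log.roll.hours=168
log.dirs=/disk1/kafka-logs,/disk2/kafka-logs
```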
Log Flush Management
Kafka writes topic messages to a log file immediately upon receipt, but the data is initially buffered in page cache. A log flush forces Kafka to flush topic messages from page cache, writing the messages to disk.
We recommend using the default flush settings, which rely on background flushes done by Linux and Kafka. Default settings provide high throughput and low latency, and they guarantee recovery through the use of replication.
If you decide to specify your own flush settings, you can force a flush after a period of time, or after a specified number of messages, or both (whichever limit is reached first). You can set property values globally and override them on a per-topic basis.
There are several important considerations related to log file flushing:
- Durability: unflushed data is at greater risk of loss in the event of a crash. A failed broker can recover topic partitions from its replicas, but if a follower does not issue a fetch request or consume from the leader's log-end offset within the time specified by `replica.lag.time.max.ms` (default 10 seconds), the leader removes the follower from the in-sync replica set (ISR). When this happens, there is a slight chance of message loss if you do not explicitly set `log.flush.interval.messages`: if the leader broker fails while a follower that is not fully caught up is still in the ISR during that 10-second window, messages can be lost during the leader transition.
- Increased latency: data is not available to consumers until it is flushed (the fsync implementation in most Linux filesystems blocks writes to the file system).
- Throughput: a flush operation is typically an expensive operation.
- Disk usage patterns are less efficient.
- Page-level locking in background flushing is much more granular.
`log.flush.interval.messages` specifies the number of messages to accumulate on a log partition before Kafka forces a flush of data to disk.
`log.flush.scheduler.interval.ms` specifies the amount of time (in milliseconds) after which Kafka checks to see if a log needs to be flushed to disk.
`log.segment.bytes` specifies the size of the log file. Kafka flushes the log file to disk whenever a log file reaches its maximum size.
`log.roll.hours` specifies the maximum length of time before a new log segment is rolled out (in hours); this value is secondary to `log.roll.ms`. Kafka flushes the log file to disk whenever a log file reaches this time limit.
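If you do override the default flush behavior, the flush-related properties combine as in this fragment (values are illustrative only; as noted above, the defaults are usually preferable):

```properties
# Force a flush after 100,000 messages per partition; check for pending
# flushes every 3 seconds -- whichever limit is reached first applies
log.flush.interval.messages=100000
log.flush.scheduler.interval.ms=3000
```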
### Compaction Settings
Review the following settings in the Advanced kafka-broker category, and modify as needed:
- **log.cleaner.dedupe.buffer.size**
- Specifies total memory used for log deduplication across all cleaner threads.
- By default, 128 MB of buffer is allocated. You may want to review this and other log.cleaner configuration values, and adjust settings based on your use of compacted topics (`__consumer_offsets` and other compacted topics).
- **log.cleaner.io.buffer.size**
- Specifies the total memory used for log cleaner I/O buffers across all cleaner threads. By default, 512 KB of buffer is allocated. You may want to review this and other log.cleaner configuration values, and adjust settings based on your usage of compacted topics (`__consumer_offsets` and other compacted topics).
### General Broker Settings
Review the following settings in the Advanced kafka-broker category, and modify as needed:
- **auto.leader.rebalance.enable**
- Enables automatic leader balancing. A background thread checks and triggers leader balancing (if needed) at regular intervals. The default is enabled.
- **unclean.leader.election.enable**
- This property allows you to specify a preference of availability or durability. This is an important setting: If availability is more important than avoiding data loss, ensure that this property is set to true. If preventing data loss is more important than availability, set this property to false.
This setting operates as follows:
• If unclean.leader.election.enable is set to true (enabled), an out-of-sync replica will be elected as leader when there is no live in-sync replica (ISR). This preserves the availability of the partition, but there is a chance of data loss.
• If unclean.leader.election.enable is set to false and there are no live in-sync replicas, Kafka returns an error and the partition will be unavailable.
This property is set to true by default, which favors availability.
If durability is preferable to availability, set unclean.leader.election.enable to false.
**controlled.shutdown.enable**
Enables controlled shutdown of the server. The default is enabled.
**min.insync.replicas**
When a producer sets acks to "all", min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception.
When used together, min.insync.replicas and producer acks allow you to enforce stronger durability guarantees.
You should set min.insync.replicas to 2 when the replication factor is 3.
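For example, a broker-side fragment enforcing the stronger durability guarantee described above (replication factor 3, at least 2 in-sync acknowledgments, no unclean leader elections) might look like:

```properties
default.replication.factor=3
min.insync.replicas=2
unclean.leader.election.enable=false
```

Note that min.insync.replicas only takes effect for producers that set acks to "all".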
**message.max.bytes**
Specifies the maximum size of message that the server can receive. It is important that this property be set with consideration for the maximum fetch size used by your consumers, or a producer could publish messages too large for consumers to consume.
Note that there are currently two versions of consumer and producer APIs. The value of message.max.bytes must be smaller than the max.partition.fetch.bytes setting in the new consumer, or smaller than the fetch.message.max.bytes setting in the old consumer. In addition, the value must be smaller than replica.fetch.max.bytes.
**replica.fetch.max.bytes**
Specifies the number of bytes of messages to attempt to fetch. This value must be larger than message.max.bytes.
**broker.rack**
The rack awareness feature distributes replicas of a partition across different racks. You can specify that a broker belongs to a particular rack through the "Custom kafka-broker" menu option. For more information about the rack awareness feature, see [http://kafka.apache.org/documentation.html#basic_ops_racks](http://kafka.apache.org/documentation.html#basic_ops_racks).
Kafka Producer Settings
If performance is important and you have not yet upgraded to the new Kafka producer (client version 0.9.0.1 or later), consider doing so. The new producer is generally faster and more fully featured than the previous client.
To use the new producer client, add the associated maven dependency on the client jar; for example:
```xml
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-clients</artifactId>
<version>0.9.0.0</version>
</dependency>
```
For more information, see the KafkaProducer javadoc.
The following subsections describe several types of configuration settings that influence the performance of Kafka producers.
Important Producer Settings
The lifecycle of a request from producer to broker involves several configuration settings:
1. The producer polls for a batch of messages from the batch queue, one batch per partition. A batch is ready when one of the following is true:
- batch.size is reached. Note: Larger batches typically have better compression ratios and higher throughput, but they have higher latency.
- linger.ms (time-based batching threshold) is reached. Note: There is no simple guideline for setting linger.ms values; you should test settings on specific use cases. For small events (100 bytes or less), this setting does not appear to have much impact.
- Another batch to the same broker is ready.
- The producer calls flush() or close().
2. The producer groups the batch based on the leader broker.
3. The producer sends the grouped batch to the broker.
The following paragraphs list additional settings related to the request lifecycle:
**max.in.flight.requests.per.connection (pipelining)**
The maximum number of unacknowledged requests the client will send on a single connection before blocking. If this setting is greater than 1, pipelining is used when the producer sends the grouped batch to the broker. This improves throughput, but if there are failed sends there is a risk of out-of-order delivery due to retries (if retries are enabled). Note also that excessive pipelining reduces throughput.
**compression.type**
Compression is an important part of a producer’s work, and the speed of different compression types varies considerably. To specify a compression type, use the compression.type property. It accepts the standard compression codecs ('gzip', 'snappy', 'lz4'), as well as 'uncompressed' (equivalent to no compression) and 'producer' (retain the original compression codec set by the producer).
Compression is handled by the user thread. If compression is slow it can help to add more threads. In addition, batching efficiency impacts the compression ratio: more batching leads to more efficient compression.
**acks**

The acks setting specifies acknowledgments that the producer requires the leader to receive before considering a request complete. This setting defines the durability level for the producer.
<table>
<thead>
<tr>
<th>Acks</th>
<th>Throughput</th>
<th>Latency</th>
<th>Durability</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>High</td>
<td>Low</td>
<td>No Guarantee. The producer does not wait for acknowledgment from the server.</td>
</tr>
<tr>
<td>1</td>
<td>Medium</td>
<td>Medium</td>
<td>Leader writes the record to its local log, and responds without awaiting full acknowledgment from all followers.</td>
</tr>
<tr>
<td>-1</td>
<td>Low</td>
<td>High</td>
<td>Leader waits for the full set of in-sync replicas (ISRs) to acknowledge the record. This guarantees that the record is not lost as long as at least one ISR is active.</td>
</tr>
</tbody>
</table>
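Pulling together the producer settings discussed in this section, a hypothetical producer configuration favoring durability and ordered delivery over raw throughput might be:

```properties
# Batch up to 64 KB or wait up to 5 ms before sending
batch.size=65536
linger.ms=5
compression.type=snappy
# Durable: wait for the full set of in-sync replicas
acks=all
# Keep ordering intact when retries are enabled
max.in.flight.requests.per.connection=1
retries=3
```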
The new Producer API supports an optional flush() call, which makes all buffered records immediately available to send (even if linger.ms is greater than 0).
When using flush(), the number of bytes between two flush() calls is an important factor for performance.
- In microbenchmarking tests, a setting of approximately 4MB performed well for events 1KB in size.
- A general guideline is to set batch.size equal to the total bytes between flush() calls divided by number of partitions:
\[
\text{(total bytes between flush() calls)} / \text{(partition count)}
\]
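The guideline above can be expressed directly; the function name and numbers are hypothetical:

```python
def suggested_batch_size(bytes_between_flushes: int, partition_count: int) -> int:
    """batch.size guideline: total bytes produced between flush() calls,
    divided evenly across the topic's partitions."""
    return bytes_between_flushes // partition_count

# e.g. ~4 MB between flush() calls, spread over 8 partitions
print(suggested_batch_size(4 * 1024 * 1024, 8))  # -> 524288
```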
Additional Considerations
A producer thread going to the same partition is faster than a producer thread that sends messages to multiple partitions.
If a producer reaches maximum throughput but there is spare CPU and network capacity on the server, additional producer processes can increase overall throughput.
Performance is sensitive to event size: larger events are more likely to have better throughput. In microbenchmarking tests, 1KB events streamed faster than 100-byte events.
Kafka Consumer Settings
You can usually obtain good performance from consumers without tuning configuration settings. In microbenchmarking tests, consumer performance was not as sensitive to event size or batch size as was producer performance. Both 1KB and 100-byte events showed similar throughput.
One basic guideline for consumer performance is to keep the number of consumer threads equal to the partition count.
Configuring ZooKeeper for Use with Kafka
Here are several recommendations for ZooKeeper configuration with Kafka:
- Do not run ZooKeeper on a server where Kafka is running.
- When using ZooKeeper with Kafka you should dedicate ZooKeeper to Kafka, and not use ZooKeeper for any other components.
- Make sure you allocate sufficient JVM memory. A good starting point is 4GB.
- To monitor the ZooKeeper instance, use JMX metrics.
Configuring ZooKeeper for Multiple Applications
If you plan to use the same ZooKeeper cluster for different applications (such as Kafka cluster1, Kafka cluster2, and HBase), you should add a chroot path so that all Kafka data for a cluster appears under a specific path.
The following example shows a sample chroot path:
c6401.ambari.apache.org:2181,c6402.ambari.apache.org:2181/kafka-root
You must create this chroot path yourself before starting the broker, and consumers must use the same connection string.
Enabling Audit to HDFS for a Secure Cluster
To enable audit to HDFS when running Storm on a secure cluster, perform the steps listed at the bottom of Manually Updating Ambari HDFS Audit Settings in the HDP Security Guide.
Linguistics 106, lecture notes
The limits of Finite State Automata
25 July 2002
1 Introduction: Are there non-Regular Languages?
- We set out to find a general format for stating the grammatical rules of
a language—rules accounting for the distribution of morphemes in the
language.
Our first idea was that the rules could be “word chains”, in Pinker’s sense.
We implemented the word-chain model with Finite State Automata and,
equivalently, with Regular grammars.
- So far we have seen what FSAs and RGs can do. Now we should ask the
question: Is there anything FSAs/RGs can’t do?
- This question will have two independent parts:
1. Are there sets of strings (languages) that cannot be generated by any
FSA, or any RG? And if so, then what defines the difference between
Regular Languages and non-Regular languages?
2. Are FSAs/RGs adequate for the task of describing the structures we
find in natural language sentences, e.g. in sentences of English?
2 FSAs and unbounded dependencies
- The expressive power of FSAs is limited by the following property:
The transitions of an FSA are defined solely in terms of the single state
the machine is in and the single symbol read; and the transitions may go
to just a single state, not a sequence of them:
\[ \delta([symbol],[state]) = [state]. \]
- Because of this property:
It is impossible to state explicitly in a single transition rule that the occurrence of \( \sigma_j \) in an input string depends on the occurrence of a symbol \( \sigma_i \) elsewhere in the string, whether before \( \sigma_j \) or after.
- **Regular Grammars** have the following related property:
In the rules of a RG, the left-hand side of the rule may refer to just a single non-terminal AND the right-hand side may introduce just a single terminal; also, the right-hand side may introduce just a single non-terminal, not a sequence of them.\(^1\):
\[ A \rightarrow \sigma(B) \]
- Because of this property:
It is impossible to state explicitly in a single rewrite rule that the occurrence of \( \sigma_j \) in an output string depends on the occurrence of a symbol \( \sigma_i \) elsewhere in the string, whether before \( \sigma_j \) or after.
- Yet there are languages, even Regular Languages, where the occurrence of a symbol (or substring) \( \sigma_j \) depends on the occurrence of a symbol (or substring) \( \sigma_i \) elsewhere. For example:
\[ RL_{dep} = \{a1z, a2z, a3z, \ldots, b1y, b2y, b3y, \ldots\} \]
\[ RL_{a^n b^n,\, 0 \leq n \leq 5} = \{\varepsilon, ab, aabb, aaabbb, aaaabbbb, aaaaabbbbb\} \]
- **Question:** How can such dependencies be recognized by an FSA (RG), if the transitions (rewrite rules) in an FSA (RG) can refer neither to what symbols have already been read (written), nor to what symbols will be read (written)?
- **Answer:** An FSA (RG) can model such dependencies only by multiplying states (non-terminals), in order to multiply the number of distinguishable paths (derivations) in the machine (grammar).
This way, relevantly different histories of symbols read (or terminals written) lead to different states (non-terminals).
---
\(^1\) In a rule \( A \rightarrow \sigma(B) \), the left-hand side is \( A \), and the right-hand side is \( \sigma(B) \). The left-hand side is the specification of what is to be rewritten; the right-hand side is what that is to be rewritten as.
Examples:
1. FSA and RG for $RL_{dep}$:
$$S \rightarrow aA \quad A \rightarrow 1B \quad B \rightarrow z$$
$$A \rightarrow 2B$$
$$\vdots$$
$$S \rightarrow bC \quad C \rightarrow 1D \quad D \rightarrow y$$
$$\vdots$$
2. $\{ \omega \mid \omega = a^nb^n, 0 \leq n \leq i \}$ for progressively higher finite values of $i$.
(a) Finite State Automata:
(State diagrams were given here for $\{ \omega \mid \omega = a^nb^n, 0 \leq n \leq 1 \}$, $0 \leq n \leq 2$, $0 \leq n \leq 3$, $0 \leq n \leq 4$, and in general $0 \leq n \leq i$; the diagrams are not reproduced in these notes.)
(b) Regular Grammars:
<table>
<thead>
<tr>
<th>$a^n b^n, 0 \leq n \leq 1$</th>
<th>$a^n b^n, 0 \leq n \leq 2$</th>
<th>$a^n b^n, 0 \leq n \leq 3$</th>
<th>…</th>
</tr>
</thead>
<tbody>
<tr>
<td>$S \rightarrow \epsilon$<br>$S \rightarrow aB$<br>$B \rightarrow b$</td>
<td>$S \rightarrow \epsilon$<br>$S \rightarrow aB$<br>$B \rightarrow b$<br>$S \rightarrow aC$<br>$C \rightarrow aD$<br>$D \rightarrow bB$</td>
<td>$S \rightarrow \epsilon$<br>$S \rightarrow aB$<br>$B \rightarrow b$<br>$S \rightarrow aC$<br>$C \rightarrow aD$<br>$D \rightarrow bB$<br>$S \rightarrow aE$<br>$E \rightarrow aF$<br>$F \rightarrow aG$<br>$G \rightarrow bD$</td>
<td>⋮</td>
</tr>
</tbody>
</table>
- Let’s think of the ability to keep track of such dependencies as a kind of memory.
- **Fact we just established**: The memory available to FSAs relies entirely on the multiplication of states.
- **Question**: What limit does this fact put on the kind of languages that a **Finite State Automaton** can generate?
- **Answer**: Since an FSA can have only a **finite** number of states, the memory available to an FSA can have only a bounded (finite) depth.
- To spell it out somewhat more:
Because the number of states is finite, there may be only a finite number of distinct paths connecting states.
The number of distinct paths is exactly what gives an FSA its memory, because each distinct path constitutes a distinct history of symbols read.
Consequently the number of ‘histories’ of symbols read that any FSA can distinguish is finite, and bounded by the number of its states.
As a result, any language whose generation requires a memory of unbounded depth cannot be Regular.
• For example, the following languages are not Regular:
\[ L_{a^n b^n} = \{ \omega \mid \omega = a^n b^n, n \geq 0 \} \]
\[ L = \{ \omega \mid \omega \text{ is a palindrome} \} \]
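As a concrete illustration of state-multiplication (a sketch of our own, not part of the notes), here is a DFA transition table for $\{a^n b^n \mid 0 \leq n \leq 2\}$. The state names are our own labels; the point is that each additional value of $n$ costs additional states, so no finite table can cover unbounded $n$.

```python
# A DFA for { a^n b^n | 0 <= n <= 2 }: each extra value of n is "remembered"
# only by adding states, which is why no finite machine handles unbounded n.
# State names (S, A1, A2, B1, ACC) are our own labels.

DFA = {
    ("S", "a"): "A1",   # read one a
    ("A1", "a"): "A2",  # read two a's
    ("A1", "b"): "ACC", # ab
    ("A2", "b"): "B1",  # aab
    ("B1", "b"): "ACC", # aabb
}
ACCEPT = {"S", "ACC"}   # S itself accepts the empty string

def accepts(w):
    state = "S"
    for c in w:
        state = DFA.get((state, c))
        if state is None:       # no transition: reject
            return False
    return state in ACCEPT

# accepts("")       -> True
# accepts("aabb")   -> True
# accepts("aaabbb") -> False: no states were budgeted for n = 3
```

Extending the machine to $n \leq 3$ means adding fresh states, exactly as the grammar table above adds fresh non-terminals.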
3 The Pumping Lemma
• **Question**: Are there ways to tell whether a language is Regular?
• One important diagnostic tool is known as *The Pumping Lemma*.
3.1 Informal derivation of the PL
• Any FSA has only a finite number of states.
Consequently, there is a bound on how long a path in an FSA can be
without repeating any states (i.e. without including a loop).
For instance, if FSA \( M \) has three states, there could not possibly be a
path in \( M \) that was more than two ‘transition arrows’ long, unless that
path repeated at least one state (i.e. included a loop).
• Since there is a bound on non-looping path length, there is a bound on
the length of non-looping paths that lead to accept states.
• A path to an accept state of course represents a string recognized by the
FSA. And the length of a path to an accept state is equal to the length of
the string accepted.
\[
\text{Therefore, for any FSA, there is a maximal length } l_{\text{max}} \text{ such that the} \\
\text{FSA could not possibly recognize strings longer than } l_{\text{max}} \text{ without running} \\
\text{through some state(s) more than once.}
\]
• Suppose that \( \omega \) is a string in \( L \) whose length is greater than \( l_{\text{max}} \), the
crucial ‘pumping length’ for \( M \).
• Then the path in \( M \) defined by \( \omega \), call it \( \Pi_\omega \), must include at least one pass
through at least one loop.
Call one loop in \( \Pi_\omega \) \( \lambda \), and suppose that \( \omega \) defines \( m \) passes through loop \( \lambda \).
• There must be a path to an accept state in $M$ which differs from $\Pi_\omega$ just in that there are $m + 1$ runs through $\lambda$.
Indeed there must be any number of paths to accept states which differ from $\Pi_\omega$ just in there being $m + x$ runs through $\lambda$, for any positive integer $x$.
Moreover, there must be a path which differs from $\Pi_\omega$ just in that it skips $\lambda$ altogether.
• The class of languages accepted by FSAs is the class of Regular Languages. Therefore, any Regular Language will have the properties just described. If $L$ is regular, then any string in $L$ over a certain length will have to include some indefinitely repeatable substring.
These conclusions constitute the main idea of “The Pumping Lemma”.
3.2 Formal statement of the PL
The Pumping Lemma
If $L$ is a Regular Language over alphabet $\Sigma$,
Then there is a number $n$ such that any string $\omega$ in $L$ whose length is at least $n$ can be divided into three pieces, $\omega = xyz$, where all the following conditions hold:
1. $y \neq \epsilon$
2. for any number $i \geq 0$, $xy^iz \in L$
3. $|xy| \leq n$
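The three conditions can be sketched mechanically. This is our own example, not from the notes: for the regular language $a^*$ the pumping length can be taken as $n = 1$, splitting any long enough $w$ as $x = \epsilon$, $y =$ the first symbol, $z =$ the rest.

```python
# Sketch: pumping a string in the regular language a* (our own example).
# With pumping length n = 1, split w = xyz with x = '', y = w[0], z = w[1:];
# all three PL conditions hold, and xy^i z stays in the language.

def in_astar(w):
    return all(c == "a" for c in w)

def pump(x, y, z, i):
    return x + y * i + z            # xy^i z

w = "aaaa"
x, y, z = "", w[0], w[1:]
assert y != ""                      # condition 1
assert len(x + y) <= 1              # condition 3
for i in range(5):                  # condition 2: xy^i z in L for i >= 0
    assert in_astar(pump(x, y, z, i))
```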
Four important details of the Pumping Lemma
1. The PL reads, “If $L$ is regular, then...”. So the PL says that any Regular Language has a certain property. It does not say that any language with the ‘pumping property’ is Regular.
Consequently, it would be invalid to argue that a language is regular because it has this property. We can only argue that, because a language lacks the pumping property, it is not regular.
2. The PL says that, in a Regular Language, every string over a certain length has a certain property. So, to prove that a language is not Regular, it is sufficient to demonstrate that a single string over the crucial length does not have the 'pumping property.'
3. The PL says that, if $L$ is Regular, then every string in $L$ over length $n$ must meet certain conditions. Notice, if $L$ has zero strings over length $n$, then these conditions are met trivially.
4. The PL says that any string in a regular language $L$ over length $n$ must be segmentable into $xyz$, such that $xy^iz$ is also in $L$, for any $i$ greater than or equal to zero. Thus $xy^0z$ must be in $L$, meaning that $xz$ must be in $L$.
### 3.3 Using the Pumping Lemma
To show that a language $L$ is not regular it suffices to show that $L$ does not have the ‘pumping property’. For if $L$ were regular, it would have this property.
1. $L_{a^n b^n} = \{ \omega \mid \omega = a^n b^n, n \geq 0 \}$
2. $L_{a^n b^n a^n} = \{ \omega \mid \omega = a^n b^n a^n, n \geq 0 \}$
3. $L_{\sigma\sigma} = \{ \omega \in \{a, b\}^* \mid \omega = \sigma\sigma \}$
4. $L_{n>m} = \{ \omega \mid \omega = a^n b^m, n > m \}$
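For the first language, the PL argument can be checked exhaustively for a small candidate pumping length. This sketch is our own: with $w = a^5 b^5$, the constraint $|xy| \leq n$ forces $y$ to consist of $a$'s only, so pumping $y$ once more unbalances the string for *every* legal split.

```python
# Sketch of the PL argument for L_{a^n b^n}: take w = a^n b^n with n = 5
# as the candidate pumping length, and verify that EVERY legal split
# w = xyz (y nonempty, |xy| <= n) fails condition 2 already at i = 2.

def in_anbn(w):
    k = len(w) // 2
    return len(w) % 2 == 0 and w == "a" * k + "b" * k

n = 5
w = "a" * n + "b" * n
for j in range(0, n + 1):           # |x| = j
    for k in range(j + 1, n + 1):   # |xy| = k, so y = w[j:k] is nonempty
        x, y, z = w[:j], w[j:k], w[k:]
        # y is all a's, so x + y*2 + z has more a's than b's:
        assert not in_anbn(x + y * 2 + z)
```

Since no split survives, $L_{a^n b^n}$ lacks the pumping property and cannot be Regular.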
4 Chomsky’s argument that English is not a RL
- In the 1950s, Chomsky presented an argument that English is not a Regular Language, relying on the results we just proved.
- Chomsky noted that English has many constructions involving discontinuous dependencies, where elements of one type must be matched by elements of another type elsewhere in the string:
“Either$_1$ you are confused, or$_1$ I am a moron.”
“If$_1$ you are confused, then$_1$ you should interrupt.”
“The cat$_1$ meowed$_1$.”
- And he observed that there may be several such dependencies in a single string, one embedded inside the other.
“If$_1$ either$_2$ you are confused or$_2$ I am a moron, then$_1$ someone should interrupt.”
“The cat$_1$ the dog$_2$ chased$_2$ meowed$_1$.”
- Such constructions define sets of strings that are essentially similar to \( \{\omega \mid \omega = a^n b^n \} \). The number of \( X \)'s in one part of the string must match the number of \( Y \)'s elsewhere.
- We know that sets like this are Regular Languages if \( n \) is bounded, but not Regular if \( n \) can be any number.
**Question:** Is there a limit on how many dependencies of the type Chomsky observed there may be in a sentence of English?
- Chomsky insisted that there is no *grammatical* limit. It’s just that, the more there are, the harder the sentence is to understand.
The following sentences are grammatical, even if they are hard to understand:
“If$_1$ it’s true either$_2$ that you suspect that if$_3$ either$_4$ I continue to claim that ... or$_4$ if anyway Lawrence does, then$_3$ we might be here forever, or$_2$ that you are simply bored, then$_1$ perhaps the point has been made.”
“The cat$_1$ the dog$_2$ the hunter$_3$ the police$_4$ ... arrested$_4$ trained$_3$ chased$_2$ meowed$_1$.”
- So Chomsky concluded that English is not a Regular Language.
5 Context Free Grammars
- **Fact we just learned:** FSAs and RGs cannot generate languages permitting an unbounded number of discontinuous dependencies.
- **Question:** What sorts of devices can?
- Today we will discuss one type of grammar that can: Context Free Grammars.\(^2\)
- Recall the definition of a *Grammar*:
**Grammar** A set \( \{V_T, V_N, S, R\} \), where:
- \( V_T \) is a set of symbols, the *terminal alphabet*;
- \( V_N \) is a set of symbols, the *non-terminal alphabet*;
- \( S \) is the unique *start symbol*, \( S \in V_N \); and
- \( R \) is a set of rewrite rules, \( \Phi \rightarrow \Psi \), where: \( \Phi, \Psi \in (V_T \cup V_N)^* \); \( \Phi \) always contains at least one non-terminal; and in at least one rule, \( \Phi = S \).
- A *Context Free Grammar* is a special type of Grammar:
**Context Free Grammar** A grammar all of whose rules are of the form
\[ A \rightarrow \ldots \]
where \( A \) is a single non-terminal symbol \( (A \in V_N) \) and \( \ldots \) is any string over the union of terminal and non-terminal alphabets \( (\ldots \in (V_T \cup V_N)^*) \).
**CFG rules** \( A \rightarrow \ldots \)
**RG rules** \( A \rightarrow \sigma(B) \)
**Possible CFG rules** \( A \rightarrow BC \quad A \rightarrow xBy \quad A \rightarrow BxCyD \)
\[ A \rightarrow xB \quad A \rightarrow xyz \quad A \rightarrow \epsilon \]
**Impossible CFG rules** \( AB \rightarrow xB \quad xA \rightarrow xBC \quad xAy \rightarrow xBCy \quad x \rightarrow y \quad x \rightarrow yA \quad x \rightarrow \epsilon \)
\[ \text{N.B.: Every RG is a CFG, but not vice versa.} \]
\(^2\)There is an automata-theoretic equivalent of Context Free Grammars, called Pushdown Automata. I'm not sure how much time we will have to explore these.
Context Free Language A language generable by a CFG.
Example CFGs:
1. \( G_1 = \{ V_{1T}, V_{1N}, S_1, R_1 \} \), where:
\[
V_{1T} = \{ a, b \} \quad V_{1N} = \{ S, R \} \quad S_1 = S
\]
\[
R_1 = \begin{cases}
S \rightarrow aaRb \\
S \rightarrow \epsilon \\
R \rightarrow bbSa \\
R \rightarrow \epsilon
\end{cases}
\]
2. \( G_2 = \{ V_{2T}, V_{2N}, S_2, R_2 \} \), where:
\[
V_{2T} = \{ a, b \} \quad V_{2N} = \{ S, A, B \} \quad S_2 = S
\]
\[
R_2 = \begin{cases}
S \rightarrow SS \\
S \rightarrow AB \\
A \rightarrow aAb \\
B \rightarrow bbA \\
A \rightarrow \epsilon \\
B \rightarrow \epsilon
\end{cases}
\]
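For comparison with $G_1$ and $G_2$, here is a sketch of our own (grammar and code not from the notes) showing that the two-rule CFG $S \rightarrow aSb \mid \epsilon$ derives exactly the strings $a^n b^n$ that no RG can cover for unbounded $n$.

```python
# Sketch: leftmost derivation under the CFG S -> aSb | epsilon (our own
# example grammar). Non-terminals are uppercase, terminals lowercase.

RULES = {"S": ["aSb", ""]}

def derive(max_depth):
    """Yield all terminal strings derivable within max_depth rewrites."""
    frontier = ["S"]
    for _ in range(max_depth):
        nxt = []
        for form in frontier:
            # find the leftmost non-terminal, if any
            i = next((k for k, c in enumerate(form) if c.isupper()), None)
            if i is None:
                yield form                       # all terminals: a sentence
                continue
            for rhs in RULES[form[i]]:
                nxt.append(form[:i] + rhs + form[i + 1:])
        frontier = nxt
    for form in frontier:                        # sentences completed last
        if not any(c.isupper() for c in form):
            yield form

print(sorted(set(derive(4)), key=len))   # ['', 'ab', 'aabb', 'aaabbb']
```

Each application of $S \rightarrow aSb$ records one matched $a$/$b$ pair in the derivation itself, rather than in a fixed stock of states; this is the extra memory CFGs have over FSAs.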
Exercises
1. Give a CFG for the language \( L_4 \):
\[
L_4 = \{ \omega \mid \omega = a^n b^m, \text{$n$ is even, $m$ is odd} \}
\]
2. Give a CFG for the language \( L_5 \):
\[
L_5 = \{ \omega \mid \omega = \sigma\sigma^{reverse} \}
\]
Locator/ID Separation Protocol (LISP) Map-Server Interface
Abstract
This document describes the Mapping Service for the Locator/ID Separation Protocol (LISP), implemented by two new types of LISP-speaking devices -- the LISP Map-Resolver and LISP Map-Server -- that provide a simplified "front end" for one or more Endpoint ID to Routing Locator mapping databases.
By using this service interface and communicating with Map-Resolvers and Map-Servers, LISP Ingress Tunnel Routers and Egress Tunnel Routers are not dependent on the details of mapping database systems, which facilitates experimentation with different database designs. Since these devices implement the "edge" of the LISP infrastructure, connect directly to LISP-capable Internet end sites, and comprise the bulk of LISP-speaking devices, reducing their implementation and operational complexity should also reduce the overall cost and effort of deploying LISP.
Status of This Memo
This document is not an Internet Standards Track specification; it is published for examination, experimental implementation, and evaluation.
This document defines an Experimental Protocol for the Internet community. This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc6833.
1. Introduction
The Locator/ID Separation Protocol [RFC6830] specifies an architecture and mechanism for replacing the addresses currently used by IP with two separate name spaces: Endpoint IDs (EIDs), used within sites; and Routing Locators (RLOCs), used on the transit networks that make up the Internet infrastructure. To achieve this separation, LISP defines protocol mechanisms for mapping from EIDs to RLOCs. In addition, LISP assumes the existence of a database to store and propagate those mappings globally. Several such databases have been proposed; among them are the Content distribution Overlay Network Service for LISP (LISP-CONS) [LISP-CONS], LISP-NERD (a Not-so-novel EID-to-RLOC Database) [RFC6837], and LISP Alternative Logical Topology (LISP+ALT) [RFC6836].
The LISP Mapping Service defines two new types of LISP-speaking devices: the Map-Resolver, which accepts Map-Requests from an Ingress Tunnel Router (ITR) and "resolves" the EID-to-RLOC mapping using a mapping database; and the Map-Server, which learns authoritative EID-to-RLOC mappings from an Egress Tunnel Router (ETR) and publishes them in a database.
Conceptually, LISP Map-Servers share some of the same basic configuration and maintenance properties as Domain Name System (DNS) [RFC1035] servers; likewise, Map-Resolvers are conceptually similar to DNS caching resolvers. With this in mind, this specification borrows familiar terminology (resolver and server) from the DNS specifications.
Note that while this document assumes a LISP+ALT database mapping infrastructure to illustrate certain aspects of Map-Server and Map-Resolver operation, the Mapping Service interface can (and likely will) be used by ITRs and ETRs to access other mapping database systems as the LISP infrastructure evolves.
Section 5 of this document notes a number of issues with the Map-Server and Map-Resolver design that are not yet completely understood and are subjects of further experimentation.
The LISP Mapping Service is an important component of the LISP toolset. Issues and concerns about the deployment of LISP for Internet traffic are discussed in [RFC6830].
2. Definition of Terms
Map-Server: A network infrastructure component that learns of EID-Prefix mapping entries from an ETR, via the registration mechanism described below, or some other authoritative source if one exists. A Map-Server publishes these EID-Prefixes in a mapping database.
Map-Resolver: A network infrastructure component that accepts LISP Encapsulated Map-Requests, typically from an ITR, and determines whether or not the destination IP address is part of the EID namespace; if it is not, a Negative Map-Reply is returned. Otherwise, the Map-Resolver finds the appropriate EID-to-RLOC mapping by consulting a mapping database system.
Encapsulated Map-Request: A LISP Map-Request carried within an Encapsulated Control Message, which has an additional LISP header prepended. Sent to UDP destination port 4342. The "outer" addresses are globally routable IP addresses, also known as RLOCs. Used by an ITR when sending to a Map-Resolver and by a Map-Server when forwarding a Map-Request to an ETR.
Negative Map-Reply: A LISP Map-Reply that contains an empty Locator-Set. Returned in response to a Map-Request if the destination EID does not exist in the mapping database. Typically, this means that the "EID" being requested is an IP address connected to a non-LISP site.
Map-Register message: A LISP message sent by an ETR to a Map-Server to register its associated EID-Prefixes. In addition to the set of EID-Prefixes to register, the message includes one or more RLOCs to be used by the Map-Server when forwarding Map-Requests (re-formatted as Encapsulated Map-Requests) received through the database mapping system. An ETR may request that the Map-Server answer Map-Requests on its behalf by setting the "proxy Map-Reply" flag (P-bit) in the message.
Map-Notify message: A LISP message sent by a Map-Server to an ETR to confirm that a Map-Register has been received and processed. An ETR requests that a Map-Notify be returned by setting the "want-map-notify" flag (M-bit) in the Map-Register message. Unlike a Map-Reply, a Map-Notify uses UDP port 4342 for both source and destination.
For definitions of other terms -- notably Map-Request, Map-Reply, Ingress Tunnel Router (ITR), and Egress Tunnel Router (ETR) -- please consult the LISP specification [RFC6830].
3. Basic Overview
A Map-Server is a device that publishes EID-Prefixes in a LISP mapping database on behalf of a set of ETRs. When it receives a Map Request (typically from an ITR), it consults the mapping database to find an ETR that can answer with the set of RLOCs for an EID-Prefix. To publish its EID-Prefixes, an ETR periodically sends Map-Register messages to the Map-Server. A Map-Register message contains a list of EID-Prefixes plus a set of RLOCs that can be used to reach the ETR when a Map-Server needs to forward a Map-Request to it.
When LISP+ALT is used as the mapping database, a Map-Server connects to the ALT network and acts as a "last-hop" ALT-Router. Intermediate ALT-Routers forward Map-Requests to the Map-Server that advertises a particular EID-Prefix, and the Map-Server forwards them to the owning ETR, which responds with Map-Reply messages.
A Map-Resolver receives Encapsulated Map-Requests from its client ITRs and uses a mapping database system to find the appropriate ETR to answer those requests. On a LISP+ALT network, a Map-Resolver acts as a "first-hop" ALT-Router. It has Generic Routing Encapsulation (GRE) tunnels configured to other ALT-Routers and uses BGP to learn paths to ETRs for different prefixes in the LISP+ALT database. The Map-Resolver uses this path information to forward Map-Requests over the ALT to the correct ETRs.
Note that while it is conceivable that a Map-Resolver could cache responses to improve performance, issues surrounding cache management will need to be resolved so that doing so will be reliable and practical. As initially deployed, Map-Resolvers will operate only in a non-caching mode, decapsulating and forwarding Encapsulated Map Requests received from ITRs. Any specification of caching functionality is left for future work.
Note that a single device can implement the functions of both a Map-Server and a Map-Resolver, and in many cases the functions will be co-located in that way.
Detailed descriptions of the LISP packet types referenced by this document may be found in [RFC6830].
4. Interactions with Other LISP Components
4.1. ITR EID-to-RLOC Mapping Resolution
An ITR is configured with one or more Map-Resolver addresses. These addresses are "Locators" (or RLOCs) and must be routable on the underlying core network; they must not need to be resolved through LISP EID-to-RLOC mapping, as that would introduce a circular dependency. When using a Map-Resolver, an ITR does not need to connect to any other database mapping system. In particular, the ITR need not connect to the LISP+ALT infrastructure or implement the BGP and GRE protocols that it uses.
An ITR sends an Encapsulated Map-Request to a configured Map-Resolver when it needs an EID-to-RLOC mapping that is not found in its local map-cache. Using the Map-Resolver greatly reduces both the complexity of the ITR implementation and the costs associated with its operation.
In response to an Encapsulated Map-Request, the ITR can expect one of the following:
- An immediate Negative Map-Reply (with action code of "Natively-Forward", 15-minute Time to Live (TTL)) from the Map-Resolver if the Map-Resolver can determine that the requested EID does not exist. The ITR saves the EID-Prefix returned in the Map-Reply in its cache, marks it as non-LISP-capable, and knows not to attempt LISP encapsulation for destinations matching it.
- A Negative Map-Reply, with action code of "Natively-Forward", from a Map-Server that is authoritative for an EID-Prefix that matches the requested EID but that does not have an actively registered, more-specific EID-Prefix. In this case, the requested EID is said to match a "hole" in the authoritative EID-Prefix. If the requested EID matches a more-specific EID-Prefix that has been delegated by the Map-Server but for which no ETRs are currently registered, a 1-minute TTL is returned. If the requested EID matches a non-delegated part of the authoritative EID-Prefix, then it is not a LISP EID and a 15-minute TTL is returned. See Section 4.2 for discussion of aggregate EID-Prefixes and details of Map-Server EID-Prefix matching.
- A LISP Map-Reply from the ETR that owns the EID-to-RLOC mapping or possibly from a Map-Server answering on behalf of the ETR. See Section 4.4 for more details on Map-Resolver message processing.
Note that an ITR may be configured to both use a Map-Resolver and to participate in a LISP+ALT logical network. In such a situation, the ITR should send Map-Requests through the ALT network for any EID-Prefix learned via ALT BGP. Such a configuration is expected to be very rare, since there is little benefit to using a Map-Resolver if an ITR is already using LISP+ALT. There would be, for example, no need for such an ITR to send a Map-Request to a possibly non-existent EID (and rely on Negative Map-Replies) if it can consult the ALT database to verify that an EID-Prefix is present before sending that Map-Request.
### 4.2. EID-Prefix Configuration and ETR Registration
An ETR publishes its EID-Prefixes on a Map-Server by sending LISP Map-Register messages. A Map-Register message includes authentication data, so prior to sending a Map-Register message, the ETR and Map-Server must be configured with a shared secret or other relevant authentication information. A Map-Server’s configuration must also include a list of the EID-Prefixes for which each ETR is authoritative. Upon receipt of a Map-Register from an ETR, a Map-Server accepts only EID-Prefixes that are configured for that
ETR. Failure to implement such a check would leave the mapping system vulnerable to trivial EID-Prefix hijacking attacks. As developers and operators gain experience with the mapping system, additional, stronger security measures may be added to the registration process.
In addition to the set of EID-Prefixes defined for each ETR that may register, a Map-Server is typically also configured with one or more aggregate prefixes that define the part of the EID numbering space assigned to it. When LISP+ALT is the database in use, aggregate EID-Prefixes are implemented as discard routes and advertised into ALT BGP. The existence of aggregate EID-Prefixes in a Map-Server’s database means that it may receive Map Requests for EID-Prefixes that match an aggregate but do not match a registered prefix; Section 4.3 describes how this is handled.
Map-Register messages are sent periodically from an ETR to a Map-Server with a suggested interval between messages of one minute. A Map-Server should time out and remove an ETR’s registration if it has not received a valid Map-Register message within the past three minutes. When first contacting a Map-Server after restart or changes to its EID-to-RLOC database mappings, an ETR may initially send Map-Register messages at an increased frequency, up to one every 20 seconds. This "quick registration" period is limited to five minutes in duration.
An ETR may request that a Map-Server explicitly acknowledge receipt and processing of a Map-Register message by setting the "want-map-notify" (M-bit) flag. A Map-Server that receives a Map-Register with this flag set will respond with a Map-Notify message. Typical use of this flag by an ETR would be to set it for Map-Register messages sent during the initial "quick registration" with a Map-Server but then set it only occasionally during steady-state maintenance of its association with that Map-Server. Note that the Map-Notify message is sent to UDP destination port 4342, not to the source port specified in the original Map-Register message.
Note that a one-minute minimum registration interval during maintenance of an ETR-Map-Server association places a lower bound on how quickly and how frequently a mapping database entry can be updated. This may have implications for what sorts of mobility can be supported directly by the mapping system; shorter registration intervals or other mechanisms might be needed to support faster mobility in some cases. For a discussion on one way that faster mobility may be implemented for individual devices, please see [LISP-MN].
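The registration timing rules above can be sketched as follows. The constants come from the stated one-minute suggested interval and three-minute timeout; the function and ETR names are our own illustration, not part of the specification.

```python
# Sketch of Map-Server registration liveness, using the constants stated
# in this section: ETRs send Map-Registers roughly every 60 s, and a
# Map-Server removes a registration not refreshed within 180 s.

REGISTER_INTERVAL = 60    # suggested interval between Map-Register messages
REGISTRATION_TTL = 180    # "within the past three minutes"

def live_registrations(last_seen, now):
    """Return the ETRs whose last valid Map-Register is still fresh.

    last_seen maps an ETR name to the time (seconds) of its last
    valid Map-Register; 'now' is the current time in seconds.
    """
    return {etr for etr, t in last_seen.items() if now - t <= REGISTRATION_TTL}

# Hypothetical ETRs: etr-b has missed three registration intervals.
last_seen = {"etr-a": 900, "etr-b": 700}
assert live_registrations(last_seen, 1000) == {"etr-a"}
```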
Fuller & Farinacci Experimental [Page 7]
An ETR may also request, by setting the "proxy Map-Reply" flag (P-bit) in the Map-Register message, that a Map-Server answer Map-Requests instead of forwarding them to the ETR. See [RFC6830] for details on how the Map-Server sets certain flags (such as those indicating whether the message is authoritative and how returned Locators should be treated) when sending a Map-Reply on behalf of an ETR. When an ETR requests proxy reply service, it should include all RLOCs for all ETRs for the EID-Prefix being registered, along with the routable flag ("R-bit") setting for each RLOC. The Map-Server includes all of this information in Map-Reply messages that it sends on behalf of the ETR. This differs from a non-proxy registration, since the latter need only provide one or more RLOCs for a Map-Server to use for forwarding Map-Requests; the registration information is not used in Map-Replies, so it being incomplete is not incorrect.
An ETR that uses a Map-Server to publish its EID-to-RLOC mappings does not need to participate further in the mapping database protocol(s). When using a LISP+ALT mapping database, for example, this means that the ETR does not need to implement GRE or BGP, which greatly simplifies its configuration and reduces its cost of operation.
Note that use of a Map-Server does not preclude an ETR from also connecting to the mapping database (i.e., it could also connect to the LISP+ALT network), but doing so doesn’t seem particularly useful, as the whole purpose of using a Map-Server is to avoid the complexity of the mapping database protocols.
4.3. Map-Server Processing
Once a Map-Server has EID-Prefixes registered by its client ETRs, it can accept and process Map-Requests for them.
In response to a Map-Request (received over the ALT if LISP+ALT is in use), the Map-Server first checks to see if the destination EID matches a configured EID-Prefix. If there is no match, the Map-Server returns a Negative Map-Reply with action code "Natively-Forward" and a 15-minute TTL. This may occur if a Map-Request is received for a configured aggregate EID-Prefix for which no more-specific EID-Prefix exists; it indicates the presence of a non-LISP "hole" in the aggregate EID-Prefix.
Next, the Map-Server checks to see if any ETRs have registered the matching EID-Prefix. If none are found, then the Map-Server returns a Negative Map-Reply with action code "Natively-Forward" and a 1-minute TTL.
If any of the registered ETRs for the EID-Prefix have requested proxy reply service, then the Map-Server answers the request instead of forwarding it. It returns a Map-Reply with the EID-Prefix, RLOCs, and other information learned through the registration process.
If none of the ETRs have requested proxy reply service, then the Map-Server re-encapsulates and forwards the resulting Encapsulated Map-Request to one of the registered ETRs. It does not otherwise alter the Map-Request, so any Map-Reply sent by the ETR is returned to the RLOC in the Map-Request, not to the Map-Server. Unless also acting as a Map-Resolver, a Map-Server should never receive Map-Replies; any such messages should be discarded without response, perhaps accompanied by the logging of a diagnostic message if the rate of Map-Replies is suggestive of malicious traffic.
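The Map-Server decision procedure described in this section can be sketched as follows. The class, method names, and data model are illustrative assumptions for a toy model, not code from any real LISP implementation:

```python
# Hypothetical sketch of Map-Server Map-Request processing (Section 4.3).
# All names and the data model are illustrative, not from a real implementation.
import ipaddress

NATIVELY_FORWARD = "Natively-Forward"

class MapServer:
    def __init__(self):
        self.configured_prefixes = []   # list of configured EID-Prefixes
        self.registrations = {}         # network -> {"proxy": bool, "rlocs": [...]}

    def configure_prefix(self, prefix):
        self.configured_prefixes.append(ipaddress.ip_network(prefix))

    def register(self, prefix, rlocs, proxy=False):
        self.registrations[ipaddress.ip_network(prefix)] = {
            "proxy": proxy, "rlocs": rlocs}

    def handle_map_request(self, eid):
        eid = ipaddress.ip_address(eid)
        # 1. No configured EID-Prefix matches: negative reply, 15-minute TTL.
        if not any(eid in p for p in self.configured_prefixes):
            return ("Negative-Map-Reply", NATIVELY_FORWARD, 15 * 60)
        # 2. Configured but no ETR has registered it: negative reply, 1-minute TTL.
        matches = [p for p in self.registrations if eid in p]
        if not matches:
            return ("Negative-Map-Reply", NATIVELY_FORWARD, 60)
        reg = self.registrations[max(matches, key=lambda p: p.prefixlen)]
        # 3. Proxy reply requested: answer with the registered RLOC set.
        if reg["proxy"]:
            return ("Map-Reply", reg["rlocs"])
        # 4. Otherwise forward the Encapsulated Map-Request to a registered ETR;
        #    the ETR replies directly to the ITR's RLOC, not to the Map-Server.
        return ("Forward-Encapsulated-Map-Request", reg["rlocs"][0])
```

Note how the two negative-reply cases use different TTLs, matching the 15-minute "non-LISP hole" case and the 1-minute "no registration" case above.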
4.4. Map-Resolver Processing
Upon receipt of an Encapsulated Map-Request, a Map-Resolver decapsulates the enclosed message and then searches for the requested EID in its local database of mapping entries (statically configured or learned from associated ETRs if the Map-Resolver is also a Map-Server offering proxy reply service). If it finds a matching entry, it returns a LISP Map-Reply with the known mapping.
If the Map-Resolver does not have the mapping entry and if it can determine that the EID is not in the mapping database (for example, if LISP+ALT is used, the Map-Resolver will have an ALT forwarding table that covers the full EID space), it immediately returns a negative LISP Map-Reply, with action code "Natively-Forward" and a 15-minute TTL. To minimize the number of negative cache entries needed by an ITR, the Map-Resolver should return the least-specific prefix that both matches the original query and does not match any EID-Prefix known to exist in the LISP-capable infrastructure.
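The least-specific-prefix selection described above can be sketched as a small search. The function name and the brute-force strategy (testing successively longer prefixes) are illustrative assumptions, not an algorithm mandated by the specification:

```python
# Illustrative sketch of choosing the least-specific negative-reply prefix:
# the shortest prefix that covers the queried EID but overlaps no EID-Prefix
# known to exist in the LISP-capable infrastructure.
import ipaddress

def negative_prefix(eid, known_eid_prefixes):
    """Shortest prefix covering `eid` that overlaps no known EID-Prefix."""
    addr = ipaddress.ip_address(eid)
    known = [ipaddress.ip_network(p) for p in known_eid_prefixes]
    host = ipaddress.ip_network(f"{addr}/{addr.max_prefixlen}")
    for plen in range(addr.max_prefixlen + 1):
        candidate = host.supernet(new_prefix=plen)
        if not any(candidate.overlaps(k) for k in known):
            return candidate
    # The EID itself lies inside a known LISP prefix: no negative entry applies.
    return host
```

For example, a query for 11.0.0.1 when 10.0.0.0/8 is a known LISP prefix yields 11.0.0.0/8, letting the ITR cache a single negative entry for the whole neighboring block.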
If the Map-Resolver does not have sufficient information to know whether the EID exists, it needs to forward the Map-Request to another device that has more information about the EID being requested. To do this, it forwards the unencapsulated Map-Request, with the original ITR RLOC as the source, to the mapping database system. Using LISP+ALT, the Map-Resolver is connected to the ALT network and sends the Map-Request to the next ALT hop learned from its ALT BGP neighbors. The Map-Resolver does not send any response to the ITR; since the source RLOC is that of the ITR, the ETR or Map-Server that receives the Map-Request over the ALT and responds will do so directly to the ITR.
4.4.1. Anycast Map-Resolver Operation
A Map-Resolver can be set up to use "anycast", where the same address is assigned to multiple Map-Resolvers and is propagated through IGP routing, to facilitate the use of a topologically close Map-Resolver by each ITR.
Note that Map-Server associations with ETRs should not use anycast addresses, as registrations need to be established between an ETR and a specific set of Map-Servers, each identified by a specific registration association.
5. Open Issues and Considerations
There are a number of issues with the Map-Server and Map-Resolver design that are not yet completely understood. Among these are:
- Constants, such as those used for Map-Register frequency, retransmission timeouts, retransmission limits, and Negative Map-Reply TTLs, are subject to further refinement as more experience with prototype deployment is gained.
- Convergence time when an EID-to-RLOC mapping changes, and mechanisms for detecting and refreshing or removing stale, cached information.
- Deployability and complexity tradeoffs of implementing stronger security measures in both EID-Prefix registration and Map-Request/Map-Reply processing.
- Requirements for additional state in the registration process between Map-Servers and ETRs.
A discussion of other issues surrounding LISP deployment may also be found in Section 15 of [RFC6830].
The authors expect that experimentation on the LISP pilot network will help answer open questions surrounding these and other issues.
6. Security Considerations
The 2-way LISP header nonce exchange documented in [RFC6830] can be used to avoid ITR spoofing attacks.
To publish an authoritative EID-to-RLOC mapping with a Map-Server, an ETR includes authentication data that is a hash of the message using a pair-wise shared key. An implementation must support use of HMAC-SHA-1-96 [RFC2104] and should support use of HMAC-SHA-256-128 [RFC6234] (SHA-256 truncated to 128 bits).
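A minimal sketch of computing truncated authentication data with Python's standard `hmac` module. The function names are made up for illustration, and it assumes (as is conventional for such schemes) that the hash is computed over the message with the authentication-data field zeroed:

```python
# Sketch of the two required/recommended authentication-data computations:
# HMAC-SHA-1-96 (SHA-1 HMAC truncated to 96 bits) and
# HMAC-SHA-256-128 (SHA-256 HMAC truncated to 128 bits).
# Function names are hypothetical; `message` is assumed to be the
# Map-Register bytes with the authentication-data field zeroed.
import hmac
import hashlib

def auth_sha1_96(shared_key: bytes, message: bytes) -> bytes:
    full = hmac.new(shared_key, message, hashlib.sha1).digest()  # 20 bytes
    return full[:12]  # keep the leftmost 96 bits

def auth_sha256_128(shared_key: bytes, message: bytes) -> bytes:
    full = hmac.new(shared_key, message, hashlib.sha256).digest()  # 32 bytes
    return full[:16]  # keep the leftmost 128 bits
```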
During experimental and prototype deployment, all authentication key configuration will be manual. Should LISP and its components be considered for IETF standardization, further work will be required to follow the BCP 107 [RFC4107] recommendations on automated key management.
As noted in Section 4.2, a Map-Server should verify that all EID-Prefixes registered by an ETR match the configuration stored on the Map-Server.
The currently defined authentication mechanism for Map-Register messages does not provide protection against "replay" attacks by a "man-in-the-middle". Additional work is needed in this area.
[LISP-SEC] defines a proposed mechanism for providing origin authentication, integrity, anti-replay protection, and prevention of man-in-the-middle and "overclaiming" attacks on the Map-Request/Map-Reply exchange. Work is ongoing on this and other proposals for resolving these open security issues.
While beyond the scope of securing an individual Map-Server or Map-Resolver, it should be noted that a BGP-based LISP+ALT network (if ALT is used as the mapping database infrastructure) can take advantage of standards work on adding security to BGP.
7. References
7.1. Normative References
7.2. Informative References
Appendix A. Acknowledgments
The authors would like to thank Gregg Schudel, Darrel Lewis, John Zwiebel, Andrew Partan, Dave Meyer, Isidor Kouvelas, Jesper Skriver, Fabio Maino, and members of the lisp@ietf.org mailing list for their feedback and helpful suggestions.
Special thanks are due to Noel Chiappa for his extensive work on caching with LISP-CONS, some of which may be used by Map-Resolvers.
Authors’ Addresses
Vince Fuller
EMail: vaf@vaf.net
Dino Farinacci
Cisco Systems
Tasman Drive
San Jose, CA 95134
USA
EMail: farinacci@gmail.com
Generic Timer Module Configuration Tool
AMF-ENT-T1039
John H. Floros
Senior Software Engineer
Agenda
• **Overview: 40 minutes**
- Introduction and Objectives
- Overview of Generic Timer Module
- Overview of GTM Configuration Tool
- Matterhorn/GTM and GTM Configuration Tool Interaction
• **GTM Configuration Tool Examples Demos: 70 minutes**
- Building code with the GTM Configuration Tool
- TOM Example Demos: SIMPLE_IRQ
- ATOM Example Demo: ATOM_SOMP
- DPLL Example Demo: DPLL_DPLL_ACTION
• **Summary and Q&A: 10 minutes**
Introduction: WHAT DO WE DO?
- The Automotive Silicon Support Tools group’s objective is to develop software enablement tools to assist our customers with rapid prototyping and accelerate algorithm development on their target Freescale MCU.
- This includes software tools ranging from those that automatically generate peripheral initialization code through GUI configuration to those that generate peripheral driver code from a Model-Based Design environment like Simulink™.
Objectives
• After this GTM Training, you will know:
− What the Generic Timer Module (GTM) is and what features it contains for Powertrain and Motor Control applications
− How the GTM and MCU interact at a high level
− What the GTM configuration tool can do to help accelerate the ramp up and development of Qorivva MCUs that contain the GTM IP.
− Understand the flexibility of being able to customize the GTM configuration tool to fit your coding conventions with the use of project settings and template files.
Overview - GTM Motivation
• In Powertrain applications there are two dominant timer implementation methods:
- One uses a peripheral timer approach, where the timer module consists of capture/compare units and counters (e.g., eMIOS/eTimer). The drawback of this approach is that the main core must service interrupts from the timer module, which interrupts other processing.
- The second approach is more software-processing oriented, with a programmable micro-machine or co-processor that fulfills timer-specific tasks (e.g., eTPU). This approach often has lower resolution for signal processing and is difficult to program because of its special instruction set.
• The GTM combines both approaches. While some submodules fulfill a specific function in hardware, a RISC-like processing engine built into the GTM performs signal processing and flexible signal generation using a special instruction set. The peripheral timer submodules thus offer real-time processing capabilities, while the RISC-like processing engine adds flexibility to the GTM.
Overview - GTM Purpose
- The GTM is a large scalable timer with a modular design and a central routing unit allowing for flexibility in channel numbers and application specific modules.
- Designed for 4 to 8 cylinder applications
- Powertrain, transmission control and some motor control
- Specific application sub-modules include Angle clock hardware, safety functions and motor commutation sub-modules
- The GTM's purpose is to offload the I/O core by allowing tasks to run independently after a run-once setup at MCU initialization.
Overview - GTM Architecture
- The GTM is built up from various sub-modules with each having dedicated functionality (six types):
- Data movement (ARU, BRC & PSM)
- Time base (TBU & CMU)
- I/O Modules (TIM, TOM & ATOM)
- Programmable core (MCS & MCFG)
- Special purpose sub-modules (DPLL, MAP & SPE)
- Safety related (CMP & MON)
- Submodules can be combined through ARU in a scalable and configurable manner to implement complex timer systems suitable for many different applications.
Overview - GTM Example
- Diagram of In Cylinder Pressure Sensing in angle domain using GTM and ADC without any CPU load.
Overview - GTM Benefits
• Most functions performed in parallel with dedicated hardware units, ensuring simple latency calculations.
• Reduced interrupt load removing the need for low-latency interrupts.
• CPU can be run with a slow clock in low-end projects giving low power dissipation and low EMI.
• Less data traffic between CPU and GTM due to dedicated hardware – Timers, ARU, programmable cores and engine position hardware.
- Each Source has a unique and fixed address
- ARU configures the destination address
- Data from a Source can only go to one Destination (See BRC)
Overview - Broadcast Module (BRC)
- Duplicates data streams
- 12 inputs, 22 outputs
Overview - CTBM (TBU, DPLL, CMU, MAP)
- Clock and Time Base Management Sub-modules
- Time Base Unit
- Generates 3 time bases
- Clock Management Unit
- Generates 12 sub module clocks
- Generates 3 external clocks
- Digital PLL Module
- Frequency multiplier, i.e., increased precision of position
- TIM0 Input Mapping Module
- Generates TRIGGER and STATE for DPLL
Overview - Parameter Storage Module (PSM)
- Comprised of 3 sub units
- AEI-to-FIFO Data Interface (AFD)
- FIFO (a.k.a RFO)
- FIFO-to-ARU Interface (F2A)
- Data storage for incoming data characteristics or as parameter storage for outgoing data
- This data is stored in RAM that is logically located inside the FIFO subunit
Overview - Timer Units
• TIM – Timer Input Module
- Filters and captures input signals e.g.
- Time stamp of event
- Number of edges
- PWM duration or period
- Can be routed through ARU and SPE for processing
• TOM – Timer Output Module
- PWM generator
- Linked to SPE
• ATOM – ARU connected TOM
- Generates complex output signals
- Output signal characteristics routed through DPLL or MCS or PSM
Overview - Multi Channel Sequencer (MCS)
- This Data Processing Module can calculate complex output sequences based on TBU and ATOM
- There are Custom 24-bit RISC-like programmable cores inside of the GTM.
- Fine grain temporal multi-threaded
- Von Neumann (common bus) architecture
- 32-bit fixed instruction width
- Optimized instruction set
- 24-bit data operations
- Channel flow control
- Simple and complex triggering between channel
- Processes TIM data
- Process CPU sourced data
- These cores have their own internal RAM where the code and data can be stored.
Overview - Memory Configuration (MCFG)
- Organizes physical memory blocks and maps them to MCS submodules
- Defaults to 6K (4K + 2K) for each instance of MCS
- SWAP and BORROW for more or less memory
Overview - Sensor Pattern Evaluation (SPE)
- Supports BLDC motors by evaluating Hall sensor inputs (TIM x3)
- Drive BLDC with TOM
- Can be used at input to MAP for electric engine control
Overview - Deadtime Module (DTM)
- Deadtime generation in hardware to support motor control
- Electrical drive, charger control and sensor evaluation
- PWM Phase Shifting
Overview - Output Compare Unit (CMP)
- For use in Safety applications
- Ensures duplicate outputs match, if not generates an error
Overview - Monitor Unit (MON)
- Supervisor for use in Safety applications
- Monitors ARU and CMU (via Activity Checker – AC) activities
Overview - GTM Configuration Tool
• The GTM Configuration Tool is developed by Freescale to help enable the development of software for Freescale MCUs that contain the GTM. It allows the user to configure the GTM through a Graphical User Interface (GUI) and then uses those settings to automatically generate GTM initialization code, which is run once by the CPU at initialization.
• The GTM Configuration Tool provides the means to configure all the individual register sets, with the goal of supporting specialized timer input and output signals for specific applications such as 4- to 8-cylinder powertrain, transmission, and motor control, including angle clock hardware and motor commutation.
• The GTM Configuration Tool is Eclipse based and is available as a standalone installation or a plug-in to existing Eclipse based Integrated Development Environments.
• The tool will allow the user to save multiple configurations in project files that can be later recalled. The user will also be able to export and import signals within projects to allow reuse in other GTM Configuration Tool projects.
• As part of the GTM Tool install package there are many examples included with pre-written CPU start up code to allow the user with the use of the examples to start seeing signals using the GTM very quickly. The examples support builds for the main Qorivva Compilers.
The GUI layout is in three sections:
**Project** – Project, Sub-Module Navigation, and Signal Explorer.
**Signal/Data Flow** – Will contain the signals/data flows built for a project and what elements are used.
**Register** – This is where the user manipulates the registers of the GTM to obtain the desired signal/data flow.
Overview - GTM Configuration Tool Project Section
Project Section
The Project Section will contain the list of projects that are open, through the use of panes. Under each project there will be the following panes:
Sub-Module Navigator
Will contain a list of all the sub-modules down to the channel level, so that the user can select the desired element and drag it to the Signal Section to activate it for use and configuration.
Signal Explorer
Will contain a list of all the signals/data flows constructed by the user, down to the sub-module level, so that the user can view/select all of the signals and channels used in a project in one view.
**Signal Section**
The Signal Section will contain the signals of a project with each signal on a different pane. The user is able to create new signals and name them. When a signal pane is selected the corresponding register panes become active in the Register Section. Register panes not associated with the signal are not shown. Under each signal there will be the following panes:
- **Description**
Will contain a list of all the sub-modules down to the channel level that the user has selected for their signal/data flow. When elements are selected in the navigation tree, the corresponding register tab becomes active and is displayed for the user to configure. A text box is also available for the user to enter a description of the signal/data flow.
- **Block Diagram**
Will contain a block diagram of the GTM and will highlight only the blocks that the signal configuration is using. The rest of the blocks in the diagram will be grayed out.
Register Section
Contains the panes for sub-modules down to the channel level. When a signal pane is selected, the corresponding register panes become visible and selectable. Register panes not associated with the signal are not visible. Other panes in the Register Section are:
Description
Contains the register/bit description of the last configurable element that the mouse pointer was/is hovering over.
Problems
Contains the results of the last consistency check performed, which is a list of settings that the user may not have considered when configuring the GTM. It also generates links to the flagged settings, taking the user directly to the problem location.
Console
This pane contains the results of the last code generation performed (e.g., which files were generated, the location of generation, and any problems that occurred).
Project Preferences
This menu, which is accessible from the GTM Tool pull-down, allows the user to customize how the code is generated. The user can modify the file names for the generated code files as well as the function names.
The user can also create their own code template file location to allow customization of the initialization code itself while still preserving the original template files that came with the GTM tool.
You can then point the GTM tool to use the default code templates for code generation or the user specific code templates.
## Overview - GTM Configuration Tool Toolbar
<table>
<thead>
<tr>
<th>Button</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Create New Project</td>
<td>Invokes the New GTM Project wizard for project parameters definition.</td>
</tr>
<tr>
<td>Load Project</td>
<td>Opens the Load Project dialog, the project can be selected.</td>
</tr>
<tr>
<td>Save All Signals</td>
<td>Saves all signals in current selected project.</td>
</tr>
<tr>
<td>Close Project</td>
<td>This command closes the currently selected project.</td>
</tr>
<tr>
<td>Create Signal</td>
<td>Creates new signal.</td>
</tr>
<tr>
<td>Import Signal</td>
<td>Imports signal.</td>
</tr>
<tr>
<td>Export Signal</td>
<td>Exports signal.</td>
</tr>
<tr>
<td>Delete Signal</td>
<td>Deletes signal.</td>
</tr>
<tr>
<td>Generate Code</td>
<td>Start code generation for the project.</td>
</tr>
<tr>
<td>Consistency Check</td>
<td>Start consistency checking.</td>
</tr>
<tr>
<td>Help</td>
<td>Open GTM tool help</td>
</tr>
</tbody>
</table>
The GTM Configuration Tool generates software to download an image into the MCS cores of the GTM. The tool does not generate a source image for the MCS cores; however, it does generate initialization code for the MCS configuration registers.
The tool does not create a build environment for the user, but the auto-generated software is tested for functionality on Qorivva compilers. As mentioned previously, the example code provided with the installation includes build and make support for the main Qorivva compilers.
Consistency-check support is available to show the user a list of settings that may not have been considered when configuring the GTM.
Key Functional Characteristics
- Two independent 300 MHz Power Architecture z7 computational cores
- Single 300 MHz Power Architecture z7 core in delayed lockstep for ASIL-D safety
- Single I/O 200 MHz Power Architecture z4 core
- eDMA controller – 128 channels
- 8M Flash with ECC
- 596k total SRAM with ECC
- 404k of system RAM (incl. 64k standby)
- 192k of tightly coupled data RAM
- 10 ΣΔ & 12 SAR converters – 84 channels
- Ethernet (MII/RMII)
- DSPI – 8 channels (3 supporting µSec ch.)
- LINFlex - 6 channels (3 supporting µSec ch.)
- MCAN-FD/TTCAN – 4x modules/1x module
- GTM – 248 timer channels
Key Electrical Characteristics
- -40 to +125 °C (ambient)
- 165 °C junction for KGD
- 1.26V Vdd, 5.0V I/O, 5V ADC
Package
- 292 PBGA, 416 PBGA, 512 PBGA
- eCal emulation device for each package
The GTM subsystem requires 2 clocks to operate.
All GTM I/O signals are connected via the multiplexing in SIU to pins on the device.
- The System Integration Unit Lite2 (SIUL2) provides control over all the I/O ports on this device and supports 13 ports with 16-bits of bidirectional, general-purpose input and output signals.
- Peripheral and input multiplexing assignments
Table in the McKinley RM details the MSCR
- See TIM 0 CH0 example here
### Table 271. Peripheral and input multiplexing assignments
<table>
<thead>
<tr>
<th>Instance</th>
<th>Input</th>
<th>SIUL2 MSCR Register</th>
<th>Source Signal Select (MSCR[SSS])</th>
<th>Source Instance</th>
<th>Source Signal</th>
</tr>
</thead>
<tbody>
<tr>
<td>GTM</td>
<td>TIM0_0</td>
<td>512</td>
<td>0000_0000</td>
<td>-</td>
<td>disable low</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_0001</td>
<td>IO_PAD</td>
<td>PD[14]</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_0010</td>
<td>IO_PAD</td>
<td>PE[4]</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_0011</td>
<td>IO_PAD</td>
<td>PF[1]</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_0100</td>
<td>IO_PAD</td>
<td>PB[9]</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_0101</td>
<td>IO_PAD</td>
<td>PF[10]</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_0110</td>
<td>IO_PAD</td>
<td>PF[13]</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_0111</td>
<td>IO_PAD</td>
<td>PH[3]</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_1000</td>
<td>IO_PAD</td>
<td>PH[10]</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_1001</td>
<td>IO_PAD</td>
<td>PH[13]</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_1010</td>
<td>IO_PAD</td>
<td>PM[9]</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0000_1011 - 0010_1111</td>
<td>-</td>
<td>Reserved</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>0011_0000</td>
<td>SDADC_0</td>
<td>WATCHDOG_LIMIT_TRIGGER</td>
</tr>
</tbody>
</table>
GTM Example Projects
- You can immediately start generating code with the GTM Configuration Tool by opening one of the available examples.
GTM Loading an Example Project
- Select GTM > Load Project to open the load dialog. All GTM Configuration Tool examples are located at: {GTM tool installation root}\Examples.
- Use the Browse button and select the examples directory.
- The list of available example projects will be displayed, select the project(s) you would like to use.
- Click Finish.
GTM Creating a New Project
• Specify the project name in the **Project name** field; a directory with this name will be created, so make sure the project name is convenient for project configuration management and the build system.
• The project location can either be specified explicitly in the **Location** field, or the defined workspace can be used – in that case, check the **Use default location** checkbox.
• In the **Select project device** pull-down list, select the required device name. The list of devices might depend on the tool version.
• Click **Finish** when all parameters are specified; the new project will be listed in the **Project view**.
GTM Generating Code
• After opening an example, invoke the code generation (GTM > Generate) of the project to obtain all sources. In the Code Generation Wizard dialog select the project and click Finish.
• The results of the build are displayed in the Console pane.
The generated code (outlined in Red) is placed in the Code sub-folder of the project.
Building a project using example provided
- The example projects contain main application code which initializes the MCU, some pin configuration used by the example, etc.
- To build the executable, the Examples directory contains build_<compiler>.bat files and makefiles that are used to generate a binary. The parameter to the batch file is the path to the project.
Building a project using example provided make
- For example, to build the SIMPLE_IRQ project for the TOM module with Greenhills compiler, type the following command:
```
build_ghs.bat tom\SIMPLE_IRQ
```
- When the build is complete the executable files will be created in the bin subdirectory.
Timer Output Module (TOM)
- Contains 16 output channels
- Arranged into 2 8-channel groups for synchronous PWM output
- Channels are 16-bits wide
- Generates PWM outputs
- 0% and 100% duty cycles supported, up to 20 kHz with 0.025% resolution
- Complex triggering supports start/stop of PWM signals, output enable/disable, update of PWM period and duty values (synchronous and asynchronous), and forced immediate update
- 5 dedicated prescaler clocks can be used
- Greater than or equal compares used
- One dedicated channel to generate pulse code modulated (PCM) signal
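A quick sanity check of the duty-cycle figures quoted above: the resolution is one counter tick per PWM period, so it is set by the ratio of counter clock to PWM frequency. The 80 MHz counter clock below is an assumption chosen to reproduce the quoted 0.025% at 20 kHz, not a documented GTM clock value:

```python
# Back-of-the-envelope check of the quoted PWM figures.
# With a counter clock f_clk, a PWM of frequency f_pwm has
# f_clk / f_pwm counter steps per period, so one step corresponds
# to a duty-cycle change of f_pwm / f_clk.

def duty_resolution_percent(f_clk_hz: float, f_pwm_hz: float) -> float:
    steps = f_clk_hz / f_pwm_hz   # counter ticks per PWM period
    return 100.0 / steps          # one tick, as a percentage of the period
```

An 80 MHz counter clock at 20 kHz gives 4000 steps per period, i.e., the quoted 0.025% per step; lower PWM frequencies yield proportionally finer resolution.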
One Channel Architecture
Counter Compare Unit 0 (CCU0)
Counter Compare Unit 1 (CCU1)
Signal Generation Unit (SOU)
Output logic
3 key registers:
CN0 is the timer counter
CM0 is the compare (frequency/period) register
CM1 is the duty cycle control register
TOM – PWM generator
- Complex triggers sources:
- TBU0,1,2 values
- CPU
- Internal triggers
- Complex trigger mechanism for synchronous:
- Start/Stop of PWM signal generation
- Output enable/disable functionality
- Update of period and duty cycle
- Forced Immediate Update
- Synchronous and asynchronous update of duty cycle
- Continuous and single shot mode of operation
TOM SIMPLE_IRQ Demo
• Description:
− This demo will initialize TOM 0 Channel 0 to generate a PWM with frequency 2500 Hz and duty cycle 50%. Interrupts occur on the CCU0 and CCU1 triggers.
• Setup:
− Configure GTM Configuration Tool (see next slides)
− Build the application with one of the following commands, depending on the compiler:
▪ build_ghs.bat tom\SIMPLE_IRQ_Demo
▪ build_diab.bat tom\SIMPLE_IRQ_Demo
▪ build_htc.bat tom\SIMPLE_IRQ_Demo
− Load the application into the target and start execution.
Clocks are derived from main GTM-IP module input clock.
Generates 13 sub module and 3 external clocks.
3 sub units:
- External Generation Unit (EGU)
- Configurable Clock Generation Subunit (CFGU)
- Fixed Clock Generation (FXU)
TOM SIMPLE_IRQ Demo – CMU
- Generates 8 sub module clocks
- TIM, ATOM, TBU and MON
- Global Clock Divider like External Clock Divider
- Clock Source n Divider changes duty cycle
- CH6 and CH7 differ from other channels as SUB_INCn [DPLL] can be used as a clock enable
\[ T_{SYS\_CLK} = \frac{1}{f_{SYS\_CLK}} \]
\[ T_{CMU\_CLK[x]} = \frac{1}{f_{CMU\_CLK[x]}} \]
TOM SIMPLE_IRQ Demo – Time Base Unit
• Provides common/global time bases for GTM
• The time base channels can run independently of each other and can be enabled and disabled synchronously by control bits in a global TBU channel enable register
TOM SIMPLE_IRQ Demo
Summary of steps:
1. Create New Project “SIMPLE_IRQ_DEMO”
2. Create a New Signal “PWM”
3. Drag in the CMU Fixed Clock Signal to be used as the clock for PWM
4. Drag in the TOM channel to configure the output of the PWM
5. Perform Consistency Check
6. Correct any Errors
7. Generate Code
TOM SIMPLE_IRQ Demo – Create Signal PWM
This demo will initialize TOM 0 Channel 0 to generate PWM with frequency 2500 Hz and duty cycle 50%. Interrupts occur on the CCU0 and CCU1 triggers.
TOM SIMPLE_IRQ Demo – Drag in the CMU
Enable all fixed clock signal
FXCLK_SEL - Clock source selection: CMU_GCLK_EN
CMU_FCLK0 Clock Frequency (Hz) 4000000
CMU_FCLK1 Clock Frequency (Hz) 2600000.0
CMU_FCLK2 Clock Frequency (Hz) 168250.0
CMU_FCLK3 Clock Frequency (Hz) 9785.625
CMU_FCLK4 Clock Frequency (Hz) 610.36087566785689
This demo will initialize TOM 0 Channel 0 to generate PWM with frequency 2500 Hz and duty cycle 50%. Interrupts occur on the CCU0 and CCU1 triggers.
TOM SIMPLE_IRQ Demo – Drag in the TOM CH
This demo will initialize TOM 0 Channel 0 to generate PWM with frequency 2500 Hz and duty cycle 50%. Interrupts occur on the CCU0 and CCU1 triggers.
TOM SIMPLE_IRQ Demo – Configure TOM CH
TOM SIMPLE_IRQ Demo – Consistency Check
Consistency check completed with no errors! 08-22-2013 04:32:07
TOM SIMPLE_IRQ Demo – Consistency Check
Consistency check completed with 1 error(s)! 08-22-2013 04:33:55
1: Clock source CMU signal CMU_FXCLK for TOM CLK_SRC_SR is disabled
TOM SIMPLE_IRQ Demo – Generate Code
---
Code Generation: Success
Created 95 Files
Consistency Check Errors: 0
Code Generation Errors: 0
TOM SIMPLE_IRQ Demo – Program and Run
• Results on Oscilloscope:
- PB[9] – Pin control by PWM TOM 0 channel 0 output
- PA[1] – Pin Toggles when TOM 0 CCU0 trigger occurs
- PA[3] – Pin Toggles when TOM 0 CCU1 trigger occurs
ATOM – ARU Connected TOM
- Contains 8 output channels
- Similar to TOM channels except
- 24-bits wide
- Connected to ARU
- More modes and complex triggers
- 4 channel output modes
- Signal Output Mode PWM (SOMP) – TOM mode
- Signal Output Mode Immediate (SOMI)
- Signal Output Mode Compare (SOMC)
- Signal Output Mode Serial (SOMS)
- As on TOM, complex trigger mechanism for **synchronous**:
- Start/Stop of PWM signal generation
- Output enable/disable functionality
- Update of PWM signal characteristics
The Signal Output Mode PWM (SOMP) is principally the same as the output mode for the TOM sub module except PCM mode is not included in the ATOM.
It is possible to reload the shadow registers over the ARU without the need of a CPU interaction.
ATOMIC SOMP Demo
• Description:
- Sample initializes ATOM 0 Channel 0 and Channel 1 to SOMP mode.
- ATOM0 CH1 gets signal parameters from FIFO CH0.
• Setup:
- Configure GTM Configuration Tool (see next slides)
- Build the application with one of the following commands, depending on the compiler:
- build_ghs.bat atom\SOMP
- build_diab.bat atom\SOMP
- build_htc.bat atom\SOMP
- Load the application into the target and start execution.
ATOM SOMP Demo Uses the PSM – AFD
- Data interface between the AEI bus and the FIFO, consisting of eight logical FIFO channels
- One Buffer Access Register, AFD[i]_CH[x]_BUFF_ACC, through which the CPU accesses each channel.
Configurable (per ch) storage unit:
- Size (start and end address)
- FIFO operation modes (normal or ring buffer operation mode)
- Fill level control / memory region read protection
Organized as a single RAM (29 bits wide) which is mapped into the address space of the MCU
In ring buffer mode the FIFO provides a continuous data stream to the F2A sub module. The first word of the FIFO is delivered first and after the last word is provided by the FIFO to the ARU, the first word can be obtained again.
If the application needs to change some data inside the continuous data stream this can be done through direct memory access provided by the FIFO AEI interface.
The F2A has to distribute data from and to the FIFO channels in a configurable manner.
Data transfer between FIFO and ARU is organized with eight different streams that are connected to the eight different channels of the corresponding FIFO module.
Within these streams the F2A can transmit/receive the lower, the upper or both 24 bit values of the ARU together with the ARU control bits according to the configuration.
ATOM SOMP Demo Data Transfer Overview
Summary of steps:
1. Create New Project “SOMP_DEMO”
2. Create a New Signal “PWM”
3. Drag in the CMU Configurable Clock Signal to be used as the clock for ATOM channel.
4. Drag in the ATOM channel to configure the output of the PWM
5. Perform Consistency Check
6. Correct any Errors
7. Generate Code
ATOM SOMP Demo – Create Signal PWM
ATOM SOMP Demo – Drag in the CMU
ATOM SOMP Demo – Drag in the ATOM CH 0
ATOM SOMP Demo – Configure ATOM CH 0
ATOM SOMP Demo – Drag in the ATOM CH 1
ATOM SOMP Demo – Configure ATOM CH 1
ATOM SOMP Demo – Drag in the FIFO CH 0
ATOM SOMP Demo – Configure FIFO CH 0
ATOM SOMP Demo – Generate Code
---
**Code Generation: Success**
- Created 95 files
- Consistency Check Errors: 0
- Code Generation Errors: 0
---
Results on Oscilloscope:
- PC[2] - ATOM 0 CH 0 output
- PC[0] - ATOM 0 CH 1 output
Timer Input Module – TIM
- Contains 8 24-bit input channels with dedicated input filters and timeout units for each channel
- Dedicated filter mechanism with two different filter strategies and edge filter thresholds for each channel
- Shadow registers to hold measurement data while new input signal is processed
- Control by CPU and/or ARU possible
- Five different edge characterization modes configurable
- Measure duty & period
- Measure active time of signal
- Count edges
- Prescaler mode
- Bit concentrator mode
TIM – Architecture
- Contains 8 24-bit input channels with dedicated input filters and timeout units for each channel
TIM Channel Architecture
- Data stored in GPR0 and GPR1
- Edge Counter (ECNT)
- Clock Counter and Shadow (CNTS)
- Various Interrupt Events
TIM TPWM_NEWVAL_IRQ Demo
• Description:
– Sample initializes TIM 0 Channel 3 to PWM Measurement Mode.
– Input signal is generated on PA[1] pin by PIT 0 Channel 2 Interrupts.
– When input signal rising edge is detected then TIM_NEWVAL_IRQ is raised.
– Interrupt handler reads the registers containing the Duty Cycle and Period values in CMU_CLK_0 ticks and recalculates them into microseconds and percentage.
• Setup:
– Start GTM plugin, open "project" and generate code for the project.
– Build the application with one of the following commands, depending on the compiler:
▪ build_ghs.bat tim\TPWM_NEWVAL_IRQ
▪ build_diab.bat tim\TPWM_NEWVAL_IRQ
▪ build_htc.bat tim\TPWM_NEWVAL_IRQ
– Load the application into the target and start execution.
DPLL
• Transforms an input signal into a higher frequency/resolution signal called a micro tick which is dependent on TRIGGER and STATE inputs.
• Module is highly configurable and does a lot of calculations.
• Angle clock for automotive applications:
- Configurable profiles for camshaft and crankshaft.
- Angle calculations and information centralized in DPLL.
- Position minus time prediction (AN013).
- Plausibility checks of input signal conditions.
- Supports normal and emergency (loss of Cam/Crank) modes with transition between them.
- Integrated RAM blocks (3 kB – 13.5 kB) contain position and increment duration history of crankshaft and camshaft PWM signals.
• Encoder support for Permanent Magnet Synchronous electric motors:
- 2 channels with 3 HALL sensors each with support for missing input signal (hybrid support).
DPLL Tasks
- Predict the duration of the current increment
- Generate SUB_INC1,2 pulses for up to 2 position counters in normal or emergency mode
- Seamlessly switch to emergency mode and back to CPU control
- Synchronize actual position (under CPU control)
- Predict position and time related events
DPLL Block Diagram
The DPLL is configured to generate 4 micro ticks per input tick (one tick from rising to rising edge) on the TRIGGER input signal. The four micro ticks are generated correctly for intervals a and b.
The DPLL predicts the same adder value for interval c, but the input signal frequency decreases in interval c.
When the DPLL is programmed in Automatic End Mode, which can be controlled by DMO bit, the sub_inc1c output ticks will stop after 4 ticks were generated.
Another adder value is calculated which results in a smaller frequency of the sub_inc signal. Now, since the input signal accelerates, there are not enough micro ticks generated.
To compensate, the DPLL offers two possibilities: either distribute six micro ticks evenly over the next interval, or generate two micro ticks fast and then the regular 4 micro ticks evenly in the next interval. If the micro ticks are to be generated fast, CMU_CLK0 is used as the tick frequency.
The micro tick generation holds for signal `sub_inc1c`. As can be seen from the figure, `sub_inc1` is always generated with the frequency calculated from the last increment duration. Therefore, the `sub_inc1` ticks do not reflect the physical position of the TRIGGER input signal.
How to turn GTM on
• The GTM module is gated off out of reset
• The GTM Integration Registers are located at GTM Base Addr + 0xC0, with the Module Disable (MDIS) bit in the Module Config Register
- This bit must be cleared to enable writes to the GTM registers for configuration before operation
• Next, the top level GTM Configuration Registers should be set
- These include the GTM Control, Bridge Mode and IRQ registers
• In general, the next stage is the Clocks
- The CMU sub module controls the GTM clocking
- The TBU can then be configured
Summary
• Q&A
ADVANTAGES OF MULTIAGENT SYSTEMS IN OPTIMIZATION:
CASE OF THE DISTRIBUTED GENETIC ALGORITHM
Sadok Bouamama*, Khaled Ghédira**
SOIE (ex Uriasis)
SOIE/ISG/Université Tunis
B. 204, Département d'Informatique
41, rue de la liberté, 2000 cité Bouchoucha. Tunisie
* sadok.bouamama@laposte.net, ** khaled.ghedira@isg.rnu.tn
Abstract
This paper aims to determine the advantages of using multi-agent systems in an optimization process. To this end, it focuses on the case of genetic algorithms, which are known to be “expensive” in time. In particular, it presents and studies a Distributed Guided Genetic Algorithm (DGGA) dealing with Maximal Constraint Satisfaction Problems. This algorithm consists of dynamically created agents cooperating in order to satisfy the maximal number of constraints. Each agent performs its own GA, guided by both the template concept and the min-conflict heuristic, on a sub-population composed of chromosomes violating the same number of constraints. The objective is to determine the effect of each improvement factor (guidance by the min-conflict heuristic and the template concept, and grouping same-fitness chromosomes into species), whether or not it is combined with the multi-agent approach. We therefore proceed by experimental comparison of a family of genetic algorithms covering all possible combinations of the factors described above. The results show clearly that, thanks to agent interactions, the use of multi-agent systems intensifies the improvement given by the other factors.
Key Words
Max_CSPs, multi-agent systems, genetic algorithms, Min-conflict-heuristic, template concept.
1. Introduction
CSP formalism consists of variables associated with domains and constraints involving subsets of these variables. A CSP solution is an instantiation of all variables with values from their respective domains, and the instantiation must satisfy all constraints. In addition to their simple and generic formalization, CSPs are omnipresent in many real-life problems, ranging from industrial applications, such as scheduling and planning, to textbook examples, such as the n-queens and graph coloring problems. A CSP solution, as defined above, is costly to obtain and does not necessarily exist for every problem. In such cases, one had better search for an instantiation of all variables that satisfies the maximal number of constraints. Such problems, called Maximal CSPs and referred to as Max-CSPs, make up the framework of this paper.
Max-CSPs have been dealt with by complete or incomplete methods. The first, such as the extended forward checking algorithm [17, 7] and branch and bound algorithms [17, 26], are able to provide an optimal solution. Unfortunately, this advantage is thwarted by combinatorial explosion. The second, such as Simulated Annealing, Genetic Algorithms [13] and Tabu Search, share the property of avoiding the trap of local optima, but sacrifice completeness for efficiency. There is another incomplete but distributed method known as Distributed Simulated Annealing (DSA). It has been successfully applied to several fields: static Max-CSP [10], dynamic Max-CSP [8], the resource allocation problem [Ghédira 95] and the graph partitioning problem [4]. As DSA outperforms centralized Simulated Annealing in terms of both optimality and quality, the same idea is adopted for Centralized Genetic Algorithms (CGAs), which are especially known to be expensive. The result was the distributed guided genetic algorithm for Max_CSPs [11]. Our interest in GAs is also motivated by their proven usefulness in hard optimization problems [24], solving multiprocessor scheduling problems [Michael & Al 99, 25], optimal image enhancement [20], etc.
This paper is organized as follows. The next section presents the Distributed Guided Genetic Algorithm: the basic concepts, the agent structure and finally the global dynamic. The following one details both the experimental design and the results used to study the effect of multi-agent systems on GAs. Finally, concluding remarks and possible extensions to this work are proposed.
2. Distributed Guided Genetic Algorithm
2.1. Basic principles
The relationship between the genetic and CSP formalisms is outlined in figure 1. Each chromosome (respectively gene) is equivalent to a CSP potential solution (respectively variable). Moreover, each allele corresponds to a value. On the other hand, each chromosome is attached to a template [24] that is made up of weights referred to as $template_{i,j}$. Each one of them corresponds to $gene_{i,j}$, where $i$ refers to the chromosome and $j$ to the position. $\delta_{ij}$ represents the number of constraints violated by $gene_{i,j}$. These weights are updated through the penalty operator.
The DGGA approach draws basically on the concepts of species and ecological niches. A species consists of several organisms having common characteristics, whereas the ecological niche represents the task performed by a given species. Goldberg states that sexual differentiation based on specialization, via both the building of species and the exploitation of ecological niches, provides good results [13]. A certain number of methods have been developed in order to favor the building of ecological niches in GAs [DEJONG 89, 12].
| Variables: | $V_1, V_2, V_3, V_4$ |
| Di: Domain of $V_i$ | $D_i = \{1, 2, 3\}$ for all $i$ in $\{1, 2, 3, 4\}$ |
| Constraints: | $C_1 : V_1 + V_3 > 4$, $C_2 : V_2 - V_4 \geq 2$, $C_3 : V_3 - V_2 = 2$ |
**Figure 1. Relationship between Genetic and CSP formalisms**
So, the idea here is to partition the initial population into sub-populations and to assign each one of them to an agent called Specie agent. A given sub-population consists of chromosomes violating the same number of constraints. This number, say $n$, is called the specificity of the Specie agent Specie$_n$. Thus, DGGA involves at most $n_c$ (total number of constraints) Specie agents in interaction, in order to reach a total satisfaction namely the maximal number of satisfied constraints. For this reason, each Specie agent performs its own GA which has been enriched by the Min-conflict-heuristic and the template concept. An intermediary agent is necessary between the society of Specie agents and the user, essentially to detect the best partial solution reached during the dialogue between the Species. This agent, called Interface, may also possibly create new Specie agents.
### 2.2. Agent Structure
Each agent has a simple structure: its acquaintances (the agents it knows and with which it can communicate), a local knowledge composed of its static and dynamic knowledge, and a mailbox where it stores the received messages to be later processed one by one.
#### 2.2.1. Specie agent
A Specie agent has got as acquaintances the other Specie agents and the Interface agent. Its static knowledge consists of the CSP data (i.e. the variables, their domains of values and the constraints), the specificity (i.e. the number of violated constraints) and its local GA parameters (mutation probability, cross-over probability, number of generations, etc.). Its dynamic knowledge takes components as the population pool which varies from one generation to another (chromosomes, population size).
#### 2.2.2. Interface agent
An Interface agent has as acquaintances all the Specie agents. Its static knowledge consists of the CSP data. Its dynamic knowledge includes the best chromosome (i.e. best partial solution).
### 2.3. Global agents dynamic
The Interface agent randomly generates the initial population and then partitions it into sub-populations according to their specificities. It then creates Specie agents to which it assigns the corresponding sub-populations, and asks these Specie agents to perform their optimization processes (figure 2 line 3). Before starting its optimization process, i.e. its behavior (figure 3), each Specie agent Specie$_n$ initializes all templates corresponding to its chromosomes (figure 3 line 3). After that it carries out its genetic process on its initial sub-population, i.e. the sub-population that the Interface agent associated to it at the beginning. This process, detailed in the following subsection, returns a sub-population “pop” (figure 3 line 4) that has been submitted to the crossing and mutating steps only once, i.e. corresponding to one generation. For each chromosome of pop, Specie$_n$ computes the number of violated constraints “nvc” (figure 3 line 6). Two cases may then occur. In the first case, the chromosome violates the same number of constraints as its parents; it then replaces one of the latter, randomly chosen (figure 3 line 8). In the second case, this number (nvc) differs from n, i.e. the specificity of the corresponding Specie$_n$; the chromosome is then sent to another Specie$_{nvc}$ (figure 3 line 10) if such an agent already exists, otherwise it is sent to the Interface agent (figure 3 line 11). The latter creates a new agent having nvc
as specificity and transmits the quoted chromosome to it. Whenever a new Specie agent is created, the Interface agent informs all the other agents about this creation (figure 2 line 7) and then asks the new Specie to perform its optimization process (figure 2 line 3). Note that message processing is given a priority. So, whenever an agent receives a message, it stops its behavior, saves the context, updates its local knowledge, and restores the context before resuming its behavior.
1. \( m \leftarrow \text{getMsg (mailBox)} \)
2. case \( m \) of
3. optimization-process (sub-population) :
4. apply-behavior (sub-population)
5. take-into-account (chromosome) :
6. population-pool \( \leftarrow \) population-pool \( \cup \) \{chromosome\}
7. inform-new-agent (Specie\(_{nvc}\)) :
8. list-acquaintances \( \leftarrow \) list-acquaintances \( \cup \) \{Specie\(_{nvc}\}\}
9. stop-process : stop-behavior
Figure 2. Message processing relative to Specie\(_{nvc}\)
If no Specie agent has met a chromosome violating zero constraints at the end of its behavior, each successively transmits one of its randomly chosen chromosomes, linked to its specificity, to the Interface agent. The latter determines and displays the best chromosome, namely the one that violates the minimal number of constraints.
2.3.1. The detailed genetic process
This process differs from the canonical GA described in [12] in its use of both templates and the min-conflict heuristic [9, TSANG & AL 99]. It starts by determining the mating pool, which consists of pairs of chromosomes randomly selected by means of the matching procedure (figure 4 line 1). Out of each pair of chromosomes, the cross-over operator produces a new child as described in figure 6. The child inherits the best genes, i.e. the “lighter” ones, from its parents. The probability for a parent chromosome \( i \) (\( i = i_1 \) or \( i_2 \)) to propagate its gene\(_{i,j}\) to its child is equal to \( 1 - \text{template}_{i,j}/\text{sum} \), where \( \text{sum} = \text{template}_{i_1,j} + \text{template}_{i_2,j} \). This confirms that the “lighter” genes, i.e. those having the smallest number of violated constraints, are more likely than the others to be passed to the child. For each one of its chromosomes selected according to the mutation probability \( P_m \) (figure 8 line 2), Specie\(_n\) uses the min-conflict heuristic (figure 8 line 3) first to determine the gene (variable) involved in the maximal number of violated constraints (figure 7 line 1), secondly to select from this gene's domain the value that violates the minimal number of constraints (figure 7 lines 2-7) and finally to instantiate this gene with this value (figure 7 line 8).
It is worth noting that the change of both gene (variable) and allele (value) is dynamic. Indeed, it depends on the values of the other variables of the same chromosome, so different variables (figure 7 line 1) as well as different values (figure 7 lines 2-7) may be chosen. Thanks to these changes, new values may appear and consequently new chromosomes (that were not in the initial population) may occur. Thereby the population becomes more diversified, enhancing the search and giving it more chance to converge.
If the obtained chromosome does not violate any constraint (figure 8 line 4), Specie\(_n\) asks the Interface agent to stop the whole process (figure 8 line 5). Thus, the latter first requests all the Specie agents to stop their behaviors (figure 2 line 9) and then displays the quoted chromosome. Otherwise, this chromosome is added to the offspring pool and Specie\(_n\) carries on with its behavior. Note that the stopping message also occurs in the crossing step (figure 5 line 8).
Here we describe the syntax used in the figures:
- \( \text{sendMsg (sender, receiver,'message') } \): ‘message’ is sent by “sender” to “receiver”.
- \( \text{getMsg (mailBox)} \): retrieves the first message in mailBox.
Apply-behavior (initial-population)
1. init-local-knowledge
2. for \( i := 1 \) to number-of-generations do
3. template-updating (initial-population)
4. pop \( \leftarrow \) genetic-process (initial-population)
5. for each chromosome in pop do
6. nvc \( \leftarrow \) compute-violated-constraints (chromosome)
7. if (nvc = n) then replace-by (chromosome)
8. else if exist-ag (Specie\(_{nvc}\)) then sendMsg (Species\(_n\), Species\(_{nvc}\), 'take-into-account (chromosome)' )
9. else sendMsg (Species\(_n\), Interface, 'create-agent (chromosome)' )
10. sendMsg (Species\(_n\), Interface, 'result (one-chromosome, specificity)' )
Figure 3. Behavior relative to Species\(_n\)
3. Experimentation
3.1 Introduction
The goal of our experimentation is to compare a family of genetic algorithms using or not using multi-agent systems. The implementation has been done with ACTALK [BRIOT 89], a concurrent object language implemented on top of the object-oriented language SMALLTALK-80.
3.2 Experimental design
Our experiments are performed on randomly generated binary CSP samples. The generation is guided by the classical CSP parameters: number of variables (n), domain size (d), constraint density p (a number between 0 and 100% giving the ratio of the number of effective constraints of the problem to the number of all possible constraints, i.e. a complete constraint graph) and constraint tightness q (a number between 0 and 100% giving the ratio of the number of pairs of values forbidden by the constraint to the size of the domain cross product). As numerical values, we use n = 20 and d = 20. Having chosen the values 0.1, 0.3, 0.5, 0.7 and 0.9 for the parameters p and q, we obtain 25 density-tightness combinations. For each combination, we randomly generate 30 examples, obtaining 750 examples in total. Moreover, considering the random aspect of genetic algorithms, we perform 10 runs per example and take the average without considering outliers. For each density-tightness combination, we also take the average over the 30 generated examples. Regarding GA parameters, all implementations use a number of generations (NG) equal to 10, an initial population size equal to 1000, a crossover probability equal to 0.5, a mutation probability equal to 0.2 and random replacement. The performance is assessed by the two following measures:
- Run time: the CPU time requested for solving a problem instance,
- Satisfaction: the number of satisfied constraints.
The first one reflects the complexity whereas the second reflects the quality. In order to have a quick and clear comparison of the relative performance of two approaches A1 and A2, we compute performance ratios of A1 and A2 using the run time and the satisfaction, as follows:
\[
\text{CPU-ratio} = \frac{\text{A1-Run-time}}{\text{A2-Run-time}} \\
\text{Satisfaction-ratio} = \frac{\text{A2-Satisfaction}}{\text{A1-Satisfaction}}.
\]
Thus, A1 performance is the numerator when measuring the CPU-time ratio, and the denominator when measuring the satisfaction ratio. Any ratio greater than 1 therefore indicates superior performance by A2, and consequently the advantage of specialization-based distribution. Note that this kind of comparison has already been proven to be efficient [11].
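The instance-generation scheme and the satisfaction measure described in section 3.2 can be sketched as follows; the uniform sampling model and the function names are assumptions, since the paper does not detail its generator.

```python
# Sketch of a random binary CSP generator guided by the classical
# parameters n, d, p (density) and q (tightness). Constraints are stored
# extensionally as sets of forbidden value pairs (an assumed encoding).
import itertools
import random

def random_binary_csp(n, d, p, q, rng=None):
    """n variables, domain {0..d-1}; each variable pair is constrained
    with probability p; each constraint forbids a fraction q of the
    d*d value pairs."""
    rng = rng or random.Random(0)
    constraints = {}
    for pair in itertools.combinations(range(n), 2):
        if rng.random() < p:
            pairs = list(itertools.product(range(d), repeat=2))
            k = int(q * len(pairs))
            constraints[pair] = set(rng.sample(pairs, k))  # forbidden pairs
    return constraints

def satisfaction(assignment, constraints):
    """Number of satisfied constraints: the quality measure of section 3.2."""
    return sum(1 for (i, j), forbidden in constraints.items()
               if (assignment[i], assignment[j]) not in forbidden)
```

With p = q = 1 every constraint forbids every value pair, so any assignment satisfies nothing; with q = 0 every constraint is trivially satisfied, matching the intuition behind the density-tightness grid.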
4. How to determine MAS effects on GAs
4.1 Introduction
DGGA uses a society of dynamically created agents cooperating not only to provide an optimal solution for Max-CSP but also to minimize the temporal complexity of Genetic Algorithms (GAs). Each agent is responsible for a sub-population of chromosomes, which it handles by a local GA guided by both the template concept (tc) and the min-conflict heuristic (mch). All the chromosomes of a given Species agent violate the same number of constraints (they have the same fitness). Experimentally compared with a centralized implementation of “GA + tc + mch” in terms of quality and CPU time, DGGA has always proven to be better [11].
It is clear that DGGA simultaneously uses three improvement factors:
- Distribution using multi-agent systems,
- Guidance with "tc" and "mch",
- Use of the same fitness.
In order to analyze which improvement factor gave DGGA its progress, and what surplus is brought by the use of MAS, we propose to study the effect of each of these factors. The idea is then to compare the results obtained when trying the following approaches on the same Max-CSPs (randomly generated):
- DGGA (MAS + tc + mch + same fitness)
- DGGA_DF: a distributed GA where agents are responsible for subsets of chromosomes having different fitness values. The idea is to send each chromosome randomly, at the end of every generation, to one of the existing subsets. In this case the species theory is not considered.
- DGA: a non-guided distributed GA, where both crossover and mutation mechanisms are random, i.e. only the mutation operator manages mutation and only the crossover operator intervenes in crossover.
- GGA: a centralized GA having heterogeneous chromosomes.
In each comparison we take two approaches differing in only one respect, so that
- DGGA versus DGGA_DF determines the effect of using the same fitness
- DGGA versus DGA gives the effect of guidance
- DGGA_DF versus GGA treats distribution effects
4.2 Experimental results
● Same-fitness effects:
In this part, DGGA (MAS + tc + mch + same fitness) and DGGA_DF (chromosomes having different fitness) are compared. Using chromosomes having the same fitness or not is the only difference between the two approaches. We compute ratios of DGGA and DGGA_DF performance using the run time and the satisfaction, as follows:
CPU-time-ratio = DGGA_DF-Run-time / DGGA-Run-time
Satisfaction-ratio = DGGA-Satisfaction / DGGA_DF-Satisfaction
Both figure 10 and figure 11 show the performance ratios, from which we draw the following results:
- From the CPU-time point of view (figure 10), DGGA requires up to 1.8 times less time for the most weakly constrained and most weakly tight set of examples. Nevertheless, for some problems, especially the over-constrained and most strongly tight sets of examples, the CPU-time ratio is less than 1. The average CPU-time ratio is about 1.25.
- From the satisfaction point of view (figure 11), DGGA always finds at least as much satisfaction as DGGA_DF. It finds about 1.2 times more for the most strongly constrained and most tight set of problems. The average satisfaction ratio is about 1.3 and can reach 3 for the most weakly constrained and most weakly tight set of examples.
These experiments have shown that a genetic algorithm using chromosomes having the same fitness gives better results than one using heterogeneous chromosomes.
**Guidance effects:**
As in the previous experimental part, we try to find out which is better: a GA using guidance with "tc" and "mch", or one that does not. For this purpose, the results of the DGGA versus DGA experimentation are examined. Let us mention here that only guidance (with "tc" and "mch") makes the difference between the two algorithms; indeed, DGA is a non-guided distributed GA. Using the run time and the satisfaction, ratios of DGA and DGGA performance are computed as follows:
CPU-time-ratio = DGA-Run-time / DGGA-Run-time
Satisfaction-ratio = DGGA-Satisfaction / DGA-Satisfaction.
Figure 12 shows that the CPU-time ratio is greater than 1 for the majority of the tested problem set, demonstrating that DGGA is better than DGA. The average value is equal to 1.17. This ratio reaches its maximum (1.7) for problems whose densities are located around 0.1, and has some values less than 1.
**Distribution effects:**
Our main concern in this part is to determine the distribution effects. A distributed approach and a centralized one are therefore compared: DGGA_DF versus GGA. It is clear that the only difference between the two GAs is whether or not distribution by multi-agent systems is used. Ratios of GGA and DGGA_DF performance are computed as follows:
CPU-time-ratio = GGA-Run-time / DGGA_DF-Run-time
Satisfaction-ratio = DGGA_DF-Satisfaction / GGA-Satisfaction.
Figure 14. CPU-time ratio
Figure 14 illustrates the CPU-time point of view and shows that DGGA_DF furnishes results twice as good as those given by the centralized version of the GA. Let us mention that this ratio attains its maximum (3.2) for the most strongly constrained (i.e. over-constrained) set of examples. The advantage of the distributed approach is thus more significant for problems whose tightness is located around 0.5 (see the peaks in figure 14). The location and the sharpness of this area depend on the density parameter. This area corresponds to the transition phase: the passage from under-constrained problems, which are relatively easy to solve, to over-constrained problems, which are relatively easy to prove insoluble [BARBARA 94]. The average value of these ratios is about 1.63.
From the satisfaction point of view, shown in figure 15, DGGA_DF always finds at least as much satisfaction as GGA. It finds about 2 times more for the most weakly constrained set of problems, which are located in the area corresponding to the transition phase. The satisfaction ratios average 1.13.
● The way to the best approach:
A last series of experiments is run on the same set of problems for all four algorithms at every trial, so that the most efficient algorithm can be determined. We consider figure 16 and figure 17, which represent the CPU-time average and the satisfaction average, respectively, for the four approaches and for different values of density.
Figure 16 clearly shows that the DGGA CPU-time is the shortest: this approach needs less time than the others, while the centralized GA comes last as the approach having the greatest CPU-time. These results are clearest in the area corresponding to the transition phase (i.e. a density value of 0.5). In figure 17 attention is focused on satisfaction: the number of satisfied constraints when using the DGGA approach is the greatest of all. Figures 16 and 17 thus demonstrate that the DGGA approach gives better results than the other tested approaches.
5. Conclusion
This paper has shown that distribution by MAS, guidance by the min-conflict heuristic and the template concept, and the use of species of chromosomes having the same fitness are improvement factors. When used, these factors improve the results obtained, not only in CPU-time but also in satisfaction.
While each one of these improvement factors alone can improve the result by up to 3 times, DGGA requires up to six times less CPU-time than GGA [11]. Therefore, one may conclude that the combination of the improvement factors gives better results than their separation. This is, in fact, the result of the interaction of the agents in the multi-agent system used. Thanks to this interaction, time complexity is reduced and a better solution is given. This agent interaction intensifies the effect of the other improvement factors, allowing the reduction of CPU-time and the rise in satisfied constraints. No doubt further refinement of this work would allow its performance to be improved.
6. References
Integrating Intelligent Methodological and Tutoring Assistance in a CASE Platform: The PANDORA Experience
Elena Castro
edcastro@inf.uc3m.es
Dolores Cuadra
dcuadra@inf.uc3m.es
Paloma Martinez
pmf@inf.uc3m.es
Ana Iglesias
aiglesia@inf.uc3m.es
University Carlos III of Madrid, Spain
Abstract
The Database Design discipline involves aspects as different as conceptual and logical modelling knowledge and domain understanding. This implies a great effort to carry out the real-world abstraction task and represent it through a data model. CASE tools emerged in order to automate the database development process. These platforms try to help the database designer in the different database design phases. Nevertheless, these tools are frequently mere diagrammers and do not completely carry out the design methodology that they are supposed to support; furthermore, they do not offer intelligent methodological advice to novice designers.
This paper introduces the PANDORA tool (acronym of Platform for Database Development and Learning via Internet), which is being developed in a research project that tries to mitigate some of the deficiencies observed in several CASE tools, defining methods and techniques for database development which are useful for students and practitioners. Specifically, this work is focused on two PANDORA components: the Conceptual Modelling and Learning Support subsystems.
Keywords: CASE tools, Database Design Methodologies, Intelligent Tutoring systems.
Introduction
Currently, there exist two main tendencies in database design methodologies. One of them concerns the methodologies derived from Teorey, Yang & Fry (1986), which are called relational database design methodologies. The second approach is related to the inclusion of object-oriented data models in database design (Rumbaugh, Blaha & Premerlani, 1991). Both of them have CASE tools which provide automated support to database design methodologies.
Although this paper does not concern object oriented database design, it is important to indicate that Unified Process (OMG, 2000) or IDEA (Ceri & Fraternali, 1996) are outstanding methodologies with commercial CASE support such as Rational Rose environment.
Database design methodologies (Teorey, Yang & Fry, 1986) generally use the Entity Relationship (ER) model (Chen, 1976) as the conceptual data model. The next methodological steps, such as logical design, the normalisation process and physical design, vary from one methodology to another; for instance, some methodologies assume that if a good conceptual design is achieved, the normalisation phase is not necessary (Elmasri & Navathe, 2000; Silberschatz, Korth & Sudarshan, 2001).
Commercial CASE tools for database development do not usually cover database design phases with
real ER schemata, and they do not incorporate capabilities for refinement and validation processes and, in most cases, they manage hybrid models (merging aspects from ER and Relational models or using a subset of ER graphical notation for representing relational schemata) and sometimes these models are too close to physical aspects.
The next section is devoted to showing the principal deficiencies of current CASE tools. Then the PANDORA project, whose main objective is to build a prototype of a CASE tool useful in relational database design and learning, is described, focusing on its main contributions. Finally, some conclusions are presented.
**Comparative Analysis Of Case Platforms**
<table>
<thead>
<tr><th>Construct</th><th>BINARY MODELS</th><th>N-ARY MODELS</th></tr>
</thead>
<tbody>
<tr><td>Entity Type</td><td>Designer 2000</td><td>ERWIN (IDEF1X)</td></tr>
<tr><td>Regular</td><td>YES</td><td>YES</td></tr>
<tr><td>Weak</td><td>No</td><td>Yes</td></tr>
<tr><td>Domains</td><td>No</td><td>No</td></tr>
<tr><td>Attribute Types</td><td></td><td></td></tr>
<tr><td>Main Identifier</td><td>Yes</td><td>Yes</td></tr>
<tr><td>Alternate Identifier</td><td>Yes</td><td>Yes</td></tr>
<tr><td>Derived</td><td>No</td><td>No</td></tr>
<tr><td>Multivalued</td><td>No</td><td>No</td></tr>
<tr><td>Composed</td><td>No</td><td>No</td></tr>
<tr><td>Optional</td><td>Yes</td><td>Yes</td></tr>
<tr><td>Foreign Key</td><td>No</td><td>Yes</td></tr>
<tr><td>Relationship Types</td><td></td><td></td></tr>
<tr><td>1:N Binary</td><td>YES</td><td>YES</td></tr>
<tr><td>N:M Binary</td><td>Yes</td><td>Yes</td></tr>
<tr><td>N-ary</td><td>NO</td><td>NO</td></tr>
<tr><td>Cardinality Constraints</td><td></td><td></td></tr>
<tr><td>Max Cardinality</td><td>Look across</td><td>Look across</td></tr>
<tr><td>Min Cardinality</td><td>Look here</td><td>Look across</td></tr>
<tr><td>Attribute Relationship</td><td>NO</td><td>NO</td></tr>
<tr><td>Generalization/Specialization</td><td>Complete and Disjoint</td><td>Complete or Incomplete and Disjoint</td></tr>
<tr><td>Existence</td><td>Yes</td><td>No</td></tr>
<tr><td>Identifying</td><td>Yes</td><td>Yes</td></tr>
<tr><td>Methodological Assistant</td><td></td><td></td></tr>
<tr><td>Syntactic Validation</td><td>Yes</td><td>YES</td></tr>
<tr><td>Semantic Validation</td><td>No</td><td>No</td></tr>
<tr><td>Assistant designer</td><td>No</td><td>No</td></tr>
</tbody>
</table>
Table 1: Comparative table
In order to analyse the capabilities and inadequacies of CASE technology, a comparative study of four CASE tools (Designer2000, Erwin, Silverrun and PowerDesigner, chosen because of their popularity) is presented below. For each CASE tool, three features have been studied: the ER constructs supported, the ER schemata validation performed and, finally, whether methodological assistance to facilitate the modelling process is provided. Table 1 shows a summary report.
The ER constructs analysed are:
1. **Entity Types:** All the CASE tools studied draw regular entities, but weak entities can only be represented when there exists an identifying relationship.
2. **Relationship Types:** CASE tools can be classified into binary and n-ary CASE tools depending on whether they support binary or n-ary models (Song, Evans & Park, 1995). Binary CASE tools only represent binary relationships and, as they draw a line for each relationship, attributes in relationships cannot be represented. Concerning cardinality constraints, the terminology *look across* and *look here* refers to the place where the cardinality (maximum) or participation (minimum) constraints are specified in ER schemata (by looking across the relationship from the other direction, or looking here first).
3. **Generalizations:** All the CASE tools let users represent generalisations but, in general, only complete and disjoint generalisations are allowed. They can also represent identifying dependencies, but only two of them represent existence dependencies.
4. **Semantic constraints on attributes:** Main and alternative identifiers can be defined in all tools; nevertheless, composed and derived attributes cannot be declared. Another important aspect considered in our study is the foreign key constraint. It appears in the Erwin CASE tool as an attribute property, and in some binary models it appears when a one-to-many relationship is established, so we understand that some CASE tools mix logical and conceptual designs.
5. **Syntactic and semantic validation:** CASE platforms should provide tools incorporating syntactic and semantic validation rules in order to be able to refine schemata and to help beginners to learn techniques for improving their designs (Bouzeghoub, Kedad & Métais, 2000).
All the CASE tools studied here provide the following syntactic validations:
- The possibility of having different entity and relationship names (uniqueness).
- An attribute cannot exist independently of its entity or relationship.
- A relationship is a binary association between two, not necessarily different, entities.
- A relationship cannot participate in another relationship (there are no relationships of relationships).
- Cardinalities are defined as positive integer intervals, so the minimum cardinality must always be less than the maximum cardinality.
- There are no cycles in the generalisation hierarchies.
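A minimal sketch of such syntactic checks is given below; the dictionary-based schema layout and the function name are invented for illustration and do not correspond to any of the studied tools.

```python
def syntactic_errors(schema):
    """Check a toy ER schema against some of the syntactic rules above.
    Assumed layout (not any real tool's format):
      schema = {'entities': [name, ...],
                'relationships': [(name, entity_a, entity_b, (min, max)), ...],
                'generalisations': [(parent, child), ...]}"""
    errors = []
    names = schema['entities'] + [r[0] for r in schema['relationships']]
    if len(names) != len(set(names)):                    # name uniqueness
        errors.append('duplicate entity/relationship name')
    for name, a, b, (lo, hi) in schema['relationships']:
        if a not in schema['entities'] or b not in schema['entities']:
            errors.append(f'{name}: relationship over unknown entity')
        if not (0 <= lo <= hi):                          # valid cardinality interval
            errors.append(f'{name}: min cardinality exceeds max')
    # no cycles in the generalisation hierarchy (simple reachability check)
    children = {}
    for parent, child in schema['generalisations']:
        children.setdefault(parent, []).append(child)
    def reaches(node, target, seen=()):
        return any(c == target or (c not in seen and reaches(c, target, seen + (c,)))
                   for c in children.get(node, []))
    for parent, _ in schema['generalisations']:
        if reaches(parent, parent):
            errors.append(f'{parent}: cycle in generalisation hierarchy')
    return errors
```

An empty result means the schema passes these particular checks; semantic validation, as the text notes next, requires far more than this.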
Semantic validation is more complex due to the complexity associated with domain knowledge (Batra & Antony, 1994; Batra & Zanakis, 1994). The Silverrun CASE tool presents several types of semantic validation, interacting with the user through questions that help, for example, to choose a main identifier for an entity or to validate cardinality constraints in a relationship.
6. **Transformation into logical models:** Perhaps the most important gap in most CASE environments is that all ER constraints are lost when the transformation rules are applied. All CASE tools automatically apply some basic rules (Teorey, 1999), but they do not support other kinds of rules to check, for example, the completeness and disjointness of a hierarchy. So they do not carry out an "intelligent transformation".
Concerning this topic, a methodological assistant would be necessary that gives advice to beginners and practitioners in order to help them learn and achieve a good design.
PANDORA (CASE Platform for Database development and learning via Internet, Spanish research CICYT project TIC99-0215) is a research project devoted to developing a CASE platform for database learning, design and implementation. It is composed of a set of modules (Figure 1) that can be used independently or within a methodological framework. On a first level, three layers are identified: the conceptual modelling, design and automatic code generation subsystems. Over these layers, there is a learning support subsystem that provides an intelligent tutor for methodological assistance through the use of the different tools, as well as a Web-based learning component.
The core of the PANDORA platform is the Repository (metabase) that keeps all the resources and supports the storage of Extended Entity-Relationship (EER) as well as relational schemata, SQL scripts, triggers and so forth. The design of the repository was decomposed into two metamodels, one for storing EER schemata and the other for storing relational schemata. Both of them describe all the constructs that they support. Moreover, this separation clearly distinguishes the two fundamental phases of database development: conceptual and logical design. Current CASE tools lack this important distinction. In the following, a brief description of the PANDORA components along with their main contributions is given.
The Conceptual Modelling Subsystem is composed of two modules: the EER Modelling and the Natural Language Analysis modules. The former is used by designers for drawing and verifying conceptual schemata; it also allows storing and retrieving conceptual schemata. The Natural Language Analysis module provides some facilities to interpret a descriptive text in order to get proposals of EER schemata according to the requirements appearing in the text (Martínez and García-Serrano, 2000). Moreover, this module also supports an interactive process for identifying and validating binary relationship cardinalities in the conceptual modelling phase. This component profits from natural language processing techniques, first-order logic and some modelling heuristics. Cardinality constraints, especially in higher-order relationships, are difficult for students to understand and model, and some validation methods are required. What we propose is an approach that combines syntax (grammatical categories, word collocations, etc.), semantics (meanings of words, phrases and sentences) and first-order logic to extract cardinality constraints and validate them with the user.
Concerning the Design Subsystem, it includes four modules: the Transformation into Relational Model, Relational Modelling, Normalisation Algorithms, and SQL-3 Code (Melton & Simon, 2002) plus Triggers Generation modules. In order to achieve a transformation of EER schemata into the relational model without loss of semantics, an exhaustive analysis of the translation of the different EER constructs has been performed. The aim is to develop databases that keep all the integrity constraints and force their verification regardless of which program accesses the database.
In this subsystem, there are two main contributions. The first one is related to the repository, covering all the elements proposed in SQL-3 (Melton & Simon, 2002), including relational model constraints such as assertions, checks, primary keys, alternate keys and foreign key constraints. Furthermore, inherent constraints are validated by triggers and checks. The second contribution concerns the relational model transformation, converting the EER constructs into relational model constructs while preserving their associated semantics. A correct transformation of conceptual schemata and their associated constraints is necessary in order to preserve their intended meaning. Although the relational model is insufficient for reflecting the complete semantics that can be present in a conceptual schema, it can be enhanced with specific elements used to preserve the original semantics, such as active capabilities (triggers).
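As a toy illustration of a semantics-preserving transformation rule (not PANDORA's actual rule set), an N:M binary relationship can be mapped to a relation of its own, whose primary key combines foreign keys referencing the participating entities; the generator below emits standard SQL DDL text, with all names supplied by the caller.

```python
def transform_nm_relationship(rel_name, entity_a, key_a, entity_b, key_b):
    """Map an N:M binary relationship to SQL DDL: the relationship becomes
    its own table, and foreign key clauses preserve referential integrity
    toward both participating entities."""
    return (
        f"CREATE TABLE {rel_name} (\n"
        f"  {key_a} INTEGER NOT NULL,\n"
        f"  {key_b} INTEGER NOT NULL,\n"
        f"  PRIMARY KEY ({key_a}, {key_b}),\n"
        f"  FOREIGN KEY ({key_a}) REFERENCES {entity_a}({key_a}),\n"
        f"  FOREIGN KEY ({key_b}) REFERENCES {entity_b}({key_b})\n"
        f");"
    )
```

Constraints that the relational model cannot express declaratively (e.g. minimum cardinalities) would, as the text explains, be enforced by generated triggers rather than by this DDL alone.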
In the Automatic Code Generation Subsystem, the Commercial DBMS Code Generation module transforms the standard logical schema into a specific logical schema, taking into account the DBMS's characteristics and resolving the relational model's constraints.
Finally, the Learning Support Subsystem gives a coherent unification to the CASE environment from two perspectives. Firstly, the Intelligent Tutor module plays the role of a methodological assistant that guides the designer during the database development process (through the different phases), providing support in the design alternatives; the methodology for database development incorporated in the PANDORA tool is explained in De Miguel, Piattini & Marcos (1999). Secondly, the Intelligent Tutor incorporates some teaching and training strategies for database design concepts that can be used via the Web. For this purpose, a set of didactic units together with a set of exercises has been designed.
In order to develop this pedagogic component we collaborate with people from the New Information Technologies and Communication Programme (PNTIC), belonging to the Spanish Ministry of Education, Culture and Sports, who have wide experience in designing and implementing Web courses for distance learning. They help us to define the didactic units and will also provide us with a platform to test the Learning Support subsystem with national coverage.
**Conceptual Modelling Subsystem**
The main contribution of the Conceptual Modelling Subsystem is to include all EER model constructs, such as regular and weak entities, higher-degree relationships, minimum and maximum cardinality constraints, attributes in relationships, optional and mandatory attributes, monovalued and multivalued attributes, single and composite attributes, partial and complete hierarchies, overlapping and disjoint hierarchies, and existence and identifying relationships. The repository has to permit storing schemata containing all these elements. This means not only allowing these constructs to be drawn, but also considering them in the verification of schemata (checking whether conceptual schemata are non-redundant, consistent and complete) (Bouzeghoub, Kedad & Métais, 2000), and in transforming conceptual schemata into the relational model in the Design Subsystem. Figure 2 shows the conceptual schema of the repository for EER schemata (metaschema).
This metaschema collects all the aspects commented on in the previous sections. As conceptual models are not supported in a DBMS, the repository of figure 2 must be transformed to the relational model in order to be implemented (Cuadra et al., 2002; Fahrner & Vossen, 1995). The DBMS used for this task has been Oracle 8i. Besides, some procedures and triggers have been implemented to check semantic validation.
**Learning Support Subsystem**
We are developing an Intelligent Tutoring System (ITS), known as *RLITS* (Reinforcement Learning in ITS), to assist learners in the use of the different tools of this project as well as to methodologically teach them how to design and develop databases through the Web (Iglesias et al., 2002).
An ITS is defined as a "computer-aided instructional system with models of instructional content that specify what to teach, and teaching strategies that specify how to teach" (Wenger, 1987). The *RLITS* system decides (by learning) not only what and how (text, image, video, animation, etc.) to show the system's content (problems, exercises, tests, simulations) to the current student, but also when to do it, based only on previous experience with other students with similar learning characteristics.
A typical ITS structure is composed of four well differentiated modules (Burns & Capps, 1988). The student module contains all the necessary information about the student in the learning process: student knowledge, personal characteristics, historical behaviour, etc. The interface module facilitates communication between the ITS and the student. The domain module includes all the characteristics of the knowledge to teach. The traditional hierarchical knowledge structure (topics, sub-topics, etc.) is used to define the tutor system's objectives. Figure 3 shows a proposal of a hierarchical structure for database design knowledge, where each topic has been divided into sub-topics, and these into other sub-topics, and so on. At the same time, each node of the tree contains sets of definitions, examples, problems, exercises, etc. in several formats (image, text, video, etc.). Finally, the pedagogical module decides what, how and when to teach the domain module contents, taking the best pedagogical decisions according to the student's needs; that is, the ITS finds the best way to teach the knowledge items, corresponding to the internal nodes of the knowledge tree (topics), to the current student.
The definition of this problem as a reinforcement learning problem is as follows (Iglesias, Martínez & Fernández, 2002): the agent's state is the current student's knowledge, represented by a vector of representative values of the student's knowledge for each topic. The ITS perceives the current student's knowledge (s1) by evaluations (tests). Given one state, the system chooses an action to be executed according to the current action policy, B (Kaelbling, Littman & Moore, 1996). The action corresponds to showing leaves of the knowledge tree (a definition, exercise, problem, etc.). This action is supposed to change the current state of the student's knowledge to a new state (s2), generating a state transition and a reinforcement signal (positive or negative) to the system. This signal is used to update the system's action policy. The system behaviour, B, should choose actions that tend to maximise the long-run sum of values of the reinforcement signal, choosing in this way the optimal tutoring strategy (what, when and how; the best sequence of contents and how to teach them) to coach the current learner by trial and error, as a human tutor does.
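The reinforcement-learning formulation above can be sketched with tabular Q-learning; the state and action encodings, the reward, and the `transition` simulator are invented for illustration and are not RLITS's actual design.

```python
# Tabular Q-learning sketch of the tutoring loop: states stand for the
# student's knowledge level, actions for content items to show, and
# transition(s, a) -> (s', reward, done) simulates the student's response
# (a stand-in for real test-based evaluations).
import random
from collections import defaultdict

def q_learning_tutor(transition, states, actions, episodes=200,
                     alpha=0.5, gamma=0.9, epsilon=0.2, rng=None):
    """Learn action values Q(s, a); the greedy policy over Q is the
    tutoring strategy (which content to show in which state)."""
    rng = rng or random.Random(0)
    q = defaultdict(float)
    for _ in range(episodes):
        s = states[0]
        done = False
        while not done:
            if rng.random() < epsilon:                       # explore
                a = rng.choice(actions)
            else:                                            # exploit
                a = max(actions, key=lambda act: q[(s, act)])
            s2, reward, done = transition(s, a)
            best_next = max(q[(s2, a2)] for a2 in actions)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q
```

With a simulated student who progresses on suitable content and stalls on unsuitable content, the learned Q-values come to prefer the content sequence that advances the student, mirroring the trial-and-error strategy described above.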
**Conclusions**
The two main aspects of PANDORA related in this paper are:
- The inclusion of the elements necessary to provide support to the relational databases methodology, taking into account deficiencies observed in several representative CASE tools.
- The use of a CASE tool as support for database design teaching. Using this kind of tool may help designers in the most complicated design steps, carrying out the validation and refinement tasks as well as interacting with users. In addition, it assists learners through the use of the different tools in this project and methodologically teaches them how to design and develop databases through the Web.
Currently, we are at a prototype stage in which the repository has been implemented in Oracle 8 together with the EER and Relational diagrammers with their complete graphical features as well as associated functionalities. The prototype is implemented in Microsoft Visual Basic 6.
We are working on the transformation into the relational model, implementing the translation rules for all EER constructs (Al-Jumaily, Cuadra & Martínez, 2002).
Regarding the Natural Language Analysis module, the interpreter for cardinality constraints is also implemented in Prolog using the DCG (Definite Clause Grammars) formalism (Martínez et al., 2000). This component is called by the Intelligent Tutor to support database design learning by teaching the cardinality constraint concept.
References
Biographies
**Elena Castro** received the M. Sc. in Mathematics from Universidad Complutense of Madrid in 1995. Since 1998 she works as assistant lecturer at the Advanced Databases Group in the Computer Science Department of Universidad Carlos III of Madrid. She is currently teaching Relational Databases. Her research interests include database conceptual and logical modelling, Advanced Database CASE environments and Information Engineering.
**Dolores Cuadra** received the M. Sc. in Mathematics from Universidad Complutense of Madrid in 1995. Since 1997 she works as assistant lecturer at the Advanced Databases Group in the Computer Science Department of Universidad Carlos III of Madrid. She is currently teaching Database Design and Relational Databases. Her research interests include database conceptual and logical modelling and Advanced Database CASE environments.
**Paloma Martínez Fernández** got a degree in Computer Science from Universidad Politécnica of Madrid in 1992. Since 1992, she has been working at the Advanced Databases Group in the Computer Science Department at Universidad Carlos III of Madrid. In 1998 she obtained the Ph.D. degree in Computer Science from Universidad Politécnica of Madrid. She is currently teaching Database Design, Advanced Databases in the Computer Science Department at the Universidad Carlos III de Madrid. She has been working in several European and National research projects about Natural Language Processing, Advanced Database Technologies, knowledge-based systems and Software Engineering.
**Ana Iglesias** got a degree in Computer Science from Universidad Carlos III of Madrid in 1999. She works as assistant lecturer and teacher at the Advanced Databases Group in the Computer Science Department of Universidad Carlos III of Madrid. Her research interests include conceptual modelling theories as well as Intelligent Tutoring Systems.
1 Introduction
2 Data warehousing concepts
3 Schemas for multidimensional data models
4 OLAP server architectures
5 Reading
Data warehouses generalize and consolidate data in multidimensional space.
Construction of data warehouses involves data cleaning, data integration, and data transformation.
Data warehouses provide online analytical processing (OLAP) tools for the interactive analysis of multidimensional data at varied granularities, which facilitates effective data mining.
Data mining functions such as clustering, classification, and association rule mining can be integrated with OLAP functions to enhance interactive data mining.
In conclusion, data warehousing forms an essential step in the knowledge discovery process.
What is a data warehouse?
A data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management decision making process. (William H. Inmon)
The following keywords distinguish data warehouse from other data repository systems such as relational database systems.
- **Subject-oriented** A data warehouse is organized around major subjects such as customer, supplier, product, and sales.
- **Integrated** A data warehouse is usually constructed by integrating multiple heterogeneous sources, such as relational databases, flat files, and online transaction records.
- **Time-variant** Data are stored to provide information from a historic perspective (e.g., the past 5–10 years).
- **Nonvolatile** A data warehouse does not require transaction processing, recovery, and concurrency control mechanisms. It usually requires only two operations in data accessing: initial loading of data and access of data.
### Differences between operational databases and data warehouses
<table>
<thead>
<tr>
<th>Feature</th>
<th>OLTP</th>
<th>OLAP</th>
</tr>
</thead>
<tbody>
<tr>
<td>Characteristic</td>
<td>operational processing</td>
<td>informational processing</td>
</tr>
<tr>
<td>Orientation</td>
<td>transaction</td>
<td>analysis</td>
</tr>
<tr>
<td>User</td>
<td>clerk, DBA, database professional</td>
<td>knowledge worker (e.g., manager, executive, analyst)</td>
</tr>
<tr>
<td>Function</td>
<td>day-to-day operations</td>
<td>long-term informational requirements decision support</td>
</tr>
<tr>
<td>DB design</td>
<td>ER-based, application-oriented</td>
<td>star/snowflake, subject-oriented</td>
</tr>
<tr>
<td>Data</td>
<td>current, guaranteed up-to-date</td>
<td>historic, accuracy maintained over time</td>
</tr>
<tr>
<td>Summarization</td>
<td>primitive, highly detailed</td>
<td>summarized, consolidated</td>
</tr>
<tr>
<td>View</td>
<td>detailed, flat relational</td>
<td>summarized, multidimensional</td>
</tr>
<tr>
<td>Unit of work</td>
<td>short, simple transaction</td>
<td>complex query</td>
</tr>
<tr>
<td>Access</td>
<td>read/write</td>
<td>mostly read</td>
</tr>
<tr>
<td>Focus</td>
<td>data in</td>
<td>information out</td>
</tr>
<tr>
<td>Operations</td>
<td>index/hash on primary key</td>
<td>lots of scans</td>
</tr>
<tr>
<td>Number of records accessed</td>
<td>tens</td>
<td>millions</td>
</tr>
<tr>
<td>Number of users</td>
<td>thousands</td>
<td>hundreds</td>
</tr>
<tr>
<td>DB size</td>
<td>GB to high-order GB</td>
<td>≥ TB</td>
</tr>
<tr>
<td>Priority</td>
<td>high performance, high availability</td>
<td>high flexibility, end-user autonomy</td>
</tr>
<tr>
<td>Metric</td>
<td>transaction throughput</td>
<td>query throughput, response time</td>
</tr>
</tbody>
</table>
Note: Table is partially based on Chaudhuri and Dayal [CD97].
Decision support requires historic data, whereas operational databases do not typically maintain historic data. In this context, the data in operational databases, though abundant, are usually far from complete for decision making. Decision support requires consolidation (e.g., aggregation and summarization) of data from heterogeneous sources, resulting in high-quality, clean, integrated data. In contrast, operational databases contain only detailed raw data, such as transactions, which need to be consolidated before analysis. Because the two systems provide quite different functionalities and require different kinds of data, it is presently necessary to maintain separate databases. However, many vendors of operational relational database management systems are beginning to optimize such systems to support OLAP queries. As this trend continues, the separation between OLTP and OLAP systems is expected to decrease.
#### 4.1.4 Data Warehousing: A Multitiered Architecture
Data warehouses often adopt a three-tier architecture, as presented in Figure 4.1.
1. The bottom tier is a warehouse database server that is almost always a relational database system. Back-end tools and utilities are used to feed data into the bottom tier from operational databases or other external sources (e.g., customer profile information provided by external consultants). These tools and utilities perform data extraction, cleaning, and transformation (e.g., to merge similar data from different sources into a unified format), as well as load and refresh functions to update the data warehouse (see Section 4.1.6). The data are extracted using application program interfaces known as gateways. A gateway is supported by the underlying DBMS and allows client programs to generate SQL code to be executed at a server. Examples of gateways include ODBC (Open Database Connection) and OLEDB (Object...
From the architecture point of view, there are three data warehouse models:
**Enterprise warehouse** An enterprise warehouse collects all of the information about subjects spanning the entire organization.
**Data mart** A data mart contains a subset of corporate-wide data that is of value to a specific group of users. The scope is confined to specific selected subjects.
**Virtual warehouse** A virtual warehouse is a set of views over operational databases.
What are the pros and cons of the top-down and bottom-up approaches to data warehouse development?
- The top-down development of an enterprise warehouse serves as a systematic solution and minimizes integration problems. However, it is expensive, takes a long time to develop, and lacks flexibility due to the difficulty in achieving consistency and consensus for a common data model for the entire organization.
- The bottom-up approach to the design, development, and deployment of independent data marts provides flexibility, low cost, and rapid return on investment. However, it can lead to problems when integrating various disparate data marts into a consistent enterprise data warehouse.
Data warehouse systems use back-end tools and utilities to populate and refresh their data. These tools and utilities include the following functions:
- **Data extraction** This typically gathers data from multiple, heterogeneous, and external sources.
- **Data cleaning** This detects errors in the data and rectifies them when possible.
- **Data transformation** This converts data from legacy or host format to warehouse format.
- **Load** This sorts, summarizes, consolidates, computes views, checks integrity, and builds indices and partitions.
- **Refresh** This propagates the updates from the data sources to the warehouse.
Besides the above functions, data warehouse systems usually provide a good set of data warehouse management tools.
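The back-end functions listed above can be chained as a toy in-memory pipeline. This is a rough sketch only: the field names, the cleaning rule, and the summarization are invented for illustration, and the refresh function is omitted.

```python
# Toy end-to-end run of the back-end functions listed above.
# Field names, the cleaning rule, and the summarization are hypothetical.

def extract(sources):
    """Data extraction: gather rows from multiple heterogeneous sources."""
    rows = []
    for source in sources:
        rows.extend(source)
    return rows

def clean(rows):
    """Data cleaning: here, drop rows whose amount is missing."""
    return [r for r in rows if r.get("amount") is not None]

def transform(rows):
    """Data transformation: normalize city names to a warehouse format."""
    return [{**r, "city": r["city"].strip().title()} for r in rows]

def load(rows):
    """Load: summarize and consolidate into totals per city."""
    totals = {}
    for r in rows:
        totals[r["city"]] = totals.get(r["city"], 0) + r["amount"]
    return totals

source_a = [{"city": " vancouver", "amount": 605},
            {"city": "Toronto", "amount": None}]   # bad row, cleaned out
source_b = [{"city": "VANCOUVER ", "amount": 825}]

warehouse = load(transform(clean(extract([source_a, source_b]))))
# warehouse == {"Vancouver": 1430}
```

Note how the two spellings of the city name only consolidate into one warehouse row because transformation ran before loading.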
Metadata are data about data. When used in a data warehouse, metadata are the data that define warehouse objects.
A metadata repository should contain the following:
- **A description of the data warehouse structure** including the warehouse schema, view, dimensions, hierarchies, and derived data definitions, as well as data mart locations and contents.
- **Operational metadata** such as history of migrated data and the sequence of transformations applied to it and monitoring information (warehouse usage statistics, error reports, and audit trails).
- **The algorithms used for summarization** including measure and dimension definition algorithms, data on granularity, partitions, subject areas, aggregation, summarization, and predefined queries and reports.
- **Mapping from the operational environment to the data warehouse** including source databases and their contents, gateway descriptions, data partitions, data extraction, cleaning, transformation rules and defaults, data refresh and purging rules, and user authorization and access control.
- **Data related to system performance** including indices and profiles that improve data access and retrieval performance, in addition to rules for the timing and scheduling of refresh, update, and replication cycles.
- **Business metadata** including business terms and definitions, data ownership information, and charging policies.
Data warehouse modeling
- Data warehouses and OLAP tools are based on a multidimensional data model.
- This model views data in the form of a data cube.
- What is a data cube?
- A data cube allows data to be modeled and viewed in multiple dimensions. It is defined by dimensions and facts.
- Dimensions are the perspectives or entities with respect to which an organization wants to keep records.
- Each dimension may have a table associated with it, called a dimension table, which further describes the dimension.
- Dimension tables can be specified by users or experts, or automatically generated and adjusted based on data distributions.
- A multidimensional data model is typically organized around a central theme represented by a fact table. Facts are numeric measures.
- The fact table contains the names of the facts, or measures, as well as keys to each of the related dimension tables.
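The fact/dimension split described above can be sketched with plain in-memory tables; all keys, attribute names, and values below are invented for illustration.

```python
# A toy star-style layout for a sales theme; keys and values are hypothetical.

item_dim = {
    1: {"item_name": "TV", "type": "home entertainment"},
    2: {"item_name": "laptop", "type": "computer"},
}
time_dim = {10: {"quarter": "Q1"}, 11: {"quarter": "Q2"}}

# The fact table holds the numeric measure plus foreign keys to each dimension.
sales_fact = [
    {"time_key": 10, "item_key": 1, "dollars_sold": 605},
    {"time_key": 10, "item_key": 2, "dollars_sold": 825},
    {"time_key": 11, "item_key": 2, "dollars_sold": 952},
]

def sales_by(dimension_key, dim_table, attr):
    """Join the fact table to one dimension and aggregate the measure."""
    totals = {}
    for row in sales_fact:
        label = dim_table[row[dimension_key]][attr]
        totals[label] = totals.get(label, 0) + row["dollars_sold"]
    return totals

# e.g. sales_by("item_key", item_dim, "type")
#   -> {"home entertainment": 605, "computer": 1777}
```

Each aggregate query is a join from the fact table out to one dimension table, which is exactly the radial access pattern of a star schema.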
Data cube (example)
- Relational schema for a relational database
<table>
<thead>
<tr>
<th>table</th>
<th>attributes</th>
</tr>
</thead>
<tbody>
<tr>
<td>customer</td>
<td>cust_ID, name, address, age, occupation, annual_income, credit_information, category, ...</td>
</tr>
<tr>
<td>item</td>
<td>item_ID, brand, category, type, price, place_made, supplier, cost, ...</td>
</tr>
<tr>
<td>employee</td>
<td>empl_ID, name, category, group, salary, commission, ...</td>
</tr>
<tr>
<td>branch</td>
<td>branch_ID, name, address, ...</td>
</tr>
<tr>
<td>purchases</td>
<td>trans_ID, cust_ID, empl_ID, date, time, method_paid, amount</td>
</tr>
<tr>
<td>items_sold</td>
<td>trans_ID, item_ID, qty</td>
</tr>
<tr>
<td>works_at</td>
<td>empl_ID, branch_ID</td>
</tr>
</tbody>
</table>
- 2-D view of sales data
<table>
<thead>
<tr>
<th rowspan="2">time (quarter)</th>
<th colspan="2">location = “Vancouver”, item (type)</th>
</tr>
<tr>
<th>home entertainment</th>
<th>computer</th>
</tr>
</thead>
<tbody>
<tr>
<td>Q1</td>
<td>605</td>
<td>825</td>
</tr>
<tr>
<td>Q2</td>
<td>680</td>
<td>952</td>
</tr>
<tr>
<td>Q3</td>
<td>812</td>
<td>1023</td>
</tr>
<tr>
<td>Q4</td>
<td>927</td>
<td>1038</td>
</tr>
</tbody>
</table>
Data cube (example)
- 3-D view of sales data
<table>
<thead>
<tr>
<th rowspan="2">time (quarter)</th>
<th colspan="4">location = “Chicago”</th>
<th colspan="4">location = “New York”</th>
<th colspan="4">location = “Toronto”</th>
<th colspan="4">location = “Vancouver”</th>
</tr>
<tr>
<th>home ent.</th><th>comp.</th><th>phone</th><th>sec.</th>
<th>home ent.</th><th>comp.</th><th>phone</th><th>sec.</th>
<th>home ent.</th><th>comp.</th><th>phone</th><th>sec.</th>
<th>home ent.</th><th>comp.</th><th>phone</th><th>sec.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Q1</td>
<td>854</td><td>882</td><td>89</td><td>623</td>
<td>1087</td><td>968</td><td>38</td><td>872</td>
<td>818</td><td>746</td><td>43</td><td>591</td>
<td></td><td></td><td></td><td></td>
</tr>
<tr>
<td>Q2</td>
<td>943</td><td>890</td><td>64</td><td>698</td>
<td>1130</td><td>1024</td><td>41</td><td>925</td>
<td>894</td><td>769</td><td>52</td><td>682</td>
<td></td><td></td><td></td><td></td>
</tr>
<tr>
<td>Q3</td>
<td>1032</td><td>924</td><td>59</td><td>789</td>
<td>1034</td><td>1048</td><td>45</td><td>1002</td>
<td>940</td><td>795</td><td>58</td><td>728</td>
<td></td><td></td><td></td><td></td>
</tr>
<tr>
<td>Q4</td>
<td>1129</td><td>992</td><td>63</td><td>870</td>
<td>1142</td><td>1091</td><td>54</td><td>984</td>
<td>978</td><td>864</td><td>59</td><td>784</td>
<td></td><td></td><td></td><td></td>
</tr>
</tbody>
</table>
Note: The measure displayed is dollars sold (in thousands).
Figure 4.3: A 3-D data cube representation of the data in Table 4.3, according to time (quarters Q1–Q4), item (home entertainment, computer, phone, security), and location (Chicago, New York, Toronto, Vancouver). The measure displayed is dollars sold (in thousands).
Figure 4.4: A 4-D data cube representation of sales data, according to time, item, location, and supplier. The measure displayed is dollars sold (in thousands). For improved readability, only some of the cube values are shown.
The 0-D cuboid, which holds the highest level of summarization, is called the apex cuboid. In our example, this is the total sales, or dollars sold, summarized over all four dimensions. The apex cuboid is typically denoted by all.
In data warehousing, a data cube like those shown in the previous slides is often referred to as a **cuboid**.
- Given a set of dimensions, we can generate a cuboid for each of the possible subsets of the given dimensions.
- The result would form a **lattice of cuboids**, each showing the data at a different level of summarization, or group-by.
- The **lattice of cuboids** is then referred to as a **data cube**.
- The cuboid that holds the lowest level of summarization is called the **base cuboid**.
- The 0-D cuboid, which holds the highest level of summarization, is called the **apex cuboid**.
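The lattice for n dimensions can be enumerated directly as the 2^n subsets of the dimension set; here is a small sketch using the dimension names from the 4-D example above.

```python
from itertools import combinations

# One cuboid per subset of the dimensions: 2^n cuboids in total, from the
# apex cuboid (empty subset) down to the base cuboid (all n dimensions).

def cuboids(dimensions):
    result = []
    for k in range(len(dimensions) + 1):
        result.extend(combinations(dimensions, k))
    return result

lattice = cuboids(["time", "item", "location", "supplier"])
# len(lattice) == 16; () is the apex cuboid, the full 4-tuple is the base.
```

Each tuple in the result names one group-by: the empty tuple is the grand total, and each larger subset is a finer level of summarization.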
4.2 Data Warehouse Modeling: Data Cube and OLAP
Lattice of cuboids (for four dimensions):
- 0-D (apex) cuboid
- 1-D cuboids
- 2-D cuboids
- 3-D cuboids
- 4-D (base) cuboid
The entity-relationship data model is commonly used in the design of relational databases, where a database schema consists of a set of entities and the relationships between them.
The entity-relationship data model is appropriate for online transaction processing.
A data warehouse, however, requires a concise, subject-oriented schema that facilitates online data analysis.
The most popular data model for a data warehouse is the multidimensional model, which can take one of the following forms:
- Star schema
- Snowflake schema
- Galaxy schema
The most common modeling paradigm is the star schema, in which the data warehouse contains:
1. A large central table (fact table) containing the bulk of the data, with no redundancy.
2. A set of smaller attendant tables (dimension tables), one for each dimension.
The schema graph resembles a starburst, with the dimension tables displayed in a radial pattern around the central fact table.
In a star schema, each dimension is represented by only one table, and each table contains a set of attributes.
This constraint may introduce some redundancy.
The attributes within a dimension table may form either a hierarchy (total order) or a lattice (partial order).
The snowflake schema is a variant of the star schema model, where some dimension tables are normalized, thereby further splitting the data into additional tables.
The resulting schema graph forms a shape similar to a snowflake.
The major difference between snowflake and star schema models is that the dimension tables of the snowflake model may be kept in normalized form to reduce redundancies.
Tables in a snowflake schema are easy to maintain and save storage space.
This space saving is negligible in comparison to the typical magnitude of the fact table.
The snowflake structure can reduce the effectiveness of browsing, since more joins are needed to execute a query.
Although the snowflake schema reduces redundancy, it is not as popular as the star schema in data warehouse design.
Snowflake schema (cont.)
An example of a snowflake schema
- **time**
- Dimension table
- `time_key`
- `day`
- `day_of_week`
- `month`
- `quarter`
- `year`
- **sales**
- Fact table
- `time_key`
- `item_key`
- `branch_key`
- `location_key`
- `dollars_sold`
- `units_sold`
- **item**
- Dimension table
- `item_key`
- `item_name`
- `brand`
- `type`
- `supplier_key`
- **supplier**
- Dimension table
- `supplier_key`
- `supplier_type`
- **branch**
- Dimension table
- `branch_key`
- `branch_name`
- `branch_type`
- **location**
- Dimension table
- `location_key`
- `street`
- `city_key`
- **city**
- Dimension table
- `city_key`
- `city`
- `province_or_state`
- `country`
Galaxy schema
- Sophisticated applications may require **multiple fact tables** to share dimension tables.
- This kind of schema can be viewed as a collection of stars, and hence is called a **galaxy schema** or a **fact constellation**.
A concept hierarchy defines a sequence of mappings from a set of low-level concepts to higher-level, more general concepts.
- **country**
- **province_or_state**
- **city**
- **street**
- **year**
- **quarter**
- **month**
- **week**
- **day**
Concept hierarchies may also be defined by discretizing or grouping values for a given dimension or attribute (see the following figure).
There may be more than one concept hierarchy for a given attribute or dimension, based on different user viewpoints.
Concept hierarchies may be provided manually by system users, domain experts, or knowledge engineers, or may be automatically generated based on statistical analysis of the data distribution.
A data cube measure is a numeric function that can be evaluated at each point in the data cube space.
A measure value is computed for a given point by aggregating the data corresponding to the respective dimension value pairs defining the given point.
Measures can be organized into three categories based on the kind of aggregate functions.
1. **Distributive**: An aggregate function is distributive if it can be computed in a distributed manner. For example, count(), min(), and max() are distributive aggregate functions. (appropriate for large data cube)
2. **Algebraic**: An aggregate function is algebraic if it can be computed by an algebraic function with M arguments (where M is a bounded positive integer), each of which is obtained by applying a distributive aggregate function. For example, avg() can be computed by sum()/count(), where both sum() and count() are distributive aggregate functions. (appropriate for large data cube)
3. **Holistic**: An aggregate function is holistic if there is no constant bound on the storage size needed to describe a subaggregate. That is, there does not exist an algebraic function with M arguments (where M is a constant) that characterizes the computation. For example, median(), mode(), and rank(). A measure is holistic if it is obtained by applying a holistic aggregate function. (There are some techniques to approximate the computation of some holistic measures)
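A small sketch of the distinction: the algebraic avg() below is recovered exactly from per-partition (sum, count) pairs, each of which is distributive; the toy numbers are invented for the example.

```python
# Why avg() is algebraic: it can be rebuilt from the distributive sum() and
# count() computed independently on partitions of the data.

partitions = [[4, 8], [6], [2, 10]]

partials = [(sum(p), len(p)) for p in partitions]   # per-partition (sum, count)
total_sum = sum(s for s, _ in partials)
total_count = sum(c for _, c in partials)
avg = total_sum / total_count                       # 30 / 5 = 6.0

# A holistic measure like median() has no such bounded summary: in general,
# the medians of the partitions are not enough to recover the global median,
# so the raw values must be kept (or the measure approximated).
```

This is exactly why distributive and algebraic measures suit large data cubes: each partition (or lower-level cuboid) contributes only a constant-size summary.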
In the multidimensional model, data are organized into multiple dimensions, and each dimension contains multiple levels of abstraction defined by concept hierarchies. This organization provides users with the flexibility to view data from different perspectives. A number of **OLAP operations** exist to materialize these different views.
- **Roll-up** The roll-up operation performs aggregation on a data cube.
- **Drill-down** Drill-down is the reverse of roll-up.
- **Slice** The slice operation performs a selection on one dimension of the given cube, resulting in a subcube.
- **Dice** The dice operation defines a subcube by performing a selection on two or more dimensions.
- **Pivot (rotate)** Pivot is a visualization operation that rotates the data axes in view to provide an alternative data presentation.
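The operations above can be sketched on a toy two-dimensional cube; the sales values and the city-to-country hierarchy below are invented for the example.

```python
# Toy cube keyed by (quarter, city) with dollars sold; values are hypothetical.

cube = {
    ("Q1", "Vancouver"): 605, ("Q1", "Toronto"): 818,
    ("Q2", "Vancouver"): 680, ("Q2", "Toronto"): 894,
}

# Roll-up: climb the location hierarchy city -> country, aggregating measures.
country_of = {"Vancouver": "Canada", "Toronto": "Canada"}
rolled = {}
for (quarter, city), dollars in cube.items():
    key = (quarter, country_of[city])
    rolled[key] = rolled.get(key, 0) + dollars
# rolled == {("Q1", "Canada"): 1423, ("Q2", "Canada"): 1574}

# Slice: select on one dimension (quarter = "Q1"), yielding a subcube.
q1_slice = {city: v for (quarter, city), v in cube.items() if quarter == "Q1"}

# Dice: select on two or more dimensions at once.
diced = {k: v for k, v in cube.items()
         if k[0] in ("Q1", "Q2") and k[1] == "Vancouver"}
```

Drill-down would be the inverse of the roll-up step (country back to city), and pivot would merely swap which dimension is displayed on which axis.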
Implementations of a warehouse server for OLAP processing include the following:
1. **Relational OLAP (ROLAP) servers** These are the intermediate servers that stand in between a relational back-end server and client front-end tools. They use a relational or extended-relational DBMS to store and manage warehouse data, and OLAP middleware to support missing pieces.
2. **Multidimensional OLAP (MOLAP) servers** These servers support multidimensional data views through array-based multidimensional storage engines. They map multi-dimensional views directly to data cube array structures.
3. **Hybrid OLAP (HOLAP) servers** The hybrid OLAP approach combines ROLAP and MOLAP technology, benefiting from the greater scalability of ROLAP and the faster computation of MOLAP.
4. **Specialized SQL servers** Some database system vendors implement specialized SQL servers that provide advanced query language and query processing support for SQL queries over star and snowflake schemas in a read-only environment.
Read chapter 4 of the following book
J. Han, M. Kamber, and Jian Pei, *Data Mining: Concepts and Techniques*, Morgan Kaufmann, 2012.
# Table of Contents
**Track 1: Parallel and Distributed Systems**
## 1.1 Parallel Computing
1.1.1 Planning Management of Multiagent-Based Distributed Open Computing Environment Model
He. Y., Cooley D.H., and Zhang J., Utah State University
1.1.2 Benchmarking a Network of PCs Running Parallel Applications
Hollingsworth J.K., Guven E., and Akinlar C., University of Maryland
1.1.3 Near-Optimal Broadcast in All-Port Wormhole-Routed Hypercubes Using Error Correcting Codes
Ko H., Latifi S., University of Nevada Las Vegas, and Srimani P.K., Colorado State University
1.1.4 Heuristic Algorithms for Priority Assignment in Flow Shops
Etemadi R., Majumdar S., Carleton University, Canada, and Karam G., AT&T
## 1.2 Distributed Algorithms
1.2.1 A Practical Building Block for Solving Agreement Problems in Asynchronous Distributed Systems
Hurfin M., Raynal M., and Tronel F., IRISA, France
1.2.2 Channel Reification: A Reflective Model for Distributed Computation
Ancona M., Dodero G., Gianuzzi V., DISI-University of Genova Cazzola W., DSI-University of Milano
1.2.3 A Fault Tolerant Distributed Sorting Algorithm in Tree Networks
Alali G., Universite catholique de Louvain, Belgium; Beauquier J., Tixeuil S., Universite de Paris-Sud, France; Chacko J., Datta A.K., University of Nevada Las Vegas
1.2.4 A Fault Tolerant Token Passing Algorithm on Tree Networks
Alali G., Universite catholique de Louvain, Belgium; Beauquier J., Johnen C., Universite de Paris-Sud, France; Datta A.K., Thiagarajan V., University of Nevada Las Vegas
## 1.3 Distributed Databases
1.3.1 Performance Comparison of Three Alternatives of Distributed Multidatabase Systems: A Global Query Perspective
Chen C.-M., Sun W., and Rishe N., Florida International University
1.3.2 Efficient Quorum Operations in Replicated Databases
Helal A., MCC Corporation
1.3.3 The Effect of Object-Agent Interactions on the Performance of CORBA Systems
Abdul-Fatah I., Nortel Technologies, Canada; Majumdar S., Carleton University, Canada
*Paper was not available at time of printing*
PERFORMANCE COMPARISON OF THREE ALTERNATIVES OF DISTRIBUTED MULTIDATABASE SYSTEMS: A GLOBAL QUERY PERSPECTIVE
Chung-Min Chen, Wei Sun and Naphtali Rishe
High Performance Database Research Center
School of Computer Science
Florida International University
Miami, FL 33199
E-mail: chungmin@cs.fiu.edu
ABSTRACT
Diversity and evolution in database applications often result in a multidatabase environment in which corporate data are stored in multiple, distributed data sources, each managed by an independent database management system. One of the essential functions of a multidatabase system is to provide inter-database access: the capability of evaluating global queries that require access to multiple data sources. This paper compares three common relational multidatabase approaches – the federated approach, the gateway approach, and the middleware approach – from the perspective of global query performance. In particular, we examine their architectural impact on the applicability of pipelined query processing techniques and load balancing. We present a performance comparison based on a detailed simulation. The study suggests that the middleware approach, which is the most cost-effective solution among the three, provides better or comparable performance to the other two approaches.
I. INTRODUCTION
A multidatabase system is a collection of interconnected database systems (or data sources). Each data source is governed by an independent database management system (DBMS). Most of the time these DBMSs operate autonomously, but from time to time they need to cooperate with each other in answering global queries - queries that require access to more than one data source. Multidatabase systems are common in today's enterprise-level information systems, often a result of certain application requirements or other strategic considerations [7]. Multidatabase systems are often configured in a distributed, client-server computing architecture: each component DBMS is installed and operates on a dedicated server machine, whereas database applications run on client machines. All servers and clients are interconnected through a communication network. The servers retrieve data from the databases in response to client requests.
There are three common alternatives to establish a multidatabase system: the "federated" approach, the "gateway" approach, and the "middleware" approach$^1$. Both the federated and gateway approaches use an inter-database-access-enabling DBMS to handle global queries. The only difference is that the federated approach adds a dedicated DBMS, whereas the gateway approach extends one of the existing servers to include such capability. The middleware, in contrast, is merely software that coordinates the DBMS servers for collaborative activities. A middleware is not a fully loaded DBMS and must rely on the component DBMS servers to evaluate global queries. Middleware products are appealing to many multidatabase users as they are less expensive and more portable than the other two alternatives. However, there has been some doubt about the use of middleware due to a performance concern: the lack of an internal DBMS for efficient global query processing.
This paper examines the impact of the architectural differences among the three aforementioned multidatabase system approaches on the performance of global queries. We investigated this issue by analyzing the implications for the applicability of pipelining, communication overhead, disk IO overhead, and load balancing. In particular, we seek answers to the following questions: (1) Is the performance of the middleware
---
1 The terms "federated", "gateway", and "middleware" have been used broadly, sometimes ambiguously, in the literature to mean the kinds of software that enable inter-database access. Our use of these terms in this article does not necessarily reflect exactly what is interpreted by others.
approach comparable to that of the gateway approach? (2) Does the federated approach, at the cost of an additional server, achieve a performance gain (with respect to the middleware and gateway approaches) in proportion to the cost?
There has been much research work on heterogeneous or multidatabase systems that focused on such issues as architectural design, schema integration, query optimization, and transaction processing (e.g., [6], [11], [1] and [10]). None of them, to the best of our knowledge, has addressed the comparative performance issue related to the three models described above. Recent examples of federated multidatabase systems include DATAPLEX [4] and DB Integrator [9]. Many relational DBMS vendors have also supported their own DBMS products with gateway access to other DBMSs (e.g., ORACLE's Open Gateways and Sybase's OmniConnect). Some early research prototypes, for example, Mermaid [13] and MDAS [5], can be considered middleware systems. Recently, middleware products from third-party vendors have been emerging in the market (e.g., Intersolv's DataDirect ODBC Driver and Information Builders's EDA/SQL). The most recent industry development on multidatabase architectures and APIs can be found in [8].
II. THREE MULTIDATABASE APPROACHES
A. Federated Approach
Figure 1 shows a federated multidatabase system. In this configuration, a separate federal DBMS (FDBMS) server with inter-database access capability is added to the system. The FDBMS has its own relational data storage system and query evaluation engine, and must provide, for each type of DBMS it supports, a software module that performs necessary SQL dialect translation and data format conversion. Local queries – queries that access data in a single data source – are processed at the respective DBMS server. The FDBMS server is used exclusively for processing global queries. It decomposes a global query into a number of remote sub-queries and a local assembly sub-query. The remote sub-queries must be expressed in SQL format and sent to the respective DBMSs for execution. The assembly sub-query, which is executed at the FDBMS server, collects the results from remote sub-queries and merges them into a final result. The outlined arrows in the figure indicate the data flow directions in the execution of a global query. Having its own relational database engine, the FDBMS may speed up the assembly sub-query by pipelining the input data streams from other servers with the local assembling operations. This avoids the need to store the input data streams explicitly on the local disk first.
B. Gateway Approach
Figure 2 shows the gateway approach for a multidatabase system. In this setting, one of the existing DBMS servers in the system is enhanced with the inter-database access capability. This DBMS, called a gateway DBMS (GDBMS), assumes two roles: as an autonomous DBMS server that continues to process local queries, and as a federal DBMS that handles global queries. From the view of global queries, the internal workings of the GDBMS are similar to those of a FDBMS. The difference between the two is configurational rather than architectural. Just like the federated approach, the gateway approach may also take advantage of pipelined query processing at the GDBMS server. In addition, global queries which involve tables stored in the GDBMS will not need to send these tables across the network to a remote site. This is a potential performance gain over the federated approach, which requires all tables to be staged at the FDBMS. The main problem with the gateway approach is that the GDBMS may become the bottleneck as it is overloaded with global and local queries.
C. Middleware Approach
A middleware is software that supports global queries in a multidatabase system by relying only on the processing power of the component DBMSs. Figure 3 shows a multidatabase system using a middleware. The middleware, though drawn separately, can run at any of the DBMS server machines. It accepts global queries and generates, for each, an execution schedule that contains a number of sub-queries (to be executed at various DBMS servers) and the necessary data transfers between the servers. The middleware is responsible for routing the sub-queries to the servers, translating SQL dialects as needed, converting data from one format to another, and coordinating data exchanges between the servers.
A middleware can only interact with the database servers through a high-level query interface (typically a call-level SQL). And since it does not have its own relational DBMS engine, pipelined global processing is not applicable. Rather, temporary tables must be created in some of the servers to hold intermediate results or remote tables. In comparison to the federated and gateway approaches, the need to hold data in temporary tables incurs extra I/O overhead and increases the load on server disks. However, the middleware approach has a better load balancing nature since the workload of global queries is equally distributed among all DBMS
servers. Another merit of middleware has to do with the cost: being a "lightweight" software, a middleware is usually priced lower than the other two approaches, consumes relatively few resources, and possibly requires less administration.
III. MULTIDATABASE JOIN PROCESSING: PIPELINED VS. NON-PIPELINED
Global queries are indeed distributed join queries [12]. Distributed join processing in a multidatabase system differs from that in a traditional distributed database system in that the relations stored in autonomous DBMS servers are accessible only through SQL interfaces. The implication is that pipelined processing is rather limited in multidatabase systems. In the following, we describe variants of distributed join algorithms, with different degrees of pipelining, that are applicable in the three multidatabase alternatives and discuss their performance implications. We consider both hash-join and merge-join strategies [12], with focus on the former. In this study, we do not consider index-join strategies using pre-existing indices, as the federated approach cannot utilize any pre-existing index; nor do we consider the nested-loop join strategy, as it is much less efficient than hash-join. Throughout the discussion, we consider an equi-join query $r \bowtie_{r.A=s.A} s$ where $r$ and $s$ are relations located at different sites and $A$ is the join attribute.
A. Hash-Join Algorithms
Figure 4 outlines a basic hash-join algorithm. The algorithm consists of two phases: the partition phase and the probe phase. Without loss of generality, let $s$ be the smaller relation. In the first phase, tuples in each of the relations are partitioned into "buckets" by applying a hash function, $h : D_A \rightarrow \{1, 2, \ldots, m\}$, to the join attribute, where $D_A$ is the domain of $A$ and $m$, the number of buckets, is chosen so that a one-block output buffer can be allocated for each bucket. An output buffer is flushed out to the disk when it is full or when the partition is finished. In the probe phase, each pair of buckets $B_{r,i}$ and $B_{s,i}$ is compared for matching tuples. We assume that each bucket $B_{s,i}$ along with its hash index always fits in memory.
When both relations are located at the same site (the case for a local join query), the partition phase accounts for one pass of read and one pass of write for both relations $^2$; the probe phase accounts for another pass of read for both relations. We call this a loc-loc hash-join, indicating both operands are local relations. We will describe next the variants of hash-join algorithms that can be applied to global queries in each of the three multidatabase approaches. To facilitate the discussion, we call the DBMS server at which the final result is assembled the join site. A relation is a local relation if it is located at the join site, otherwise it is called a remote relation.
A.1. Federated Variants
In the federated approach, both operand relations of a global join are remote to the FDBMS server and thus must be accessed through the network. Since the FDBMS has its own query evaluation engine, the partition phase can be performed directly against the incoming data streams in a pipelined manner, without having to first store the data in temporary tables. In other words, the input of the remote relations from the network is overlapped with the partition task. Once the partition is done, the probe phase can proceed as usual. We call this a str-str hash-join, indicating that both join operands are in the form of data streams. The overall disk I/O cost of a str-str hash-join is the same as that of a loc-loc hash-join. The former, however, incurs additional communication and CPU overhead.

$^2$Assuming all disk blocks, except the last one, allocated to the buckets are completely filled, so the total size of the buckets is the same as the original size of the input relation.

for each $t_r$ in $r$ do /* partition phase */
    $i = h(t_r.A)$;
    Insert $t_r$ into bucket $B_{r,i}$;
end
for each $t_s$ in $s$ do
    $i = h(t_s.A)$;
    Insert $t_s$ into bucket $B_{s,i}$;
end
for $i = 1$ to $m$ do /* probe phase */
    Load $B_{s,i}$ and build an in-memory hash index for $B_{s,i}$ based on attribute $s.A$;
    Perform the join $B_{r,i} \bowtie B_{s,i}$ by probing, for each $t_r \in B_{r,i}$, the hash index to locate those tuples $t_s \in B_{s,i}$ such that $t_s.A = t_r.A$;
end

Figure 4: Basic Hash-Join Algorithm
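As an illustration of Figure 4's partition and probe phases, here is a minimal in-memory sketch in Python (relation names and data are hypothetical; a real implementation spills buckets to disk, which this toy version omits):

```python
def hash_join(r, s, m=4):
    """Hash-join r and s on their first field (the join attribute A)."""
    # Partition phase: hash the join attribute into one of m buckets.
    b_r = [[] for _ in range(m)]
    b_s = [[] for _ in range(m)]
    for t in r:
        b_r[hash(t[0]) % m].append(t)
    for t in s:
        b_s[hash(t[0]) % m].append(t)
    # Probe phase: index each s-bucket (the build side), probe with r-tuples.
    result = []
    for i in range(m):
        index = {}
        for t_s in b_s[i]:
            index.setdefault(t_s[0], []).append(t_s)
        for t_r in b_r[i]:
            for t_s in index.get(t_r[0], []):
                result.append(t_r + t_s[1:])
    return result

r = [(1, "a"), (2, "b"), (3, "c")]
s = [(2, "x"), (3, "y"), (4, "z")]
print(sorted(hash_join(r, s)))  # only join keys 2 and 3 match
```

Only tuples whose join attributes hash to the same bucket are ever compared, which is what allows each bucket pair to be processed independently in the probe phase.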
A.2. Gateway Variants
There are two variants applicable in the gateway approach: the \( \text{str-str} \) and the \( \text{str-loc} \) variants. The former, which has been described above, applies when both operand relations are remote to the GDBMS server; the latter applies when only one relation is remote. Similar to the \( \text{str-str} \) variant, the \( \text{str-loc} \) variant may overlap the transfer of the remote relation with the partition task in a pipeline. The IO cost of a \( \text{str-loc} \) hash-join remains the same as that of a \( \text{loc-loc} \) hash-join. The \( \text{str-loc} \) variant will require additional communication and CPU costs over the \( \text{loc-loc} \) variant, but less than those incurred by the \( \text{str-str} \) variant (which accesses two remote relations).
A.3. Middleware Variants
In the middleware approach, the join site always hosts one of the operand relations. This means that for any global query, only one relation needs to be accessed through the network. However, since a middleware has no control of, and can only interact with, the DBMS at the join site through a high-level SQL interface, the remote relation must be sent to the join site and *fully imported* into a temporary database table before an SQL query can be issued to perform the join. We call this variant an \( \text{imp-loc} \) hash-join. Compared to the other variants, an \( \text{imp-loc} \) hash-join requires an additional pass of write and read for the imported relation. Though the middleware approach is unable to take advantage of pipelined processing, it may produce a better load balance since all component DBMS servers are potential join sites for global queries. Table 1 summarizes the hash-join variants that are applicable in the three multidatabase approaches (FED, GAT, and MID stand for the federated, gateway, and middleware approaches, respectively). Among them, \( \text{loc-loc} \) is the most efficient one, followed by \( \text{str-loc} \), with \( \text{str-str} \) and \( \text{imp-loc} \) trailing with a tradeoff between network and disk IO overhead.
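Counting disk passes makes this ranking concrete. Using the costs stated in this section (partition: one read plus one write per relation; probe: one more read per relation; imp-loc: an extra write and read for the imported relation), a rough tally might look like:

```python
# Disk passes over the two operand relations, per the cost model in the text.
partition = 2      # one read + one write of each relation
probe = 1          # one more read of each relation
loc_loc = 2 * (partition + probe)   # both relations local: 6 passes total
str_str = loc_loc                   # same IO; extra network/CPU instead
str_loc = loc_loc
imp_loc = loc_loc + 2               # + one write and one read of the imported relation

print(loc_loc, str_str, str_loc, imp_loc)  # 6 6 6 8
```

The variants thus differ in where the extra cost lands: str-str and str-loc pay in network and CPU, imp-loc pays in disk IO.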
B. Merge-Join Algorithms
When both relations are physically sorted on the join attribute, merge-join can be more efficient than other join strategies. Figure 5 outlines the basic merge-join algorithm. The algorithm scans the relations sequentially, locating and bringing into memory the next groups of tuples, \( B_r \) and \( B_s \), from both relations that hold the same value on the join attribute. It then performs a Cartesian product between these two sets of tuples and adds the composite tuples to the result. Variants of the merge-join algorithm for the different multidatabase approaches can be reasoned about in a similar way to those described for the hash-join strategy, and coincide with Table 1.
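The group-and-merge logic can be sketched as follows (a hypothetical minimal version; both inputs are assumed already sorted on the join attribute, taken to be the first field):

```python
def merge_join(r, s):
    """Merge-join r and s, both sorted on the first field (join attribute A)."""
    result, i, j = [], 0, 0
    while i < len(r) and j < len(s):
        if r[i][0] < s[j][0]:
            i += 1
        elif r[i][0] > s[j][0]:
            j += 1
        else:
            key = r[i][0]
            # Gather the groups B_r and B_s holding the same join-attribute value.
            gi = i
            while gi < len(r) and r[gi][0] == key:
                gi += 1
            gj = j
            while gj < len(s) and s[gj][0] == key:
                gj += 1
            # Cartesian product of the two groups.
            for t_r in r[i:gi]:
                for t_s in s[j:gj]:
                    result.append(t_r + t_s[1:])
            i, j = gi, gj
    return result

r = [(1, "a"), (2, "b"), (2, "c")]
s = [(2, "x"), (3, "y")]
print(merge_join(r, s))  # both r-tuples with key 2 join with (2, "x")
```

Each relation is scanned exactly once, which is why merge-join wins when the sort order already exists.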
IV. SIMULATION MODEL
We have implemented a simulation package in C to evaluate the global query performance of the three multidatabase architectures based on the distributed join algorithms described in the previous section. To lay a foundation for a fair performance comparison, the simulation model assumes that (1) all server machines have equivalent processing power (same CPU speed and IO access time), and (2) all DBMSs are equally "intelligent" in the sense that, for each given query, they will select the same join strategy (hash-join vs. merge-join). Figure 6 shows a *closed queueing network* simulation model which consists of a number of DBMS server
modules and a network module.
**DBMS Server Modules** Each DBMS server module contains a CPU and a disk device, both associated with a process queue. Processes are scheduled for the CPU based on round-robin and for the disk device based on FCFS. To avoid performance degradation caused by thrashing, the server restricts the number of concurrent processes by maintaining a *multiprogramming control queue* (MPC queue). A newly arriving process must enter the MPC queue before it can be admitted to compete for the resources.
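The MPC admission control can be sketched as a bounded gate (a hypothetical illustration; the real C simulator additionally schedules admitted processes round-robin on the CPU and FCFS on the disk):

```python
from collections import deque

class MPCQueue:
    """Admit at most `limit` concurrent processes; others wait in FIFO order."""
    def __init__(self, limit):
        self.limit = limit
        self.active = 0
        self.waiting = deque()

    def arrive(self, proc):
        if self.active < self.limit:
            self.active += 1
            return proc          # admitted immediately
        self.waiting.append(proc)
        return None              # must wait in the MPC queue

    def depart(self):
        if self.waiting:
            return self.waiting.popleft()  # next waiter takes the freed slot
        self.active -= 1
        return None

mpc = MPCQueue(limit=2)
print(mpc.arrive("p1"), mpc.arrive("p2"), mpc.arrive("p3"))  # p1 p2 None
print(mpc.depart())  # p3 is admitted into the freed slot
```

Capping the number of admitted processes bounds memory pressure, which is exactly the thrashing protection described above.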
The execution of a query is simulated in terms of processes. While a local query spawns only one process at a database server, a global join query needs to spawn processes at more than one database server. We model a process as a state transition diagram (STD) that contains a detailed schedule of the operations to be performed. Each state specifies an operation and the queue (CPU, disk, or blocked queue) at which the operation is to be performed. The time a process spends in a queue depends on the length of the queue and the service time to complete the operation. Typically, a process would go through a number of CPU and disk IO cycles before completion. Due to space restriction, we refer the readers to [3] for more details on the STD model.
**Network Module** The communication between the database servers is based on a message-passing model with two blocking primitives: send() and receive(). Data are divided into and transmitted in *packets*. A process calling send() will enter a packet into the packet queue and be temporarily blocked until the packet is delivered. Similarly, a call to receive() will not return control until a packet is received by the calling process. The network processor and the packet queue simulate the latency caused by the network protocol stack, physical bandwidth restriction, and packet contention. In our experiments, we use the same size for packets and disk blocks. The pipelined processing in variants str-str and str-loc is performed at the granularity of data blocks.
**Query Workload** A random query generator is used to produce the query stream workload. The generator allows one to specify the values for such parameters as relation sizes, join selectivities, and percentages of local and global queries. All database servers prefer merge-join (if applicable) to hash-join because the former is more efficient. When a query is completed, a new query is generated immediately and enters the system. The number of concurrent queries allowed in the system, called the *concurrency level* (CL), is specified as a parameter. Since the system is arranged as a closed queueing network, by varying the concurrency level we are able to observe the scalability of the performance with respect to system loads.
**V. SIMULATION RESULTS**
Table 2 shows the default values of the parameters used in the experiments. We used three autonomous database servers in all experiments. In the case of the federated approach, a fourth FDBMS server is added. The CPU costs of the various operations (in terms of number of instructions executed per data block) are estimated based on the implementation of a client-server heterogeneous DBMS prototype [2]. The default query stream consists of 80% local queries and 20% global queries. Similarly, 20% of all queries are joined on a sorted attribute and use the merge-join strategy; the other 80% use the hash-join strategy. We believe this is the norm in a multidatabase environment where most queries are local and most relations are not sorted on join attributes.
A. Effect of Query Loads
Figure 7 shows the average throughputs of the three approaches versus the concurrency level (CL). The average throughput is calculated by dividing the number of queries completed (including global and local queries) by the total elapsed time, measured in terms of number of queries per minute (QPM). For each point of observation, a sufficient number of queries (200-1000) were run so that the system reaches a stable state with a 95% confidence on the average throughput before finishing. As can be seen, the throughputs of all approaches level off when the number of concurrent queries reaches 20. GAT and MID yield close performance. When the load is mild (10 < CL < 30), MID outperforms GAT because of better load balancing. However, when the load is heavy and the system becomes critically IO-bound (CL ≥ 40), GAT surpasses MID because it needs fewer IO operations for global queries (due to the applicability of pipelined processing). The FED approach, using an additional DBMS server for global queries, produces better performance than the other two in all cases. The improvement, however, is less than linear with the cost: the throughput per server produced by FED (\( \frac{18}{4 \text{ servers}} \)) is less than that produced by GAT and MID (which is greater than \( \frac{15}{3 \text{ servers}} \)).
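The per-server comparison in the last sentence is simple arithmetic; sketched here with the plateau figures read off Figure 7 (illustrative values only):

```python
# Plateau throughputs in queries per minute (QPM), read off Figure 7.
fed_qpm, fed_servers = 18, 4        # FED adds a dedicated fourth server
gatmid_qpm, base_servers = 15, 3    # GAT and MID use the three existing servers

print(fed_qpm / fed_servers)        # 4.5 QPM per server
print(gatmid_qpm / base_servers)    # 5.0 QPM per server
```

So FED's absolute throughput win does not translate into a per-server win: the gain is sub-linear in the added hardware cost.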
Figure 8 shows the average response time for global queries. The response time of a query is measured as the elapsed time between the time the query is submitted and the time the query is completed. In all configurations, the response time increases almost linearly with the number of concurrent queries. The FED approach constantly yields the best response time due to the additional dedicated FDBMS server for global queries. The MID configuration, thanks to its load balancing nature, produces shorter response time than the GAT approach (which is more likely to develop a contention at the gateway server) under moderate to heavy loads (CL ≥ 15).
B. Effect of Network Bandwidth
Today's multidatabase systems are not necessarily confined to a local-area-network. The database servers could be connected through a wide-area-network such as the Internet or other types of proprietary networks that bear a lower effective data transfer rate (typically in the range of tens to hundreds of kilobits per second). On the other hand, emerging network technologies continue to improve the data transfer rate (e.g. ATM at 155 Mbps and Fast Ethernet at 100 Mbps).
Figure 9 compares the throughputs (under a workload of CL = 20) over a wide spectrum of effective network data transfer rates. The middleware approach outperforms the other two when the effective network data transfer rate is relatively low (≤ 1 Mbps). This is because MID requires less data transfer (one relation per global query) than the other two approaches (two relations for FED and \( 1 + \frac{N-1}{N} \) relations for GAT, where N is the number of database servers). When the effective network data transfer rate increases, the system bottleneck shifts from the network to the IO. Eventually the system becomes IO-bound and the throughputs level off. FED performs the best under such a condition as it has an additional disk to share the loads of global queries. The performance gain, again, is less than a linear scale-up relative to the cost.
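The relation-transfer counts quoted above can be checked directly (N is the number of database servers in the experiments; the GAT term is taken as given by the text):

```python
N = 3                      # database servers in the experiments
fed = 2                    # both operands are remote to the FDBMS
gat = 1 + (N - 1) / N      # per the formula quoted in the text
mid = 1                    # the join site always hosts one operand

print(fed, round(gat, 2), mid)  # 2 1.67 1
```

With three servers, MID ships roughly 40% less data per global query than GAT and half as much as FED, which explains its edge on slow networks.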
C. Effect of Query Mix
Figure 10 shows the average throughput as a function of the percentage of global queries. The throughput of GAT declines as the frequency of global queries increases since it causes the gateway server to become the bottleneck, and leaves other servers under-utilized (by not having enough local queries to keep them busy). In the case of MID, the throughput is less sensitive to the change of global query load. This is largely attributed to MID's load balancing nature in processing global queries. In contrast, the FED approach is most
sensitive to the global query load. The throughput increases initially as a result of better federal server utilization. It reaches a peak at which the resources of the federal database server are fully utilized. The throughput then starts to drop as a consequence of resource contention.
VI. CONCLUSIONS
In this paper we have examined the implications and compared the performance of three alternatives of distributed multidatabase systems from the perspective of global join queries. Our study has shown that, on a fair comparison ground, the middleware approach, though lacking its own query evaluation engine, produces comparable performance with the gateway approach, sometimes better. The middleware’s lack of pipelined processing capability (and thus higher disk I/O costs) is compensated by its better balanced servers for global query workloads and lower network overhead. The federated approach, at the expense of an additional DBMS server, is able to enhance the system throughput, but at a rate less than linear with the cost. Among the three, the middleware is the most cost-effective one if performance is not a dominant concern. In practice, the middleware approach has another advantage: it may utilize pre-existing indices in processing global queries, a technique that is not applicable in the federated approach and is limited in the gateway approach.
VII. REFERENCES
RISK MANAGEMENT FOR SOFTWARE PROJECTS IN AN AGILE ENVIRONMENT – ADOPTING KEY LESSONS FROM THE AUTOMOTIVE INDUSTRY
Oana Iamandi
Technical University of Cluj-Napoca, Romania
oana.iamandi@muri.utcluj.ro
Marius Dan
Technical University of Cluj-Napoca, Romania
marius.dan@calitop.ro;
Sorin Popescu
Technical University of Cluj-Napoca, Romania
posorin@gmail.com
Abstract:
The present paper seeks to improve the risk management practices undertaken in software development projects in agile environments. Software development projects, which have clearly benefited from a steep increase in both number and importance, reportedly lag behind in key project management areas, one of which is risk management. By providing key insights from one of the most successful and most rigorously structured industries – automotive manufacturing – we aim to bring new value to risk management practices in the form of a set of tools and techniques. The paper is strongly based on both a thorough investigation of the relevant scientific literature and the personal experience of the authors in software and web development project management and in quality and risk management systems in the automotive industry. In developing the theoretical part of the paper, we employed tools and techniques from the fields of information structuring, quality management and engineering, and competitive engineering. The scope of the paper is restricted to a theoretical framework: we propose a context that results from superimposing APQP as a Stage-Gate model onto the structure of the software development lifecycle adapted to an agile environment. The application of the risk management process and the associated proposed tools and techniques is in strict adherence with the previously mentioned context.
Keywords: risk management, project management, agile environment, stage-gate model, APQP model, information technology
1. INTRODUCTION
As much of the literature on agile project management and agile software development agrees, the agile environment, although widely used and proclaimed more successful than other methodologies, still lacks rigor and structure. Until recently, the performance requirements of software development projects were quite low, and these conditions spawned a plethora of companies and individuals dealing with software development to the best of their competence. In the current economic context, such approaches leave much to be desired and, as a result, practices have very quickly shifted towards better performance of software projects and higher delivered value at the same cost. The current high failure rate among software development projects demonstrates that this is simply not enough. In response, practitioners and academics are striving to create and adapt the best processes, tools, techniques and practices to better accommodate the current peak in demand for a new type of software development project and its management.
As a result, this paper responds to the need for structure in the area of risk management for software development projects. Combining a thorough research process, a critical approach, and the authors' experience in software development project management and in the automotive industry, we attempted to apply the lessons learned from one of the most standardized and thorough industries – automotive manufacturing – to the comparatively dynamic and emergent software development industry.
2. LITERATURE REVIEW
The focus of the literature review is twofold: (1) to identify the practices and applications of risk management in both stage-gate models and agile environments as well as to spot the deficiencies and shortcomings of each of the two in relation to risk management, and (2) to identify previous work done on adapting different tools and techniques from one area of application to another and possibly “cocktail” models.
2.1. Stage-Gate model
The Stage-Gate model was developed by Dr. Robert Cooper and was initially used for New Product Development. The general Stage-Gate model has long been the preferred approach for developing new products. It consists of a series of “gates”, placed at the end of each stage of the process, that act both as decision-enabling and as corrective mechanisms. Each gate requires a set of deliverables, which in turn have been previously designed in the project plan. Each stage gate also involves customer or user feedback, so as to allow passing to the next stage once validation has been received (Cooper, 1994).
Being a generic process, the Stage-Gate framework benefits from many adaptations in various industries and fields. The Advanced Product Quality Planning (APQP) model is a very good adaptation in the automotive field and displays all the components of a mature methodology (Chrysler Corporation, Ford Motor Company and General Motors Company, 2008): structure, process, tools and techniques, industry reach, and a number of significant case studies from both practitioners and academia.
2.2. Agile environments
Agile processes have been in place for a number of years, even if not by name. The flexible manufacturing paradigm is an early manifestation of this “school of thought”. With the appearance of the Agile Manifesto (Beedle, et al., 2001), agile received its name and has taken up ever since more and more adopters. (Conboy, 2009) defines software development agility as the continued readiness “to rapidly or inherently create change, proactively or reactively embrace change, and learn from change while contributing to perceived customer value (economy, quality, and simplicity), through its collective components and relationships with its environment.”
The current agile practices have little theoretical foundation, with just a few frameworks defined, and lack in “establishing theoretical underpinnings, when investigating agile development and its various practices” (Dingsøyr, Nerur, Balljepally, & Brede Moea, 2012). The observation also holds in the area of risk management where, although some implicit practices can be identified (Noor & Khan, 2011, 2014) – especially in the Scrum methodology (Alharbi & Qureshi, 2014) – little attention is paid and there is even less structure and fewer dedicated tools.
Risk management in agile environments opens a new perspective on agility in organizations. It proposes a new, more mature and comprehensive agile model in which agile models and frameworks are not constrained to the people and team size (Kettunen, 2009) – it integrates agility at the enterprise level and proposes that deployment be done downwards instead of upwards.
The future of Agile, as seen from the literature review, is to incorporate new structured tools and techniques and even to allow hybrid models, customized towards industry-specific practice and even company specificity. As (Boehm & Turner, 2004) concluded, the future of projects involves both high agility and great discipline.
2.3. Models and related literature
In (Kettunen, 2009) the author creates a link between agile manufacturing and the new product development process. He states that “some of the roots and basic ideas of agile software development appear to have much in common with the origins of AM” (n.a. agile manufacturing). The same interesting parallel between NPD and AM also appears in (Buyukozkan & Baykasoglu, 2004), where AM is stated to have “useful enabling technologies and physical tools which may support the efficient product development”. However apparent it may seem that there are great benefits from adopting agile principles in manufacturing, we must bear in mind that agile manufacturing benefits from the historical development of manufacturing management systems, which provides a sound basis for structured and comprehensive processes and tools that deliver the best results. The agile applications in software development are far newer and do not benefit from an established context.
In (Cooper), the author clearly demonstrates the compatibility between the two. He argues that, although successful in delivering fully working products or increments by the end of every sprint, agile is not appropriate for the whole project lifecycle. This is why it is mostly used by technical departments and staff and cannot reach the whole organization, thus limiting the scope of the project.
3. RESEARCH METHODOLOGY
The research starts from the premise proposed by (Kettunen, 2009), which concludes that “there is clearly a need to expand the view in software engineering, adopting applicable key learnings from other disciplines”. Moreover, in the case of agile practices, there is a clear need for more structured and established risk management processes and practices. As a result, this paper attempts to adapt the lessons learned from an established and well-standardized industry, the automotive sector, and bring them into an agile context.
3.1. Research questions and methodology
To achieve this objective, we posed a series of research questions:
- Q1: Are Stage-gate type models and Agile models incompatible?
The objective of this question is to identify whether a parallel implementation, or even a mixed agile / Stage-Gate model, is possible. The question is of great importance, as most well-established management and risk management models are practical applications of the Stage-Gate framework. When attempting to reconcile two types of models, or to adapt parts of one for another, it is essential to first validate whether the two belong to mutually exclusive philosophies.
- Q2: If Q1 is answered in the negative, what can a Stage-Gate model bring towards the improvement of Agile methods and practices?
The second question represents the fundamental premise of the research. It attempts to identify the elements which (1) have a risk management dimension and (2) are compatible with and transferable to agile environments. Answering the second question creates the basis for developing an adapted risk management framework – a roadmap of risk management tools and techniques specifically adapted for agile software development projects. The structure of the paper follows the methodology proposed in Figure 1.
**Figure 1: Research methodology**
For the Stage-Gate model, we chose the Advanced Product Quality Planning (APQP) model, widely utilized in the automotive industry. The choice was motivated by two factors: (1) the automotive industry is one of the most prolific and most successful in developing new products; it is also one of the most highly standardized, regulated and controlled industries, benefiting from designated frameworks and tools, and lessons learned from the automotive environment have repeatedly proven valuable in other environments such as other manufacturing areas and services; (2) the authors have recognized experience in the implementation and use of the APQP model.
For evaluating the risk management tools and techniques we used the QFD (Quality Function Deployment) tool to help us deploy the tools through the agile principles and characteristics. QFD is a competitive engineering structured instrument that is used primarily in quality management and quality planning both for processes and products. Its aim is to identify the primary characteristics that meet the customer requirements and to deploy them into the product or process. We used a simplified QFD template called the House of Quality (HOQ) as follows:
- The principles and characteristics of agile environments – scrum in particular – were identified and previously assigned an importance factor (in relation to their impact on risk management) using Analytical Hierarchy Process. They further become inputs – as requirements – in the QFD HOQ matrix;
- The risk management tools and techniques identified from the APQP are the second input for the QFD HOQ matrix, which will be deployed through the agile requirements and the most relevant will become the input for the customized risk management process adapted for software development agile environments;
- The roof of the QFD HOQ matrix was used to eliminate any redundancies that might occur from applying two similar tools and techniques, through solving the identified conflicts.
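The deployment step described in the bullets above amounts to a weighted matrix product: each tool's deployed importance is the sum of the agile requirement weights multiplied by the tool's relationship strengths. A minimal sketch follows; the requirement names, weights, and relationship scores are illustrative placeholders, not the values used in the paper.

```python
# Sketch of the QFD House of Quality deployment step (illustrative only).
# The agile requirement weights (AHP-derived in the paper) and the
# relationship scores below are hypothetical placeholders.

# Importance of agile requirements (the "rows" of the HOQ).
requirement_weights = {
    "iterative delivery": 0.40,
    "light documentation": 0.35,
    "customer collaboration": 0.25,
}

# Relationship matrix: how strongly each APQP tool (column) supports
# each agile requirement, on the usual QFD 0/1/3/9 scale.
relationships = {
    "PFMEA":     {"iterative delivery": 9, "light documentation": 1, "customer collaboration": 3},
    "DFMEA":     {"iterative delivery": 9, "light documentation": 1, "customer collaboration": 1},
    "Risk plan": {"iterative delivery": 3, "light documentation": 3, "customer collaboration": 3},
}

def deployment_scores(weights, rel):
    """Weighted column sums: deployed importance of each tool."""
    return {
        tool: sum(weights[req] * strength for req, strength in row.items())
        for tool, row in rel.items()
    }

scores = deployment_scores(requirement_weights, relationships)
for tool, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{tool}: {s:.2f}")
```

Normalizing these column sums to percentages yields importance factors of the kind reported in the results section.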
4. **RESULTS AND DISCUSSIONS**
4.1. **Demonstrating the compatibility between stage-gate and agile**
In the first step of the paper we deal with Q1, validating whether the Stage-Gate model and agile are compatible and complementary or not. The second part deals with identifying and adapting risk management tools and techniques. In order to validate Q1, we chose the most widespread and adopted agile framework currently in practice and in high demand in terms of application and know-how: Scrum.
For the purpose of validating the first research question we employed two methods. The first is a review of the relevant literature. (Dyba & Dingsøyr, 2008) found through a literature study that “agile methods give the stage-gate model powerful tools for micro-planning, day-to-day work control, and reporting on progress”. As well, “the stage-gate model provided the agile methods with a means to coordinate with other development teams and to communicate with marketing and senior management”. They therefore conclude that it is highly feasible and possibly even beneficial to integrate the two schools of thought.
Probably the most relevant view regarding the compatibility between the stage-gate model and agile comes from the developer of the model, Robert Cooper. He states that, in response to the criticism the model has faced over the years (since its development in the 1980s) for being “too linear, too rigid, and too planned to handle more innovative or dynamic projects” (Cooper, 2014), it has greatly evolved (Cooper, 1994), (Cooper, Edgett, & Kleinschmidt, 2002) and (Cooper, 2011). The findings in (Cooper, 2014) show that many leading companies have integrated key elements of the Agile Manifesto into their own adaptation of the Stage-Gate model – when discussing manufacturing processes. We can therefore conclude that while agile establishes a context, a set of guiding rules and the principles of conducting a project, it is the stage-gate model that gives the structure and can bring the tools and techniques to effectively manage the risks and the process as a whole. The result of the literature analysis is that there is no impediment to trying to reconcile the two models, and in some cases it is even recommended (Kettunen, 2009).
For the second method we attempted an over-positioning of the process model of APQP and the SDLC (software development lifecycle) – following Agile Scrum principles – which is graphically depicted in Figure 2. As can be observed, not only are the two not mutually exclusive but, when aligned correctly (as the authors attempted in Figure 2), the SDLC can greatly benefit from applying a Stage-Gate model in general and the APQP model in particular. In the agile context, the stage-gate framework should act as a navigation guide throughout the product and project lifecycle, helping to define milestones and offering the appropriate inputs and outputs for the risk management process.
**Figure 2:** Over-positioning of APQP and SDLC
4.2. Identifying the improvement points for agile processes derived from APQP
In order to answer the second research question, we must first identify the risk management processes and tools employed in the APQP model. As a Stage-Gate model, APQP uses milestones as gates in order to verify the results of the stage that has just ended and to provide a decision opportunity. The process is in itself a very good risk management application, enabling GO / NO GO decisions at the end of every major stage and delivering stage outputs that are relevant for adjusting the course of action. The risk management tools and methods employed in APQP are detailed in the table below:
Table 1: Risk management tools and techniques in APQP (Chrysler Corporation, Ford Motor Company and General motors Company, 2008)
<table>
<thead>
<tr>
<th>Stage</th>
<th>Inputs</th>
<th>Outputs</th>
</tr>
</thead>
<tbody>
<tr>
<td>Concept initiation and approval</td>
<td><ul><li>Voice of the customer</li><li>Data and information from the client</li><li>Hypotheses relating to the product and the process</li></ul></td>
<td><ul><li>Project objectives</li><li>Quality objectives</li><li>Process flow diagram</li><li>Quality and risk assurance plan</li><li>Product generic specifications</li></ul></td>
</tr>
<tr>
<td>Program approved</td>
<td><ul><li>Project objectives</li><li>Quality objectives</li><li>Process flow diagram</li><li>Quality and risk assurance plan</li><li>Product specifications</li></ul></td>
<td><ul><li>Risk and quality system analysis</li><li>Design specifications</li><li>Equipment, tools and verification instruments requirements</li><li>Project analysis and verification</li><li>DFMEA – design failure modes and effects analysis</li><li>Prototype</li></ul></td>
</tr>
<tr>
<td>Prototype</td>
<td><ul><li>Risk and quality system analysis</li><li>Design specifications</li><li>Equipment, tools and verification instruments requirements</li><li>Project analysis and verification</li><li>DFMEA – design failure modes and effects analysis</li><li>Prototype</li></ul></td>
<td><ul><li>Product characteristics matrix</li><li>Work instructions</li><li>PFMEA – process failure modes and effects analysis</li><li>Process capability study plan</li></ul></td>
</tr>
<tr>
<td>Pilot</td>
<td><ul><li>Product characteristics matrix</li><li>Work instructions</li><li>PFMEA</li><li>Process capability study plan</li></ul></td>
<td><ul><li>Measurement system evaluation</li><li>Process capability – preliminary study</li><li>Control plan for product launch</li></ul></td>
</tr>
<tr>
<td>Launch</td>
<td><ul><li>Measurement system evaluation</li><li>Process capability – preliminary study</li><li>Control plan for product launch</li></ul></td>
<td></td>
</tr>
</tbody>
</table>
The QFD deployment (Figure 3) yields the importance of each tool; in order of importance these are: PFMEA (9.5%), DFMEA (8.8%), Quality and risk assurance plan (8.1%), Design specifications (7.9%), Quality objectives (7.4%), Project objectives (6.7%), Product generic specifications (6.3%), Prototype (6.3%), Product characterization matrix (6.3%), Data and information from the client (5.4%), Process flow diagram (5.3%). Considering that leanness of the process is a high priority in developing the tools and techniques map, we merge the very similar Product generic specifications, Design specifications and Product characterization matrix into one tool, Product specifications, with an averaged importance factor of 6.8%.
A number of bottlenecks also emerged because of the difficulty of implementing and supporting the different tools and instruments (in order of their importance): PFMEA (9.5%), DFMEA (8.8%), Quality and risk assurance plan (8.1%), Quality objectives (7.4%), Project objectives (6.7%), Prototype (6.3%), Data and information from the client (5.4%), Process flow diagram (5.3%). The bottlenecks function as indicators of the most important tools and techniques that have emerged from the deployment analysis. We used the bottleneck analysis to identify the tools that are critical to risk management and that should be implemented and, consequently, adapted to the agile environment.
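The averaged importance factor of the merged "Product specifications" tool follows directly from the three individual factors reported above:

```python
# Averaging the importance factors of the three merged tools
# (the three percentages come from the deployment analysis above).
factors = {
    "Product generic specifications": 6.3,
    "Design specifications": 7.9,
    "Product characterization matrix": 6.3,
}

merged = sum(factors.values()) / len(factors)
print(f"Product specifications: {merged:.1f}%")  # -> 6.8%
```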
**Figure 3:** QFD House of Quality for risk tools and instruments deployment
For mapping the tools and techniques applying risk management to the agile processes, we utilized a standard risk management process as defined by the PMBOK (Project Management Institute, 2013) and the AMA Handbook (Dinsmore & Cabanis-Brewin, 2014), which we further adapted to the SDLC process – an adaptation of the risk management process to the agile environment in the context of the APQP framework (see Figure 4):
1. Risk process and system planning
2. Risk identification
3. Risk analysis which includes:
a. Qualitative analysis
b. Quantitative analysis
4. Response planning
5. Monitoring and control
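One way to picture how these five steps repeat in an agile setting – planning done once up front, the remaining steps cycled every sprint – is the following sketch; the step names come from the list above, while the loop structure is an illustrative assumption:

```python
# Minimal sketch of iterating the adapted risk management cycle once per
# sprint. Step "functions" are placeholders recorded in a log.

RISK_STEPS = [
    "risk process and system planning",   # done once, up front
    "risk identification",
    "qualitative analysis",
    "quantitative analysis",
    "response planning",
    "monitoring and control",
]

def run_sprint_risk_cycle(sprint_no, log):
    """Repeat the identification -> analysis -> response -> monitoring
    cycle for one sprint; planning (step 1) is not repeated."""
    for step in RISK_STEPS[1:]:
        log.append((sprint_no, step))

log = []
log.append((0, RISK_STEPS[0]))   # one-time planning
for sprint in range(1, 4):       # e.g. three sprints
    run_sprint_risk_cycle(sprint, log)

print(len(log))  # 1 planning entry + 3 sprints x 5 steps = 16
```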
In order to evaluate the coverage of all the risk management tools deployed in the QFD HOQ, we plotted the tools and techniques against the adapted risk management framework. The results of the analysis are presented in Figure 4. The major constraint in streamlining the risk management process through an agile environment was to keep it “lean” and “light”. The primary concern of agile environments is to keep non-value-adding activities as low in the priorities as possible, which includes documentation, supporting tools and techniques, and heavy processes. The results in Figure 4 also contain a number of agile-specific risk management tools such as the Risk-Adjusted Backlog, Risk Burn-down Graphs and the meeting “system” proprietary to the Scrum methodology. The rationale behind including these in the proposed risk management process is that they are proprietary to the agile framework and complementary to the proposed risk tools and techniques.
**Figure 4:** Risk management process with associated tools and techniques
The stage-gate framework allows the tools and techniques to be implemented in a structured way throughout the risk management process. Furthermore, it enables the GO / NO GO decision-making opportunities that agile environments generally lack. However, when coupled with an agile process (as presented in Figure 4), the deliverables and inputs of every stage are tightly coupled and repetitive. The same holds for the adapted risk management process. In an industrial project setting, the risk management process steps are better established in time and more structured. The lengthened stages of the risk management process are primarily due to the iterative deployment characteristic of agile environments. In this case it is necessary to iterate the risk management process (the identification – analysis – response planning – monitoring cycle) along with the sprint iteration. To plot the iterations, we considered it more appropriate to present the process graphically as a linear one spanning the entire lifecycle – even though the effort is not equal in intensity at all times.
As for the tools and techniques assigned to the stages of the lifecycle, we can clearly see that all the risk management process stages are well covered. In order to balance the risk introduced by iterating risk management tools and techniques, we introduced the “risk register” tool, a widely utilized risk management tool also proposed by the PMBOK. The role of this risk register is to create, as per its name, a risk registry with the intent to pass the risk management know-how on from stage to stage.
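A minimal sketch of such a risk register follows; the field names and the exposure heuristic are illustrative assumptions, since the paper does not prescribe a concrete schema:

```python
# Sketch of a PMBOK-style risk register that carries risk knowledge from
# stage to stage. Field names and sample risks are illustrative only.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    description: str
    probability: float   # 0..1, from qualitative/quantitative analysis
    impact: float        # e.g. cost or schedule units
    response: str = ""   # planned response
    stage: str = ""      # APQP/SDLC stage where it was identified

    @property
    def exposure(self):
        return self.probability * self.impact

@dataclass
class RiskRegister:
    entries: list = field(default_factory=list)

    def add(self, entry):
        self.entries.append(entry)

    def top_risks(self, n=3):
        """Highest-exposure risks, to carry forward to the next stage."""
        return sorted(self.entries, key=lambda e: -e.exposure)[:n]

register = RiskRegister()
register.add(RiskEntry("unstable requirements", 0.6, 8, stage="Concept"))
register.add(RiskEntry("key developer leaves", 0.2, 9, stage="Prototype"))
print([e.description for e in register.top_risks(1)])
```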
**5. CONCLUSIONS AND FUTURE WORK**
It is obvious that agile and Stage-Gate models are compatible and can bring great benefits to each other. Agile practices and processes can benefit greatly from the application of lessons learned in other, more advanced and structured domains. The adaptation of a rich framework such as APQP can be of great help to both managers and practitioners.
An important area of future work is to properly validate the proposed risk tools and techniques roadmap, as the research proposition in Figure 1 depicts. As the scope of this paper did not allow even a partial validation, a case study application is required so as to provide a practical view on possible implementation possibilities, constraints and outcomes.
The scope of this paper was to add value to the risk management practices in agile environments by applying the lessons learned from the APQP model. However, throughout the research it became obvious that the stage-gate framework has the potential of scaling the agile processes organization-wide, that is, creating the setting for the organization-wide processes to interact with the agile project management processes.
ACKNOWLEDGEMENTS
This paper is supported by the Sectoral Operational Programme Human Resources Development (SOP HRD), ID134378 financed from the European Social Fund and by the Romanian Government.
REFERENCE LIST
A Framework for BPMS Performance and Cost Evaluation on the Cloud
Guillaume Rosinosky
Bonitasoft
Grenoble, France
Email: guillaume.rosinosky@bonitasoft.com
Samir Youcef, Francois Charoy
Université de Lorraine, Inria, LORIA
Nancy, France
Email: samir.youcef@loria.fr, francois.charoy@loria.fr
Abstract—In this paper, we describe a framework that allows us to automate and repeat business process execution on different cloud configurations. We present how and why the different components of the experimentation pipeline – such as Ansible, Docker and Jenkins – have been set up, and the kind of results we obtained on a large set of configurations from the AWS public cloud. The framework allows us to calculate the actual cost of process execution, in order to compare not only pure performance but also the economic dimension of process execution.
1. Introduction
A Business Process Management System (BPMS) is usually deployed on a traditional software stack composed of an application engine and databases. Sizing the resources it requires to support a given load or number of transactions per second is not easy, since it may imply testing it on different computer sizes with different configurations for the engine and its components. Thus, most of the time, BPMS deployment is done on oversized systems. Thanks to the public cloud, it is possible today to conduct these tests in a semi-controlled environment that provides a wide variety of possible deployment configurations, and to repeat benchmarking activities as much as one wants. Even better, it is possible to associate a cost with these configurations. Still, setting up a framework that allows repeatable and controllable executions is not easy. In this paper, we describe and demonstrate that the framework we have designed allows us to evaluate the performance and the cost of executing processes on the Bonita BPM execution engine over a whole set of cloud configurations – such as the cloud instance types used and the number of business processes executed in parallel – combining different well-known open source tools including Docker and Ansible. Our framework generates data that relate operating performance to a cost per BPM transaction. It could be adapted to other kinds of BPM systems or frameworks.
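The cost dimension mentioned above boils down to dividing a configuration's hourly price by its measured throughput. A minimal sketch, where the hourly prices and throughputs are made-up placeholders rather than actual AWS figures or measured results:

```python
# Sketch of the cost computation the framework enables: relating measured
# process throughput to a cost per executed process instance. Prices and
# throughputs below are hypothetical placeholders.

def cost_per_instance(hourly_price_usd, instances_per_hour):
    """Cost of executing one business process instance on a configuration."""
    return hourly_price_usd / instances_per_hour

configs = {
    # (engine node + database node) hourly price, measured throughput
    "m3.medium + m3.medium": (0.067 * 2, 3600),
    "m3.large  + m3.medium": (0.133 + 0.067, 5400),
}

for name, (price, throughput) in configs.items():
    usd = cost_per_instance(price, throughput)
    print(f"{name}: {usd * 1000:.3f} millidollars/instance")
```

A larger, pricier instance can still win on this metric if its throughput grows faster than its price, which is exactly the trade-off the framework is built to expose.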
In Section 2, we describe the context and the hypotheses we made regarding the deployment and the execution of BPM engines. Then we describe the different initiatives that propose solutions to similar problems. In Section 4 we describe the technical principles of our framework and how its components are combined. Section 5 describes the experimentation and, in the last part, we discuss the results in regard to our objectives.
2. Motivation and hypothesis
Our original goal is to propose elastic resource allocation and scheduling methods for a BPMaaS provider. In order to size our infrastructure in a realistic way, we have to evaluate different cloud configurations. Thus, we set up a framework to test multiple combinations of cloud resources by deploying in the Amazon Web Services (AWS) public cloud all the components needed for a BPMS (see Figure 1 for an example of BPMS architecture), including the testing tools. Here, we test only configurations with one node for the database and one node for the application server and the BPM engine. More precisely, we want to be able to:
- allocate three separate IaaS virtual machines: one for the database, one for the BPMS engine, and one for the BPMN process injector
- deploy the relational database on its cloud instance
- deploy the application server and the BPMS on their cloud instance
- deploy the testing tool (process injector) on its cloud instance
- launch the tests
- get the results and the meta information of the test
- get other useful data (database dump, logs, etc.)
- archive them
- deallocate the cloud instances
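The steps above form an ordered pipeline that an orchestrator (the paper uses Jenkins driving Ansible and Docker) could execute. A minimal sketch with placeholder step implementations; the always-deallocate safeguard is an assumption on our part, not a documented feature of the framework:

```python
# Sketch of the test pipeline: each step is a placeholder for the actual
# Ansible playbooks / Docker commands driven by a CI job.

PIPELINE = [
    "allocate three IaaS virtual machines",
    "deploy the relational database",
    "deploy the application server and the BPMS",
    "deploy the process injector",
    "launch the tests",
    "collect results and test metadata",
    "collect other useful data (dumps, logs)",
    "archive everything",
    "deallocate the cloud instances",
]

def run_pipeline(execute, steps=PIPELINE):
    """Run each step in order; on failure, still run the final
    deallocation step so cloud instances are not left billing."""
    done = []
    try:
        for step in steps[:-1]:
            execute(step)
            done.append(step)
    finally:
        execute(steps[-1])   # always free the instances
        done.append(steps[-1])
    return done

done = run_pipeline(lambda step: None)
print(len(done))  # all 9 steps executed
```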
To the best of our knowledge, there is no existing tool today that can manage all these steps. The process has to be automated since we want to repeat the tests and later reuse the framework with different cloud providers, database vendors, BPMSs, and cloud configurations. Furthermore, the tool needs to be customizable on several parameters, such as the number of parallel BPMN process instances launched against the BPM engine, the type and number of cloud resources used for the database or the BPMS engine, the number of working threads of the BPM engine, etc. Some parameters of the application server and the database need to be tuned with regard to the multiple cloud configurations with varying numbers of CPU cores, memory, etc. For instance, the number of working threads available to the BPMS to execute work, the number of connections, or the memory assigned to the application server are important parameters that can completely change the behaviour and the processing speed. These criteria must be passed as parameters or directly as files to be incorporated.
This high variability and the large number of tests that we have to conduct and repeat oblige us to set up a highly automated environment. From execution to data collection, we must be able to run the whole test pipeline automatically. In this part, we describe these components and how they are deployed and used for our experimentation. Figure 2 presents examples of configurations we want to obtain.
Figure 2. Example of test launches on three different configurations for the database tier and the BPMS tier. m1.small, t2.micro, m3.medium and m3.large are AWS instances classes with different memory, computing power, etc.
3. Related work
Since its inception, the Cloud has been recognized as an excellent platform for evaluating different kinds of software stacks and as having good potential for reproducible experimentation. As a programmable infrastructure, it allows automating the experimentation pipeline in a very seamless and cost-effective way, competitive with large international infrastructures like Grid'5000 or PlanetLab. Public clouds also have the advantage of being actually used by companies for production deployment. The results that we obtain can be exploited almost directly.
In the case of BPM engine evaluation, Benchflow [1] is a very interesting approach. It is a benchmarking framework based on Faban – a generic measurement framework – and Docker. The authors use a process composed of an empty automated task and a timer event. Their main metric is process throughput. They tested two anonymized BPMSs for a defined duration with various user loads, and observed differences between the performances and the general behaviour of the two systems. However, their tests are not deployed on the cloud, and the framework seems to lack an orchestrator for intensive testing. Betsy [2] is another interesting approach, but it focuses mainly on the BPMN or BPEL compliance of BPMSs, even if performance benchmarking is planned.
There also exist generic cloud-related attempts such as Smart CloudBench [3], which makes it possible to test generic applications on cloud resources. However, these solutions are commercial and, even if they could theoretically be used in our case, they are not directly BPMS-related.
4. The framework components
Our framework for performance testing is primarily dedicated to benchmarking the performance of Cloud configurations in order to compare different combinations and parameters. It is composed of, or relies on, the following components.
4.1. BonitaBPM BPMS
We used Bonita BPM\(^4\) 7.3.1 in its community version as the BPMS engine. Bonita BPM is an open-source BPM solution originally developed at Inria [4] and now supported by the Bonitasoft company. It is compliant with BPMN and can be deployed on many SQL databases such as H2, MySQL, PostgreSQL, Oracle, or SQL Server. We conducted our tests on PostgreSQL, the reference database for Bonita BPM.
4.2. The Process injector
We need a component able to inject BPMN schemas, execute processes and measure the performance of the engine. For this, we used a testing tool developed by Bonitasoft. It allows us to deploy business process models in a Bonita BPM installation and to execute them by mocking process variables, while measuring execution performance.
4.3. Docker

Our stack involves several components that can be difficult to master, and we need a way to create isolated and repeatable tests. We decided to use Docker\(^5\). Docker is an open source project combining LXC containers, virtualization, and a configuration management platform. As [5] explains, Docker can be very useful for scientific experimentation. It is now used in many production settings, and makes repeatable and isolated executions faster than with a virtual machine. A Dockerfile contains the list of commands needed to initialize an environment, such as installing libraries, setting environment variables, and executing the required programs. It is possible to version Docker images, to save them in a tar file, and to load them afterwards in order to seamlessly reproduce the whole execution environment each time it is needed.
We adapted a Bonita BPM Docker image so as to be able to select the database vendor and to use the performance tool. We also developed a Docker image for the performance tool, with environment variables as parameters, such as the type of injected process, the number of parallel processes run by the injector in the BPM engine, the total number of processes run, etc.
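To give an idea of how such parameterization works, here is a sketch in Python of the kind of environment-variable handling a container entrypoint can perform; the variable names and defaults are illustrative, not the actual ones used in our images:

```python
import os

# Illustrative parameter names and defaults; the real images use their
# own variable names.
DEFAULTS = {
    "PROCESS_TYPE": "standard",
    "PARALLEL_PROCESSES": "1",
    "TOTAL_PROCESSES": "100",
}

def read_config(env=None):
    """Merge container environment variables over built-in defaults."""
    env = os.environ if env is None else env
    cfg = {key: env.get(key, default) for key, default in DEFAULTS.items()}
    cfg["PARALLEL_PROCESSES"] = int(cfg["PARALLEL_PROCESSES"])
    cfg["TOTAL_PROCESSES"] = int(cfg["TOTAL_PROCESSES"])
    return cfg

# e.g. `docker run -e PARALLEL_PROCESSES=25 ...` would be seen as:
cfg = read_config({"PARALLEL_PROCESSES": "25"})
```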
4.4. IaaS provider
For our tests, we used Amazon Web Services. AWS provides a very well defined API to automate our tests and a wide selection of virtual machine configurations, and it can be considered the reference public Cloud provider. Since our first goal is not to compare Clouds, we did not reproduce the experiments on another IaaS but, as we will see in the next part, the framework could easily be adapted to another environment. We mainly used Elastic Compute Cloud (EC2) instances, on which we deployed the different components.
4.5. Orchestration tool
Thanks to Docker, we are able to manipulate only containers instead of deploying a lot of files. This is an important step, but not sufficient for our needs. Indeed, we need to be able to allocate and de-allocate cloud resources, and to keep track of the IP address of each instance for various cross-references. For instance, the BPMS instance needs the address of the database, including some credentials, and the performance-tool instance needs the address of the BPMS instance. We needed a tool able to execute scripts, deploy software, instantiate cloud resources, keep an inventory of the various instances, and inject into the configuration of one instance the data of another. Many tools exist for this kind of automation and could be considered, such as Puppet, Ansible, Chef, or Salt [6]. We selected Ansible\(^7\) for our framework. As written in Ansible's documentation, Ansible is an IT automation tool programmed in Python. It is able to configure systems, deploy software and orchestrate IT tasks described in files, usually in YAML. It does not need a component on the client side, as it internally uses SSH to communicate with the target instances.
Several concepts exist in Ansible:

- hosts and groups: target instances and their types, which can be used to obtain a subset of hosts
- inventory: the current list of hosts and groups; it can be defined statically or dynamically
- task: a call to an Ansible module or a script
- role: a reusable group of tasks
- playbook: a configuration model linking hosts and roles, executing the latter on the corresponding hosts
It is possible to use variables inside a playbook or a role. These can be defined in several places (for instance as a parameter of the playbook, in the host, the group, the role, the playbook itself, etc.) and can also be overloaded from the command line. Ansible uses the Jinja template engine for variable operations.
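This precedence behaviour can be pictured as a dictionary merge where later sources override earlier ones. The sketch below is a deliberate simplification (real Ansible defines many more precedence levels) and the variable names are only examples:

```python
# Simplified Ansible-style variable resolution: each source overrides
# the previous one, command-line extra vars (-e) winning overall.
def resolve_vars(role_defaults, group_vars, host_vars, extra_vars):
    merged = {}
    for source in (role_defaults, group_vars, host_vars, extra_vars):
        merged.update(source)
    return merged

resolved = resolve_vars(
    {"jvm_memory": "1g", "nb_launch": 1},  # role defaults
    {"jvm_memory": "3g"},                  # group vars
    {},                                    # host vars
    {"nb_launch": 6},                      # -e on the command line
)
```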
We have set up several roles and playbooks for our needs, such as a database role, a benchmark role, etc. We have used the EC2 modules for the allocation and deallocation of EC2 resources, and the Docker module for image instantiation.
For the customization part, we mainly used default variable definitions, plus variable files for the mandatory customization parameters such as the size of the memory of the Java Virtual Machine used in the application server for the BPMS engine. Indeed, this value should be adapted to the memory the instance type is able to provide. For instance, an EC2 m3.medium instance has 3.75 GB of RAM, and a c4.xlarge instance has 7.5 GB of RAM; we assigned respectively 3 GB and 6.75 GB to their memory allocation pools. A list of the main internal parameters is given in table 1.
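The two figures above are both consistent with one simple rule: reserve a fixed 0.75 GB for the operating system and give the rest to the JVM. This is our reading of the numbers, not a documented sizing rule:

```python
# Assumed rule: instance RAM minus a fixed OS reserve goes to the JVM.
OS_RESERVE_GB = 0.75

def jvm_heap_gb(instance_ram_gb):
    return instance_ram_gb - OS_RESERVE_GB

m3_medium_heap = jvm_heap_gb(3.75)  # m3.medium: 3.75 GB of RAM
c4_xlarge_heap = jvm_heap_gb(7.5)   # c4.xlarge: 7.5 GB of RAM
```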
We set default values for our various parameters, but needed to be able to overload several of them to produce the variations needed in the tests. For this, we used the possibility of passing variables directly on the command line (the highest priority), in order to vary the studied internal variables such as the number of launches or the types of EC2 instances.
For the resource part, since it is possible to run roles against a subset of hosts with the group concept, we prepared one group for each tier (database, BPMS, bench tool). For instance, the database role is run against the hosts of the database group only. The dynamic inventory gets each host and its corresponding groups by querying the AWS inventory servers. Furthermore, in order to identify tests and be able to launch several of them simultaneously, we assigned a unique identifier to each running test: a variable named test_userid, used in the instantiation part as a supplementary group. EC2 instances are declared with this additional EC2 tag in the creation playbook, and we modified the generic AWS dynamic inventory so that it filters the hosts on this identifier.

---
5. https://www.docker.com/
7. https://www.ansible.com/
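The effect of the modified inventory can be sketched as a simple filter over the hosts returned by AWS; the host addresses and identifiers below are made up for the example:

```python
# Each test campaign tags its instances with a test_userid; the dynamic
# inventory only returns the hosts carrying the requested identifier.
HOSTS = [
    {"ip": "10.0.0.1", "groups": ["database"], "tags": {"test_userid": "run-a"}},
    {"ip": "10.0.0.2", "groups": ["bpms"], "tags": {"test_userid": "run-a"}},
    {"ip": "10.0.0.3", "groups": ["database"], "tags": {"test_userid": "run-b"}},
]

def inventory_for(test_userid, hosts=HOSTS):
    return [h for h in hosts if h["tags"].get("test_userid") == test_userid]

run_a_hosts = inventory_for("run-a")
```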
4.6. Test result collection
For this part, we used the s3cmd tool to send all our current results to an Amazon S3 bucket. For obvious speed and cost reasons, the script synchronizes only the files not already present in the bucket. We use the same command to download results for analysis.
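The synchronization behaviour we rely on amounts to a set difference between local results and what the bucket already holds (s3cmd sync implements this for us; the file names are illustrative):

```python
# Only the files absent from the bucket need to be uploaded.
def files_to_upload(local_files, bucket_files):
    return sorted(set(local_files) - set(bucket_files))

todo = files_to_upload(
    ["run1.csv", "run2.csv", "run3.csv"],  # local results
    ["run1.csv"],                          # already in the S3 bucket
)
```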
4.7. Jenkins
Even with the orchestration offered by Ansible, launching all these scripts manually is inefficient and error-prone, especially since we want to execute a lot of tests. In this campaign we launched tests on more than 70 different configurations, each one launched 6 times. Indeed, we needed to test many instance-type combinations for the BPMS and the database, with several different numbers of injected parallel processes. It was necessary to find a tool able to launch the tests for us in an easy way, with a defined list of parameters, and able to show log feedback.
Jenkins\(^8\) is an open source tool designed for continuous integration. It can be used to make test runs from a web-based user interface. We chose it for its simplicity, its scripting ability, and its capability to show the different runs and their log files. We used several plugins to further simplify the processing, namely the Ansible, Rebuilder, Environment injector, and Conditional Buildstep plugins. Our setup has three parts: parameterized jobs that call their Ansible counterparts, a job calling the command for result archiving, and a parameterized Jenkins pipeline which launches the jobs and can run the test multiple times. A Jenkins pipeline is a customizable job orchestration mechanism in which a Groovy script calls the jobs. This scripting language is very powerful; for instance it supports exception catching, which we used to trigger the destroy job in all cases. Indeed, in case of blocking errors the reserved instances would continue to be rented, which can become very expensive: it is important to destroy them as soon as they are no longer needed. We also added a parameter for the number of test launches, simply adding a loop that calls the test job the specified number of times.
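The pipeline itself is a Groovy script, but its control flow can be transposed to Python to show the shape of the logic: run the test job the requested number of times and, whatever happens, always run the destroy job. The job functions below are placeholders for the actual Jenkins jobs:

```python
# Orchestration skeleton: create resources, loop over test launches,
# and guarantee destruction of the rented instances via finally.
def run_pipeline(nb_launch, create_job, test_job, destroy_job):
    created = False
    try:
        create_job()
        created = True
        for launch in range(nb_launch):
            test_job(launch)
    finally:
        if created:
            destroy_job()  # always release the cloud resources

log = []
run_pipeline(2,
             create_job=lambda: log.append("create"),
             test_job=lambda i: log.append("test %d" % i),
             destroy_job=lambda: log.append("destroy"))
```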
4.8. Overview
Figure 3 presents the global architecture. Setting up this entire test pipeline to conduct performance evaluations of different deployments remains a tedious task, but we have seen that, thanks to the cloud and to current administration automation tools, it is possible to deploy and test a software stack under different configurations in a fully automated way and to conduct repeatable executions. In the next section, we describe the kind of results that we were able to produce with this environment. It is interesting to note that changing the software under test is mainly a matter of producing a Docker image and defining the needed parameters in the Ansible playbook and the Jenkins jobs and pipeline.
\(^8\) https://jenkins.io/
5. Experimentation
5.1. Overview
A first version of this framework was used for resource size estimation in [7]. In that work, various AWS m3-family EC2 instances were used for the database and the application server. For the present experimentation, we launched tests against RDS PostgreSQL, with db.r3-family instances (memory optimized) for the persistence tier, and with c4-family instances (computing power optimized) for the BPMS part.
For our tests, we used the standard process, the reference process used for performance comparisons between versions of Bonita BPM. This process is composed of 20 sequential automated tasks, each one updating 15 string process variables from a constant and then executing a connector that repeatedly computes the Fibonacci number of 25 with a recursive method until 150 milliseconds or more have passed.
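Our reading of the connector's busy-work can be sketched as follows; this is an interpretation of the description above, not Bonitasoft's actual code:

```python
import time

def fib(n):
    """Naive recursive Fibonacci, used only to burn CPU time."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def connector(min_duration_s=0.150, n=25):
    """Recompute fib(n) until at least min_duration_s have elapsed."""
    start = time.monotonic()
    while time.monotonic() - start < min_duration_s:
        fib(n)
    return time.monotonic() - start
```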
The goal of our experimentation is to measure the capacity of cloud resources in tasks per second. As a single task can be very fast to execute, and the main metric used for size estimation in the Bonita BPM benchmark tool is process-based, we used a reference time for the mean process duration. More precisely, we wanted our process to have a mean duration of 10 seconds: an arbitrary but realistic value for the standard process. Indeed, if we look only at the duration of the Fibonacci connectors, we obtain 3 seconds, to which we must add the processing time of each process instantiation, variable allocation, workflow evaluation, and process termination. When launched with only one business process injected in parallel, the duration of a process is about 5 seconds, as shown in figure 4.
We tested with different numbers of parallel process executions and different instance types for the database and application server. We tuned Bonita BPM on the number of threads reserved for connectors and task execution and on the database connection count, each one based on the number of parallel processes. The memory assigned to the Java VM of the BPMS application server was also tuned for each cloud configuration. We also kept a specific parameter group for each RDS database, whose most important parameters are calculated from the memory size of the instance type. A parameter group is a reusable AWS RDS list of parameters used to tune the database; the parameters we used are listed in table 2.
As the tests can be long and expensive (we pay for the cloud configurations during the tests), we experimented with a limited number of parallel-process values, and then simply performed a linear interpolation between the nearest test results above and below 10 seconds. For instance, if we obtain a mean time of 5 seconds for 10 parallel processes and of 15 seconds for 20 parallel processes, we deduce that we can use 15 processes for a mean time of 10 seconds. We then looked at the mean number of tasks per second for this number of processes.
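The interpolation step can be written explicitly; on the example figures above it indeed yields 15 processes:

```python
# Linear interpolation between the two measurements closest to the
# 10-second target mean process duration.
def processes_for_target(p_low, t_low, p_high, t_high, target=10.0):
    slope = (t_high - t_low) / (p_high - p_low)  # seconds per process
    return p_low + (target - t_low) / slope

# 10 parallel processes -> 5 s, 20 parallel processes -> 15 s
estimate = processes_for_target(10, 5.0, 20, 15.0)
```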
5.2. Results
Figure 4. Mean process time versus the number of parallel processes. Error bars represent the minimum and maximum values obtained. The red line represents the 10-second reference.
In figure 4, we can find the mean process execution time as a function of the number of injected parallel processes. The task throughput per second is obtained by dividing the total number of executed tasks, 60000 in this case, by the total duration.
The task throughput and the task throughput for one dollar are given in table 3. This last metric is important to see which types of cloud instances are the cheapest to use for a given quality of service. We can see that less powerful instances are more interesting, except for the generic instance type m3.medium used for the database tier or for both the database and BPMS tiers; this configuration nevertheless remains the least expensive for a low task throughput. More powerful resources are more expensive to use, but are still useful when a higher task throughput is required. The db.r3.large / c4.large configuration is the cheapest to use (with the highest throughput for one dollar).
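Both metrics can be computed directly from the raw measurements; in the usage example the duration and hourly price are placeholders, not actual results or AWS prices:

```python
TOTAL_TASKS = 60_000  # total number of executed tasks in a run

def task_throughput(total_duration_s, total_tasks=TOTAL_TASKS):
    """Tasks executed per second."""
    return total_tasks / total_duration_s

def throughput_per_dollar(throughput_tasks_s, price_per_hour):
    """Tasks executed for one dollar of rented configuration time."""
    return throughput_tasks_s * 3600 / price_per_hour

# Placeholder figures: a 6000 s run on a $1/hour configuration.
example = throughput_per_dollar(task_throughput(6000.0), price_per_hour=1.0)
```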
Another interesting observation is that the db.r3.large / c4.2xlarge configuration is much more expensive to use than the other db.r3.large-based configurations, and is not able to provide a throughput as high as the db.r3.xlarge / db.r3.large configuration despite its higher price. This can be correlated with the fact that a cloud configuration uses both the database tier
<table>
<thead>
<tr>
<th>Name</th>
<th>Tier</th>
<th>Usage</th>
</tr>
</thead>
<tbody>
<tr><td>name</td><td>global</td><td>test name</td></tr>
<tr><td>nb_launch</td><td>global</td><td>number of launches of the test</td></tr>
<tr><td>database_instance_type</td><td>database</td><td>instance type file for database</td></tr>
<tr><td>…</td><td>BPMS</td><td>instance type file for Bonita</td></tr>
<tr><td>parallel_launch</td><td>benchmark</td><td>number of parallel instances</td></tr>
<tr><td>configuration</td><td>BPMS</td><td>Bonita configuration file</td></tr>
<tr><td>worker_size</td><td>BPMS</td><td>number of assigned threads</td></tr>
<tr><td>bonita_version</td><td>BPMS</td><td>Bonita version</td></tr>
<tr><td>test_userid</td><td>global</td><td>unique test identifier</td></tr>
</tbody>
</table>
**TABLE 1. USED PARAMETERS.**
and the BPMS tier, so paying more for only one of the tiers can become counterproductive. This is visible in figure 5. However, this needs further testing, as noisy-neighbour effects could partially explain these results, as suggested by the error bars of figure 4.
Figure 5. Price vs mean task throughput for each tested configuration. Shapes represent the database tier resource type, colors the BPMS tier resource type.
6. Conclusion
In this paper, we presented a framework that allows extensive testing of the execution of a BPM engine under different cloud and engine configurations, and collects data so that the results can be compared. Thanks to the full automation of the test pipeline, based only on a collection of open source tools, it is possible to execute the tests with as many configurations as needed, as many times as needed. The framework also remains very flexible and cost-effective: we spent around $250 in AWS credit to conduct all the tests needed for this paper, including the runs required to validate the framework. This price could be reduced on AWS by using spot instances, which allow renting VMs at a fraction of the price. The framework also works with Vagrant for testing purposes, and with on-premises computers, using a static inventory with hard-coded IP addresses for each tier and without cloud instance creation and destruction.
In the near future, we plan to enhance the framework with the capability of testing clustered configurations and other database vendors. These operations are easy to add, since we just have to rely on the creation and orchestration of new Docker images. We also want to add the possibility of testing other BPMS, and to couple this framework with a load balancer and a resource allocation and scheduling algorithm as in [7]. Finally, we plan to run more intensive benchmarks in order to better estimate the price and efficiency of BPMS on cloud configurations.
Acknowledgments
The authors would like to thank Amazon Web Services for the free credits (this paper is supported by an AWS in Education Research Grant Award).
References
Integrating IoT and IoS with a Component-Based approach
Grégory Nain, François Fouquet, Brice Morin, Olivier Barais, Jean-Marc Jézéquel
HAL Id: inria-00538469
https://inria.hal.science/inria-00538469
Submitted on 22 Nov 2010
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Integrating IoT and IoS with a Component-Based approach*
Grégory Nain¹, François Fouquet², Brice Morin¹, Olivier Barais², Jean-Marc Jézéquel¹,²
¹INRIA, Centre Rennes - Bretagne Atlantique, ²IRISA, Université de Rennes 1
Campus de Beaulieu, 35042 Rennes, France
Email: {Prenom.Nom}@irisa.fr
Abstract
There is a growing interest in leveraging Service Oriented Architectures (SOA) in domains such as home automation, automotive, mobile phones or e-Health. With the basic idea (supported e.g. by OSGi) that components provide services, it becomes possible to smoothly integrate the Internet of Things (IoT) with the Internet of Services (IoS). The paradigm of the IoS indeed offers interesting capabilities in terms of dynamicity and interoperability. However, in domains that involve "things" (e.g. appliances), there is still a strong need for the loose coupling and the proper separation between types and instances that are well known in Component-Based approaches but that typical SOAs fail to provide. This paper presents how we can get the best of both worlds by augmenting SOA with a Component-Based approach. We illustrate our approach with a case study from the domain of home automation.
Acknowledgment
*The research leading to these results has received funding from the European Community’s Seventh Framework Program FP7 under grant agreements 215483 (S-Cube, http://www.s-cube-network.eu/) and 215412 (DiVA, http://www.ict-diva.eu/).
1 Introduction
Building easily configurable applications for house automation, e.g. in the context of Ambient Assisted Living (AAL), is a complex undertaking because it has to marry the ever-evolving needs of the customers with the diversity of devices, appliances, communication links and communication protocols available in this domain. There is thus a growing interest in leveraging Service Oriented Architectures (SOA), with execution platforms such as OSGi that make it possible to smoothly integrate the Internet of Things (IoT) with the Internet of Services (IoS). The paradigm of the IoS indeed offers interesting capabilities in terms of dynamicity and interoperability. For instance, we developed EnTiMid [15] as a service-oriented framework to cope with compatibility problems in the domain of house automation, as well as to manage systems (devices, versions, communication links) and their configuration in an easy and unified way. However, typical SOAs fail to provide the loose coupling and proper separation between types and instances that are needed in domains that involve "things" (e.g. home automation). For instance, two light appliances may offer the same type of service (turning a light on and off) but different actual services, if only because they are located in different rooms. This loose coupling and proper separation between types and instances are, however, well known in Component-Based approaches.
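The distinction can be made concrete with a small sketch: two appliances share the same service *type*, yet remain distinct *instances* distinguishable by their location (class and attribute names are illustrative):

```python
class SwitchService:
    """Service type: anything that can be switched on and off."""
    def __init__(self, location):
        self.location = location  # what distinguishes the instances
        self.on = False

    def turn_on(self):
        self.on = True

    def turn_off(self):
        self.on = False

# Two instances of the same service type, in different rooms.
kitchen_light = SwitchService("kitchen")
bedroom_light = SwitchService("bedroom")
kitchen_light.turn_on()
```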
Besides, IoT-based applications are also characterized by the fact that things and their associated software are introduced into or removed from an execution environment at runtime. Supporting this degree of dynamism is usually done programmatically; our work intends to simplify this task and to provide developers with an explicit view of the system architecture while supporting its dynamic evolution. This paper presents how we can get the best of both worlds by augmenting an OSGi-based SOA with a Component-Based approach. The rest of this paper is organized as follows. Section 2 investigates existing tools and approaches in the component and service domains, leading to a list of requirements for integrating service-based and component-based approaches for IoT systems. Our contribution, a component-based approach on a service-based execution environment, is then detailed, illustrated and evaluated in section 3. Section 4 places our contribution in relation to other close approaches. Section 5 concludes and highlights future work.
2 Existing tools and approaches
As preliminary work, this section investigates tools and approaches already developed in both the component and service worlds. With an IoT/IoS integration vision, the pros and cons of each platform are listed to better understand and specify the needs of a new platform combining the best of both paradigms for IoT/IoS systems.
2.1 In the services domain
Java Business Integration (JBI), or JSR 208, is an industrial Java standard that eases software integration over a Service-Oriented Architecture. Its goals are to avoid specific developments and to allow the reuse of Java technologies such as WebServices, BPEL or JMS. OpenESB by Sun Microsystems, ServiceMix by the Apache Foundation and PEtALS by OW2 are mature projects that comply with the JBI specification. This standard defines a component model on top of an Enterprise Service Bus (ESB).
Enterprise Service Bus refers to a family of business middleware built around the SOA paradigm. These middlewares provide a runtime environment for deploying business services. They offer a means to integrate legacy software as services into the business service orchestration. Every service is declared within the scope of the ESB runtime, which acts as the only mediator of services in the enterprise.
The JBI component model defines components with independent life-cycle, that communicate through their services over a normalized message middleware. This middleware acts as an abstraction layer for communications, and facilitates the integration of legacy software. According to their function, JBI components are split into two categories:
- **Service Engine Components** are directly hosted by the JBI runtime environment and cannot communicate outside of this scope. They are in charge of message processing, routing or orchestrating services.
- **Binding Components** expose or consume standard JBI services and perform the bindings with external non-standard software.
The component framework also describes a packaging for components. Service descriptions are encapsulated into Service Units, which are then encapsulated into deployable business components called Service Assemblies.
Besides the good properties in terms of interoperability offered by the message middleware and in terms of openness with the Binding Components, there is no clear separation between types and instances of services/components. Moreover, no introspection of services is offered, and the interconnections between components are not explicitly expressed and are sometimes even hard-coded inside the components.
The OSGi Service Platform provides functionality that makes Java a premier environment for software integration and thus for development. The OSGi technology provides standardized primitives to construct applications from small, reusable and collaborative components. These components (also called Bundles) can be composed into an application and deployed.
The OSGi Service Platform makes it possible to dynamically change the composition of bundles with no need to restart the application. The OSGi technology provides a service-oriented architecture to minimize and to manage the coupling between bundles, enabling the components to dynamically discover each other for collaboration.
A service is specified by a Java interface. Bundles can implement this interface and register the service with the Service Registry. Clients of the service can find it in the registry, or react to it when it appears or disappears. This is similar to the service-oriented architecture made popular with web services. The key difference between web services and OSGi services is that web services always require some transport layers, which make them much slower than OSGi services, which may use direct method invocations. Also, OSGi components can directly react on the appearance and disappearance of services.
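The register/lookup/react pattern described above can be illustrated with a toy in-memory registry. This is a self-contained sketch of the pattern, not the real `org.osgi.framework` API; all class names are hypothetical.

```java
import java.util.*;
import java.util.function.Consumer;

// Toy in-memory service registry, a simplified sketch of the OSGi
// pattern: bundles register services, clients look them up via a
// direct method invocation (no transport layer) and can react to
// the appearance of a service.
class MiniRegistry {
    private final Map<String, Object> services = new HashMap<>();
    private final List<Consumer<String>> listeners = new ArrayList<>();

    // A bundle registers an implementation under an interface name.
    void register(String interfaceName, Object impl) {
        services.put(interfaceName, impl);
        // Notify clients of the appearance of the service,
        // similarly to an OSGi ServiceListener.
        for (Consumer<String> l : listeners) l.accept(interfaceName);
    }

    // A client looks the service up; the returned object is used
    // through plain method calls, unlike a web service.
    @SuppressWarnings("unchecked")
    <T> T lookup(String interfaceName) {
        return (T) services.get(interfaceName);
    }

    void onRegistration(Consumer<String> listener) {
        listeners.add(listener);
    }
}
```

The direct invocation is what makes OSGi services much faster than transport-based web services, while the listener callback captures the dynamic appearance/disappearance aspect.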
The mechanisms provided by OSGi to manage the coupling are not sufficient in the context of IoT. Since the coupling is explicitly hard-coded inside the bundles, any change to the service discovery and the binding policies implies replacing the bundle. Indeed, the application architecture is never made explicit and the OSGi platform only offers very limited introspection primitives.
### 2.2 In the components domain
**Fractal** [2] is a modular and extensible component model to design, implement, deploy and reconfigure various systems and applications. Famous implementations of Fractal are Julia and AOKell (Java), Cecilia (C), FractNet (.NET) and FracTalk (SmallTalk).
The Fractal component model supports the definition of primitive and composite components. Each Fractal component consists of two parts: a controller which exposes the component’s interfaces, and a content which can be either a user class or other components in composite components. The model makes explicit the bindings between the interfaces provided or required by these components, and hierarchic composition (including sharing).
Primitive components contain the actual code, and composite components are only used as a mechanism to deal with a group of components as a whole, while potentially hiding some of the features of the subcomponents. Primitives are actually simple, standard Java classes (in the Java distributions of Fractal) conforming to some coding conventions. Fractal does not impose any limit on the levels of composition, hence its name.
All interactions between components pass through their controller. The model thus provides two mechanisms to define the architecture of an application: bindings between interfaces of components, and encapsulation of a group of components into a composite. By default, Fractal proposes 6 controllers that may be present in components: Attribute, Name, Binding, Content, Lifecycle and Super controller.
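The binding controller idea can be sketched as follows. The method names mirror Fractal's `BindingController` API (`bindFc`, `lookupFc`, `unbindFc`), but the code is a self-contained toy, not the Julia runtime; the `LoggerClient` component and its `formatter` interface are illustrative assumptions.

```java
import java.util.*;
import java.util.function.UnaryOperator;

// Simplified sketch of Fractal's binding controller: all bindings of
// a component are explicit and manipulated by name through the
// controller, never hard-wired inside the business code.
interface BindingController {
    void bindFc(String clientItf, Object serverItf);
    Object lookupFc(String clientItf);
    void unbindFc(String clientItf);
}

// A primitive component: a plain Java class following the coding
// convention, holding its required interfaces by name.
class LoggerClient implements BindingController {
    private final Map<String, Object> bindings = new HashMap<>();

    public void bindFc(String itf, Object server) { bindings.put(itf, server); }
    public Object lookupFc(String itf) { return bindings.get(itf); }
    public void unbindFc(String itf) { bindings.remove(itf); }

    // Business method using the (optional) bound server interface.
    String log(String msg) {
        @SuppressWarnings("unchecked")
        UnaryOperator<String> fmt = (UnaryOperator<String>) bindings.get("formatter");
        return fmt == null ? msg : fmt.apply(msg);
    }
}
```

Because the binding is reified, an external entity can rewire `formatter` at runtime without touching `log`, which is exactly the explicitness the model provides.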
Reflective execution platforms like Fractal or OpenCOM [1] do not provide a clear distinction between the reflection model and the reality. Modifying the reflection model implies modifying the reality: there is no way to preview the effect of a reconfiguration before actually executing it, or to execute what-if scenarios to evaluate different possible configurations, etc. This lack of an explicit and independent reflection model requires performing most of the verifications (e.g., pre-conditions on reconfiguration actions, as proposed by Leger [12]) during the reconfiguration process itself, and rolling back if a problem is encountered.
In addition, component models such as Fractal are somewhat opaque with respect to the outside world, making opening and reuse by third-party applications complicated if not foreseen in advance. Finally, the dynamicity of an application running over Fractal is compromised because new components cannot be deployed at runtime without a restart.
### 2.3 Requirements for components for the Internet of Services
As a conclusion, this sub-section summarizes the benefits of both worlds (SOA and CBSE) and outlines the requirements an execution environment fully adapted for IoT and IoS integration must comply with.
Rq1: An explicit and independent reflective model of the architecture living at runtime. Reflecting the actual application, the model makes it possible to reason about the application state. Then an adaptation engine is able to select, test and validate an adaptation scenario on the model, before actually performing the adaptation on the running system [12]. Component-based execution systems often offer introspection capabilities making it possible to build a model view of the running system. SOA execution platforms do not have such an ability.
Rq2: Component coupling managed from outside. For the components to be highly independent, they must not embed any dependency resolution mechanism. Moreover, this extraction makes it possible to modify the resolution policies, or to change the connections to adapt the system, with no need to touch business components. Having a clear and explicit description of the relations between components gives a better understanding, makes the analysis of the system much more accurate and thus leads to better adaptation decisions.
Component connections are often explicit in component-based systems, but never in SOA, where dependency resolution is even hard-coded inside the services.
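Rq2 can be made concrete with a minimal sketch: components expose plain setters and embed no discovery logic, and a separate assembler decides the wiring, so the binding policy can change without touching business code. The `Button`/`Light` components and the `wireEmergency` policy are illustrative assumptions.

```java
// Sketch of Rq2: the components below know nothing about each other;
// the coupling lives only in the external Assembler.
class Button {
    private Runnable onPress = () -> {};           // injected from outside
    void setOnPress(Runnable handler) { onPress = handler; }
    void press() { onPress.run(); }
}

class Light {
    boolean on = false;
    void switchOn() { on = true; }
}

class Assembler {
    // The only place where the architecture is described: changing
    // the wiring policy means changing this method, not the components.
    static Light wireEmergency(Button b) {
        Light l = new Light();
        b.setOnPress(l::switchOn);
        return l;
    }
}
```

Replacing `wireEmergency` with another policy (e.g., wiring the button to an SMS sender) leaves `Button` and `Light` untouched, which is the flexibility Rq2 asks for.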
Rq3: Interoperability and opening to the outside world is an essential principle in IoS. The goal is to offer a service in a standardized way to any other system that would like to use it. Even if the system is managed as a component-based application, any third-party application must be able to use the services offered by the managed devices (IoT speaking) and, more generally, components. Services must thus be exposed as classical services, while their 'component-like' management should remain hidden.
This is natively offered by SOA, which uses interfaces and registries to expose services to the world. Component-based applications, on the other hand, live in their closed world.

Rq4: Hot deployment ability is absolutely necessary to ensure future evolutions and adaptations to the protocols and devices. The execution platform must support dynamic deployments and adaptations of the application at runtime, with no restart.
SOA considers that services may appear and disappear at any time. Hot deployment is thus essential and natively taken into account. Component-based applications do not address this concern.
Rq5: Minimize the adaptation time. Another strong constraint when working with 'things' is that the entire time to realize an action, from the moment a person acts on a sensor to the moment something happens, must be less than 250 milliseconds for a human to perceive it as immediate. The reconfigurations of the system must fit within this constraint; more specifically, the transition time from one stable configuration to another should not exceed this limit.
## 3 Description of the solution
Models At Runtime techniques are used to address the first requirement. An engine capable of reasoning on the runtime model manages the connections and deployments of components. The wrapping of components into bundles makes it possible to register services from a component and open it to the world, as required by Rq3. The underlying OSGi platform then natively supports Rq4.
The next paragraphs describe our contribution in more detail from different points of view, highlighting our answers to the requirements. A small example and an evaluation of the tool are then presented.
### 3.1 Solution in detail
Models at Runtime
In order to ensure reflexivity (Rq1), the system keeps, at any time, a model of the actual running objects/components and their connections (bindings). Configurations, reconfigurations and adaptations of the system are handled as follows.
Model creation. In the case of a first configuration or a reconfiguration of the system, the target model is provided by a human operator. For an adaptation, the target model is automatically derived from a Dynamic Software Product Line model [4, 13, 10, 8]. This derivation is driven by a reasoner component which selects the features most adapted to the current context. This reasoner then derives the corresponding architectural model using model composition techniques. This model (if valid) is stored in a cache, which is managed according to a standard caching algorithm.
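The paper does not specify which caching algorithm is used; the sketch below assumes a simple LRU policy built on `LinkedHashMap`, mapping a selected feature set to a derived architecture model (both represented as strings for illustration).

```java
import java.util.*;

// Sketch of the configuration cache: derived target models are
// memoized so that a previously computed configuration can be reused
// without re-running the derivation. LRU eviction is an assumption.
class ConfigCache {
    private final int capacity;
    private final LinkedHashMap<String, String> cache;

    ConfigCache(int capacity) {
        this.capacity = capacity;
        // access-order = true turns the LinkedHashMap into an LRU map.
        this.cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, String> e) {
                return size() > ConfigCache.this.capacity;
            }
        };
    }

    // key: the selected feature set; value: the derived architecture model.
    void put(String features, String model) { cache.put(features, model); }
    String get(String features) { return cache.get(features); }
    int size() { return cache.size(); }
}
```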
Online Validation. This step relies on invariant checking: for all the produced configurations, we check that all the invariants are satisfied. The open-source Kermeta metamodeling language [14] is used to manage different checking strategies and check all the invariants (expressed using an OCL-like syntax). We distinguish between two types of invariants: generic and application-specific. Generic invariants can for example check that all the mandatory client ports are bound to compatible server ports. Application-specific invariants can for example check that the EnTiMid application always has a communication component.
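The generic invariant mentioned above ("all mandatory client ports are bound") can be sketched over a minimal component model. The model classes below are illustrative assumptions, not the actual Kermeta metamodel.

```java
import java.util.*;

// Sketch of a generic invariant check: every mandatory client port
// of every component in the target model must be bound.
class Port {
    final String name;
    final boolean mandatory;
    String boundTo;   // null = unbound
    Port(String name, boolean mandatory) { this.name = name; this.mandatory = mandatory; }
}

class Component {
    final String name;
    final List<Port> clientPorts = new ArrayList<>();
    Component(String name) { this.name = name; }
}

class InvariantChecker {
    // Returns the violations instead of throwing, so that an invalid
    // target model can be rejected before any reconfiguration starts.
    static List<String> check(List<Component> model) {
        List<String> violations = new ArrayList<>();
        for (Component c : model)
            for (Port p : c.clientPorts)
                if (p.mandatory && p.boundTo == null)
                    violations.add(c.name + "." + p.name + " is unbound");
        return violations;
    }
}
```

Because the check runs on the model, an invalid configuration is caught before the running system is touched, which is exactly what Rq1 enables.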
Component coupling managed from outside
Identify and validate the changes (Fig. 2, Step A). After validation, the first task is to identify the differences between the model representing the running system (source model) and the target model the system must switch to, as illustrated in Figure 1. The comparison can produce the following seven primitive commands:

1. start and stop components;
2. add and remove components;
3. add and remove bindings;
4. update components.
These primitive commands represent atomic differences between the two configuration models. To allow the component management policy to change, the comparison system only deals with abstract commands; this supports Rq2 by decreasing coupling. The real commands are instantiated (not yet executed) according to the actual policy during the model comparison. These commands are stored in a collection and ordered according to a heuristic [11, 16] that ensures a safe migration from the current to the target configuration. Before actually executing the commands, the list is parsed to verify that all the commands can be executed. For example, for all AddComponent commands, the presence of the specific component factory is checked to ensure all components can actually be added without problem. Doing this kind of verification for all commands ensures that the command sequence will execute properly. If a command is detected as non-executable, a report clearly describes the problem, and no command at all is executed. This way, the system is always kept consistent.
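The diff-then-check step can be sketched as follows. For brevity the sketch only covers Add/RemoveComponent commands and a flat model of component names; the all-or-nothing factory check mirrors the verification described above. All names are illustrative assumptions.

```java
import java.util.*;

// Sketch of the model comparison: diffing the source and target
// configurations yields abstract commands, which are all checked for
// executability before any of them runs (all-or-nothing).
class ModelDiff {
    static List<String> diff(Set<String> source, Set<String> target,
                             Set<String> availableFactories) {
        List<String> commands = new ArrayList<>();
        for (String c : target)
            if (!source.contains(c)) commands.add("AddComponent " + c);
        for (String c : source)
            if (!target.contains(c)) commands.add("RemoveComponent " + c);
        // Pre-execution check: every AddComponent needs its factory.
        // If one is missing, no command at all is executed.
        for (String cmd : commands)
            if (cmd.startsWith("AddComponent ")) {
                String type = cmd.substring("AddComponent ".length());
                if (!availableFactories.contains(type))
                    throw new IllegalStateException(
                        "no factory for " + type + ": no command executed");
            }
        return commands;
    }
}
```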
Because of an adaptation, some links (bindings) between components may appear or disappear, for the system to act differently. In the case of classic components, adding or removing a binding is as simple as setting or unsetting a variable. Generally, a component missing one mandatory binding is stopped because it cannot run any longer. However, in the case of service-based systems such as EnTiMid, the component may still offer its services to third-party applications and thus should not always be stopped. In other words, a "light component", virtual representation of a real light, may not be bound to any other component, but might still serve another application to control this light.
Other behavioral constraints can require more complex actions than just a set or an unset. For instance, if an alarm has been triggered and the user does not process it, the system must be able to propagate the information anywhere else for the alarm to be treated. If the communication link is asked to be removed, this is structurally correct, but the link may be part of an operation in progress, and so it must not be removed immediately.
Two component families for transparency and openness
In a classical component architecture, the component assembly is built by a single entity, and the components are not available to external applications. Working on a service-based platform implies that the system takes into account the service registrations and unregistrations. EnTiMid has been developed to allow third-party applications to access devices in a unified way, whatever the vendor, through different protocols. Managing all the devices as pure components hinders the interoperability and access with/from third-party applications. To tackle this issue, we divide components into two families: functional components and device components.
Functional components are designed to be as light as possible to reduce the transition time (Rq5). They only exist in memory and cannot be accessed from the outside. They cannot publish services for third-party applications to use. They are abstract components (such as timers, event publishers, parallelizers, sequencers, etc.) used to obtain a specific behavior, or to connect components in different ways than a simple binding.
Device components, on the other hand, support the requirement Rq3. They wrap components standing for real-life physical devices (lights, switches, alarm sensors, weather sensors, etc.) into on-demand generated OSGi bundles. The start of a device component happens in two steps. Since the component is contained in a bundle, the bundle must be started first (and stopped last). When starting, the bundle creates the component instance, sets global properties and publishes the services needed for the component to be manageable by the M@RT Engine. In a second step, the actual component starts. Once all the needed variables have been set for the component to run properly, it publishes its services on the OSGi context for them to be used by other applications.
The wrapping of device components and their hot deployment (Rq4) is made using a chain of actions. This chain is presented in Fig. 2 and explained in the following.

- **Step A** provides the model of the new configuration the system has to reach. This step starts the chain.
- **Step B** The Models@Run-Time Engine asks the *Instance Creator* to embed a new component instance into a new bundle. To do so, the *Instance Creator* generates an activator and a manifest, according to the instance information given by the M@RT engine. The manifest, like any classic OSGi bundle manifest, gives information about the packages the bundle needs to run and the new packages it provides. The role of the activator is to ask a factory for a new instance of the device at bundle start. It then registers the component as a service implementing the component type. The properties of the service registration give the instance name, so that the system is able to find the components. The activator is a Java class that needs to be compiled before being included in the bundle.
- **Step C** The generated Java file is then given to an Eclipse Java Compiler (ECJ) embedded on the platform. After linking all the libraries, the compiler produces Java compiled files.
- **Step D** consists in creating the actual bundle to be deployed on the platform. All the files and necessary resources are packaged and returned.
- **Step E** On return, the bundle is first saved in a local repository so that it is available for a future reconfiguration, avoiding a new generation step.
- **Step F** The bundle is finally installed on the running OSGi platform.
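Step B can be sketched as a small source generator. The shape of the generated activator is an illustrative assumption (`TYXIA_510Factory.create()` is hypothetical), but the `BundleActivator` interface and the `registerService(String, Object, Dictionary)` call it emits are the standard OSGi API.

```java
// Sketch of Step B: the Instance Creator generates an activator source
// for the bundle wrapping a device component. The real generator also
// emits a manifest; here we only produce the activator's source text.
class ActivatorGenerator {
    static String generate(String instanceName, String componentType) {
        return String.join("\n",
            "import org.osgi.framework.*;",
            "import java.util.*;",
            "public class Activator implements BundleActivator {",
            "  public void start(BundleContext ctx) {",
            "    // Ask a factory for a new instance of the device at bundle start.",
            "    Object instance = " + componentType + "Factory.create();",
            "    Dictionary<String, String> props = new Hashtable<>();",
            "    props.put(\"instance.name\", \"" + instanceName + "\");",
            "    // Register the component as a service implementing its type.",
            "    ctx.registerService(\"" + componentType + "\", instance, props);",
            "  }",
            "  public void stop(BundleContext ctx) { /* unregistered by framework */ }",
            "}");
    }
}
```

The generated string is what would then be handed to ECJ in Step C.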
### 3.2 Application in the context of Ambient Assisted Living
The simple example presented here has been extracted from a study conducted in collaboration with elderly people in the context of Ambient Assisted Living. The EnTiMid application runs on an MSI Wind¹ equipped with a touchscreen, adapted to elderly people.
The example combines a functional component (a parallelizer) with two light devices implemented by TYXIA_510 device components:
<table>
<thead>
<tr>
<th>Device</th>
<th>Implementation</th>
</tr>
</thead>
<tbody>
<tr>
<td>bedLight</td>
<td>TYXIA_510</td>
</tr>
<tr>
<td>kitchenLight</td>
<td>TYXIA_510</td>
</tr>
</tbody>
</table>
*p1:Parallelizer* is a functional component, which only lives in memory and is not embedded into a bundle. The parallelizer *p1* is not available from the outside (from third-party applications). Its role here is to take part in the application logic by simultaneously launching the execution of the connected components. *bedLight:TYXIA_510* is a device component, which must be available to third-party applications. It controls the light placed at the head of the bed. This component is embedded into a bundle to be able to expose a service to the world, via the OSGi registry. Any other application running on the OSGi execution platform can thus access and use this bundle to control the light. A service query on this type of component results in a real action on the actual device in the house (switching on the light, for example).
### 3.3 Evaluation of the solution
At this point, it is interesting to note the clear separation between component type, implementation and instance. There are only two component types: *Functional components* and *Device components*. These types can have multiple implementations: Parallelizer, Sequencer, EventPublisher for functional components, and TYXIA_510, RMG4S, Nabaztag for device components. Finally, multiple instances of each implementation can exist at runtime, because each instance is set to be connected to a specific device or to have a specific behavior. *bedLight:TYXIA_510* and *kitchenLight:TYXIA_510* clearly illustrate this.
The availability of services on the OSGi platform opens new possibilities. Anybody familiar with OSGi development is then able to create their own plugins to control the devices of their house. Let's illustrate this with an example exposing devices through an interactive GoogleTalk instant messaging conversation. XMPP is an instant messaging protocol used by various instant messaging providers (e.g., Jabber, GTalk). In this context, the bundle exposing devices on GTalk is considered a third-party application running on the OSGi platform. Using an interactive question/answer robot, one can manage one's home (through EnTiMid) from a computer. This way, it is possible to switch on/off the TV, the lights, the heaters, etc. from your office or from your smartphone, exactly as if you were discussing with your best friend on GTalk. The XMPP bundle simply lists (on demand) all the EnTiMid services published on the platform, and interactively describes the available things you can do. It is interesting to note that the XMPP developer does not need to know how the components are managed to be able to create an EnTiMid third-party application. The only things to know are how OSGi works, and that EnTiMid services are published as org.entimid.services.* on the OSGi service registry. Everyone can then develop their own plugins and act on the house by using the services EnTiMid provides.
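The core of such a third-party bundle is just a filter over the service registry by name prefix, with no knowledge of how the underlying components are managed. In the sketch below a plain map stands in for the OSGi service registry; the class name is hypothetical.

```java
import java.util.*;

// Sketch of what the XMPP bundle does: list the EnTiMid services
// published on the platform by filtering the registry on the
// org.entimid.services.* prefix.
class ServiceLister {
    static List<String> entimidServices(Map<String, Object> registry) {
        List<String> result = new ArrayList<>();
        for (String name : registry.keySet())
            if (name.startsWith("org.entimid.services."))
                result.add(name);
        Collections.sort(result);   // stable listing for the chat robot
        return result;
    }
}
```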
---
¹ CPU: Intel Atom 230® 1.6 GHz, RAM: 1 GB, OS: Ubuntu 9.10 (Linux kernel 2.6.31-17-generic)
During the day, the system is composed (amongst other components) of a simple remote control (Tyxia 110) and a parallelizer, connected as shown in Fig. 3. The parallelizer is in turn connected to two components not presented in the figure. The first is an SMS Sender component, which is in charge of sending a pre-established text message to an emergency service. In parallel, the second component, Nabaztag (text-to-speech), informs the person that the request for assistance has been considered.
In summary, the alert system is composed of four components during the day.
At night, in addition to the four components of the day, two more components (bedLight and kitchenLight) are present in the system. This way, the house is lit in case of emergency. The components are connected to the parallelizer as presented in Fig. 3, at the same level as the SMS and Nabaztag components.
Let us describe the execution scenario:
Initial Deployment: the system is deployed during the day, and configured to meet the requirements of the elderly person living in the house. In this configuration, the elderly person can send emergency messages via SMS to a control center by using a simple device with one button (Tyxia 110). While the SMS is sent, the person is notified via a device equipped with text-to-speech capabilities (Nabaztag). In this configuration, the complete EnTiMid system is composed of 18 components (bundles or simple components in memory) and 9 bindings among these components.
Night: when the night falls, or more precisely when the light detector sends values lower than a given threshold, the application is reconfigured. The emergency feature is still active and switches to the night configuration. Then, the system is composed of 20 components and 18 bindings among these components.
Day: when the day rises, the system is reconfigured into the day configuration.
Fig. 4 presents a sequence of reconfigurations of the system. After the initial deployment (State 1), we alternate between night states (States 2 and 4) and day states (States 3 and 5) during the next two days.
During the initial configuration, all the components need to be deployed. In particular, all the components that need to be wrapped into OSGi bundles have to be compiled, packaged and deployed. This explains the rather long reconfiguration time of step 1: 2.5 seconds.
During the first reconfiguration (day → night, step 2) all the already deployed components are reused. In addition, the bedLight and kitchenLight have to be compiled, packaged, properly deployed and connected to other components. This is realized in less than 400 ms.
The next 3 reconfigurations (night → day → night) are much faster. Step 3 simply consists in unbinding and removing the 'light' components. Step 4 is similar to step 2; however, we maintain a cache of pre-generated components (jar files). The 'light' components are thus directly reused, with no need to compile and package them. Step 5 is similar to step 4. The actual reconfiguration time of these steps is less than 100 ms. For each reconfiguration, we first perform a model comparison. This comparison takes an almost constant time of 400 ms to compare models with about 20 components. It is executed before the actual reconfiguration to plan the reconfiguration (i.e., to instantiate the reconfiguration commands). It thus delays the actual reconfiguration of the system but does not impact the (un)availability of components during dynamic reconfiguration. This way, Rq5, minimizing the reconfiguration time so that it is not perceived by the user, is fulfilled.
### 3.4 Discussion
Our Model-Driven dynamic adaptation process is obviously less efficient (w.r.t. the time needed to actually reconfigure an application) than hard-coded reconfiguration scripts executed on platforms like Fractal, for two reasons:

- a global model comparison is systematically performed between the source and the target configuration to determine all the actions needed to reconfigure the system;
- the wrapping of some components (embedded into OSGi bundles) is time consuming, because of the compilation, packaging and deployment.
However, these two drawbacks have significant advantages, and can easily be minimized:
The model comparison relieves developers from writing low-level and error-prone reconfiguration scripts. The system automatically computes a safe reconfiguration script that takes care of the life-cycle of components. It is important to note that this model comparison does not add overhead to the actual dynamic reconfiguration time; it simply delays the reconfiguration. In other words, it does not impact the availability of services. Moreover, this model comparison could be performed by a third-party system (more powerful than a MSI Wind), which would return a list of reconfiguration actions. Indeed, each configuration is serialized in about 30 KB (only a few KB if zipped), so that it can be quickly transmitted over a network.
The OSGi component cache significantly improves the performance of the reconfiguration process, as seen in our experiment. This cache of OSGi components can easily be initialized by a component that periodically iterates on a set of configurations (e.g., every 5 or 10 seconds during idle time) and generates the bundles necessary to reconfigure the application into each configuration. This way, it is possible to initialize the cache a few minutes after the installation of EnTiMid. Moreover, the compiling and packaging of a component that is not present in the cache could also be delegated to a third-party system with more computation capacity, which would simply return the jar file of the required component.
## 4 Related works
Besides the tools listed in Section 2, the approaches described in the following clearly appear as alternative solutions. This section aims at positioning our proposition relative to them.
BluePrint. OSGi describes a dynamic component framework. This framework provides low-level mechanisms for implementing modular and dynamic applications. OSGi is already part of many applications; mature implementations are Felix by the Apache Foundation and Equinox by the Eclipse Foundation. The Blueprint container was added in the OSGi 4.2 specification to define a dynamic dependency-injection mechanism on top of OSGi components. It is highly inspired by Spring Dynamic Modules and by Gravity [3].
This framework allows managing the dynamic nature of OSGi applications through a declarative, XML-based service mechanism. Instead of directly using the OSGi packaging, BluePrint uses POJO objects whose services and dependencies are declared in an XML file, mainly inspired by Spring DM and iPojo [6]. Based on this description, the container instantiates new components at runtime and properly binds them together by injection. Inversion of Control [7] imposes a clear separation between components. BluePrint makes it possible to separately declare hosted and required services and their implementations, thus reducing the complexity of the code. The BluePrint container monitors the OSGi platform and reacts to events like bundle discovery or bundle shutdown. With this introspection, the container can dynamically declare new bundles and resolve dependencies with an equivalent service when a component is shut down.
BluePrint and our solution are very close in many points. Both of them can realize adaptations at runtime to keep the system in conformance with given constraints. In our solution, the constraints are solved at a business level, by working on high-level models of the running system. BluePrint simply solves technical needs in terms of mandatory services. When a service fails, BluePrint replaces it by another one, with no overview of the changes this action can imply. This may lead to an unstable application, because the newly deployed service may require other new services to run, and conflicts can appear. Working at a higher level of abstraction can help finding these unwanted conflicts and avoid them by selecting the most appropriate available service.
The resolution mechanism of BluePrint, its dynamicity offered by the event listener mechanism, and the use of XML declarative files instead of compiled code, can really be helpful in our case, but a development effort is needed for BluePrint to be able to load and work with completely specified models. In the future, we will probably try to integrate it, and measure how efficient the BluePrint resolution is, compared to our implementation.
The main benefit of our proposition is that the component model is still available at runtime; consequently, we obtain a reflexive component runtime for OSGi. This idea of models@runtime can ease the implementation of runtime model checking or of the adaptation layer.
SCA. SCA is a standard specification that provides a component model for service-based applications. It has several advantages:

- it decouples the application business logic from the details of its invoked service calls;
- it supports a multitude of programming languages for the component and service implementations;
- it seamlessly works with various communication modes (asynchronous, MOM, RPC);
- it provides several types of bindings to easily interact with legacy components or services, giving access to technologies such as Web Services, EJB, JMS, JCA, RMI, RPC, CORBA and others.

The value proposition of SCA is to separate the implementation of services from the wiring logic (assembly model) of a service-based application. Business-logic programmers are relieved from concerns about platforms, infrastructure, plumbing, policies and protocols. Indeed, the assembly model offers a means to provide quality-of-service features for security or transactions. This specification is implemented in Frascati by OW2 and in Tuscany by the Apache Foundation.
SCA benefits from research and industrial experience in building a reflexive component model. As a component model, we could reuse its architecture description language. For the integration of IoT and IoS, however, it misses component implementation life-cycle management. Consequently, in the SCA specification, a component implementation is not a runtime artefact that can easily be deployed, undeployed, migrated, etc. On that issue, the OSGi specification provides the concept of bundle and a management layer for bundles, which is highly valuable for IoT systems: software to manage plug'n'play devices should be hot-deployed and removed when a component appears or disappears. Our proposition reuses this concept of bundle for managing component implementations and component instances. Consequently, component implementations and component instances can be managed uniformly. As a result, our work can be seen as an experience of taking the best of SCA concepts (a reflexive component model for SOA) and OSGi concepts (a dynamic module system for Java).
Other approaches. Providing an ADL on top of OSGi is an issue that has been addressed by several works [6]. For example, Cervantes et al. present in [9] an approach to describe dynamic sensor-based applications using a declarative language called WADL. Like IoT systems, dynamic sensor-based applications are characterized by the fact that measurement producers (sensors) and consumers are introduced into or removed from an execution environment at runtime. Supporting this degree of dynamism is usually done programmatically; WADL intends to simplify this task and to provide developers with an explicit view of the system architecture, while supporting its dynamic evolution. They also show in [5] how a scripting language can be used to reconfigure the running system. WADL mainly addresses the producer/consumer interaction type between components. Indeed, WADL does not manage component assemblies and component instances (wireapps) uniformly.
## 5 Conclusions and perspectives
Building easily configurable applications for house automation, e.g. in the context of Ambient Assisted Living (AAL), is a complex undertaking because it has to marry the ever evolving needs of the customers with the diversity of devices, appliances, communication links, communication protocols available in this domain. The paradigm of the IoS indeed offers interesting capabilities in terms of dynamicity and interoperability, but fails to provide the loose coupling, a proper separation between types and instances, and mechanisms to deploy and manage services, that are needed in domains that involve things.
After a study of different tools existing in service-based and component-based domains, we identified a list of requirements for an execution environment to be fully adapted to the integration of Internet of Things and Internet of Services.
To meet the identified requirements, a new tool with an explicit and independent reflection model of the architecture living at runtime has been created. In this model, "things" and "services" are managed in the same way to enforce the interoperability of IoT applications and their openness to the outside world. Moreover, components and the application behavior are managed from outside the components, offering the system great flexibility. Components can thus be deployed or removed without restarting the system, making any change or adaptation transparent to users.
Home automation, and more generally the domain of human-machine interaction, considers that the time for an action to be perceived as immediately realized must be less than 250 milliseconds. Our experiments show that the system we created is not far from this limit, and alternative solutions may deserve further investigation to gain time.
References
Nested Parallel Call Optimization
Enrico Pontelli & Gopal Gupta*
Laboratory for Logic, Databases, and Advanced Programming
New Mexico State University
{epontell, gupta}@cs.nmsu.edu
Abstract
We present a novel optimization called Last Parallel Call Optimization (LPCO) for parallel systems. The last parallel call optimization can be regarded as a parallel extension of last call optimization found in sequential systems. While the LPCO is fairly general, we use and-parallel logic programming systems to illustrate it and to report its performance on multiprocessor systems. The last parallel call optimization leads to improved time and space performance for a majority of and-parallel programs. We also present a generalization of the Last Parallel Call Optimization called Nested Parallel Call Optimization (NPCO). A major advantage of LPCO and NPCO is that parallel systems designed for exploiting control parallelism can automatically exploit data parallelism efficiently.
Keywords: Implementation optimizations, Parallel logic programming, And-parallelism.
1 Introduction
Non-determinism is an inherent component of problem solving in many areas of computer science. Search problems, generate-and-test problems, constraint relaxation applications are all examples of situations where non-determinism arises naturally. By non-determinism we mean the existence of multiple execution paths, each leading to (potentially) multiple solutions to the original problem.
Non-determinism is found in various programming languages: logic programming languages (e.g. Prolog), constraint programming languages (e.g. Chip [11]), rule-based languages (e.g. OPS5 [4]), etc.
Non-determinism represents not only a powerful way of expressing solutions to complex problems (by allowing programs to be written at a very high level of abstraction), it also offers a very rich source of parallelism. The possibility of extracting parallelism from the execution without affecting the structure of the solution and without heavy user intervention is making these languages more and more attractive.
A non-deterministic problem in such a language is generally expressed as a goal to be achieved/proved using a set of rules (or clauses) that specify how a goal can be reduced to “smaller” subgoals.
The process of solving a goal can be abstractly visualized as the construction of an and/or tree. Each node of the tree represents a goal to be solved. A goal composed of multiple subgoals is reduced by solving each of the individual subgoals (and-node). A single subgoal that can be solved using different rules represents, instead, an or-node. Figure 1 shows an example of and/or tree for a simple logic program (“?- gf(john, X)” is the goal to be solved).
Languages that include non-determinism allow considerable freedom in the way programs are executed, with particular reference to the order in which (i) different execution paths are explored (i.e. different rules for the same goal are attempted), and (ii) the different subgoals composing a goal are solved. This latitude permits one to exploit parallelism implicitly (without the need for programmer intervention) during program execution.
Two principal forms of parallelism are typically considered: (i) Or-parallelism: the different potential solutions to a goal can be explored in parallel (i.e., given a subgoal which can be reduced with different rules, the different reductions are concurrently attempted); (ii) And-parallelism: while looking for a specific solution, the different operations involved can be executed in parallel (e.g., the different subgoals composing a goal can be solved in parallel). And-parallelism is the “traditional” form of parallelism found, for example, in imperative programming languages, while or-parallelism is a direct result of the presence of non-determinism. With reference to the and/or tree construction, each or- and and-node represents a source of parallelism, i.e. a point of execution where parallel activities can be forked. The parallel construction of the tree should be realized respecting the dataflow dependencies of the program—
similarly to what happens in loop parallelization of Fortran programs [13].
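As a concrete illustration (a hypothetical Python model, not part of any system described in this paper), an and/or tree can be explored by multiplying solution counts at and-nodes and summing them at or-nodes; each loop below is a natural parallelization point. Goal names are illustrative and unification is ignored:

```python
# Toy and/or tree: a goal maps to alternative clause bodies (or-branches);
# each body is a list of subgoals (and-branches).  We only count the
# success leaves of the tree, ignoring variable bindings.
RULES = {
    "gf": [["father", "parent"]],       # one clause with two subgoals
    "father": [["fact1"], ["fact2"]],   # two alternative clauses
    "parent": [["fact3"]],
}

def solutions(goal):
    if goal not in RULES:                # primitive fact: one solution
        return 1
    total = 0
    for body in RULES[goal]:             # or-node: alternatives could be
        n = 1                            # explored in parallel (or-parallelism)
        for subgoal in body:             # and-node: subgoals could be
            n *= solutions(subgoal)      # solved in parallel (and-parallelism)
        total += n
    return total

print(solutions("gf"))  # 2 solutions: one per clause of `father`
```

The outer loop is the source of or-parallelism and the inner loop the source of and-parallelism discussed above.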
A parallel system that builds an and/or tree to solve a non-deterministic problem may look trivial to implement at first glance, but experience shows that this is indeed a challenging task. A naive parallel implementation may incur an excessive overhead compared to a corresponding sequential system, which, in turn, may effectively translate to a slow-down of the execution.
A simple analysis shows that the major sources of overhead are related to the need of managing the tree structure of the computation at run-time (i.e., the tree needs to be explicitly maintained, using additional data structures, and repeatedly traversed in search of multiple solutions).
One can formulate a number of principles that an implementor of a parallel system should follow to minimize parallel time and space overhead [7]. Two such principles are:
- **Reduced Nesting Principle**: The level of nesting of control structures in a computation should be reduced whenever possible.
- **Memory Reuse Principle**: Memory should be reused whenever possible.
These two principles stem from the most intuitive way to simplify execution: reduce the complexity of the tree by flattening it. A simpler tree structure leads to a saving of memory (fewer data structures are required) and to faster execution (a simpler structure to traverse). In fact, if an execution spawns \( n \) parallel computations and, furthermore, one of these subcomputations spawns other \( m \) parallel branches, then two distinct descriptors (for the two parallel calls) are required, one "nested" inside the other. The principle of reduced nesting will instead attempt to use a single descriptor (i.e., a tree of depth one) associated to a parallel call of \( m + n \) computations. Of course, the reduced nesting principle should be applied in a way such that program semantics are unaltered (e.g., in the case of logic programming this means that the order in which backtrack literals are chosen is preserved).
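The flattening the principle calls for can be sketched in Python (a hypothetical model, not the &ACE data structures; a parcall descriptor is modelled as a list, with nested lists for nested parcalls). In a real system the merge is legal only when the semantics-preservation conditions discussed here hold:

```python
# A parcall is a list of goals; a nested parcall appears as a sub-list.
# Goal names are illustrative only.
def flatten(parcall):
    flat = []
    for goal in parcall:
        if isinstance(goal, list):
            flat.extend(flatten(goal))   # splice nested slots into the parent
        else:
            flat.append(goal)
    return flat

def descriptors(parcall):
    """Number of parcall descriptors the nested structure requires."""
    return 1 + sum(descriptors(g) for g in parcall if isinstance(g, list))

nested = ["a", "b", ["c", "d"]]          # one goal spawns a nested parcall
assert descriptors(nested) == 2          # two descriptors, one nested
assert descriptors(flatten(nested)) == 1 # a single descriptor of depth one
```

A tree of depth one is both cheaper to allocate and faster to traverse, which is exactly the saving the two principles predict.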
The reduced nesting principle manifests itself in many situations, both in non-deterministic as well as deterministic systems. The tail recursion optimization [12], the flattening used in Paralation Lisp [3], the distributed last call optimization [8] are just some examples of applications of the reduced nesting principle.
In this paper we present the **Last Parallel Call Optimization (LPCO)**, and its generalization, the **Nested Parallel Call Optimization (NPCO)**. These are meant to be the instantiation of the two principles above to the case of generic non-deterministic systems. In particular we will describe them in the context of logic programming (they can be applied to independent and-parallel systems, like &ACE [6], dependent and-parallel systems, like DDA[9], and more general and- and or-parallel implementations, like MUSE [1] and ACE [5]), although the concepts can be applied in a straightforward way to other non-deterministic systems (AI systems, theorem provers, etc.).
The LPCO is triggered when the last call in a Prolog clause is itself a parallel conjunction (from now on a parallel conjunction will also be referred to as a parcall for brevity). The NPCO is triggered when a Prolog clause has a nested parallel conjunction, and all goals following the parallel conjunction (these goals are termed continuation of the parcall) satisfy certain conditions.
The same principles can be used in the case of or-parallelism, giving rise to the **Last Alternative Optimization (LAO)**.
Even though the LPCO and NPCO have been developed in the context of non-deterministic programming, they can also be applied to parallel implementations of traditional languages, like Fortran. Figure 2 shows an ideal situation for the application of an optimization like LPCO: if at runtime the condition of the test in the if statement is satisfied, then the innermost loop can be parallelized and merged with the outermost one, potentially saving one barrier during execution.
2 Last parallel call optimization
The intent of the Last Parallel Call Optimization (LPCO) is to merge, whenever possible, distinct parallel conjunctions. Last Parallel Call Optimization can lead to a number of advantages (discussed later).
To illustrate LPCO in its generality, let us consider (fig. 3(i)) the parallel conjunction \((p \& q)\) where
\[
p \mathrel{{:}{-}} e, f, g, (r \& s).
\]
\[
q \mathrel{{:}{-}} i, j, k, (t \& u).
\]
LPCO will apply to \(p\) (resp. \(q\)) if (i) there is only one (remaining) matching clause for \(p\) (resp. \(q\)); (ii) all goals preceding the parallel conjunction in the clause for \(p\) (resp. \(q\)) are determinate. If these conditions are satisfied then a new parcall frame is not needed for the parallel conjunction in the clause (see figure 3). Rather, we can pretend as if the clause for \(p\) were defined as \( p \mathrel{{:}{-}} ((e, f, g, r) \& s) \)

(although the bindings generated by \( e, f, g \) would be produced before starting the execution of \( s \)). Following the previous example, we extend the parcall descriptor for \((p \& q)\) with an appropriate number of slots and insert the nested parallel call in place of \( p \). Likewise for the clause for \( q \), if it contains a parallel call as its last call. This is akin to last call optimization in sequential systems [12], where the optimization is triggered when the last clause for a goal is tried. Note also that the conditions for LPCO do not place any restrictions on the nature of the parallel subgoals in the clause for \( p \) (resp. \( q \)).
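The rewrite can be sketched as a small list transformation (a hypothetical Python model with illustrative goal names; a tuple stands for a parallel conjunction). When the LPCO conditions hold, the clause body is folded into the parent parcall instead of allocating a nested parcall frame:

```python
# parent: list of parallel goals; clause_body: sequential prefix followed
# by a final parcall (a tuple).  The prefix is prepended to the first
# parallel subgoal, and the resulting slots replace `goal` in the parent.
def lpco_rewrite(parent, goal, clause_body,
                 single_clause=True, determinate_prefix=True):
    *prefix, last = clause_body
    if not (single_clause and determinate_prefix and isinstance(last, tuple)):
        return parent                    # LPCO conditions not met: keep nesting
    first, *rest = last
    slots = [tuple(prefix) + (first,), *rest]
    i = parent.index(goal)
    return parent[:i] + slots + parent[i + 1:]

# a clause "p :- e, f, g, (r & s)." merged into the parcall (p & q):
merged = lpco_rewrite(["p", "q"], "p", ["e", "f", "g", ("r", "s")])
assert merged == [("e", "f", "g", "r"), "s", "q"]
```

Note how the parent descriptor simply grows extra slots, which is why no restriction is needed on the nature of the parallel subgoals themselves.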
The advantages of LPCO are amplified in the presence of backtracking. The search for alternative solutions is considerably simplified if the computation has the structure in figure 3(ii) (a simple linear scan of the parallel call is sufficient). It is obvious that LPCO indeed leads to savings in space as well as time during parallel execution. In fact: (i) space is saved by avoiding allocation of the nested parcall descriptors; (ii) time is saved during forward execution; and, (iii) considerable time is always saved during parallel backtracking\(^1\), since the number of control structures to traverse is considerably reduced.
### 3 Nested parallel call optimization
As we mentioned in the previous sections, LPCO can be applied whenever certain conditions on the determinacy of given parts of the computations are met. These requirements can be relaxed and two different generalizations are discussed below.
#### Nondeterministic Computations: LPCO cannot be applied whenever a non-deterministic computation is performed between the two nested parallel calls, as in \((p \& q)\) where a clause \( p \mathrel{{:}{-}} e, (f \& g) \) is used and \( e \) has multiple solutions. Extending LPCO to these cases is possible, but it requires more involved changes to maintain the correct backtracking semantics. In particular, some information regarding the depth of nesting of each subgoal needs to be kept, to limit the extent of propagation of backtracking.
#### Continuations: LPCO cannot be applied whenever a computation is present in the continuation of the nested parallel calls. In a goal \( \mathrel{{:}{-}} (p \& q), c \), if the clause \( p \mathrel{{:}{-}} (p_1 \& p_2), c_1 \) is used, LPCO will not be able to guarantee that the execution of \( c_1 \) is started after \( p_1 \) and \( p_2 \) but before \( c \). A safe possibility is to delay all the continuations of the nested parallel calls until the main parcall has completed. In the example, this equates to executing the goal \((p_1 \& p_2 \& q), c_1, c\). The soundness of this solution is guaranteed by the independence of the subgoals. Nevertheless, in order to have soundness we must also guarantee that the continuations are executed in the proper order (i.e., if a subgoal \( b \) is expanded with a clause containing \((c \& d), h\) and \( d \) is expanded with \((e \& f), i\), then \( h \) cannot be executed before \( i \)). Furthermore, if it is known that the continuation is deterministic and non-failing, then its goals can be executed without regard to the continuation goals of other parcalls. In practice, it turns out that for most parcalls the continuation is either empty or contains only deterministic, non-failing goals.

\(^1\)In conventional languages there is no backtracking; however, the descriptors stored on the stack have to be removed at the end of the parallel computation. Reducing the level of nesting of parcalls makes space reclamation from the stack faster.
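The continuation-delaying scheme can be sketched the same way (a hypothetical Python model; a tuple is a parcall and the remainder of the list is its continuation). The nested parcall is spliced into the outer one, and its continuation is queued ahead of the outer continuation:

```python
# sequence = [parcall, *continuation]; clause_body = [nested_parcall, *c1].
# Delaying the nested continuation c1 until the merged parcall completes
# yields (p1 & p2 & q), c1, c -- sound as long as the subgoals are
# independent and continuations are replayed in the proper order.
def delay_continuation(sequence, goal, clause_body):
    parcall, *cont = sequence
    nested, *c1 = clause_body
    i = parcall.index(goal)
    merged = parcall[:i] + nested + parcall[i + 1:]
    return [merged, *c1, *cont]

# goal ":- (p & q), c" with clause "p :- (p1 & p2), c1":
result = delay_continuation([("p", "q"), "c"], "p", [("p1", "p2"), "c1"])
assert result == [("p1", "p2", "q"), "c1", "c"]
```

The resulting ordering matches the transformation described above: \( c_1 \) runs after the merged parcall and before \( c \).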
Note that relaxing the conditions imposed on LPCO makes it a more general optimization scheme, increasing its applicability to a wide family of computations involving nested parallel calls. For this reason we term the generalization of LPCO the Nested Parallel Call Optimization (NPCO).
### 4 Experimental results
The LPCO optimization has been implemented as part of the current version of the \&ACE and-parallel system running on Sequent Symmetry [6] and Sun Sparc Multiprocessors\(^2\). The experimental tests that we have performed consist of running various benchmarks, measuring the time elapsed and memory consumed during execution. We selected the benchmarks in order to separately study the effects of the LPCO on programs whose executions present different degrees of backtracking.
Furthermore, we have separated our experimental analysis into two phases, by first running the benchmarks on the system with only LPCO and next executing them on the system with both LPCO/NPCO and other optimizations. Often, other optimizations [7] become applicable because of the flattening of the computation tree.
**LPCO:** The execution of the system with the use of LPCO produces considerable speed-ups while maintaining a good efficiency in execution; table 1 shows execution times obtained on some commonly used benchmarks (like MatrixMult—which computes multiplication between matrices, and BT-Cluster—an extract of a clustering program used by British Telecom) and on a “real-life” application, like the abstract analyzer (pam) used by the \&ACE compiler.
Table 1 compares the execution times obtained during sequential execution. The \( fw \) columns analyze the forward execution (i.e., without any backtracking), while the \( bw \) columns describe the execution times in presence of heavy backtracking. The improvements obtained using LPCO are particularly evident in presence of backtracking (e.g., execution is 74% faster in the list_search benchmark).
Interesting results are also seen by examining the effect of inside failures (i.e., failure of one goal within a parallel call) during execution. The presence of a single parcall descriptor considerably reduces the delay of propagating Kill signals to sibling parallel goals. In programs with sufficient nesting of parcalls, faster killing improves total execution time by as much as 42%.
Figure 4 summarizes memory savings obtained by LPCO: the picture compares the usage of control stack measured
\(^2\)The results obtained on a Sun Sparc 10 are consistent with those presented in this paper—which are obtained on the Sequent Symmetry.
<table>
<thead>
<tr>
<th>Goals executed</th>
<th>&ACE Execution (fw/no lpco)</th>
</tr>
</thead>
<tbody>
<tr>
<td>BtCluster(0)</td>
<td>890 (5%)</td>
</tr>
<tr>
<td>Deriv(0)</td>
<td>94 (64%)</td>
</tr>
<tr>
<td>Occur(5)</td>
<td>3216 (5%)</td>
</tr>
<tr>
<td>pam(5)</td>
<td>1327</td>
</tr>
<tr>
<td>MatrixMult(20)</td>
<td>1724 (4%)</td>
</tr>
<tr>
<td>list_search(1500)</td>
<td>2354 (17%)</td>
</tr>
</tbody>
</table>
Table 1. Unoptimized/Optimized Execution times in msec (single processor)
during the execution of some benchmarks, comparing the unoptimized vs. the optimized case. The percentages indicated show the reduction in memory consumption obtained.
NPCO: In the present version of &ACE only the second extension described in section 3 has been implemented. This version of NPCO has been tested on several benchmarks and the general results obtained are consistent with those presented in the previous section for LPCO: a moderate improvement in execution time for purely deterministic executions, a more considerable speed-up for computations involving backtracking across parallel executions, and, in general, a dramatic improvement in memory usage. Table 2 shows the results obtained for two benchmarks, hanoi and quicksort (both cannot take advantage of LPCO since the parallel call is not the last call in the clause).
In the case of both the optimizations, as mentioned before, backtracking across parallel computations is considerably improved. Figure 5 compares the speedup curves obtained with and without these optimizations in the case of benchmarks with backtracking. As we can observe in certain cases (e.g., Map—a program which applies a function to the elements of nested lists) this optimization results in an exceptionally good speedup, which was lost in the unoptimized case due to the intense overhead during backtracking.
5 Last alternative optimization
It is possible to develop an optimization analogous to LPCO for or-parallelism, named Last Alternative Optimization (LAO). LAO applies whenever a new choice-point is created in the execution of the last alternative of a previous choice-point. In a sequential execution, techniques like shallow backtracking allow the new choice-point to reuse the space previously occupied by the old choice-point. In an or-parallel execution this is not possible in general, since when the last alternative is started some of the previous ones may still be active. The idea is to avoid the creation of a new choice-point and promote the new alternatives to the old choice-point.
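A toy model of the promotion step (hypothetical Python, not MUSE's actual data structures): each choice-point is the list of its untried alternatives, and a choice-point created while the previous one has none left is merged into it rather than pushed:

```python
# stack: list of choice-points, each modelled as its untried alternatives.
def create_choicepoint(stack, alternatives):
    if stack and not stack[-1]:            # executing the last alternative
        stack[-1].extend(alternatives)     # LAO: promote to the old cp
    else:
        stack.append(list(alternatives))   # normal case: allocate a new cp
    return stack

stack = [[]]                               # previous cp exhausted
create_choicepoint(stack, ["alt2", "alt3"])
assert stack == [["alt2", "alt3"]]         # no new choice-point created

stack = [["pending"]]                      # previous cp still has work
create_choicepoint(stack, ["alt"])
assert stack == [["pending"], ["alt"]]     # a fresh choice-point is pushed
```

In a stack-copying setting, promoting work onto a choice-point other agents have already copied is what makes the new alternatives visible to them without extra sharing operations.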
We have tested a prototypical implementation of the LAO on the MUSE system [1], which is based on stack copying (i.e., an agent steals work from another one by creating a local copy of the computation). The use of the LAO produced some improvements, as illustrated in table 3. These are mainly due to the fact that, by promoting alternatives to the previous choice-point, the new work produced locally is automatically made available to all the other agents that have copied the older choice-point. In this way the number of sharing operations (i.e., operations in which one agent steals work) is reduced, minimizing the parallel overhead. The optimization has been shown to scale with larger numbers of processors.
LAO is conceptually independent from the way in which or-parallelism is managed (stack copying or some other technique) and we expect it to produce improved execution speed also in other or-parallel systems.
6 Data-parallel programming
LPCO and NPCO can be seen as instruments for taking advantage of occurrences of Data Parallelism in Prolog programs. Typical instances of data-parallelism are represented by recursive clauses whose iterations can be performed in parallel. For example, the process_list program:
```prolog
process_list([H|T], [Hout|Tout]) :-
    (process(H, Hout) & process_list(T, Tout)).
process_list([], []).
```
is a data-parallel program, because once the recursion is completely unfolded, a number of identical-looking calls to the process goal are produced. LPCO can be seen as a way of efficiently executing data-parallel programs. Given the process_list program, although a system like &ACE will produce one iteration at a time, the LPCO will actually collect all the iterations under a single parallel call, obtaining an effect analogous to a complete unfolding of the recursion. The efficiency of execution of data-parallel programs using LPCO and NPCO compares favorably to other proposals made in the literature for the exploitation of data parallelism (like Reform Prolog [2]).
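The data-parallel effect on process_list can be mimicked in Python (an illustrative analogy, not Prolog semantics): with LPCO the unfolded recursion behaves like one flat parallel call, i.e. a single map over the whole list rather than one nested parcall per recursion level:

```python
def process(x):
    return x * 2                       # stand-in for the process/2 goal

def process_list_nested(xs):
    # one two-goal "parcall" per recursion level: (process(H) & recursion)
    if not xs:
        return []
    return [process(xs[0])] + process_list_nested(xs[1:])

def process_list_flat(xs):
    # LPCO-style effect: all iterations collected under one parallel call,
    # which a scheduler could dispatch as a single data-parallel map
    return [process(x) for x in xs]

assert process_list_nested([1, 2, 3]) == process_list_flat([1, 2, 3])
```

The two functions compute the same result; the difference LPCO exploits is purely in the shape of the control structure a parallel scheduler has to manage.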
The same considerations apply to LAO, which can be used to efficiently exploit data or-parallelism. This form of parallelism occurs quite frequently in various application areas; a good example is represented by constraint optimization problems based on finite-domain constraints (e.g., as in the chip system [11]). The use of forward-checking techniques to reduce the domains associated with the variables in the problem can be simply rephrased as an or-parallel activity, originated by applying a member predicate to each domain and solving the corresponding problem instantiation; this can be easily optimized by LAO. Applying LAO should allow a generic or-parallel system to achieve speedups close to those obtained by dedicated data or-parallel systems, like MultiLog [10].
7 Conclusions
In this paper we presented a novel optimization, called Nested Parallel Call Optimization, which applies well-known optimization principles to practical implementations of parallel programming languages. It not only allows considerable savings in memory consumption, it also speeds up the execution of a majority of parallel programs. This optimization has been implemented in the &ACE parallel system, and the experimental results confirm its effectiveness. It was illustrated in the context of an and-parallel logic programming system, but it can easily be applied to any parallel system that allows nested parallel computations.
References
Activity Report 2013
Project-Team DAHU
Verification in databases
IN COLLABORATION WITH: Laboratoire specification et vérification (LSV)
RESEARCH CENTER
Saclay - Île-de-France
THEME
Data and Knowledge Representation and Processing
Table of contents
1. Members
2. Overall Objectives
2.1. Introduction
2.2. Highlights of the Year
3. Research Program
4. Application Domains
5. New Results
5.1. Specification and Verification of Database Driven Systems
5.2. Distributed data management
5.3. Query Processing for the Web
6. Partnerships and Cooperations
6.1. European Initiatives
6.2. International Initiatives
6.3. International Research Visitors
7. Dissemination
7.1. Scientific Animation
7.2. Teaching - Supervision - Juries
7.2.1. Teaching
7.2.2. Supervision
7.2.3. Juries
7.3. Popularization
8. Bibliography
Keywords: Data Management, Databases, Web, Verification, Distributed System
Creation of the Project-Team: 2009 January 01.
1. Members
Research Scientists
Luc Segoufin [Team leader, Inria, Senior Researcher, HdR]
Serge Abiteboul [Inria, Senior Researcher, HdR]
Faculty Members
Arnaud Durand [Université Paris 7, Professor, from Sep 2013, HdR]
Sylvain Schmitz [ENS-Cachan, Associate Professor, from Sep 2013]
Cristina Sirangelo [ENS Cachan, Associate Professor]
PhD Students
Nadime Francis [ENS Cachan]
Wojciech Kazana [Inria, FP7 ERC WEBDAM project, until Jun 2013]
Émilien Antoine [Inria, FP7 ERC WEBDAM project]
Post-Doctoral Fellow
Johann Brault-Baron [Inria, from Sep 2013]
Visiting Scientists
Sergio Abriola [University of Buenos Aires, from Jun 2013 until Aug 2013]
Benoît Larose [Concordia University, from Nov 2013 until Nov 2013]
Victor Vianu [UCSD, from Jun 2013]
Administrative Assistant
Thida Iem [Inria]
2. Overall Objectives
2.1. Introduction
For more information see http://www.lsv.ens-cachan.fr/axes/DAHU/dahu.php.
The need to access and exchange data on the Web has led to database management systems (DBMS) that are increasingly distributed and autonomous. Data extraction and querying on the Web is harder than in classical DBMS, because such data is heterogeneous, redundant, inconsistent and subject to frequent modifications. DBMS thus need to be able to detect errors, to analyze them and to correct them. Moreover, increasingly complex Web applications and services rely on DBMS, and their reliability is crucial. This creates a need for tools for specifying DBMS in a high-level manner that is easier to understand, while also facilitating verification of critical properties.
The study of such specification and verification techniques is the main goal of Dahu.
2.2. Highlights of the Year
Serge Abiteboul was awarded the 2013 Milner Award.
3. Research Program
3.1. Research Program
Dahu aims at developing mechanisms for high-level specifications of systems built around DBMS, that are easy to understand while also facilitating verification of critical properties. This requires developing tools that are suitable for reasoning about systems that manipulate data. Some tools for specifying and reasoning about data have already been studied independently by the database community and by the verification community, with various motivations. However, this work is still in its infancy and needs to be further developed and unified.
Most current proposals for reasoning about DBMS over XML documents are based on tree automata, taking advantage of the tree structure of XML documents. For this reason, the Dahu team is studying a variety of tree automata. This ranges from restrictions of “classical” tree automata in order to understand their expressive power, to extensions of tree automata in order to understand how to incorporate the manipulation of data.
Moreover, Dahu is also interested in logical frameworks that explicitly refer to data. Such logical frameworks can be used as high level declarative languages for specifying integrity constraints, format change during data exchange, web service functionalities and so on. Moreover, the same logical frameworks can be used to express the critical properties we wish to verify.
In order to achieve its goals, Dahu brings together world-class expertise in both databases and verification.
4. Application Domains
4.1. Application Domains
Databases are pervasive across many application fields. Indeed, most human activities today require some form of data management. In particular, all applications involving the processing of large amounts of data require the use of a database. Increasingly complex Web applications and services also rely on DBMS, and their correctness and robustness is crucial.
We believe that the automated solutions that Dahu aims to develop for verifying such systems will be useful in this context.
5. New Results
5.1. Specification and Verification of Database Driven Systems
Participants: Serge Abiteboul, Luc Segoufin, Victor Vianu.
We continued our investigation of the verification of database-driven systems using an automata model with registers. We have exhibited new classes of decidable scenarios using nominal set theory [25]. These new classes contain the previously known relational cases but also some semistructured ones.
We introduce in [24] and study a model of collaborative data-driven workflows. In a local-as-view style, each peer has a partial view of a global instance that remains purely virtual. Local updates have side effects on other peers’ data, defined via the global instance. We also assume that the peers provide (an abstraction of) their specifications, so that each peer can actually see and reason on the specification of the entire system. We study the ability of a peer to carry out runtime reasoning about the global run of the system, and in particular about actions of other peers, based on its own local observations. A main contribution is to show that, under a reasonable restriction (namely, key-visibility), one can construct a finite symbolic representation of the infinite set of global runs consistent with given local observations. Using the symbolic representation, we show that we can evaluate in PSPACE a large class of properties over global runs, expressed in an extension of first-order logic with past linear-time temporal operators, PLTL-FO. We also provide a variant of the algorithm that allows incremental monitoring of a statically defined property, and then develop an extension allowing us to monitor an infinite class of properties sharing the same temporal structure, defined dynamically as the run unfolds. Finally, we consider an extension of the language, augmenting workflow control with PLTL-FO formulas. We prove that this does not increase the power of the workflow specification language, thereby showing that the language is closed under such introspective reasoning.
5.2. Distributed data management
Participants: Serge Abiteboul, Émilien Antoine, Cristina Sirangelo.
We have studied the feasibility of query answering in the presence of incomplete information in data. In particular, we have investigated when classical query evaluation techniques, which are commonly used over complete data, suffice to answer queries also in the presence of incompleteness [26]. These results allowed us to identify syntactic classes of queries that can be answered efficiently under many well-known semantics of incompleteness, using query answering techniques that are already implemented (and optimized) in classical database systems.
The management of Web users’ personal information is increasingly distributed across a broad array of applications and systems, including online social networks and cloud-based services. While users wish to share and integrate data using these systems, it is increasingly difficult to avoid the risks of unintended disclosures or unauthorized access by applications.
In [21], [20], we propose a novel access control model that operates within a distributed data management framework based on datalog. Using this model, users can control access to data they own and control applications they run. They can conveniently specify access control policies providing flexible tuple-level control derived using provenance information. We present a formal specification of the model, a theoretical analysis, and an implementation. We show that the computational cost of access control is acceptable.
5.3. Query Processing for the Web
Participants: Johann Brault-Baron, Arnaud Durand, Nadime Francis, Wojciech Kazana, Luc Segoufin, Cristina Sirangelo.
In many applications the output of a query may have a huge size and enumerating all the answers may already consume too many of the allowed resources. In this case it may be appropriate to first output a small subset of the answers and then, on demand, output a subsequent small numbers of answers and so on until all possible answers have been exhausted. To make this even more attractive it is preferable to be able to minimize the time necessary to output the first answers and, from a given set of answers, also minimize the time necessary to output the next set of answers - this second time interval is known as the delay. We have shown that this was doable with a linear preprocessing time and constant enumeration delay for first-order queries over structures of bounded expansion [27] and for monadic second-order queries over structures of bounded tree-width [15]. We also presented a survey about this work at the Intl. Conf. on Database Theory (ICDT) [19].
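The enumeration paradigm described above can be illustrated with a toy sketch. This shows only the interface (linear preprocessing, then answers produced one by one with constant delay), not the indexing structures of the cited papers; the "query" used here is a stand-in.

```python
# Toy illustration of enumeration with preprocessing and constant delay.
# The query (even numbers) is illustrative; the point is the shape of
# the computation: one linear pass, then O(1) work per emitted answer.

def preprocess(structure):
    """One linear pass over the input, building an index of the answers."""
    return [x for x in structure if x % 2 == 0]

def enumerate_answers(index):
    """After preprocessing, each answer costs O(1): constant delay."""
    for answer in index:
        yield answer

gen = enumerate_answers(preprocess(range(10)))
first = next(gen)  # first answer, available right after preprocessing
```

A caller can stop after any prefix of the answers, which is exactly what makes a small preprocessing time and a small delay attractive.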
Web data is often structured in the XML format. In [18] we have surveyed results about static analysis of pattern-based queries over XML documents. These queries are analogs of conjunctive queries, their unions and Boolean combinations, in which tree patterns play the role of atomic formulae. These can be viewed as both queries and incomplete documents, and thus static analysis problems can also be viewed as answering queries over such documents. We looked at satisfiability of patterns under schemas, containment of queries for various features of XML used in queries, query answering, and applications of pattern-based queries in reasoning about schema mappings for data exchange.
6. Partnerships and Cooperations
6.1. European Initiatives
6.1.1. FP7 Projects
6.1.1.1. Webdam
Title: WebDam
Type: IDEAS
Instrument: ERC Advanced Grant (Advanced)
Duration: December 2008 - November 2013
Coordinator: Serge Abiteboul, Inria (France)
Others partners: Pierre Senellart, Telecom Paristech.
See also: http://webdam.inria.fr
Abstract: The goal is to develop a formal model for Web data management. This model will open new horizons for the development of the Web in a well-principled way, enhancing its functionality, performance, and reliability. Specifically, the goal is to develop a universally accepted formal framework for describing complex and flexible interacting Web applications featuring notably data exchange, sharing, integration, querying and updating. We also propose to develop formal foundations that will enable peers to concurrently reason about global data management activities, cooperate in solving specific tasks and support services with desired quality of service.
6.2. International Initiatives
6.2.1. Inria International Partners
6.2.1.1. Declared Inria International Partners
Victor Vianu, UC San Diego, USA.
6.3. International Research Visitors
6.3.1. Visits of International Scientists
- Benoît Larose
Subject: Constraint Satisfaction Problems
Institution: Concordia University, Montreal, Canada.
7. Dissemination
7.1. Scientific Animation
Organization of workshops and conferences.
Program Committees.
- Cristina Sirangelo: 16th Intl. Conf. on Database Theory (ICDT 2013). ICDT 2013 Test of Time Award.
- Luc Segoufin: Intl. Conf. on Logic in Computer Science (LICS’13).
Responsibilities.
- Luc Segoufin has been a member of the “bureau du comité des projets” at Inria Saclay since 2010. Since 2011 he has been a member of the scientific board of Inria. Since 2010 he has also been responsible for the working group “Complexité et Modèles Finis” of the GDR “Mathématique et Informatique” (http://www.gdr-im.fr/).
- S. Abiteboul is the principal investigator of the European Research Council Grant Webdam on Web Data Management.
As a member of the Sciences Academy, S. Abiteboul wrote a report on "L’enseignement de l’informatique en France - Il est urgent de ne plus attendre".
S. Abiteboul is since 2013 a member of the Conseil national de la recherche. As a member, he participated in 2013 in reports on Net neutrality, Computer science education, and digital inclusion.
S. Abiteboul is chairman of the Inria Awards committee.
S. Abiteboul is chairman of the Scientific Board of Société d’Informatique de France.
S. Abiteboul is a member of the Academic Senat of the University Paris-Saclay.
S. Abiteboul is a member of the Academia Europea.
In particular, S. Abiteboul gave talks on Big data in insurances at Journée SCOR sur les Big data et les assurances, and Journée Ifpas de l’assurance in Paris; on Big data and health at Open data et santé, Congrès Health IT. He also discussed the place of fiction in the Web at Séminaire Vérification, CNAM, Paris, 2013.
S. Abiteboul was also heard at the Assemblée nationale by the commission des affaires économiques (Mission d’information sur l’économie numérique), and by the Office parlementaire d’évaluation des choix scientifiques et technologiques on digital risk, in 2013.
S. Abiteboul gave interviews to Le Monde, Famille Chrétienne, and 01Net.
7.2. Teaching - Supervision - Juries
7.2.1. Teaching
Master : Cristina Sirangelo, Complexité avancée, 18 hours ETD, M1, MPRI, France
Master : Cristina Sirangelo, Algorithms, 15 hours ETD, Préparation à l’agrégation, École Normale Supérieure de Cachan, France
Licence : Cristina Sirangelo, Bases de données, 30 hours ETD, L3, École Normale Supérieure de Cachan, France
Doctorat : Cristina Sirangelo, Bases de données et sites Web dynamiques, 18 hours ETD, École Normale Supérieure de Cachan, France
Doctorat : Cristina Sirangelo, Création de sites Web, 18 hours ETD, École Normale Supérieure de Cachan, France
Licence : Serge Abiteboul, Base de données, ENS Cachan and ENS Paris
Master : Serge Abiteboul, Web data management, MPRI Paris
Master : Luc Segoufin, Finite Model Theory and Descriptive Complexity, MPRI.
Licence : Émilien Antoine, Algorithme et complexité, 32h, L3, Université de Paris-Sud, France
7.2.2. Supervision
PhD: Wojciech Kazana, Query Evaluation with Constant Delay, 16/09/2013, Luc Segoufin
PhD in Progress: Nadime Francis, graph databases, 01/09/2011, Cristina Sirangelo and Luc Segoufin
PhD : Émilien Antoine, Data management in social network, 05/12/2013, Serge Abiteboul
7.2.3. Juries
• Luc Segoufin was reviewer for the PhD thesis of Stefan Mengel, Paderborn, Germany.
• Luc Segoufin was reviewer for the PhD thesis of Johann Brault-Baron, Université de Caen, France.
7.3. Popularization
Serge Abiteboul contributed to a popularization book on mathematics, « Mathématiques, l’explosion continue », with an article, “Chercher sur le Web : juste un point fixe et quelques algorithmes”.
Serge Abiteboul wrote, with Pierre Senellart, an article, “Un déluge de données”, in Pour la Science (special issue on the “Big bang numérique”), 2013.
8. Bibliography
Major publications by the team in recent years
**Publications of the year**
**Doctoral Dissertations and Habilitation Theses**
[12] W. KAZANA. *l'évaluation de requêtes avec un délai constant*, École normale supérieure de Cachan - ENS Cachan, September 2013, [http://hal.inria.fr/tel-00908434](http://hal.inria.fr/tel-00908434)
**Articles in International Peer-Reviewed Journals**
[13] S. ABITEBOUL, Y. KATSIS, B. T. CATE. *On the equivalence of distributed systems with queries and communication*, in "Journal of Computer and System Sciences", 2013, [http://hal.inria.fr/hal-00879029](http://hal.inria.fr/hal-00879029)
[14] B. T. CATE, L. SEGOUFIN. *Unary negation*, in "Logical Methods in Computer Science", 2013, vol. 9, n° 3, [http://hal.inria.fr/hal-00904567](http://hal.inria.fr/hal-00904567)
[15] W. KAZANA, L. SEGOUFIN. *Enumeration of monadic second-order queries on trees*, in "ACM Transactions on Computational Logic", 2013, vol. 14, n° 4, [http://hal.inria.fr/hal-00916400](http://hal.inria.fr/hal-00916400)
**Articles in National Peer-Reviewed Journals**
[16] S. ABITEBOUL. *Vers une nouvelle science des risques ?*, in "Risques", September 2013, [http://hal.inria.fr/hal-00908090](http://hal.inria.fr/hal-00908090)
**Invited Conferences**
[18] A. GHEERBRANT, L. LIBKIN, C. SIRANGELO. *Reasoning About Pattern-Based XML Queries*, in "RR - 7th International Conference on Web Reasoning and Rule Systems, 2013", Mannheim, Germany, July 2013, [http://hal.inria.fr/hal-00908414](http://hal.inria.fr/hal-00908414)
[19] L. SEGOUFIN. *Enumerating with constant delay the answers to a query*, in "Intl. Conf. on Database Theory", Genoa, Italy, March 2013, [http://hal.inria.fr/hal-00907085](http://hal.inria.fr/hal-00907085)
**International Conferences with Proceedings**
Research Reports
Comparison of hierarchies for occlusion culling based on occlusion queries
V.I. Gonakhchyan
pusheax@ispras.ru
Ivannikov Institute for System Programming of the RAS, Moscow, Russia
Efficient interactive rendering of large datasets still poses a problem. The widely used frustum culling algorithm is too conservative and leaves many hidden objects in the view. Occlusion culling with hardware occlusion queries is an effective technique for culling hidden objects inside the view. In this paper, we perform a comparative analysis of popular indexing techniques as applied to occlusion culling.
Keywords: Occlusion culling, occlusion query, OpenGL draw call.
1. Introduction
Occlusion culling algorithms remove occluded objects to speed up frame buffer composition. In this paper, an object is defined as a mesh that can be rendered with one draw call. Two categories of invisible objects are illustrated in Fig. 1: occluded objects and objects beyond the camera frustum. Frustum culling quickly discards objects beyond the camera frustum but leaves occluded objects inside it. We consider the algorithm for finding occluded objects. The goal of this paper is to analyze how different hierarchies for scene organization affect occlusion culling and rendering performance.
Occlusion culling algorithms are described in-depth in seminal papers [2][5]. We are interested in online algorithms based on hardware occlusion queries that are widespread today.
Some of the previous algorithms were implemented using OpenGL 2, which used the immediate rendering mode [10]. In immediate mode, all of the geometry is sent each frame, resulting in a CPU-GPU bandwidth bottleneck and driver overhead caused by an excessive number of commands in the driver queue. Display lists were used to compile multiple rendering commands once and reduce driver overhead. OpenGL 3 introduced the retained rendering mode, and immediate-mode techniques such as display lists were deprecated and later removed from the specification. This raised the question of effective use of object indexing techniques in modern OpenGL.
Previously, developers had to worry about the amount of geometry passed to the GPU. Now they have to think about driver overhead and state changes as well [18]. In this paper, we analyze performance considerations when using different hierarchies in the retained rendering mode.
Although the Vulkan API is outside the scope of this paper, it was shown to give better multithreaded performance by utilizing all CPU cores for rendering [3]. It is still based on the same hardware and uses the retained rendering mode, so all of the results in this paper still stand.
A hardware occlusion query is a GPU technique for finding visible faces of a polyhedron [6]. The visibility check stops when the first visible face is found. Checking a query result in the same frame in which the query was sent is a blocking function call and causes CPU starvation. The visibility query result is available without noticeable delay in the next frame, because by that time all draw calls required for query execution have been sent. So visibility information in a given frame is based on the previous frame.
Space coherency is a relationship between nodes in which one node's visibility determines the visibility of other nodes. For example, if a building is invisible, then the objects inside the building are also invisible. Time coherency determines visibility in the future from visibility at a given time. For example, if an object is visible in the given frame, then we can consider it visible for some number of subsequent frames and avoid sending expensive queries.
Fig. 1. Different types of culled objects. Frustum culled objects are depicted as triangles. Occluded objects are depicted as rectangles.
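The time-coherency heuristic can be made concrete with a minimal sketch. The skip window and function names below are assumptions for illustration, not taken from any published implementation:

```python
# Toy time-coherency policy: a node found visible is assumed to stay
# visible for SKIP frames, so no occlusion query is issued for it
# during that window.  SKIP and the bookkeeping are illustrative.

SKIP = 3

def should_query(frame, last_visible_frame):
    """Query only if the node was never visible or the window expired."""
    return last_visible_frame is None or frame - last_visible_frame >= SKIP

assert should_query(10, None)     # never seen visible: must query
assert not should_query(10, 8)    # visible 2 frames ago: skip the query
assert should_query(10, 7)        # window expired: query again
```

The trade-off is that an object that becomes occluded during the window is still rendered until its assumption expires.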
2. Previous work
Greene et al. introduced an occlusion culling algorithm based on a hierarchical z-buffer [7]. The z-buffer is stored as a pyramid structure: each region is divided into four sub-regions, each entry holding the maximum z value of its region, and the process is repeated until individual pixels are reached. The z-pyramid allows quick triangle culling by comparing the minimum z value of a triangle with the z value of the corresponding region. This method can be implemented in hardware or software. A software implementation requires expensive rasterisation and pyramid structure updates on the CPU. In a hardware implementation (ATI Hyper-Z), triangles still have to be transformed and rasterized on the GPU, so it does not replace a quick method for geometry culling with a hierarchical structure.
Bittner et al. considered ways of optimally using occlusion queries for hierarchical scenes, based on the NV_occlusion_query extension [1]. Visibility results from the previous frame are used in the next frames. The main performance problems come from CPU starvation and GPU starvation. Visibility results are checked in the next frame to remove CPU starvation. Previously visible objects are rendered at the beginning of the current frame to remove GPU starvation. The authors used a KD-tree constructed according to the surface-area heuristic [11]. Visibility queries are sent for leaf nodes and the results are propagated to upper hierarchy levels.
Guthe et al. observed that in many cases visibility queries make performance worse than frustum culling alone; they proposed a probability criterion to minimize the number of queries and a performance model that helps to avoid queries when rasterisation is cheaper [9]. Mattausch et al. suggested further ways of minimizing the number of queries, such as sending one query for a group of nodes [12]. The authors used p-bhvo (polygon-based hierarchical bounding volume decomposition) [13], which is well-suited for static scenes but expensive to maintain for dynamic scenes.
Software rasterisation and visibility checks can be used instead of hardware occlusion queries [4]. First, the triangles of significant occluders are rendered on the CPU, and a hierarchical z-buffer of the specified resolution is created. Then bounding boxes of objects or hierarchy nodes are rasterized to determine their visibility against the created z-buffer. This avoids the expensive occlusion query read-back but requires occluder selection, which is best done manually. Also, occluder rasterisation can be expensive, and low level-of-detail models give only approximate results.
Scene preprocessing can be applied effectively to static scenes. Teller et al. proposed using a BSP tree with decomposition along axes for architectural scenes [17]. The resulting hierarchy corresponds to the room structure of a building. Visibility information is stored as a graph of rooms and portals. Room visibility can be determined by rasterising portals. However, this graph is expensive to compute and works effectively only for static scenes. The commercial solution Umbra computes a voxel representation of a scene [14]. Empty voxels serve as portals between different parts of the scene. Software rasterisation of portals determines the visibility of the parts of a scene. This algorithm effectively finds occluded objects in static 3D scenes and is widely used in video games.
Greene introduced an image-space algorithm based on precomputed occlusion masks [8]. As input, it takes a list of polygons in front-to-back order. It recursively subdivides image space into quadrants until the visibility of a polygon can be determined for each quadrant. The main advantages of this approach are small memory requirements and no pixel overwrites. However, it requires special hardware to implement efficiently. Zhang et al. proposed visibility culling based on hierarchical occlusion maps, which is better suited for modern hardware [19]. It constructs an occlusion map hierarchy by rendering chosen occluders and then traverses the bounding volume hierarchy of the model database to perform visibility culling. The algorithm allows for approximate visibility when the opacity threshold is set to a value lower than one. The main disadvantage of the algorithm is the sophisticated process of occluder selection, which favors large objects with small polygon counts for faster construction of the occlusion map hierarchy.
3. Occlusion culling algorithm
The hierarchical occlusion culling algorithm in this paper is based on Coherent Hierarchical Culling [1]. Although our implementation does not include query batching, tight bounding volumes, or probabilistic estimation of visibility, it allows comparing the benefits and limitations of different subdivision hierarchies.
Pseudocode of the algorithm:
```plaintext
function RenderFrame(rootNode, frustum, queries, sentNodes)
    PerformFrustumCulling(frustum, rootNode)
    for i in 0..queries.size-1
        vis <- GetQueryResult(queries[i])
        SetNodeCulled(sentNodes[i], vis == 0)
    end for
    PropagateVisibilityUpHierarchy()
    for n in visible nodes inside frustum
        for inst in n
            if !InstRendered(inst)
                Render(inst)
                SetInstRendered(inst)
            end if
        end for
    end for
    sentNodes <- GetLeafNodesInFrustum()
    for i in 0..sentNodes.size-1
        SendQuery(queries[i], sentNodes[i].boundingBox)
    end for
end function
```
Function "RenderFrame" renders one frame. Function "PerformFrustumCulling" recursively sets frustum culled bit for every node outside the frustum. First for loop checks visibility results of occlusion queries sent in the previous frame. Second for loop renders objects of visible nodes in the frustum. Last for loop sends queries for all leaf nodes in the frustum.
During the first frame, all objects inside the frustum are rendered and all hierarchy leaves inside the frustum are queried. During the second frame, query results are checked and only visible objects are rendered. Hierarchy leaves inside the frustum are queried each frame. We propagate visibility up the hierarchy to optimize the performance of hierarchy traversal. Space decomposition hierarchies allow multiple nodes per object, so the rendered state of each instance is stored in a bit array to make sure that every object is rendered only once.
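The one-frame latency of the query results can be made concrete with a small simulation (the names and structure are ours, not the paper's implementation): queries sent in frame k are only read in frame k+1, so the set of rendered leaves always reflects the previous frame's visibility.

```python
# Simulates the deferred-read pattern of the algorithm above:
# frame 0 renders everything and issues queries; every later frame
# renders only what the *previous* frame's queries reported visible.

def render_frames(visibility_per_frame):
    """visibility_per_frame[k][leaf] is the true visibility in frame k.
    Returns the set of leaves rendered in each frame."""
    rendered = []
    prev_results = None                      # no results before frame 0
    for frame_vis in visibility_per_frame:
        if prev_results is None:
            visible = set(frame_vis)         # first frame: render all
        else:
            visible = {leaf for leaf, v in prev_results.items() if v}
        rendered.append(visible)
        prev_results = dict(frame_vis)       # queries "complete" next frame
    return rendered

out = render_frames([{"a": True, "b": False}, {"a": True, "b": True}])
# frame 0 renders {"a", "b"}; frame 1 renders only {"a"}, because
# "b" was reported occluded by the frame-0 queries.
```

This also shows the cost of the heuristic: a leaf that becomes visible is rendered one frame late.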
Frame buffer composition time for a large number of objects can be approximated by the formula:
\[
T_{frame} = T_{check} + T_{render} + T_{queries},
\]
where \(T_{check}\) — time it takes to check query results,
\(T_{render}\) — time to render visible objects,
\(T_{queries}\) — time it takes to send queries for leaf nodes inside the frustum.
When the number of queries is small, \(T_{render}\) is the bottleneck. When the number of queries is large, \(T_{queries}\) and \(T_{check}\) are the bottleneck. Let's rewrite the formula by expanding the terms:
\[
T_{frame} = c_1 N_q + c_2 N_{obj} + c_3 N_q,
\]
where \(N_q\) — number of queries (leaf nodes inside the frustum),
\(N_{obj}\) — number of visible objects inside the frustum. \(N_{obj}\) is a function of camera position and hierarchy height for the given object distribution.
Let's consider a common case where a scene is indexed by an octree and the camera is positioned outside the scene. In the worst case, three sides of the bounding box of the scene are visible. Assuming that objects are distributed uniformly across the bounding box of the scene, an octree of height \(h\) has \(2^{3h}\) leaves, so the number of objects per octree leaf equals \(N_{total}/2^{3h}\), where \(N_{total}\) is the total number of objects in the scene. Then the number of visible objects approximately equals \(3N_{total}/2^{h}\). We get a formula for the frame duration that depends only on the octree height and constants:
\[
T_{frame} = (c_1 + c_3)2^{3h} + 3c_2 N_{total}/2^{h}.
\]
The optimal octree height for this scenario is \(\frac{1}{4} \log_2 \frac{c_2 N_{total}}{c_1 + c_3}\). For example, for dataset 1 the calculated octree height \(h = 4\) gives the best performance in practice, because dataset 1 is well described by this theoretical model.
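The closed-form optimum can be checked numerically. Writing the frame-time model as \(T(h) = A \cdot 2^{3h} + B \cdot 2^{-h}\), where \(A\) collects the per-leaf query and check costs and \(B = 3 c_{render} N_{total}\), setting \(dT/dh = 0\) gives \(h^* = \frac{1}{4}\log_2\frac{B}{3A}\). The constants below are illustrative, not measured:

```python
import math

# Frame-time model T(h) = A * 2**(3h) + B * 2**(-h) and its
# closed-form minimizer h* = (1/4) * log2(B / (3A)).
# A and B are assumed constants for illustration only.

def frame_time(h, A, B):
    return A * 2 ** (3 * h) + B * 2 ** (-h)

def optimal_height(A, B):
    return 0.25 * math.log2(B / (3 * A))

A, B = 1e-3, 1e3
h_star = optimal_height(A, B)          # about 4.59 for these constants
# T is convex in h, so h* should beat its neighbours:
assert frame_time(h_star, A, B) <= frame_time(h_star - 0.5, A, B)
assert frame_time(h_star, A, B) <= frame_time(h_star + 0.5, A, B)
```

For these assumed constants the optimum lands near \(h = 4\), the height reported as best for dataset 1.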
LBVH and BVH SAH perform better on scenes where the density of objects is uneven. Better clustering helps to lower the number of queries needed to obtain the visible objects inside the frustum.
4. Hierarchies
4.1 Octree
Octree is a uniform space decomposition structure that uses three axis-perpendicular planes to simultaneously split the scene's bounding box into eight regions at each step [15]. When an object's bounding box intersects a splitting plane, the object is either assigned to the internal node (single-reference octree) or propagated below and assigned to multiple leaf nodes (multiple-reference octree). Storing geometry in leaves increases clustering quality and as a result reduces the number of visible objects. The downside is the increased number of occlusion queries, which are sent for every leaf in the frustum. We performed a rendering performance comparison to find out which technique is more effective. Dataset 1 is small enough that the GPU can handle rendering and queries quite efficiently (fig. 2). As a result, the multiple-reference octree gives the best time because of efficient clustering. Many objects intersect the upper levels of the octree, resulting in redundant draw calls in the case of the single-reference octree. Dataset 2 has a non-uniform object distribution where several planes occupy half of the scene. It produced many redundant leaf nodes and occlusion queries, which degraded performance when storing objects in leaves. Dataset 3 has many buildings with tightly packed objects inside. Even though the number of objects is large, the scene can be subdivided very efficiently, requiring only a small number of nodes. Overall, the multiple-reference octree provides the most efficient occlusion culling of large architectural scenes.

When rendering a visible node, a contained object is skipped if it was already rendered in the current frame. Of all the hierarchical structures considered in this paper, the octree gives the least effective clustering because of wasted space without any objects. The octree supports dynamic scenes because the visibility results of octree nodes can be reused in subsequent frames, as the nodes have fixed positions in space. Also, it does not need to be rebalanced, unlike other space-decomposition hierarchies such as the kd-tree. The maximum octree level can be restricted depending on GPU performance.
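The single- versus multiple-reference insertion policies can be sketched as follows (a minimal illustration with our own node representation, not the authors' implementation); boxes are (min-corner, max-corner) tuples:

```python
def octants(box):
    """Split an axis-aligned box into its eight child octants."""
    (x0, y0, z0), (x1, y1, z1) = box
    mx, my, mz = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    return [((xa, ya, za), (xb, yb, zb))
            for xa, xb in ((x0, mx), (mx, x1))
            for ya, yb in ((y0, my), (my, y1))
            for za, zb in ((z0, mz), (mz, z1))]

def overlaps(a, b):
    """True if two boxes overlap with positive volume."""
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))

def make_node(box):
    return {"box": box, "objects": [], "children": None}

def insert(node, obj_box, obj, depth, max_depth, multiple_ref=True):
    """Insert obj: a straddling object is kept at the internal node
    (single reference) or pushed into every overlapped child
    (multiple reference)."""
    if depth == max_depth:
        node["objects"].append(obj)          # leaf level: store here
        return
    if node["children"] is None:
        node["children"] = [make_node(b) for b in octants(node["box"])]
    hit = [c for c in node["children"] if overlaps(c["box"], obj_box)]
    if len(hit) > 1 and not multiple_ref:
        node["objects"].append(obj)          # straddler stays internal
        return
    for child in hit:
        insert(child, obj_box, obj, depth + 1, max_depth, multiple_ref)

root = make_node(((0, 0, 0), (2, 2, 2)))
# An object straddling the x mid-plane ends up referenced in two leaves:
insert(root, ((0.9, 0.2, 0.2), (1.1, 0.4, 0.4)), "obj", 0, 1)
```

With `multiple_ref=False` the same object would instead stay at the root, which is exactly the redundant-draw-call case described above for upper octree levels.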
4.2 LBVH
LBVH is a primitive-space decomposition hierarchy based on sorting along a space-filling curve [15]. The centers of all object bounding boxes are sorted along the Z-order space-filling curve and grouped hierarchically from bottom to top [16]. LBVH achieves tighter clustering than the octree and, as a result, fewer occlusion queries. Estimating the number of queries is simple because the number of leaves is determined at the start of construction. LBVH cannot handle dynamic scenes, because moving objects make occlusion queries issued for the previously constructed LBVH useless in the current frame.
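The Z-curve ordering step can be illustrated with the standard bit-interleaving trick for 30-bit Morton codes (a generic sketch, not the authors' code): object centers normalized to the unit cube are sorted by their code, and consecutive runs become leaves.

```python
def expand_bits(v):
    """Spread the 10 low bits of v so they occupy every third bit."""
    v = (v * 0x00010001) & 0xFF0000FF
    v = (v * 0x00000101) & 0x0F00F00F
    v = (v * 0x00000011) & 0xC30C30C3
    v = (v * 0x00000005) & 0x49249249
    return v

def morton3d(x, y, z):
    """30-bit Morton (Z-order) code for a point in [0, 1)^3."""
    xi, yi, zi = (min(max(int(c * 1024), 0), 1023) for c in (x, y, z))
    return (expand_bits(xi) << 2) | (expand_bits(yi) << 1) | expand_bits(zi)

def lbvh_leaves(centers, leaf_size):
    """Sort object centers along the Z curve and group runs into leaves,
    so the leaf count is fixed before the hierarchy is built."""
    order = sorted(range(len(centers)), key=lambda i: morton3d(*centers[i]))
    return [order[i:i + leaf_size] for i in range(0, len(order), leaf_size)]
```

Because the curve preserves spatial locality, nearby centers receive nearby codes and end up in the same leaf.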
4.3 BVH
BVH with the surface area heuristic (SAH) for choosing the splitting plane was developed for ray tracing to minimize the number of ray-bounding-box intersection tests, but it also gives efficient clustering of primitives for occlusion culling [13][15]. BVH construction is a top-down recursive process: at each step, we create two axis-aligned bounding boxes. Triangles are sorted along the longest scene dimension, and the splitting plane with minimum cost according to the surface area heuristic is taken. Because of the top-down construction, BVH SAH sometimes creates clusters that cannot be subdivided into two nodes. We try to subdivide such a cluster along each axis in order and, in case of failure, leave it as a leaf node.
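The sweep over candidate split positions can be sketched as below (an O(n^2) illustration of the SAH cost with the traversal and intersection constants dropped; not the authors' implementation):

```python
def surface_area(box):
    (x0, y0, z0), (x1, y1, z1) = box
    dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
    return 2 * (dx * dy + dy * dz + dz * dx)

def enclose(boxes):
    """Axis-aligned bounding box of a list of boxes."""
    mins = tuple(min(b[0][i] for b in boxes) for i in range(3))
    maxs = tuple(max(b[1][i] for b in boxes) for i in range(3))
    return (mins, maxs)

def sah_best_split(boxes, axis):
    """Sort primitives by centroid on the axis and pick the split index
    minimizing SA(L)/SA(total)*|L| + SA(R)/SA(total)*|R|."""
    order = sorted(range(len(boxes)),
                   key=lambda i: boxes[i][0][axis] + boxes[i][1][axis])
    total = surface_area(enclose(boxes))
    best_i, best_cost = None, float("inf")
    for i in range(1, len(order)):
        left = [boxes[j] for j in order[:i]]
        right = [boxes[j] for j in order[i:]]
        cost = (surface_area(enclose(left)) * len(left)
                + surface_area(enclose(right)) * len(right)) / total
        if cost < best_cost:
            best_i, best_cost = i, cost
    return order, best_i

# Two clusters of unit boxes along x; SAH separates them cleanly.
def unit_box(x):
    return ((x, 0.0, 0.0), (x + 1.0, 1.0, 1.0))

boxes = [unit_box(x) for x in (0, 1, 2, 10, 11, 12)]
order, split = sah_best_split(boxes, axis=0)
```

Production builders evaluate the same cost in O(n log n) with prefix/suffix bound sweeps, but the minimized quantity is the one shown here.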
We compare rendering performance when using BVH SAH to store three types of primitives: objects, subdivided objects, and triangles. An object is a set of triangles that can be rendered with one draw call. A subdivided object is an object that was split into multiple objects to achieve better triangle clustering. Storing each triangle as an object generates the hierarchy with the best clustering. For rendering efficiency, a large number of triangles is stored in a leaf node, and render state changes are avoided when encountering triangles of the same object. Storing triangles in the BVH produces many additional draw calls, creating a CPU bottleneck (fig. 3). Storing subdivided objects gives much faster performance; however, the increase in clustering efficiency is not enough to compensate for the additional draw calls on the considered datasets with a simple shader. Consider the difference in the rendering time of dataset 1 for objects (8.7 ms) and subdivided objects (45.6 ms): even though object subdivision helped to lower the average number of query calls \( N_q \) from 192 to 171, it raised the average number of draw calls \( N_{obj} \) from 4546 to 19452 (tests were conducted for BVH height = 10).

5. Performance comparison
5.1 Datasets
For rendering performance comparison we took three large datasets (figs. 4–6):
1. Dataset 1: 5,012,582 triangles, 50,521 objects. Buildings are tightly packed with objects having small variation in size.
2. Dataset 2: 10,827,713 triangles, 71,601 objects. Scene has many relatively large objects, half of the scene's volume is occupied by several planes.
3. Dataset 3: 10,154,304 triangles, 221,796 objects. Artificial test scene with 36 buildings. Each building has cluster of objects that can be culled after rendering exterior consisting of small number of objects.
5.2 Clustering
Let's calculate the average number of rendered objects to compare the object-clustering efficiency of the considered hierarchies for occlusion culling. Better object clustering should result in fewer visible nodes and fewer draw calls. During the tests, we fixed the number of leaves for all hierarchies. BVH has the most efficient clustering of objects (fig. 7). It could be better still, but the top-down subdivision process leads to scenarios where relatively long objects are gathered in a node and cannot be subdivided efficiently. In dataset 2 we encountered clusters of 40–70 objects where subdivision by any axis produced a singleton. The octree-based algorithm issues more draw calls when space decomposition yields bounding volumes with a lot of empty space. The octree is more efficient for dataset 1 because that scene has very little empty space; it gives the worst clustering on the spacious dataset 3 because it cannot decompose that scene as efficiently using a fixed number of leaves.
5.3 Frame rendering
All scene geometry is uploaded once at the beginning, and a shader with one directional light is used. Test results were produced on the following system: AMD FX 8320 processor, 24 GB DDR3 RAM, AMD Radeon HD 6770 1 GB.
The camera walkthrough is performed diagonally from the lower left to the upper right corner of a scene. Average and maximum frame rendering times are measured along the camera path (figs. 8, 9). Fig. 8 shows the average rendering performance of all considered hierarchies on the three datasets. The octree showed the fastest time for dataset 1 because it produced the best clustering (fig. 7). It performed better than expected for dataset 3 because most of the objects can be culled with a relatively small number of queries. Dataset 2 was problematic for all hierarchies because most of its objects are in one building; for efficient rendering, a careful balance of draw calls and occlusion queries is required. BVH SAH showed the fastest time here because of its efficient clustering. LBVH is close in performance to BVH SAH on all datasets.
Occlusion culling may give worse performance than frustum culling when the GPU can efficiently render all of the objects inside the frustum. However, frustum culling shows the worst performance on datasets 2 and 3 because of the large number of visible objects inside the frustum. Note that the occlusion culling algorithm in this paper is not state of the art and can be improved further to reduce the number of queries using visibility prediction and multiqueries [9][12].
6. Conclusion
We performed a comparison of frame rendering performance when using different types of primitives and found that using objects instead of subdivided objects is more effective (fig. 3).
The octree efficiently handles datasets where most of the scene's volume is occupied by objects (fig. 7, dataset 1). Although storing objects in the interior nodes of the octree helps to select large objects and gives better performance on some scenes (fig. 2, dataset 2), storing objects in leaves is more effective overall, and the number of leaves can then be determined from the scene's volume.
BVH SAH gives the most effective clustering of objects (fig. 7), and this positively affects frame rendering time (fig. 8). LBVH is close in performance to BVH SAH; it is also faster to construct, and its bottom-up construction is better suited to obtaining the optimal number of leaves.
7. References
About the authors
Gonakhyechyan Vyacheslav Igorevich, junior researcher at the department of System integration and multi-disciplinary collaborative environments of Ivanikov Institute for System Programming of the RAS. His email is pusheax@ispras.ru.
Object Links in the Repository
Jon Beck
David Eichmann
West Virginia University Research Corporation
9/27/91
The RICIS Concept
The University of Houston-Clear Lake established the Research Institute for Computing and Information Systems (RICIS) in 1986 to encourage the NASA Johnson Space Center (JSC) and local industry to actively support research in the computing and information sciences. As part of this endeavor, UHCL proposed a partnership with JSC to jointly define and manage an integrated program of research in advanced data processing technology needed for JSC's main missions, including administrative, engineering and science responsibilities. JSC agreed and entered into a continuing cooperative agreement with UHCL beginning in May 1986, to jointly plan and execute such research through RICIS. Additionally, under Cooperative Agreement NCC 9-16, computing and educational facilities are shared by the two institutions to conduct the research.
The UHCL/RICIS mission is to conduct, coordinate, and disseminate research and professional level education in computing and information systems to serve the needs of the government, industry, community and academia. RICIS combines resources of UHCL and its gateway affiliates to research and develop materials, prototypes and publications on topics of mutual interest to its sponsors and researchers. Within UHCL, the mission is being implemented through interdisciplinary involvement of faculty and students from each of the four schools: Business and Public Administration, Education, Human Sciences and Humanities, and Natural and Applied Sciences. RICIS also collaborates with industry in a companion program. This program is focused on serving the research and advanced development needs of industry.
Moreover, UHCL established relationships with other universities and research organizations, having common research interests, to provide additional sources of expertise to conduct needed research. For example, UHCL has entered into a special partnership with Texas A&M University to help oversee RICIS research and education programs, while other research organizations are involved via the "gateway" concept.
A major role of RICIS then is to find the best match of sponsors, researchers and research objectives to advance knowledge in the computing and information sciences. RICIS, working jointly with its sponsors, advises on research needs, recommends principals for conducting the research, provides technical and administrative support to coordinate the research and integrates technical results into the goals of UHCL, NASA/JSC and industry.
RICIS Preface
This research was conducted under auspices of the Research Institute for Computing and Information Systems by Jon Beck and Dr. David Eichmann of West Virginia University. Dr. E. T. Dickerson served as RICIS research coordinator.
Funding was provided by the Information Technology Division, Information Systems Directorate, NASA/JSC through Cooperative Agreement NCC 9-16 between NASA Johnson Space Center and the University of Houston-Clear Lake. The NASA technical monitor for this activity was Ernest M. Fridge, III of the Information Technology Division, Information Systems Directorate, NASA/JSC.
The views and conclusions contained in this report are those of the authors and should not be interpreted as representative of the official policies, either express or implied, of UHCL, RICIS, NASA or the United States Government.
Object Links in the Repository
Interim Report
Jon Beck & David Eichmann
Software Reuse Repository Lab
Dept. of Statistics and Computer Science
West Virginia University
Morgantown, WV 26506
SoRReL-RBSE-91-1
September 27, 1991
Object Links in the Repository
Interim Report
Jon Beck & David Eichmann
1. Introduction
This interim report explores some of the architectural ramifications of extending the Eichmann/Atkins lattice-based classification scheme [1] to encompass the assets of the full life-cycle of software development. In particular, we wish to consider a model which provides explicit links between objects in addition to the edges connecting classification vertices in the standard lattice.
The model we consider here uses object-oriented terminology [3, 4]. Thus the lattice is viewed as a data structure which contains class objects which exhibit inheritance.
This report contains a description of the types of objects in the repository, followed by a discussion of how they interrelate. We discuss features of the object-oriented model which support these objects and their links, and consider behaviors which an implementation of the model should exhibit. Finally, we indicate some thoughts on implementing a prototype of this repository architecture.
2. A Bestiary of Objects
The repository is designed to contain the full set of assets created during the software life-cycle. Therefore, there are many types of objects we wish the repository to contain. Listed below are some obvious candidates for inclusion in the repository. This is an open list, indicative but not exhaustive. Extensibility of the system, a strength of faceted classification, is a necessity.
* This work is supported in part by NASA subcontract 089, cooperative agreement NCC-9-16, project no. RICIS SE.43.
Our discussion uses a simplified waterfall life-cycle model solely for the purposes of illustration. Our choice of models for this report was made on the basis of reaching the most general audience, rather than upon the suitability of any particular modeling technique. The arguments presented below apply equally well to any such technique.
2.1 Requirements
A repository containing the assets of a full life-cycle of some software development project will contain one or more requirements documents or requests for proposal which delineate the need which the software met. These documents will be written in human text (possibly with diagrams and figures) but will refer to functionality provided by code.
2.2 Specifications
Based on the requirements, there will be specifications documents, also written in human text. These documents describe the architecture of a software system which will provide the functionality demanded in the requirements. Code is written based upon the architecture which the specifications provide.
2.3 Code
Code is the central category type for the repository. While all the other objects are necessary to a fully functioning repository, code is the repository’s focus, and the main attraction for users.
In the prototype stage we concentrate on the Ada language, but extensible support for other languages is essential. Given a grammar or specification for a language, the repository structure must be able to accommodate code in that language.
2.4 Validation and Acceptance Documents. Test Data
After the software has been coded, the development team bears the burden of proving that it meets the requirements and follows the specifications. There can be textual descriptions of how the requirements are satisfied. There can also be files of test input data or script files which demonstrate test cases. There may be files of output data captured to show compliance with the
specifications. There may be caveats listing limitations or implementation dependencies. All of these refer back to the requirements, specifications, and actual code of the software system.
2.5 Versions
All of the above assets may exist in the repository in multiple versions. Version 2.0 of a word processor is very similar to, but distinct from, version 2.1, and it is valid for both versions to exist in the repository. This means that all assets of that word processor package, from requirements to acceptance report, may exist in multiple versions. There could also be a Differences document relating one version to the next, which belongs to two versions.
3. Object Granularity
The repository will contain not just code, but code at a number of different levels of granularity. For example, a repository object might be a word processor, available for retrieval as a complete word processing module. But embedded within that package are many other code objects. There might be a queue package for input buffering, which in turn contains a linked list package. The search-and-replace module is an object, but from it can be generated two separate submodules by the technique of program slicing [2, 7], the search submodule and the replace submodule. Each of these is a repository object in its own right, separately retrievable via a query on its own classification.
Similarly, a specifications document for the word processor will exist. But within that document are one or more sections detailing the specification for the search-and-replace module.
A file of test data may be input which exercises the entire package, or it may be input for testing only a very small functional piece of the system. For example, a file containing misspelled words for ensuring that the spell checker functions correctly may have nothing to do with testing the printer output module of a word processing package. However, the file of misspelled words properly resides in the repository as a member of the comprehensive test suite.
Every large object in the repository may contain or be composed of smaller objects also in the repository in their own right. Conversely each small object may be not only a valid repository object but also a constituent of a larger asset.
The issue here is one of complex structure; we use a canonical notion of a document to illustrate the concepts. Consider the general concept of a document with a fixed structuring scheme (sections, subsections, paragraphs, and sentences) as shown in figure 1. Any given document
Figure 1. A Sample Document
contains an arbitrary number of sections, which in turn contain an arbitrary number of subsections, and so on.
The model includes the definition of the limits of granularity. In the prototype presented here, a Document, the coarsest level, contains successively finer objects, down to paragraphs, the finest level. The document class definition limits the number of granularity levels. For code, a recursively defined class, there is no fixed number of granularity levels: every bona fide block in the code, no matter how deeply nested, is a repository object at its own level of granularity. Hence the requirement, noted in section 2.3, that a grammar or specification be available for each language, so that code can be parsed into its block structure.
We do not imagine, however, that each lowest-level object will be replicated in every coarser object of which it is a constituent part. A paragraph will not be replicated in every subsection, section, and document which contains it. Rather, the larger-grained objects will contain references to the finer-grained ones, references which are transparent to the user. In object-oriented terminology, the larger-grained objects are composite. More exactly, the references from coarse- to fine-grained objects are shared independent composite references. The reference from a word processing system to one of its constituent string packages is a shared reference because the string package may be contained in more than one parent object. The reference is also independent because the existence of the string package does not depend on the existence of the word processing system. We might decide that the word processing system is of no further use in the repository and delete it, but retain the string package on its own merit.
4. Object Links
As outlined above, there are many objects which will reside in the repository. It is obvious that there are many relationships among them. A spell checker code module is related across granularity levels up to the word processing package which contains it and down to the buffer package it contains. It is related across life-cycle phases, back to the specifications section which discusses spell checking functionality and forward to the verification test of the spell checker module. It is related across versions of the software back to its predecessor and forward to its successor.
A person browsing in a conventional library has only one dimension by which to follow links to find related books. From a book of interest, the browser can search left or right along the shelf to try to find related works. But our repository has the ability to provide many dimensions of links to related objects. The basic lattice structure provides two mechanisms for browsing for related objects, relaxation of facet values in queries and use of closeness metrics which produce queries containing conceptually similar or related terms.
In addition to these, the data structure of the objects in the lattice should allow the inclusion of explicit links along all the dimensions given above. These links connect related objects and must be available to the browser as a means to identify objects related along the axes of granularity, life-cycle phase, and version. All repository object links are bidirectional and reflexive. They may be one-to-one, one-to-many, or many-to-many.
The combination of a rich linking structure within a lattice framework produces the potential for an extremely powerful interface mechanism. Traditional relational query systems can only retrieve data blindly, with no notion of their location in the database. Most current object-oriented systems provide only navigational access to data, with limited querying ability. Our model provides full query access to any node in the lattice through the facet-tuple mechanism. But our model also provides full navigational access via the object structure with its cross links. With this combination of declarative queries and procedural navigation, it is thus possible for the user to browse through the entire repository, finding and pinpointing the exact object of interest.
Object-oriented database systems support our link concepts through object identity. A reflexive relationship implies that the parties (i.e., objects) to the relationship store the identity (or identities) of the objects to which they relate. This is very similar, but not exactly equivalent, to the concept of pointers in more traditional programming languages.
4.1 Phase Links
Phase links are those which join one object in the lattice to another object which is related by virtue of being the “same” object at a different phase of the life cycle. This type of link joins, for example, a requirement to its embodiment as a specification, and then similarly on to its implementation in code.
There must be a link not only between the word processor’s specification document and the word processing code, but also between the section of the specification which treats of the search-and-replace function and the code module which implements that functionality.
Figure 2 illustrates the duality of reference between the various artifacts in the life cycle. A requirements document has as its specification some design document (a one-to-one relationship); that same design document in turn was specified by the requirements document. A given design document may specify aspects of multiple programs (illustrating a one-to-many relationship).
4.2 Granularity Links
Granularity links are those which join objects across granularity levels. This type of link joins, for example, a section in a document to the paragraphs it contains, and also to the chapter which contains it. Similarly, in source code, a search program slice has links to the search-and-replace module from which it was derived via slicing.
The transition from our conceptual model of a document as illustrated in figure 1 to the object model of a document as illustrated in figure 3 exemplifies the representation of complex structure in object-oriented systems.
Hence, a document is a title and an ordered collection of sections. A section is a title and an ordered collection of subsections, and so on. Object identity implies that the document does not actually contain all of its nested components, but rather it contains references to them (effectively pointers to the other objects).
4.3 Version Links
If versions are added to the repository, a new dimension is added. In this dimension there are links from an object forward to a later version or backward to a previous version of the same object concept. These links are orthogonal to the phase links between objects in the same project. It is possible, however, that the version relationship is not as simple as lineal descendancy. Rather, the versions of an object may form a directed acyclic graph, as shown by the bold lines in figure 4, designating the derivation of version 2 from version 1, and the derivation of version 3 from both version 1 and version 2. Any number of new versions may be derived from one or more existing versions. In other words, versioning can exhibit all the characteristics of temporal inheritance.

The set of versions for some document artifact in the life cycle is just a labeled association, with the version number acting as label for a specific instance of a document object. This leads to the distinction between a conceptual document and a document version. A conceptual document contains the named associations comprising the various versions, each of which are documents in their own right, as shown in figure 4.
Note that any given object can be referenced by any number of other objects, so that it is quite reasonable for a given section to appear unchanged in multiple versions of a document. This is accomplished by storing the identity of the section in each of the documents’ respective ordered sequence of sections.
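The shared composite references described above can be sketched in a few lines of Python (an illustration of the idea, not the paper's implementation; all class and attribute names are invented for the example):

```python
# A document stores references to its sections rather than copies, so one
# Section object can be shared, unchanged, by several documents; object
# identity plays the role of a pointer.

class Section:
    def __init__(self, header, paragraphs):
        self.header = header
        self.paragraphs = paragraphs   # ordered constituent items
        self.parents = []              # back-links to containing documents

class Document:
    def __init__(self, title):
        self.title = title
        self.sections = []             # references, not embedded copies

    def add_section(self, section):
        self.sections.append(section)  # store the reference (identity)
        section.parents.append(self)   # links are bidirectional

# One section appearing unchanged in two versions of a document:
intro = Section("Introduction", ["Overview paragraph."])
v1, v2 = Document("Spec, version 1"), Document("Spec, version 2")
v1.add_section(intro)
v2.add_section(intro)
assert v1.sections[0] is v2.sections[0]   # the very same object is shared
```

Deleting `v1` here would not delete `intro`, mirroring the independent composite reference: the section exists on its own merit.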
5. The Model
The above sections describe an architecture for a lattice-based faceted repository of life-cycle assets. Many of the features of this architecture are couched in object-oriented terms. We use these terms because the object-oriented paradigm provides semantics closer to the abstract concept we are trying to model than any other yet developed. Use of object-oriented terminology and concepts, therefore, leads us directly into the use of an object-oriented data model for designing the data structures of the lattice.
The conceptual structure of the repository is a lattice, demanding an object-oriented model which explicitly includes multiple inheritance. As depicted schematically in figure 5 and textually in figure 6, the fundamental superclass of the lattice is the LatticeNode class. The two subclasses of LatticeNode are FacetNode and TupleNode, corresponding to the node types in the Facet and Tuple sublattices as explained in [1].

The Tuple sublattice contains the references to the items actually stored in the repository. An instance of TupleNode contains the attribute set of RepositoryElement to accomplish this. In our simplified example, a RepositoryElement is a class with only two subclasses, Document and Code. In a full repository implementation there would be other subclasses for storing test data and make scripts, for instance.
LatticeNode
set of LatticeNode — parents
set of LatticeNode — children
FacetNode : subclass of LatticeNode
set of FacetValue
TupleNode : subclass of LatticeNode
set of FacetNode
set of RepositoryElement
RepositoryElement
ObjectTitle
ObjectVersion
ObjectAuthor
ObjectDate
...other attributes
Document : subclass of RepositoryElement
...other attributes
set of SectionObject — constituent items
set of FigureObject — constituent items
Section
SectionHeader
SectionNumber
set of Document — parents
set of Subsection — constituent items
Subsection
SubsectionHeader
SubsectionNumber
set of Section — parents
set of Paragraph — constituent items
Paragraph
ParaNumber: Integer
set of Subsection — parents
ParaText: String
Code : subclass of RepositoryElement
CodeLanguage
...other attributes
set of CodeElement — constituent items
CodeElement
set of Code — parents
set of Declarations
set of Statements
Figure 6. The Class Definitions
The RepositoryElement class defines attributes of general interest such as Title, Author, Version, and Date. These attributes constitute general metadata about a repository object which would be displayed to the user. The subclasses Document and Code have further attributes which are specific to their types. For example, a Document instance might contain a Drawing, whereas a piece of Code would have a ProgrammingLanguage.
As explained in Section 3, a Document in the repository is not atomic but is composed of instances of the classes Section, Subsection, etc. Each of these classes is an object defined with its own appropriate attributes. Similarly a Code instance contains CodeElement instances.
The encapsulation feature of the object-oriented paradigm makes this model easily extensible. For example, if in the future we added to the repository a sound processing program which required a digitized audio score as an initialization file, the requisite class definition of that object could be added to the schema with no disruption of the current existing definitions.
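As a rough sketch of this extensibility point (hypothetical Python; `AudioScore` and all attribute names are invented for the illustration, not part of the model):

```python
# RepositoryElement carries the metadata shared by every artifact kind;
# new kinds are added as subclasses without touching existing definitions.

class RepositoryElement:
    def __init__(self, title, version, author, date):
        self.title, self.version = title, version
        self.author, self.date = author, date

class Document(RepositoryElement):
    def __init__(self, *meta):
        super().__init__(*meta)
        self.sections = []                  # constituent items

class Code(RepositoryElement):
    def __init__(self, *meta, language):
        super().__init__(*meta)
        self.language = language            # e.g. a ProgrammingLanguage

# Later extension, added with no disruption to the classes above:
class AudioScore(RepositoryElement):
    def __init__(self, *meta, sample_rate_hz):
        super().__init__(*meta)
        self.sample_rate_hz = sample_rate_hz

score = AudioScore("Demo score", "1.0", "anon", "1992", sample_rate_hz=44100)
assert isinstance(score, RepositoryElement)   # general metadata still applies
```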
6. Future Work
We have identified the major objects which will reside in the repository and we have proposed an object-oriented data model for our lattice. With this model it is possible to capture the abstract concept of a static lattice repository which exhibits inheritance among its objects and many complex linkages between them. This model also provides for the encapsulation of the functions which allow navigation between and display of the objects in the repository.
We now intend to examine a number of commercial and experimental object-oriented database management systems to determine the feasibility of implementing this model. The result of this examination should be a prototype of ASV4, the full life-cycle reuse repository. We anticipate that this prototyping phase will generate considerable feedback for refining and fine-tuning the object-oriented data model.
Particular areas that warrant further examination include:
- the role of methods (mechanisms that implement behavior) in the presentation of and navigation through the repository and its contents;
- the ties between an object-oriented model of the repository and a hypermedia representation of the repository; and
- the assistance an object-oriented model of the repository can provide in quality assessment [5, 6].
References
Chapter 7: Eligibility Traces
Midterm
Mean = 77.33 Median = 82
N-step TD Prediction
- **Idea:** Look farther into the future when you do TD backup (1, 2, 3, …, n steps)
Mathematics of N-step TD Prediction
- **Monte Carlo:**
\[ R_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots + \gamma^{T-t-1} r_T \]
- **TD:**
\[ R_t^{(1)} = r_{t+1} + \gamma V_t(s_{t+1}) \]
- Use V to estimate remaining return
- **n-step TD:**
- 2-step return:
\[ R_t^{(2)} = r_{t+1} + \gamma r_{t+2} + \gamma^2 V_t(s_{t+2}) \]
- n-step return:
\[ R_t^{(n)} = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \cdots + \gamma^{n-1} r_{t+n} + \gamma^n V_t(s_{t+n}) \]
Learning with N-step Backups
- Backup (on-line or off-line):
\[ \Delta V_t(s_t) = \alpha \left[ R^{(n)}_t - V_t(s_t) \right] \]
- Error reduction property of n-step returns
\[ \max_s \left| E_\pi \{ R_t^{(n)} \mid s_t = s \} - V_\pi(s) \right| \leq \gamma^n \max_s \left| V(s) - V_\pi(s) \right| \]
- Using this, you can show that n-step methods converge
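A minimal sketch of the n-step return above, assuming rewards and value estimates live in plain Python lists (the function name and list layout are illustrative):

```python
def n_step_return(rewards, values, t, n, gamma):
    """R_t^(n): n discounted rewards from t+1..t+n, then bootstrap with
    gamma^n * V(s_{t+n}).  rewards[k] is r_{k+1}; values[k] is V(s_k).
    Truncates at episode end, where it equals the Monte Carlo return."""
    T = len(rewards)                   # episode length
    steps = min(n, T - t)              # do not look past termination
    g = sum(gamma**k * rewards[t + k] for k in range(steps))
    if t + steps < T:                  # bootstrap only from non-terminal states
        g += gamma**steps * values[t + steps]
    return g

# On a 3-step episode with gamma = 1, looking 10 steps ahead just gives
# the full (Monte Carlo) return:
rewards = [0.0, 0.0, 1.0]
values  = [0.5, 0.5, 0.5, 0.0]
assert n_step_return(rewards, values, 0, 10, 1.0) == 1.0
```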
Random Walk Examples
How does 2-step TD work here?
How about 3-step TD?
A Larger Example
- Task: 19 state random walk
- Do you think there is an optimal n (for everything)?
Averaging N-step Returns
- n-step methods were introduced as a step toward understanding TD(λ)
- Idea: backup an average of several returns
- e.g. backup half of 2-step and half of 4-step
$R_t^{avg} = \frac{1}{2} R_t^{(2)} + \frac{1}{2} R_t^{(4)}$
- Called a complex backup
- Draw each component
- Label with the weights for that component
Forward View of TD($\lambda$)
- TD($\lambda$) is a method for averaging all n-step backups
- weight by $\lambda^{n-1}$ (time since visitation)
- $\lambda$-return:
\[
R_t^\lambda = (1 - \lambda) \sum_{n=1}^{\infty} \lambda^{n-1} R_t^{(n)}
\]
- Backup using $\lambda$-return:
\[
\Delta V_t(s_t) = \alpha \left[ R_t^\lambda - V_t(s_t) \right]
\]
- The weights \( (1-\lambda)\lambda^{n-1} \) sum to 1; the actual final return receives weight \( \lambda^{T-t-1} \)
\( \lambda \)-return Weighting Function
- The weight given to each n-step return decays by \( \lambda \) per step; for example, the 3-step return receives weight \( (1-\lambda)\lambda^2 \)
- The actual, final return receives all remaining weight, so the total area under the weighting function is 1
R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction
Relation to TD(0) and MC
- \( \lambda \)-return can be rewritten as:
\[
R_t^\lambda = (1 - \lambda) \sum_{n=1}^{T-t-1} \lambda^{n-1} R_t^{(n)} + \lambda^{T-t-1} R_t
\]
- The first sum collects the n-step returns until termination; the final term \( \lambda^{T-t-1} R_t \) covers everything after termination
- If \( \lambda = 1 \), you get MC:
\[
R_t^\lambda = (1 - 1) \sum_{n=1}^{T-t-1} 1^{n-1} R_t^{(n)} + 1^{T-t-1} R_t = R_t
\]
- If \( \lambda = 0 \), you get TD(0)
\[
R_t^\lambda = (1 - 0) \sum_{n=1}^{T-t-1} 0^{n-1} R_t^{(n)} + 0^{T-t-1} R_t = R_t^{(1)}
\]
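Both limits can be checked mechanically. The sketch below (illustrative names; the same list layout for rewards and values is an assumption) computes the finite-horizon λ-return and confirms that λ = 0 gives the one-step TD target while λ = 1 gives the Monte Carlo return:

```python
def n_step_return(rewards, values, t, n, gamma):
    # rewards[k] is r_{k+1}; values[k] is V(s_k); truncate at episode end.
    T = len(rewards)
    steps = min(n, T - t)
    g = sum(gamma**k * rewards[t + k] for k in range(steps))
    if t + steps < T:
        g += gamma**steps * values[t + steps]
    return g

def lambda_return(rewards, values, t, lam, gamma):
    # (1-lam) * sum_{n=1}^{T-t-1} lam^(n-1) R_t^(n)  +  lam^(T-t-1) R_t
    T = len(rewards)
    g = sum((1 - lam) * lam**(n - 1) * n_step_return(rewards, values, t, n, gamma)
            for n in range(1, T - t))
    return g + lam**(T - t - 1) * n_step_return(rewards, values, t, T - t, gamma)

rewards = [0.0, 1.0, 2.0]
values  = [0.1, 0.2, 0.3, 0.0]
# lambda = 0 -> one-step TD target; lambda = 1 -> Monte Carlo return
assert lambda_return(rewards, values, 0, 0.0, 0.9) == \
       n_step_return(rewards, values, 0, 1, 0.9)
assert lambda_return(rewards, values, 0, 1.0, 0.9) == \
       n_step_return(rewards, values, 0, 3, 0.9)
```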
Forward View of TD(\(\lambda\)) II
- Look forward from each state to determine update from future states and rewards:
**λ-return on the Random Walk**
- Same 19 state random walk as before
- Why do you think intermediate values of λ are best?
Backward View of TD(\(\lambda\))
- The forward view was for theory
- The backward view is for mechanism
- New variable called *eligibility trace* \( e_t(s) \in \mathbb{R}^+ \)
- On each step, decay all traces by \(\gamma\lambda\) and increment the trace for the current state by 1
- Accumulating trace
\[
e_t(s) = \begin{cases}
\gamma \lambda e_{t-1}(s) & \text{if } s \neq s_t \\
\gamma \lambda e_{t-1}(s) + 1 & \text{if } s = s_t
\end{cases}
\]
On-line Tabular TD(\(\lambda\))
Initialize \(V(s)\) arbitrarily and \(e(s) = 0\), for all \(s \in S\)
Repeat (for each episode):
Initialize \(s\)
Repeat (for each step of episode):
\(a \leftarrow \) action given by \(\pi\) for \(s\)
Take action \(a\), observe reward, \(r\), and next state \(s'\)
\(\delta \leftarrow r + \gamma V(s') - V(s)\)
\(e(s) \leftarrow e(s) + 1\)
For all \(s\) :
\(V(s) \leftarrow V(s) + \alpha \delta e(s)\)
\(e(s) \leftarrow \gamma \lambda e(s)\)
\(s \leftarrow s'\)
Until \(s\) is terminal
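A direct Python sketch of the boxed algorithm with accumulating traces. The environment is an assumption made for the example (a five-state chain, a fixed "always move right" policy, reward 1 on termination), not part of the slides:

```python
def td_lambda(n_states=5, episodes=200, alpha=0.1, gamma=1.0, lam=0.8):
    V = [0.0] * n_states               # value estimates; terminal value is 0
    for _ in range(episodes):
        e = [0.0] * n_states           # eligibility traces, reset per episode
        s = 0
        while s < n_states:            # fixed policy: always step right
            s2 = s + 1
            r = 1.0 if s2 == n_states else 0.0
            v_next = V[s2] if s2 < n_states else 0.0
            delta = r + gamma * v_next - V[s]
            e[s] += 1.0                # accumulating trace
            for i in range(n_states):  # update every state, decay every trace
                V[i] += alpha * delta * e[i]
                e[i] *= gamma * lam
            s = s2
    return V

values = td_lambda()
# With gamma = 1 on this deterministic chain, every true state value is 1.
```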
Backward View
\[ \delta_t = r_{t+1} + \gamma V_t(s_{t+1}) - V_t(s_t) \]
- Shout \( \delta_t \) backwards over time
- The strength of your voice decreases with temporal distance by \( \gamma \lambda \)
Relation of Backwards View to MC & TD(0)
- Using update rule:
\[ \Delta V_t(s) = \alpha \delta_t e_t(s) \]
- As before, if you set \( \lambda \) to 0, you get TD(0)
- If you set \( \lambda \) to 1, you get MC but in a better way
- Can apply TD(1) to continuing tasks
- Works incrementally and on-line (instead of waiting to the end of the episode)
Forward View = Backward View
- The forward (theoretical) view of TD(\(\lambda\)) is equivalent to the backward (mechanistic) view for off-line updating
- The book shows:
\[
\sum_{t=0}^{T-1} \Delta V_t^{TD}(s) = \sum_{t=0}^{T-1} \Delta V_t^{\lambda}(s_t) I_{ss_t}
\]
- The left side gives the backward-view updates, the right side the forward-view updates
- On-line updating with small \(\alpha\) is similar
On-line versus Off-line on Random Walk
- Same 19 state random walk
- On-line performs better over a broader range of parameters
Control: Sarsa($\lambda$)
- Save eligibility for state-action pairs instead of just states
$$e_t(s, a) = \begin{cases}
\gamma \lambda e_{t-1}(s, a) + 1 & \text{if } s = s_t \text{ and } a = a_t \\
\gamma \lambda e_{t-1}(s, a) & \text{otherwise}
\end{cases}$$
$$Q_{t+1}(s, a) = Q_t(s, a) + \alpha \delta_t e_t(s, a)$$
$$\delta_t = r_{t+1} + \gamma Q_t(s_{t+1}, a_{t+1}) - Q_t(s_t, a_t)$$
Sarsa(\(\lambda\)) Algorithm
Initialize \(Q(s,a)\) arbitrarily and \(e(s,a) = 0\), for all \(s,a\)
Repeat (for each episode) :
Initialize \(s,a\)
Repeat (for each step of episode) :
Take action \(a\), observe \(r,s'\)
Choose \(a'\) from \(s'\) using policy derived from \(Q\) (e.g. \(\epsilon\) - greedy)
\(\delta \leftarrow r + \gamma Q(s',a') - Q(s,a)\)
\(e(s,a) \leftarrow e(s,a) + 1\)
For all \(s,a\) :
\(Q(s,a) \leftarrow Q(s,a) + \alpha \delta e(s,a)\)
\(e(s,a) \leftarrow \gamma \lambda e(s,a)\)
\(s \leftarrow s';a \leftarrow a'\)
Until \(s\) is terminal
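The box translates to Python almost line for line. The toy chain below is an assumed environment for illustration (states 0-4, actions left/right, reward 1 for stepping off the right end); the parameter values are arbitrary:

```python
import random

def sarsa_lambda(n_states=5, episodes=300, alpha=0.1, gamma=0.9,
                 lam=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right

    def eps_greedy(s):
        if rng.random() < eps:
            return rng.randrange(2)
        return 0 if Q[s][0] > Q[s][1] else 1    # ties go right

    for _ in range(episodes):
        e = [[0.0, 0.0] for _ in range(n_states)]
        s, a = 0, eps_greedy(0)
        while True:
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states else 0.0
            done = s2 == n_states
            a2 = None if done else eps_greedy(s2)
            delta = r + (0.0 if done else gamma * Q[s2][a2]) - Q[s][a]
            e[s][a] += 1.0
            for i in range(n_states):           # update all pairs, decay traces
                for j in range(2):
                    Q[i][j] += alpha * delta * e[i][j]
                    e[i][j] *= gamma * lam
            if done:
                break
            s, a = s2, a2
    return Q

Q = sarsa_lambda()
```

After training, the greedy action in every state should be "right", and the value of the right action in the state next to the goal should be close to 1.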
Sarsa(\(\lambda\)) Gridworld Example
- With one trial, the agent has much more information about how to get to the goal
- not necessarily the best way
- Can considerably accelerate learning
Three Approaches to $Q(\lambda)$
- How can we extend this to $Q$-learning?
- If you mark every state-action pair as eligible, you back up over a non-greedy policy
- Watkins: Zero out eligibility trace after a non-greedy action. Do max when backing up at first non-greedy choice.
$$e_t(s, a) = \begin{cases}
1 + \gamma \lambda e_{t-1}(s, a) & \text{if } s = s_t, a = a_t, Q_{t-1}(s_t, a_t) = \max_a Q_{t-1}(s_t, a) \\
0 & \text{if } Q_{t-1}(s_t, a_t) \neq \max_a Q_{t-1}(s_t, a) \\
\gamma \lambda e_{t-1}(s, a) & \text{otherwise}
\end{cases}$$
$$Q_{t+1}(s, a) = Q_t(s, a) + \alpha \delta_t e_t(s, a)$$
$$\delta_t = r_{t+1} + \gamma \max_{a'} Q_t(s_{t+1}, a') - Q_t(s_t, a_t)$$
Watkins’s Q(\(\lambda\))
Initialize \(Q(s,a)\) arbitrarily and \(e(s,a) = 0\), for all \(s,a\)
Repeat (for each episode) :
Initialize \(s,a\)
Repeat (for each step of episode) :
Take action \(a\), observe \(r,s'\)
Choose \(a'\) from \(s'\) using policy derived from \(Q\) (e.g. \(\epsilon\) - greedy)
\[a^* \leftarrow \arg\max_b Q(s',b)\] (if \(a'\) ties for the max, then \(a^* \leftarrow a'\))
\[\delta \leftarrow r + \gamma Q(s',a^*) - Q(s,a)\]
\[e(s,a) \leftarrow e(s,a) + 1\]
For all \(s,a\) :
\[Q(s,a) \leftarrow Q(s,a) + \alpha \delta e(s,a)\]
If \(a' = a^*\), then \(e(s,a) \leftarrow \gamma \lambda e(s,a)\)
else \(e(s,a) \leftarrow 0\)
\[s \leftarrow s'; a \leftarrow a'\]
Until \(s\) is terminal
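Watkins's variant changes a tabular Sarsa(λ) sketch in two places: the TD error backs up the max action, and traces are zeroed after a non-greedy action (below, the cut is applied when the action just taken was exploratory, one common reading of the rule). The toy chain environment and parameters are assumptions for the illustration:

```python
import random

def watkins_q_lambda(n_states=5, episodes=400, alpha=0.1, gamma=0.9,
                     lam=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]   # actions: 0 = left, 1 = right
    for _ in range(episodes):
        e = [[0.0, 0.0] for _ in range(n_states)]
        s = 0
        while s < n_states:
            if rng.random() < eps:
                a = rng.randrange(2)                # exploratory action
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1   # greedy, ties go right
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states else 0.0
            q_next = 0.0 if s2 == n_states else max(Q[s2])
            delta = r + gamma * q_next - Q[s][a]    # back up the max action
            e[s][a] += 1.0
            greedy = Q[s][a] == max(Q[s])
            for i in range(n_states):
                for j in range(2):
                    Q[i][j] += alpha * delta * e[i][j]
                    # decay traces after a greedy action, cut them otherwise
                    e[i][j] = gamma * lam * e[i][j] if greedy else 0.0
            s = s2
    return Q

Q = watkins_q_lambda()
```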
Peng’s Q(\(\lambda\))
- Disadvantage to Watkins’s method:
- Early in learning, the eligibility trace will be “cut” (zeroed out) frequently resulting in little advantage to traces
- Peng:
- Backup max action except at end
- Never cut traces
- Disadvantage:
- Complicated to implement
Naïve Q(λ)
- Idea: is it really a problem to backup exploratory actions?
- Never zero traces
- Always backup max at current action (unlike Peng or Watkins’s)
- Is this truly naïve?
- Works well in preliminary empirical studies
What is the backup diagram?
Comparison Task
- Compared Watkins’s, Peng’s, and Naïve (called McGovern’s here) Q(\(\lambda\)) on several tasks.
- See McGovern and Sutton (1997). Towards a Better Q(\(\lambda\)) for other tasks and results (stochastic tasks, continuing tasks, etc)
- Deterministic gridworld with obstacles
- 10x10 gridworld
- 25 randomly generated obstacles
- 30 runs
- \(\alpha = 0.05, \gamma = 0.9, \lambda = 0.9, \varepsilon = 0.05\), accumulating traces
From McGovern and Sutton (1997). Towards a better Q(\(\lambda\))
Comparison Results
From McGovern and Sutton (1997). Towards a better $Q(\lambda)$.
Convergence of the $Q(\lambda)$’s
- None of the methods are proven to converge.
- Much extra credit if you can prove any of them.
- Watkins’s is thought to converge to $Q^*$
- Peng’s is thought to converge to a mixture of $Q^\pi$ and $Q^*$
- Naïve - $Q^*$?
Eligibility Traces for Actor-Critic Methods
- **Critic:** On-policy learning of $V^\pi$. Use TD($\lambda$) as described before.
- **Actor:** Needs eligibility traces for each state-action pair.
- We change the update equation:
$$p_{t+1}(s,a) = \begin{cases} p_t(s,a) + \alpha \delta_t & \text{if } a = a_t \text{ and } s = s_t \\ p_t(s,a) & \text{otherwise} \end{cases} \quad \text{to} \quad p_{t+1}(s,a) = p_t(s,a) + \alpha \delta_t e_t(s,a)$$
- Can change the other actor-critic update:
$$p_{t+1}(s,a) = \begin{cases} p_t(s,a) + \alpha \delta_t [1 - \pi(s,a)] & \text{if } a = a_t \text{ and } s = s_t \\ p_t(s,a) & \text{otherwise} \end{cases} \quad \text{to} \quad p_{t+1}(s,a) = p_t(s,a) + \alpha \delta_t e_t(s,a)$$
where
$$e_t(s,a) = \begin{cases} \gamma \lambda e_{t-1}(s,a) + 1 - \pi_t(s_t,a_t) & \text{if } s = s_t \text{ and } a = a_t \\ \gamma \lambda e_{t-1}(s,a) & \text{otherwise} \end{cases}$$
Replacing Traces
- Using accumulating traces, frequently visited states can have eligibilities greater than 1
- This can be a problem for convergence
Replacing traces: Instead of adding 1 when you visit a state, set that trace to 1
\[
e_t(s) = \begin{cases}
\gamma \lambda e_{t-1}(s) & \text{if } s \neq s_t \\
1 & \text{if } s = s_t
\end{cases}
\]
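The contrast is easiest to see for a single state visited on every step; γλ = 0.9 is an assumed value for the illustration:

```python
def trace_after_visits(n_visits, glam, replacing):
    """Trace value of a state that is visited on every consecutive step."""
    e = 0.0
    for _ in range(n_visits):
        e = 1.0 if replacing else glam * e + 1.0
    return e

# Accumulating traces grow past 1 toward 1 / (1 - glam); replacing traces
# are pinned at 1 no matter how often the state is visited.
assert trace_after_visits(10, 0.9, replacing=True) == 1.0
assert trace_after_visits(10, 0.9, replacing=False) > 1.0
```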
Replacing Traces Example
- Same 19 state random walk task as before
- Replacing traces perform better than accumulating traces over more values of $\lambda$
Why Replacing Traces?
- Replacing traces can significantly speed learning
- They can make the system perform well for a broader set of parameters
- Accumulating traces can do poorly on certain types of tasks
Why is this task particularly onerous for accumulating traces?
More Replacing Traces
- Off-line replacing trace TD(1) is identical to first-visit MC
- Extension to action-values:
- When you revisit a state, what should you do with the traces for the other actions?
- Singh and Sutton say to set them to zero:
\[
e_t(s, a) = \begin{cases}
1 & \text{if } s = s_t \text{ and } a = a_t \\
0 & \text{if } s = s_t \text{ and } a \neq a_t \\
\gamma \lambda e_{t-1}(s, a) & \text{if } s \neq s_t
\end{cases}
\]
Implementation Issues
- Could require much more computation
- But most eligibility traces are VERY close to zero
- If you implement it in Matlab, backup is only one line of code and is very fast (Matlab is optimized for matrices)
Variable $\lambda$
- Can generalize to variable $\lambda$
\[
e_t(s) = \begin{cases}
\gamma \lambda_t e_{t-1}(s) & \text{if } s \neq s_t \\
\gamma \lambda_t e_{t-1}(s) + 1 & \text{if } s = s_t
\end{cases}
\]
- Here $\lambda$ is a function of time
- Could define
\[
\lambda_t = \lambda(s_t) \text{ or } \lambda_t = \lambda^{t/\tau}
\]
**Conclusions**
- Provides efficient, incremental way to combine MC and TD
- Includes advantages of MC (can deal with lack of Markov property)
- Includes advantages of TD (using TD error, bootstrapping)
- Can significantly speed learning
- Does have a cost in computation
Something Here is Not Like the Other
a) Backward View
b) Forward View
CBR in the Pipeline
Marc Goodman
Continuum Software, Inc.
800 West Cummings Park, Suite 4950
Woburn, Mass. 01801
marc@continuumsi.com
Abstract
In a variety of reasoning tasks, even ones for which CBR seems ideally suited, a stand-alone CBR component may not prove adequate. First, the data available in system construction may be too raw or noisy for direct processing and may require sophisticated reasoning before it is in a form suitable for CBR. Second, capacity demands and other run-time constraints may prohibit a straight CBR module from being deployed. This paper describes a pipelined architecture where one or more reasoning steps are used to preprocess data into a form suitable for use in CBR, and CBR is used as a synthesis component for the creation of a stand-alone, run-time database.
Introduction
The SideClick link referral system [Goodman 1998] is a web-based service for resource exploration. Given a URL (most often a link to a particular web page of interest), SideClick can provide a list of related URLs organized by topic as well as a list of related topics. Or, given a topic of interest, SideClick can provide a list of URLs related to that topic as well as other related topics. For example, given a URL for “The Dilbert Zone” [Adams 1998], SideClick returns links for “Over the Hedge” [Fry and Lewis 1998], “Rose is Rose” [Brady 1998], “Peanuts” [Schulz 1998], the United Media comics page [United Media 1998], “Doonesbury” [Trudeau 1998], etc. and the related topics, “Entertainment” and “Comics and Humor.” Clicking on the “Entertainment” topic returns links from baseball, movies, music, magazines, etc. and over 50 related topics from Art to UFOs. By following links and topics of interest, the user is free to discover new, interesting web resources in a serendipitous fashion.
SideClick models the way users of the web link together and organize information as embodied in bookmarks files and other on-line links pages. The core observation in the system is that people who create links pages tend to group links in sections, organized by content, with other similar links. Hence, a web page can be viewed as a case, composed of a number of snippets [Redmond 1992; Kolodner 1993] or microcases [Zito-Wolf and Alterman 1994]. Each snippet contains a group of links, a content header, a pointer to a parent snippet, and a set of pointers to child snippets. For example, a particular links page might contain a snippet consisting of links to peripheral manufacturers. Its header might be something like the text string “Peripherals”. It might appear on the page as a subsection under a supersection called “Computer Hardware,” and it might have child sections such as “Modems,” “Printers,” etc. Each of the child sections and the parent section would also be represented by snippets.
The process of recommending links, conceptually, consists of taking a particular link, retrieving all of the snippets that contain this link, synthesizing the snippets into a representative snippet, and displaying this snippet to the user. The process of listing the links that occur under a particular topic consists of retrieving all of the snippets that were indexed under an appropriate section header, synthesizing the snippets into a representative snippet, and displaying this snippet to the user. Stated more intuitively, the system is saying something like “Given that the user is interested in a particular link, other web users who have been interested in this link have tended to organize it with these other links, under these topics. Therefore, the user should find these links and topics interesting as well.”
Harder than it Sounds
Unfortunately, several factors conspire to make this simple conceptual framework for link recommendation insufficient. First, data on web pages is extremely noisy. This is hardly surprising given that most of these documents are generated by hand, and many of them are generated by people who have only passing familiarity with computers and computer programming. What is surprising is the sheer variety of types of noise in web pages. Some common types of noise include:
• **Markup Noise:** Web pages reflect their organizational structure primarily via the author’s choice of markup, or the way the layout of the page is expressed in terms of markup tags. One author, for example, might choose to create sections using delimited lists of links, with
various levels of headers used to label sections and the relative size of those headers intended to convey scoping information. Another author might present the same information in tabular form, with the headers relegated to a column along the side of the page and the links contained in separate cells in a different column. A third author might display information within free-form descriptive paragraphs that contain embedded links, separated from other sections by horizontal rules. The number of distinct markup styles approaches the number of web authors. This source of noise is further compounded by the majority of authors who use markup tags incorrectly, invert their own markup tags (which browsers simply ignore), and even introduce syntax errors within the tags themselves. Reducing the amount of markup noise is crucial for placing links correctly within snippets as well as understanding the relationships between snippets within a case.
• **URL Noise:** It is unfortunate that URL stands for “Uniform Resource Locator,” not “Unique Resource Locator.” In fact, there are usually several distinct ways of referring to any particular web document. For example, the Netscape Home Page can be found at any of the following URLs: http://www.netscape.com/, http://home.mcom.com/, http://mcom.com/index.html, http://www.home.netscape.com/home/, and several others. Of the 4.5 million distinct URLs referred to by documents within the SideClick casebase, over 500,000 of these URLs are redundant. Successfully canonicalizing URLs prevents the system from referring the user to the same web resource via multiple URLs, as well as increasing the number and usefulness of snippets indexed under those URLs.
• **Section Heading Noise:** As described above, markup noise can make it difficult to identify the piece of text (if any) that identifies the topic of a snippet. However, even if that piece of text is successfully located, different people tend to label the same content differently. For example, the section headings, “Search,” “Search Tools,” “Suchmaschinen,” “Suchdienst,” “Metasearch,” “Keyword Search,” “Search Forms,” “Moteurs De Recherche,” and “Search Engines” all refer to the same topic. Successfully canonicalizing section headings prevents the system from referring the user to multiple versions of the same topic with different names, as well as increasing the number and usefulness of snippets indexed under those section headings. A related but unsolved problem is ambiguity in section headings. For example, some people label links to famous quotations under “Quotations,” while other people label links to stock quotes under “Quotations.” Or, some people might place stock chart links under “Charts,” while other people might place music charts under “Charts.” The result of this ambiguity is that the system currently contains some “interesting” mixed topics.
• **Taxonomic Noise:** Those of us who have experienced the joys of knowledge representation first-hand will not be surprised to learn that what look like section/subsection relationships between snippets often do not correspond to taxonomic or partonomic relationships. For example, one web page might place “Scotland” under “Food.” Perhaps the author intends the section to be “Scottish Foods.” Another author will place “Recipes” under “Scotland,” meaning “Scottish Recipes.” A third author will place “Recipes” under “Food,” and a fourth author will place “Chicken” under “Recipes.” Extracting a meaningful taxonomy of topics from the raw data is currently an unsolved problem.
• **Cobwebs:** It is a big exaggeration to say that half the web is “under construction,” and the other half is missing, relocated, or hopelessly out of date. In actual fact, only 18% of the URLs cited in pages on the web refer to documents that no longer exist, serve only to redirect the user to new locations, or live on servers that aren’t reachable or fail DNS (based on a sampling of over one million commonly cited web documents). The fewer such “cobwebs” that are contained within a service, the more useful that service becomes.
Another factor that makes creating a link referral service difficult is the sheer size of the web. According to Search Engine Watch [Search Engine Watch 1997], AltaVista [AltaVista 1998] had indexed over 100 million web pages in April of 1997, and their Chief Technical Officer, Louis Monier, estimated that there were as many as 150 million distinct pages on the web. Even a small subset of the web will contain millions of documents with tens of millions of snippets. Retrieving and synthesizing these snippets can be very computationally expensive.
Finally, a successful web service is, by definition, a high-volume web service. The most popular websites generate millions of page views per day. A scant million hits a day adds up to over 11 hits per second, and peak access times can easily reach two or three times as many hits per second as the average. At 33 hits per second, 30 msecs per query is about enough time to do three disk seeks. There isn’t a lot of time for complicated run-time analysis.
**CBR in the Pipeline**
The solution we have developed to the above problems is to divide the system into a run-time component that does fast lookup on a pre-built database (or knowledge base), and a development component that builds the database. The development component is further broken down into several distinct processing steps, featuring one or more distinct forms of reasoning/analysis at each step. These processing steps can be loosely grouped into 1) fetching the data, 2) preprocessing the raw data, 3) using CBR to synthesize the run-time database, and 4) accessing the run-time database.
Fetching the Data
The system has been bootstrapped to the point where the analysis of a body of existing documents later in the pipeline has produced a list of canonical URLs to fetch. The actual mechanics of fetching the corresponding web pages are straightforward, and well documented elsewhere (see, for example, SideClick search results for HTTP and RFC [SideClick 1998]).
Preprocessing the Data
Preprocessing the data consists of several reasoning steps: 1) learning a set of filtering rules for URL canonicalization, 2) parsing web pages into cases composed of snippets, and 3) canonicalizing section headers into SideClick topics.
Learning URL Filtering Rules. URL filtering rules are a set of regular expression patterns that map URLs into corresponding URLs that refer to the same document. For example, a filtering rule might specify that if a URL is of the form “http://*/index.html” and there is another URL that is of the form “http://*/” and the two URLs differ only in that one contains the “index.html” at the end and the other doesn’t, then the two URLs probably refer to the same document. Another rule might specify that “www.” in the host name of a URL can usually be stripped out if there is another known URL that differs only in that part of the host name.
Such rules are learned in a two-step process. First, an index of page similarity is created for all of the pair-wise combinations of documents in the set of web pages. Note that determining whether two documents are the same is, itself, a difficult problem. On the one hand, many documents are script generated and differ in the inclusion of banner ads, dates and times, number of page views, etc. on even subsequent fetches of the same document. Such documents will appear to differ, incorrectly, unless suitable fuzzy matching techniques are used with appropriate similarity thresholds. Similarly, pages change over time. Since the spider (the component that fetches the web pages) might take several days to fetch the millions of pages that comprise the set, it is quite possible that some pages will have changed between subsequent fetches. Hence, determining whether two pages are distinct often requires modification based on the time those pages were fetched. On the other hand, many documents from the same site are identical with respect to navigation content, layout, headers, and footers and differ only a small amount on the actual content of the web page. Such pages will appear to be similar if matching thresholds are set too low.
After the index of similarity is generated, a heuristic pattern learning algorithm is applied to generate the filtering rules. For a particular pair of similar pages, the algorithm creates a set of regular expressions of varying generality that describe how one URL can be mapped to another. These candidate rules are scored by applying them to the entire body of URLs, and counts are kept of the number of times a URL is incorrectly mapped into a differing URL, the number of times a URL is correctly mapped into a differing URL, and the number of times a URL is mapped into a URL that appears to differ, but might be the result of a document changing over time. These values are combined heuristically, and the most successful candidate rule is chosen (success is based on the most general rule that doesn’t introduce too many false mappings). The process repeats until all of the URL matches have been accounted for.
Parsing Web Pages into Cases and Snippets. Some organizational and scoping information for a web page is explicit in the (possibly broken) markup for that web page. For example, a delimited list within a delimited list represents that one snippet is a child of another snippet, and the scope of each snippet is defined by the scope of the delimited list. Other organizational information is implicit in the markup. For example, a sequence of markup tags and strings of the form “string <a> string </a> <br>” could be represented by the fuzzy regular expression:

\[(string <a> string </a> <br>)* (p>)* \]

where the first string in each occurrence of the regular expression probably denotes the section heading (the expression is fuzzy because it allows the last “<a> string </a>” of each subexpression to implicitly define two groups of anchors).
Parsing a web page, therefore, consists of two steps. First, a fault-tolerant HTML grammar is used to organize the tags and strings in the web page into a set of scoped subexpressions. Next, for each sequence of tokens and strings within a subexpression, a pattern detector reduces the sequence of tokens into a set of scoped subsequences based on increasingly complex regular expressions. The result of this analysis is a set of fully scoped tokens. “Interesting” scopes are detected and output as scoped snippets, and likely section headers for each snippet are identified and output.
Canonicalizing Section Headers. As previously mentioned, the raw organizational information present in web pages is not sufficient to generate an accurate taxonomy of topics. As such, we have knowledge-engineered a taxonomy of over 3000 topics, by hand, with much suffering and loss of life. The maintenance and extension of this taxonomy is an ongoing process and consumes the bulk of the human labor in the system.
Mapping section headers extracted during the previous processing stage consists of applying a large number of phrase canonicalization rules (which were constructed and are maintained by hand) to the section header, and performing a statistical analysis of how well the resulting section header matches each of the known topics. This analysis is based on morphological analysis of the words in the section header and topic, the number of matching words in the section header, the frequency of occurrence of these matching words in the set of documents as a whole,
and the total length of the section header. Section headers that match topics above a certain threshold are canonicalized into the corresponding SideClick topics. The remaining section headers are rejected, and a knowledge engineer periodically reviews frequently occurring rejected headers for possible inclusion as new topics within SideClick.
The result of these preprocessing steps is a set of relatively clean and well-organized snippets and cases, which are fed into the CBR component.
Synthesizing the Database
Primary functions supported by the run-time system include:
- **Links Related to Links:** Given a URL, retrieve all of the snippets containing that URL. Synthesize these snippets into a new snippet, as follows: 1) count the number of snippets each URL appears in, 2) compare this count to the base probability that the URL will appear in a random collection of snippets, 3) if the URL occurs significantly more frequently than random chance, include the URL in the synthesized snippet.
- **Topics Related to Links:** Given a URL, retrieve all of the snippets containing that URL. Synthesize these snippets into a new snippet, as follows: 1) count the number of snippets under each topic, 2) compare this count to the base probability that a randomly selected snippet will appear under each topic, 3) if the topic occurs significantly more frequently than random chance, include the topic in the synthesized snippet.
- **Links Related to Topics:** Given a topic, retrieve all of the snippets under that topic. Synthesize these snippets into a new snippet, as follows: 1) count the number of snippets each URL appears in, 2) compare this count to the base probability that the URL will appear in a random collection of snippets, 3) if the URL occurs significantly more frequently than random chance, include the URL in the synthesized snippet.
- **Topics Related to Topics:** Consult the knowledge-engineered taxonomy for related topics.
Constructing a run-time database consists of iterating through all of the known URLs and topics, and generating lists of the most closely related URLs and topics along with the strength of the relationship, as described above, and saving these results into a database.
There is no theoretical reason why these functions couldn't be supported by a run-time CBR module. However, there are three practical reasons for using the CBR module to build an optimized run-time database and to respond to most queries using database lookup. The first reason is, of course, speed. Popular URLs, such as Yahoo [Yahoo 1998], occur in tens of thousands of snippets within the case base. Each snippet may, in turn, contain references to tens or hundreds of links. Synthesizing all of these snippets can take orders of magnitude longer than the maximum time allowed for responding to a query.
The second reason for having a run-time system distinct from the CBR module is code complexity. The CBR module requires code for loading cases, organizing case memory, retrieving snippets and synthesizing these snippets. Also, the internal data structures used to represent and index case memory are somewhat elaborate. It is a simple fact that a live system on the world-wide web is not allowed to crash (sometimes they do anyway, which is one of the reasons why large web services run two or three times as many servers in their server farms as they really need to handle capacity). The CBR module weighs in with six times as many lines of code as the run-time system. It is safe to assume that the run-time system is easier to modify and maintain.
Finally, the run-time database is actually smaller than the original case base. Instead of keeping around information about every link that appears in every snippet in every case that occurs in the case base, the run-time system only needs to know the relative strength of the relationship between a particular URL and its most closely related topics and URLs. In fact, the run-time database is small enough to fit within a gigabyte of RAM, and dual 200Mhz Pentium Pro servers with one gigabyte of RAM can be purchased for around $6000 (as of April, 1998). Avoiding any disk lookup whatsoever drastically increases the speed of the run-time system.
Using the Database
As described above, the run-time system consists of a large, precomputed database and a simple lookup mechanism. This run-time system is implemented as a TCP-based server that responds to requests from a set of front-ends. Each front-end is a web server that is responsible for processing web page requests, querying the back-end run-time system for link and topic referral information, and generating suitable HTML web pages. The back-end is capable of handling over 30 requests per second, and most of this time is spent in TCP socket setup and teardown. Perhaps surprisingly, it takes longer to query the back-end and format the web page under Microsoft's IIS web server, with C-language DLLs and Visual Basic Script web page generation under Windows NT than it does to process the back-end queries. Each front-end is only capable of processing around 11 requests per second.
What does this Say about CBR Integration?
The first observation is that while CBR seems to be an ideal technology for solving this problem, significant reasoning work is needed before the available data is in anything like a suitable format for processing. The system described here includes fuzzy page matching, a novel technique for inducing pattern matching rules, a fault tolerant grammar, pattern detection, some simple Natural Language pattern matching, statistical matching of patterns and phrases, and a hand-engineered taxonomy of over 3000 topics before the CBR can even begin. This is on top of
more "conventional" programming tasks such as creating a spider for fetching documents from the world-wide web, creating software for the efficient storage and retrieval of millions of web pages, etc.
The second observation is that even though a CBR module as "master" in a run-time system may be functionally adequate, it may be undesirable on practical grounds due to high-capacity requirements, code complexity and maintenance issues, and case base size.
For these reasons, we have ended up with a pipelined architecture of processing steps from raw data through a standalone database with CBR planted squarely in the middle.
**Is this General?**
While clearly an inappropriate architecture for some reasoning tasks (for example, the Battle Planner system where the ability to retrieve and examine cases forms an integral part of the decision support process [Goodman 1989]), this methodology has been applied to two other systems, Fido the Shopping Doggie [Goodman 1997], and FutureDB.
**Fido** is a web-based shopping service. As in SideClick, web pages are downloaded and preprocessed. In Fido, however, CBR is used to label parts of these web pages as product descriptions, product categories, vendors, prices, etc., based on a case library of pre-labeled web pages. These newly downloaded and labeled web pages are fed into a push-down automaton that uses the labels to construct a database of products and prices. The run-time system allows web users to perform keyword searches on this database to locate products of interest along with links back to the web page from which the product was extracted. As in SideClick, a variety of processing steps are needed to convert raw web pages into cases, and CBR is used as a component in a pipeline to synthesize an efficient run-time database.
In **FutureDB**, a product based on Projective Visualization [Goodman 1995], raw historical data is preprocessed and fused with external data sources, and CBR is used as a key component in constructing a simulator. This simulator is used to project historical data into the future, and the projected data is stored into a database in the same format as the historical database. This allows users to analyze the projected data using the same decision support systems and on-line analytical processing tools that they currently use to examine historical data. Once again, a variety of reasoning techniques are used to preprocess raw data into a form suitable for CBR, and CBR is used in a pipeline to produce a static run-time database.
Hence, while not universal, the architecture described here does support a variety of reasoning systems.
**References**
Problem: The Path Between a CPU Chip and Off-chip Memory is Slow
This path is relatively slow, forcing the CPU to wait for up to 200 clock cycles just to do a store to, or a load from, memory. Depending on your CPU's ability to process instructions out-of-order, it might go idle during this time. This is a huge performance hit!
Solution: Hierarchical Memory Systems, or "Cache"
The solution is to add intermediate memory systems. The one closest to the CPU is small and fast. The memory systems get slower and larger as they get farther away from the CPU.
Cache and Memory are Named by "Distance Level" from the ALU
L3 cache also exists on some high-end CPU chips.
Cache Hits and Misses
When the CPU asks for a value from memory, and that value is already in the cache, it can get it quickly. This is called a cache hit.
When the CPU asks for a value from memory, and that value is not already in the cache, it will have to go off the chip to get it. This is called a cache miss.
While cache might be multiple kilo- or megabytes, the bytes are transferred in much smaller quantities, each called a cache line. The size of a cache line is typically just 64 bytes.
Performance programming should strive to avoid as many cache misses as possible. That's why it is very helpful to know the cache structure of your CPU.
<table>
<thead>
<tr>
<th>Storage Level Characteristics</th>
<th>L1</th>
<th>L2</th>
<th>Memory</th>
<th>Disk</th>
</tr>
</thead>
<tbody>
<tr>
<td>Type of Storage</td>
<td>On-chip</td>
<td>On-chip</td>
<td>Off-chip</td>
<td>Disk</td>
</tr>
<tr>
<td>Typical Size</td>
<td>< 100 KB</td>
<td>< 8 MB</td>
<td>< 10 GB</td>
<td>Many GB</td>
</tr>
<tr>
<td>Typical Access Time (ns)</td>
<td>10 – 50</td>
<td>3 – 25.5</td>
<td>50 – 250</td>
<td>5,000,000</td>
</tr>
<tr>
<td>Scaled Access Time</td>
<td>1 second</td>
<td>33 seconds</td>
<td>7 minutes</td>
<td>154 days</td>
</tr>
<tr>
<td>Bandwidth (MB/sec)</td>
<td>50,000 – 500,000</td>
<td>5,000 – 20,000</td>
<td>2,500 – 10,000</td>
<td>50 – 500</td>
</tr>
<tr>
<td>Managed by</td>
<td>Hardware</td>
<td>Hardware</td>
<td>OS</td>
<td>OS</td>
</tr>
</tbody>
</table>
Usually there are two L1 caches – one for Instructions and one for Data. You will often see this referred to in data sheets as “L1 cache: 32KB + 32KB” or “I and D cache”.
**Spatial and Temporal Coherence**

Successful use of the cache depends on **Spatial Coherence**:

“If you need one memory address’s contents now, then you will probably also need the contents of some of the memory locations around it soon.”

and on **Temporal Coherence**:

“If you need one memory address’s contents now, then you will probably also need its contents again soon.”

If these assumptions are true, then you will generate a lot of cache hits. If these assumptions are not true, then you will generate a lot of cache misses, and you end up re-loading the cache a lot.
How Bad Is It? -- Demonstrating the Cache-Miss Problem
C and C++ store 2D arrays a row-at-a-time, like this, `A[i][j]`
<table>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>5</td>
<td>6</td>
<td>7</td>
<td>8</td>
<td>9</td>
</tr>
<tr>
<td>10</td>
<td>11</td>
<td>12</td>
<td>13</td>
<td>14</td>
</tr>
<tr>
<td>15</td>
<td>16</td>
<td>17</td>
<td>18</td>
<td>19</td>
</tr>
<tr>
<td>20</td>
<td>21</td>
<td>22</td>
<td>23</td>
<td>24</td>
</tr>
</tbody>
</table>
For large arrays, would it be better to add the elements by row, or by column? Which will avoid the most cache misses?
```c
float f = Array[i][j];   // varying j walks across a row (unit stride)
float f = Array[j][i];   // varying j walks down a column (stride of NUM floats)
```
Demonstrating the Cache-Miss Problem – Across Rows
```c
#include <stdio.h>
#include <time.h>

#define NUM 10000

float Array[NUM][NUM];

double MyTimer( );   // wall-clock timer, in seconds (defined elsewhere)

int main( int argc, char *argv[] )
{
    float sum = 0.;
    double start = MyTimer( );
    for( int i = 0; i < NUM; i++ )
    {
        for( int j = 0; j < NUM; j++ )
        {
            sum += Array[i][j];   // access across a row -- unit stride
        }
    }
    double finish = MyTimer( );
    double row_secs = finish - start;
    fprintf( stderr, "By-row: sum = %f, %lf seconds\n", sum, row_secs );
    return 0;
}
```
Demonstrating the Cache-Miss Problem – Down Columns
```c
#include <stdio.h>
#include <time.h>

#define NUM 10000

float Array[NUM][NUM];

double MyTimer( );   // wall-clock timer, in seconds (defined elsewhere)

int main( int argc, char *argv[] )
{
    float sum = 0.;
    double start = MyTimer( );
    for( int i = 0; i < NUM; i++ )
    {
        for( int j = 0; j < NUM; j++ )
        {
            sum += Array[j][i];   // access down a column -- stride of NUM floats
        }
    }
    double finish = MyTimer( );
    double col_secs = finish - start;
    fprintf( stderr, "By-column: sum = %f, %lf seconds\n", sum, col_secs );
    return 0;
}
```
Demonstrating the Cache-Miss Problem
Time, in seconds, to compute the array sums, based on by-row versus by-column order:
[Table lost in extraction: for each dimension NUM (total array size NUM×NUM), the by-row and by-column times; the by-column version takes far longer once the array outgrows the cache.]
Array-of-Structures vs. Structure-of-Arrays:
```c
struct xyz
{
float x, y, z;
};
```
```c
float X[N], Y[N], Z[N];
```
1. Which is a better use of the cache if we are going to be using X-Y-Z triples a lot?
2. Which is a better use of the cache if we are going to be looking at all X's, then all Y's, then all Z's?
I've seen some programs use a "Shadow Data Structure" to get the advantages of both AOS and SOA.
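To picture the trade-off, here is a small sketch that carries the same points in both layouts; the names, sizes, and the one-way conversion are made up for illustration (a real shadow data structure keeps both views updated together):

```c
#define N 4

// Array-of-Structures: x, y, z of one point sit together --
// touching one point pulls its whole triple into a cache line.
struct xyz { float x, y, z; };

// Structure-of-Arrays: all x's are contiguous -- streaming over
// one component touches the fewest cache lines.
struct soa { float X[N], Y[N], Z[N]; };

// Build the SoA "shadow" from AoS data.
void aos_to_soa( const struct xyz *a, struct soa *s )
{
    for( int i = 0; i < N; i++ )
    {
        s->X[i] = a[i].x;
        s->Y[i] = a[i].y;
        s->Z[i] = a[i].z;
    }
}
```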
Computer Graphics is often a Good Use for Array-of-Structures:
```c
struct xyz {
float x, y, z;
} Array[N];
```
```c
glBegin( GL_LINE_STRIP );
for( int i = 0; i < N; i++ )
{
    glVertex3f( Array[i].x, Array[i].y, Array[i].z );
}
glEnd( );
```
A Good Use for Structure-of-Arrays:
```c
float X[N], Y[N], Z[N];
float Dx[N], Dy[N], Dz[N];
```
```c
Dx[0:N] = X[0:N] - Xnow;
Dy[0:N] = Y[0:N] - Ynow;
Dz[0:N] = Z[0:N] - Znow;
```
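The slice notation above is Cilk Plus / CEAN style; in plain C the first line is just a unit-stride loop (the function name here is mine):

```c
// Dx[0:N] = X[0:N] - Xnow;  written as an ordinary loop:
void sub_scalar( const float *X, float Xnow, float *Dx, int n )
{
    for( int i = 0; i < n; i++ )
        Dx[i] = X[i] - Xnow;   // both arrays walked with unit stride
}
```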
Good Object-Oriented Programming Style can sometimes be Inconsistent with Good Cache Use:
```c
class xyz
{
  public:
    float x, y, z;
    xyz *next;
    xyz( );
    static xyz *Head;
};
xyz *xyz::Head = NULL;
```

```c
xyz::xyz( )
{
    // each new node pushes itself onto the head of the list --
    // but new can scatter the nodes all over the heap
    next = Head;
    Head = this;
}
```
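One common workaround (not from the original notes) is to hand nodes out of one contiguous pool, so list neighbors stay close in memory instead of being scattered by individual allocations:

```c
#include <stddef.h>

#define POOLSIZE 1024

struct node
{
    float x, y, z;
    struct node *next;
};

static struct node Pool[POOLSIZE];   // one contiguous block
static int NumUsed = 0;

// Nodes come out in address order, so a list built from them is
// cache-friendly, unlike nodes scattered by separate new/malloc calls.
struct node *pool_alloc( void )
{
    if( NumUsed >= POOLSIZE )
        return NULL;
    return &Pool[ NumUsed++ ];
}
```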
Why Can We Get This Kind of Performance Decrease as Data Sets Get Larger?
We are violating Temporal Coherence
We Can Help the Temporal Problem with Pre-Fetching
We will cover this in further detail when we discuss SIMD
An Example of Where Cache Coherence Really Matters: Matrix Multiply
The usual approach is multiplying an entire A row * an entire B column. Each C element is a single dot product:
\[
C[i][j] = \sum_{k} A[i][k] \times B[k][j]
\]
for \( i = 0 \) to \( i < \text{SIZE} \)
  for \( j = 0 \) to \( j < \text{SIZE} \)
    for \( k = 0 \) to \( k < \text{SIZE} \)
Problem: column \( j \) of the B matrix is not accessed with a unit stride
Scalable Universal Matrix Multiply Algorithm (SUMMA)
Entire A row * one element of B row. This is equivalent to computing one item in each of many separate dot products, added into C as you go:
\[
C[i][j] \mathrel{+}= A[i][k] \times B[k][j]
\]
for \( i = 0 \) to \( i < \text{SIZE} \)
  for \( k = 0 \) to \( k < \text{SIZE} \)
    for \( j = 0 \) to \( j < \text{SIZE} \)
Now the innermost loop walks row \( k \) of B and row \( i \) of C with a unit stride.
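The two loop orders, written out as a small sketch (only the loop nesting differs; the tiny SIZE is for illustration):

```c
#define SIZE 3

// Naive order: k innermost, so B[k][j] jumps a whole row (SIZE
// floats) on every iteration -- not a unit stride.
void mmul_ijk( float A[SIZE][SIZE], float B[SIZE][SIZE], float C[SIZE][SIZE] )
{
    for( int i = 0; i < SIZE; i++ )
        for( int j = 0; j < SIZE; j++ )
        {
            float sum = 0.f;
            for( int k = 0; k < SIZE; k++ )
                sum += A[i][k] * B[k][j];
            C[i][j] = sum;
        }
}

// SUMMA-style order: j innermost, so B[k][j] and C[i][j] are both
// walked along a row with a unit stride.
void mmul_ikj( float A[SIZE][SIZE], float B[SIZE][SIZE], float C[SIZE][SIZE] )
{
    for( int i = 0; i < SIZE; i++ )
        for( int j = 0; j < SIZE; j++ )
            C[i][j] = 0.f;

    for( int i = 0; i < SIZE; i++ )
        for( int k = 0; k < SIZE; k++ )
            for( int j = 0; j < SIZE; j++ )
                C[i][j] += A[i][k] * B[k][j];
}
```

Both orders produce the same C; only the memory access pattern differs.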
Cache Architectures
N-way Set Associative - a cache line from a particular block of memory can appear in a limited number of places in cache. Each "limited place" is called a set of cache lines. A set contains \( N \) cache lines.
The memory block can appear in any cache line in its set.
Most Caches today are N-way Set Associative
N is typically 4 for L1 and 8 or 16 for L2
How do you figure out where in cache a specific memory address will live?

(Memory address in bytes) / (cache line size) \( \Rightarrow \) Cache Line #
(Cache Line #) modulo (number of cache sets) \( \Rightarrow \) Cache Set #
(Memory address in bytes) modulo (cache line size) \( \Rightarrow \) Cache Offset in the Cache Line

To place a new line, pick the Least Recently Used Cache Line in that Cache Set.
[Figure: cache line blocks in memory (the numbers) and the set of cache lines each block maps to (the colors), for a toy cache of 64 bytes. With 3 cache lines per set, this would be called “3-way”.]
A Specific Example with Numbers
Memory address = 1234 bytes
Cache Line Block in Memory = 1234 / 64 = 19
Cache Set # = 19 % 4 = 3
Offset in the Cache Line = 1234 – 19*64 = 18
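The same arithmetic as the worked example, packaged as tiny helper functions (the 64-byte line size is hard-coded here; the set count is a parameter):

```c
// Address -> cache-line block number (64-byte lines).
int cache_block( unsigned long addr )
{
    return (int)( addr / 64 );
}

// Address -> which set of cache lines its block maps to.
int cache_set( unsigned long addr, int numSets )
{
    return (int)( ( addr / 64 ) % (unsigned long)numSets );
}

// Address -> byte offset within its cache line.
int cache_offset( unsigned long addr )
{
    return (int)( addr % 64 );
}
```

Plugging in the example address 1234 with 4 sets reproduces block 19, set 3, offset 18.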
Cache Can Interact with Cores in Unexpected Ways
Each core has its own separate L2 cache, but a write by one core can impact the state of the others.
For example, if one core writes a value into one of its own cache lines, any other core using a copy of that same cache line can no longer count on its values being up-to-date. In order to regain that confidence, the core that wrote must flush that cache line back to memory and the other core must then reload its copy of that cache line.
To maintain this organization, each core’s L2 cache has 4 states (MESI):
1. Modified
2. Exclusive
3. Shared
4. Invalid
A Simplified View of How MESI Works
1. Core A reads a value. Those values are brought into its cache. That cache line is now tagged Exclusive.
2. Core B reads a value from the same area of memory. Those values are brought into its cache, and now both cache lines are re-tagged Shared.
3. If Core B writes into that value, its cache line is re-tagged Modified and Core A’s cache line is re-tagged Invalid.
4. Core A tries to read a value from that same part of memory. But its cache line is tagged Invalid. So, Core B’s cache line is flushed back to memory and then Core A’s cache line is re-loaded from memory. Both cache lines are now tagged Shared.
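The four steps can be replayed with a toy transition function. This is a deliberately simplified model of the walkthrough above, not a full MESI implementation; the function and event names are mine:

```c
enum mesi  { MODIFIED, EXCLUSIVE, SHARED, INVALID };
enum event { MY_READ, MY_WRITE, OTHER_READ, OTHER_WRITE };

// Next state of THIS core's copy of a cache line.
// other_has_copy says whether another core also holds the line.
enum mesi next_state( enum mesi s, enum event ev, int other_has_copy )
{
    switch( ev )
    {
        case MY_READ:
            if( s == INVALID )                                  // miss: (re)load the line
                return other_has_copy ? SHARED : EXCLUSIVE;     // steps 1 and 4
            return s;                                           // hit: no change
        case MY_WRITE:
            return MODIFIED;                                    // step 3, writer's side
        case OTHER_READ:
            return ( s == INVALID ) ? INVALID : SHARED;         // step 2
        case OTHER_WRITE:
            return INVALID;                                     // step 3, reader's side
    }
    return s;
}
```

Replaying steps 1 through 4 from Core A's point of view walks the copy through Exclusive, Shared, Invalid, and back to Shared.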
This is a huge performance hit, and is referred to as False Sharing.
Note that False Sharing doesn’t create incorrect results – just a performance hit. If anything, False Sharing prevents getting incorrect results.
False Sharing – An Example Problem
```c
#include <stdlib.h>

struct s
{
    float value;
} Array[4];

const int SomeBigNumber = 100000000;   // keep less than 2B

omp_set_num_threads( 4 );
#pragma omp parallel for
for( int i = 0; i < 4; i++ )
{
    for( int j = 0; j < SomeBigNumber; j++ )
    {
        // rand() is an unpredictable function, so the compiler
        // can't optimize the for-loop away
        Array[ i ].value = Array[ i ].value + (float)rand( );
    }
}
```
All four Array[i].value elements live in the same cache line, so each thread’s write invalidates the other cores’ copies of that line.
False Sharing – Fix #1
```c
#include <stdlib.h>

#define NUMPAD 15   // NUMPAD = 15 gives each struct its own 64-byte line

struct s
{
    float value;
    int pad[NUMPAD];   // pushes successive value fields onto different cache lines
} Array[4];

const int SomeBigNumber = 100000000;   // keep less than 2B

omp_set_num_threads( 4 );
#pragma omp parallel for
for( int i = 0; i < 4; i++ )
{
    for( int j = 0; j < SomeBigNumber; j++ )
    {
        Array[ i ].value = Array[ i ].value + (float)rand( );
    }
}
```
This works because successive Array elements are forced onto different cache lines, so less (or no) cache line conflicts exist.
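The arithmetic behind the fix: one float plus NUMPAD = 15 pad ints is exactly one 64-byte cache line, checked here on a typical platform where both int and float are 4 bytes:

```c
#define NUMPAD 15

struct padded
{
    float value;        // 4 bytes
    int   pad[NUMPAD];  // 15 * 4 = 60 bytes
};                      // total: 64 bytes = one full cache line
```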
A Simplified View of How MESI Works – One Core’s State Diagram
[Figure: one core’s state diagram. Its transitions, labeled (end states inferred from the walkthrough above):]
- This core reads a value into the cache line (Invalid → Exclusive)
- Another core reads a value into the same cache line (Exclusive → Shared)
- This core writes a value into this cache line (→ Modified)
- This core continues to read and write values into this cache line (stays Modified)
- Another core writes a value into this cache line (→ Invalid)
- This core reads into this cache line; the cache line is written back and reloaded (Invalid → Shared)
Why do these curves look this way?
False Sharing – the Effect of Spreading Your Data to Multiple Cache Lines
[Charts lost in extraction: performance versus number of threads for NUMPAD = 5, 6, 7, 8, 14 and 15. Each time the padding pushes another thread’s value field onto its own cache line, another step of speedup appears; at NUMPAD = 15 each struct fills a full 64-byte line and all four threads scale.]
OK, wasting memory to put your data on different cache lines seems a little silly (even though it works). Can we do something else?
Remember our discussion in the OpenMP section about how stack space is allocated for different threads?
If we use local variables, instead of contiguous array locations, that will spread our writes out in memory, and to different cache lines.
False Sharing – Fix #2: Using local (private) variables
#include <stdlib.h>
struct s
{
float value;
} Array[4];
omp_set_num_threads( 4 );
const int SomeBigNumber = 100000000;
#pragma omp parallel for
for( int i = 0; i < 4; i++ )
{
float tmp = Array[ i ].value;
for( int j = 0; j < SomeBigNumber; j++ )
{
tmp = tmp + (float)rand( );
}
Array[ i ].value = tmp;
}
This works because a localized temporary variable is created in each core’s stack area, so little or no cache line conflict exists
False Sharing – Fix #2 vs. Fix #1
Note that Fix #2 with {1, 2, 4} threads gives the same performance as NUMPAD= {0,7,15}
malloc'ing on a cache line
What if you are malloc'ing, and want to be sure your data structure starts on a cache line?
Knowing that cache lines start on fixed 64-byte boundaries lets you do this. Consider a memory address. The top N-6 bits tell you what cache line number this address is a part of. The bottom 6 bits tell you what offset that address has within that cache line. So, for example, on a 32-bit memory system:
<table>
<thead>
<tr>
<th>Cache line number</th>
<th>Offset in that cache line</th>
</tr>
</thead>
<tbody>
<tr>
<td>32 - 6 = 26 bits</td>
<td>6 bits</td>
</tr>
</tbody>
</table>
So, if you see a memory address whose bottom 6 bits are 000000, then you know that that memory location begins a cache line.
```c
struct xyzw *p = (struct xyzw *) malloc( (ARRAYSIZE)*sizeof(struct xyzw) );
struct xyzw *Array = &p[0];
...
Array[i].x = 10.0;
```
If you wanted to make sure that array of structures started on a cache line boundary, you would do this:
```c
unsigned char *p = (unsigned char *) malloc( 64 + (ARRAYSIZE)*sizeof(struct xyzw) );
int offset = (long int)p & 0x3f; // 0x3f = bottom 6 bits are all 1's
struct xyzw *Array = (struct xyzw *) &p[64-offset];
...
Array[i].x = 10.0;
```
Remember that when you want to free this malloc'ed space, be sure to say:
```c
free( p );
```
not
```c
free( Array );
```
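The pointer arithmetic above can be packaged as a helper. This variant (mine, not from the notes) also leaves an already-aligned pointer where it is instead of always skipping 64 bytes, so it needs up to 63 bytes of slack rather than exactly 64:

```c
#include <stdint.h>

// First 64-byte-aligned address at or after p. p must point into
// a buffer allocated with at least 63 bytes of extra slack.
unsigned char *align64( unsigned char *p )
{
    int offset = (int)( (uintptr_t)p & 0x3f );   // bottom 6 bits
    return &p[ ( 64 - offset ) & 0x3f ];         // 0 if already aligned
}
```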
Now, Consider This Type of Computation
Should you allocate the data as one large global-memory block (i.e., shared)? Or, should you allocate it as separate blocks, each local to its own core (i.e., private)? Does it matter? Yes!
If you allocate the data as one large global-memory block, there is a risk that you will get False Sharing at the individual-block boundaries. Solution: make sure that each individual-block starts and ends on a cache boundary, even if you have to pad it. (Fix #1)
If you allocate the data as separate blocks, then you don’t have to worry about False Sharing (Fix #2), but you do have to worry about the logic of your program remembering where to find each Node #i-1 and Node #i+1.
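For the global-memory-block option, the padding rule is simply: round each per-core block size up to a multiple of the line size (the helper name is mine):

```c
#include <stddef.h>

// Round a block size up to the next multiple of the 64-byte cache
// line, so consecutive per-core blocks start on line boundaries
// and cannot falsely share a line.
size_t round_to_line( size_t bytes )
{
    return ( bytes + 63 ) & ~(size_t)63;
}
```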
Advanced Data Processing in the Business Network System
Daniel Ritter
Abstract—The discovery, representation and reconstruction of Business Networks (BN) from Network Mining (NM) raw data is a difficult problem for enterprises. This is due to huge amounts of e.g. complex business processes within and across enterprise boundaries, heterogeneous technology stacks, and fragmented data. To remain competitive, visibility into the enterprise and partner networks on different, interrelated abstraction levels is desirable.
We show the query and data processing capabilities of a novel data discovery, mining and network inference system, called Business Network System (BNS), that reconstructs the BN - integration and business process networks - from raw data hidden in the enterprises' landscapes. The paper covers both the foundation and the key data processing characteristics of BNS, including its underlying technologies, its overall system architecture, and its data provenance approach.
Index Terms—Data processing, data provenance, information retrieval, network mining.
I. INTRODUCTION
Enterprises are part of value chains consisting of business processes that connect intra- and inter-enterprise participants. The network that connects these participants with their technical, social and business relations is called a Business Network (BN). Even though this network is very important for the enterprise, there are few - if any - people in the organization who understand it, since the relevant data is hidden in heterogeneous enterprise system landscapes. Yet even simple questions about the network (e.g., which business processes require which interfaces, which integration artifacts are obsolete) remain difficult to answer. This makes operation and lifecycle management tasks such as data migration, landscape optimization and evolution hard, with costs that grow with the number of systems. To change that, Network Mining (NM) systems are used to discover and extract raw data [13] - be it technical data (e.g., configurations of integration products like an Enterprise Service Bus (ESB) [8]) or business data (e.g., information about a supplier in a Supplier Relationship Management (SRM) product). The task at hand is to provide a system that automatically discovers and reconstructs the "as-is" BN from the incomplete, fragmented, cross-domain NM data and makes it accessible for visualization and analysis.
Previous work on NM systems [13], their extension towards a holistic management of BN [15] and a cloud-based reference architecture [16] provide a comprehensive, theoretical and practical foundation on how to build a system suited to this task, called the Business Network System (BNS).
In this work we discuss the data processing and provenance requirements of the BNS and shed light into its internal mechanics. The major contributions of this work are (1) a sound list of the most important requirements of the data processing in the BNS, building on previous work, (2) a data provenance approach suitable for these requirements, and (3) a system implementing this architecture for continuous and scalable end-to-end network query, traversal and update processing based on the data transformation and provenance approach. We applied our system to several real-world enterprise landscapes.
Section II guides from the theoretical work conducted in the area of NM [13], Business Network Management (BNM) [15], and its reference architecture [16] to the real-world query and data processing and provenance requirements and capabilities of a BNS (refers to (1)). Section III provides an overview of BNS's data transformations, provenance and data processing, including query and update processing (refers to (2)) and sketches a high-level view on the system’s architecture (refers to (3)). Section IV reviews and discusses related work and systems that influenced BNS. Section V concludes the paper and lists some of the future work.
II. THE BUSINESS NETWORK SYSTEM
The BN consists of a set of interrelated perspectives of domain networks (e.g., business process, integration, social), that provide a contextualized view on which business processes (i.e., business perspective) are currently running, implemented on which integration capabilities (i.e., integration perspective) and operated by whom (i.e., social perspective). To compute the BN, Network Mining (NM) systems automatically discover raw data from the enterprise landscapes [13]. These conceptual foundations are extended to theoretically ground the new BN data management domain [15].
The fundamental requirements and capabilities of a BNS, derived from the theoretical foundations in previous as well as related work, are extensively discussed in [16]. In a nutshell they cover (a) the (semi-)automatic discovery of data within the enterprise landscapes and cloud applications, (b) a common, domain independent, network inference model, (c) the transformation of the domain data into this model, (d) a scalable, continuously running and declaratively programmable inference system, (e) the cross-domain and enterprise/tenant reconstruction, (f) the ability to check the data quality and compliance to the inference model, and (g)
the visualization of different perspectives (i.e., views) on the BN (e.g., business process, integration). When starting with a system, which fulfills these requirements ((a)-(g)), the query and data processing aspects of the BNS summarize to the following:
1) **REQ-1** The client API shall allow remote access as well as scalable query, traversal and full-text search across the interconnected BN perspectives (e.g., through index creation) (from [14]).
2) **REQ-2** The (remote) API shall provide a standard exchange and visualization format (i.e., common standard in NM related areas like BPM) (from [12] and [14]).
3) **REQ-3** The user shall be able to enrich (e.g., labeling, grouping) and to enhance the BN data (e.g., adding/removing nodes and edges) through the client API, while delta-changes from existing data sources and initial loads from new data sources are merged into the existing BN (from [15]).
4) **REQ-4** Through the whole system, the data origin shall be tracked through all transformations from the source to the BN (i.e., data provenance). This shall allow for continuous data source integration, user enrichments/enhancements as well as possible re-deployment from the BN to the data sources (from [15]).
5) **REQ-5** The source data shall be available at all times for continuous re-computation of the network (i.e., even if the original source is not accessible for a while) (from [15] and **REQ-4**).
To sketch an idea on what these requirements mean for the query and data processing of a BNS, Figure 1 helps to provide a high-level map to locate the core data processing capabilities of our BNS. On the bottom is reality - a mix of business process, social and integration artifacts stored in databases, packaged applications, system landscape directories (e.g., SAP SLD [18]), middleware systems (e.g., SAP PI [17]), documents/files, application back-ends, and so on. When pointed to an enterprise data source through configuration by a domain expert, the BNS introspects the source's metadata (e.g., WSDL file for Web service), discovers and transforms the domain data to a common representation (see (a)). Other functional and queryable data sources are similarly processed.
The center of Fig. 1 shows the core elements of a NM system, theoretically discussed in [13] and [15], which computes the perspectives of the BN for client access. After the loaded data has been checked for conformance to the inference model, it is stored as raw data for the continuously running network reconstruction programs using logic programming (i.e., our approach uses Datalog due to the rationale in [16]). Since BN reconstruction works on cross-domain and cross-enterprise data, and (cloud) applications want to access the BN data, the NM-part of the system is located in the public cloud, while the discovery-part is located in the enterprise system landscapes. That means the data sources are highly distributed and permanent, efficient access is not guaranteed. For that, the source data is copied to the Business Network Server by a set of protocol adapters. The data is stored as raw data, but linked to its original source for later reference (see **REQ-5**).
The computation of the network results in interrelated network perspectives, which are accessed by the clients for network visualization, simulation or analytics (see (g), **REQ-1** and **REQ-2**). User updates are brought into the system through the client API and are visible (at least) for the authors (see **REQ-3**). In each of the steps the data provenance is updated to preserve the path to the origin of the data from the client queries to the source models (see **REQ-4**). Together with **REQ-5**, an end-to-end lineage from the original source data artifacts to the visualized instances of the computed network shall be possible.
III. DATA PROCESSING IN THE BUSINESS NETWORK SYSTEM
A. Query Processing Overview
Fig. 1 provides an overview of BNS's internal architecture. At the bottom of the figure are various types of data sources from which BNS can receive data (the default is the XML upload). During the upload of the data it is checked for conformance with the inference model (shown in the center) by an automata-based runtime, compiled and configured from the model. When the check is successful, the data is stored in the knowledge base as raw data (see **REQ-5**). The network inference programs (partially generated from the model) run decoupled from the upload or inbound processing, while working on snapshots of the raw data. The inference result is stored as network data, which automatically updates the indices for full-text search, query and traversal on the data. The client requests are again handled independent of the inbound processing and network inference only on the current BN. The access layer is based on a flexible resource representation from [14], which adapts to changes in the BN model automatically (see **REQ-1**, **REQ-2**). The BNS provides its own network visualization for the different perspectives and contextualization. Fig. 2 shows an excerpt of a real-world integration network perspective and the drill-down to the message flow details (see Fig. 3).
In addition, a Java/OSGi declarative service interface and an HTTP/JSON remote API are provided to build custom UIs, to enrich or enhance the computed "as-is" network, and to build applications for network analytics, optimization or monitoring (see REQ-1).
Fig. 2. Integration network visualization showing a high-level view on an integration network
For instance, the following queries specify a keyword search with search term and restriction of the type in the result set to Host (i.e., the physical machine on which business applications are running):
```plaintext
http://localhost/search
?q=query=term&type=Host...
```
and with a field-specific search criterion:
```plaintext
http://localhost/search
?q=location=Sydney.
```
In the same way, the result set can be defined to return any information in the BN by traversing the network e.g. from a specific participant `system1` (i.e., an application or tenant),
```plaintext
http://localhost/SYSTEM1/
?show=meta,location,host.name,
```
which returns location information of the participant itself and the name of the connected host the participant runs on. Simple Friend of a Friend (FoAF) queries returning, e.g., the hosts of all neighbors of the participant are equally straightforward:
```plaintext
http://localhost/SYSTEM1/neighbors/host/.
```
Due to the decoupling of the data query and traversal components from the network inference and through model-centric index generation, all requests are processed within short time even on larger networks (see [14] for performance numbers).
**B. Update Processing in BNS**
The BNS distinguishes three types of updates, (a) the steady loading of raw data from known and new data sources, which affect the already computed BN from the inference direction (see REQ-5), (b) the systematic enrichment (i.e., labeling, grouping w/o effect on the data source) and (c) enhancement (i.e., re-deployment, possibly affects the data sources) through the GUI and client APIs (see REQ-4).
Fig. 4. Data Provenance from domain data to the queryable network with transformation classes
For case (a), Fig. 4 shows the described BNS architecture with the data streams and their transformations (categorized according to [7]). The data from the sources (e.g., `rsys1`, `out1`, ..., `prop2`, `sys4`) is pushed without transformation (black-box) to the inference system, which stores the conform raw data, while keeping potentially duplicate information. However, the unique identifiers from the source systems may not be unique in the inference system. To avoid "collisions" in case identifiers occur more than once across different sources, the records are stored with a composed key containing their locally unique identifier and their origin. Keeping this in mind, records from the same origin with the same identifier are updated (i.e., remain as one record), while all other cases lead to different records (i.e., same origin, different keys; same key, different origin). That means, if a record `sys("h7", "myH7", "originH7")` is pushed to the inference system, which already contains a record `sys("h7", "", "originH7")` with the same key and origin, the records are already merged in the inbound to `sys("h7", "myH7", "originH7")`. In case of records without any primary key, the identity of the information cannot be determined. Hence a hash function is calculated over the record values, which leads to new records whenever the information is loaded. It is then the task of the inference programs to identify equivalence between the records and choose a meaningful surrogate. There are cases like `same_sys("h7", "h8")` (i.e., an equivalence relation between logical applications or tenants), in which the record has more than one identifier. These are simply represented as a combined primary key. The lineage tracing for the black-box transformation leverages the unique keys within the source combined with the origin information, which directs to the correct source.
In this way, an anchor to the sources is created, which however lies in the storage close to the inference system (see REQ-4, REQ-5).
One of the tasks of the inference programs is to find equivalences between the same entities of the BN model. Due to the nature of equivalence classes, the most common transformations to the BN (i.e., inferred network) are either black-box or aggregators (see Figure 4). The black-box transformations translate one record in the raw data to one record in the network data.
The more difficult case is the aggregation of information. For instance, Fig. 4 shows the reconstruction of the message flow mflow1 from sys5, and out1 raw data, the black-box transformation from sys5 to participant1 (as discussed before) and the aggregation from sys3 and sys4 extended by a fragment prop1, which might carry complementing information, to participant2. For the latter, at least three variants are possible: 1) perform a "destroying merge" that identifies a leading set of data (the leading object or surrogate) and enrich the record by missing information from equivalent objects (e.g., add description "myH7" to the leading object sys("h7", "", "originH7")) and update the surrogate by any change from the sources, 2) perform a "preserving merge", which keeps all equivalent records and identifies a surrogate filled up with missing information (similar to 1), while remembering from which object the information in the surrogate came from, or 3) do not aggregate at all, but make the equivalence information public for the applications and let them handle the merge. Option 3) is clearly the simplest one for the inference system, since it provides all information to the caller, but leaves it with the task of handling equivalences. The major drawback for the BNS is the computation of one connected network, which becomes difficult if equivalences cannot be addressed by only one surrogate. The most extreme alternative to that is 1), which makes the definition of connected components easier, while making update processing from the sources and lineage tracing from the BN (nearly) impossible. The "information preserving", surrogate approach 2) comes with the highest data management efforts, but fulfills REQ-4 to the BNS best. At all times, the lineage tracing down to the sources (i.e., for operations on discovered records) and the steady integration of new sources and updates is granted. 
For instance, if a new source is added, which adds a system with key sys6 equivalent to sys5, then it is simply added to the equivalence class and the surrogate is re-calculated based on the updated information. However, option 2) has some major disadvantages besides its complexity, among them (i) finding a function that calculates a good surrogate, (ii) the constant re-evaluation of the surrogate, and (iii) removing records that contributed nearly exclusively to the surrogate. In the current BNS all of them are mitigated to some degree, but they require further analysis and leave room for further research. To sketch some ideas, approach (i) currently takes the most complete record from the equivalence class and copies its values to the surrogate. If this record was deleted from the source (iii), then the second record is chosen and so on, until the equivalence class is empty and the object is removed from the network. The major issue with (iii) is the user's experience, when between two loads, some objects in the BN provide less information or maybe cannot be found any more due to a sudden lack of information. Currently this can only be prevented by good user enrichment (e.g., labeling). The steady re-evaluation (ii) cannot be avoided, but only optimized (e.g., by a delta-calculation technique, which only re-evaluates the fields that have changed).
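Since the paper does not show its actual Datalog rules, the equivalence handling sketched above might look roughly like the following hypothetical rules over the example facts `sys("h7", "myH7", "originH7")` and `same_sys("h7", "h8")`:

```prolog
% same_sys is an equivalence relation over logical systems
same_sys(X, Y) :- same_sys(Y, X).
same_sys(X, Z) :- same_sys(X, Y), same_sys(Y, Z).

% every raw system record proposes a participant; a surrogate per
% equivalence class is then chosen by the evaluation function
participant(Key) :- sys(Key, _, _).
```

These rules are only an illustrative sketch of how reconstruction could be stated declaratively, not the rules used by the BNS itself.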
The enrichment of BNs (b) stands for non-modifying operations on the network data, like adding labels to network instances or grouping them for more intuitive visualization (see REQ-3). These operations lead to artifacts stored along with the network; each artifact is attached to the leading object (e.g., of a participant equivalence class), which determines its lifecycle. Fig. 4 shows them in the queryable network space, which means that they are treated in the same way as the computed instances (i.e., indexed, searchable and traversable); however, they originate on the surface and will not be deployed into the known data sources.
The enhancement of the BN (c) is treated differently in terms of re-deployment to the sources (see REQ-3). For that, computed BN instances (e.g., participants, message flows) are modified, newly added or deleted. These changes can then be pushed back to the respective source systems. Clearly, the operations on existing instances (e.g., modify, delete) require a bijective mapping from the source to the raw data, and a stringent provenance from the inference model down to the sources. For the surrogates this requires good bookkeeping on how the leading object's information was selected. For instance, if more than one of the equivalent records has a non-empty description that it could contribute to the surrogate, the evaluation function chooses only one of them. In case of changes to the description, the origin can then be clearly identified and the change re-deployed to the source system. With this approach, there are several issues, which are left open for further research. The BNS treats them according to the following leading principles:
- Enhancements from the user are valued higher than the information from the automatic discovery and replace the discovered information within the surrogate (refer to [15] for the theory of moving from the computed "as-is" network to a "to-be" network through expert knowledge, i.e., enhancements)
- The enhancements can be re-deployed to the sources if sufficient lineage tracing exists (i.e., for the creation of new instances, e.g., a participant, the source has to be specified and enough domain-specific information has to be provided to create and activate the instance in the source system. These sources can be middleware systems, in which the new instance should be runnable like any other instance created locally. Since the inference system does not know about source domain specificities, the entered information is passed through to the sources without any further checking)
- The function that calculates the surrogate is supported by a configuration that allows the user to order the source types (e.g., system landscape, runtime, configuration) by relevance, or even to specify on instance level in which order the information from the equivalent records is considered.
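The configuration-driven ordering of equivalent records described in the last principle might be sketched as follows; the relevance values and the field names are purely illustrative assumptions, not the BNS configuration format.

```python
# Assumed relevance ordering of source types; in the BNS this would come
# from user configuration, possibly refined on instance level.
SOURCE_RELEVANCE = {"runtime": 0, "configuration": 1, "system landscape": 2}

def ordered_records(equivalence_class):
    """Return the equivalent records in the configured order, so that more
    relevant sources contribute to the surrogate first."""
    return sorted(
        equivalence_class,
        key=lambda record: SOURCE_RELEVANCE.get(record["source_type"], 99),
    )

records = [
    {"key": "sys5", "source_type": "system landscape"},
    {"key": "sys6", "source_type": "runtime"},
]
print([record["key"] for record in ordered_records(records)])
```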
The update processing from the client API again works via simple HTTP POST, PUT and DELETE requests or through the native Java/OSGi API.
IV. RELATED WORK
For the overall system approach, related work can be found in the area of Process Mining (PM) [1], [2], which sits between computational intelligence and data mining. PM has similar requirements for data discovery, conformance and enhancement with respect to NM [13], but does not work with network models and inference. PM exclusively strives to derive BPM models from process logs; hence PM complements BNM in the area of business process discovery.
Gaining insight into the network of physical and virtual nodes within enterprises is only addressed by the Host entity in NIM, since it is not primarily relevant for visualizing and operating integration networks. This domain is mainly addressed by the IT service management [11] and virtualization community [6], which could be considered when introducing physical entities to our meta-model.
The linked (web) data research shares similar approaches and methodologies, but has so far neglected linked data within enterprises and mainly focused on databases and data management. A distantly related field is the work on software ecosystems [10], in which the notion of interconnected enterprises is defined similarly to BN, but reduced to the common software development process. Hence the approaches for modeling [9] and governance [3] of these ecosystems complement, rather than overlap with, ours.
V. DISCUSSIONS AND FUTURE WORK
In this work, we present insights into a reference implementation of the query and data processing within a Business Network System, based on the theory of Business Networks [13] and Business Network Management [15] as well as the reference architecture discussed in [16]. For that, we added a client API [14] and a data provenance approach to complement the BNS towards an emergent, enterprise-ready system. The architecture constitutes a holistic network data management platform, reaching from information retrieval, network mining and inference up to query and traversal on the network data, while providing insights and solutions for the continuous source data load as well as user enhancements through data provenance.
The data provenance topic for the re-deployment of user information to the sources requires further research. In addition, opportunities for performance improvements lie in the selection of the databases. Significant performance gains are expected from running the BNS on the SAP HANA computing engine; early prototypes show promising results.
REFERENCES
Daniel Ritter received his M.Sc. in computer science and mathematics in 2008 from the University of Heidelberg, Germany. He is currently working as Research and (Software) Development Architect (VP level) in Technology Development with the department of Process and Network Integration at SAP AG, Walldorf, Germany. His current research interests include network mining and reconstruction, logic programming, computer language design and compilation, databases, and data management.
GroddDroid: a Gorilla for Triggering Malicious Behaviors
Adrien Abraham, Radoniaina Andriatsimandefitra, Adrien Brunelat, Jean-François Lalande, Valérie Viet Triem Tong
HAL Id: hal-01201743
https://inria.hal.science/hal-01201743v2
Submitted on 8 Mar 2016
A. Abraham, R. Andriatsimandefitra, A. Brunelat†, J.-F. Lalande* and V. Viet Triem Tong
EPI CIDRE, CentraleSupelec, Inria, Université de Rennes 1, CNRS, IRISA UMR 6074, F-35065 Rennes, France
† ENS Cachan, F-94230 Cachan, France
* INSA Centre Val de Loire, Univ. Orléans LIFO EA 4022, F-18020 Bourges, France
Abstract
Android malware authors use sophisticated techniques to hide the malicious intent of their applications. They use cryptography or obfuscation techniques to avoid detection during static analysis, and they can also avoid detection during dynamic analysis. Frequently, the malicious execution is postponed as long as the malware is not convinced that it is running on a real smartphone of a real user. Nevertheless, we believe that dynamic analysis methods give good results when they really monitor the malware execution. In this article, we propose a method to enhance the execution of the malicious code of unknown malware. We especially target malware with triggering protections, for example branching conditions that wait for an event or expect a specific value of a variable before triggering the malicious execution. In these cases, merely executing the malware is far from sufficient. We propose to force the triggering of the malicious code by combining two contributions. First, we define an algorithm that automatically identifies potentially malicious code. Second, we propose an enhanced monkey called GroddDroid, which stimulates the GUI of an application and forces the execution of some branching conditions if needed. GroddDroid uses this forcing to push the execution flow towards the previously identified malicious parts of the malware and execute them. The source code for our experiments with GroddDroid is released as free software. On a malware dataset that we investigated manually, we have verified that GroddDroid accurately executes the malicious code. Additionally, on a large dataset of 100 malware, we precisely identify the nature of the suspicious code and succeed in executing it in 28% of the cases.
1. Introduction
Between 1% [15] and 9% [9] of Android applications are identified as malware. CheetahMobile reports that most of them come from alternative markets where automatic checks and malware sanitization procedures are missing [9]. Most of the time, users infect their own smartphones with a repackaged version of a legitimate application containing malicious code. Identifying a potential malware by studying the required permissions becomes difficult, especially because developers have difficulty using permissions accurately [22].
To prevent the distribution of malware, Google has developed a service called Bouncer that statically and dynamically analyzes applications submitted to Google Play. Static analysis has strong limitations, since malware resorts to many techniques to hide the malicious behavior within legitimate applications: obfuscated code, reflection, or dynamic libraries. Additionally, a lot of interesting information is only available at runtime, for example the content and the recipient of an SMS, the content of a message received from a remote server, etc. Dynamic analysis can bring more information on malware behaviors. Research efforts have to be made on the setup of efficient dynamic analysis platforms, as it is not reliable to deploy large-scale analysis tools on users' devices.
Dynamic analysis faces several problems. Malware can load code dynamically [20], detect virtual sandboxing of the application [23], or use transformation attacks to escape signature-based techniques [21]. Thus, the effectiveness of dynamic analysis for building real-time detection tools is an interesting and active debate. Dynamic analysis tools are only useful if the malware is executed during the analysis. If the environment is virtualized, if the network is not set up, or if some APIs are missing, simple checks may lead a malware to withhold the execution of its code. We think that this sub-problem should be addressed and is an important first step for dynamic detection solutions.
In this article, we propose a methodology and a tool called GroddDroid to automatically trigger and execute suspicious parts of the code of an application: our goal is to take an application as input, run it on a real smartphone, and modify the control flow of the application as little as possible in order to force the execution of the suspicious parts of the code. To achieve this goal, we take a two-step approach. First, we identify the suspicious parts of the bytecode and compute a score (an indicator of risk) for each function of the malware. Second, we introduce a new GUI stimulator, called GroddDroid, which runs the application by clicking on all detected buttons. Finally, we identify the remaining parts of the malware that have not been executed and force the required control flow statements to push the flow to the unexecuted, previously scored parts.
Our experimental results show that our GroddDroid executor has better code coverage than the Monkey [14] and A3E [7]. Combined with the control flow forcing method, the triggering of malicious code increases, and we measured this improvement on a dataset of malware for which we manually identified the malicious parts. On a larger dataset, we also show that GroddDroid succeeds in executing the previously detected suspicious parts.
This article is structured as follows. Section 2 describes the problem of executing malware and presents a literature review. Section 3 gives a comprehensive overview of our solution. Three sections describe the different features of GroddDroid: Section 4 explains how the malicious code is targeted, Section 5 describes how the interaction with the GUI of the applications is automated, and Section 6 details how GroddDroid can force branches during the execution in order to execute the previously detected suspicious code. Section 7 presents our experimental results based on two datasets, a small one and a large one, and Section 8 concludes the article.
2. State of the art
As the production of Android malware is increasing, writing malware is becoming a regular job: Allix et al. explain that most malicious code is copy-pasted from online tutorials or is a variation of the same original malicious code [2]. Thus, families of malware can be rebuilt [12], and the growing sophistication of each family can be measured [4].
Two main approaches can be considered to inspect the behavior of malware: static approaches, which analyze the available material of the malware such as its bytecode and resources, and dynamic approaches, which run the malware and monitor its behavior. As reported in [18], different monitoring techniques can be used. Tainting techniques follow the information flow in the studied application [6, 11]. Virtual machine event crafting, system call monitoring and method tracing can be implemented in the Dalvik virtual machine or at kernel level [8].
Bläsing et al. propose a hybrid approach, with a first static step and a second, dynamic one that runs in an emulator [8]. The first step statically analyzes the malware to extract relevant patterns such as JNI calls, binary executors, usage of reflection, etc. Then, the second step runs a dynamic analysis and monitors low-level system calls. Nevertheless, the two steps do not cooperate, and the benefits of the static analysis are not reused for the dynamic step. We believe that dynamic approaches are promising as long as they really observe the malicious behavior of a malware. However, malware authors are full of resources to evade dynamic approaches, simply by delaying their malicious execution.
Well-known analysis platforms like Andrubis [16] or SmartDroid [24] do not address this problem. They trigger all possible activities and generate possibly interesting events. But if the malicious code is protected, for instance waiting for a special event, it will never be executed. Thus, automatically running an ordinary application is a difficult challenge. As the Monkey stimulator [14] gives insufficient results, researchers have also contributed to automating the interactions with the GUI. In [10], Choudhary et al. give an overview of the current input generators for Android. Random strategies choose graphical elements or system events in order to stress the application. For example, DynoDroid [17] repeats an "observe-select-execute" loop and implements different strategies for the selection phase. Model-based exploration strategies consider each activity as a state and each event as a possible transition. For example, AndroidRipper [3] discovers new states and generates the possible transitions dynamically during the execution.
Additionally, sophisticated techniques use combinations of methods to help the dynamic exploration, e.g., the exploration strategy of the A3E tool of Azim et al. [7]. Their tool calls the activities that can be triggered by Intents if they are detected in the manifest of the application. This is a simple combination of static analysis and dynamic execution that intends to increase code coverage.
Finally, since malware developers frequently reuse the same benign application to embed different malicious codes, the authors of PuppetDroid propose a solution that reuses previously recorded interactions if the repackaged application's GUI looks similar [13]. This example shows that code coverage of the application is not the best way to study malware.
In this article, we do not care about covering the benign part of the code. Thus, we base our proposal on the idea that if we identify the parts of the code that are possibly malicious, and if a normal execution does not trigger these parts, then we should force the flow of execution in order to reach them during subsequent executions. We show how a prior static analysis helps dynamic analysis in increasing its accuracy. The contribution of our article is precisely a static analysis that identifies the bytecode that seems dangerous, followed by an automatic execution driven by this first analysis. In the next sections, after giving an overview of our solution, we depict all the components that achieve this goal.
3. Overview
Figure 1 gives an overview of our proposal. Our approach can be divided into three steps. First, we instrument the suspected application to observe its behavior under analysis and obtain a reference execution. This instrumentation enables us to learn which branches of the execution are taken during a run (see Control Flow Tracer in Figure 1). The GroddDroid runner then executes the instrumented APK on a smartphone in order to obtain the reference execution.
The second step consists in identifying the possible malicious code inside the malware using a static analysis of its bytecode (Malicious Code Targeting).
During the third step, GroddDroid uses the execution log of the reference execution to determine which control flow has to be forced to reach the parts of the code identified as malicious. A new APK is produced where the control flow is modified accordingly. The GroddDroid runner executes the new APK and new logs are generated. This step can be repeated for processing all the malicious parts of the identified code.
In the following, Section 4 describes the heuristic that identifies the potential malicious code, Section 5 explains how the reference execution is automated and Section 6 presents how the control flow is forced.
4. Automatic identification of malicious code
Android applications are packaged and distributed as APK files. These files are archives that do not contain the original Java code but only the pre-compiled Dalvik bytecode and the resources used by the application. In this article we propose a heuristic for directly targeting suspicious bytecode.
4.1. Handling application’s bytecode
We extract the bytecode using the Soot framework [5], which can represent Java bytecode in several intermediate representations. The main representation is based on the Jimple language, which has the same semantics as the Java language but with fewer instructions (only 15). This reduced instruction set makes Jimple a practical language for static analysis and optimizations. Listing 1 (resp. 2) shows an example of the Jimple (resp. Java) representation of a method syracuse. Type information is still available and control flow constructs are similar: the while loop is simplified into a conditional jump and a backward jump.
With this Jimple representation, Soot allows the application code to be manipulated programmatically: each instruction is wrapped into an object that extends the Unit class. In the example of Listing 1, an instruction such as $i1 = $i0 % 2; is represented by an AssignStmt object (an assignment statement), a subclass of Unit, from which some values can be accessed, like the target of the assignment ($i1) and the RemExpr (remainder expression) of the assigned operation. The control flow of a program can also be analyzed through the conditional and unconditional jump instructions IfStmt and GotoStmt.
In order to target suspicious code, we propose to search for particular types that are more frequently encountered in malware. For extracting these types, we use Soot's ability to give the types used in each instruction of the program.
### 4.2. Suspicious code targeting
Aafer et al. showed that some Java methods of the Android API are noticeably more frequently used in malware code than in benign application code [1]. The difference can go from 20% to 50% of additional usage in the case of some sensitive API calls such as `getSubscriberId`. In the following, we briefly summarize the API calls that are the most impacted by this difference and that will be the core of our scoring function (see [1] for full statistics).
**android.telephony.TelephonyManager:** `getSubscriberId` and `getDeviceId` give a unique identifier of the phone. `getLine1Number` gives the phone number associated with the SIM card.
**android.app.Service:** while services are perfectly common in benign applications, some overridden methods like `onCreate` frequently contain malicious code.
**android.content.pm.PackageManager:** this component allows listing the installed applications and installing new ones.
**android.telephony.SmsManager:** this class contains `sendTextMessage`, which allows sending an SMS.
**java.lang.[Runtime,Process]:** these standard components of Java allow the application to execute native programs (`exec`), monitor their output (`getOutputStream`) and their shutdown (`waitFor`).
We propose to compute a *risk score* for each method in the bytecode: we sum the score associated with each `Unit` of the method, using the scoring constants of Table 1. This table defines a score value for each suspicious class, based on the observations of Aafer et al. [1]. Our heuristic is that the code with the highest score has a higher probability of being malicious.
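The scoring heuristic can be illustrated with a small sketch. The score constants below are placeholders standing in for Table 1 (whose actual values are not reproduced here), and the toy methods and Unit lists are assumptions for illustration.

```python
# Placeholder scoring constants standing in for Table 1; the paper's
# actual values are not reproduced here.
SCORES = {
    "android.telephony.TelephonyManager": 30,
    "android.telephony.SmsManager": 40,
    "android.content.pm.PackageManager": 20,
    "java.lang.Runtime": 25,
}

def method_risk_score(units):
    """Sum the score of every Unit of a method, keyed here by the
    suspicious class the Unit refers to (0 for unknown classes)."""
    return sum(SCORES.get(cls, 0) for cls in units)

def rank_methods(methods):
    """Return (method name, score) pairs, highest risk first."""
    return sorted(
        ((name, method_risk_score(units)) for name, units in methods.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

# Toy application: one suspicious method, one benign GUI method.
app = {
    "sendPremiumSms": ["android.telephony.SmsManager",
                       "android.telephony.TelephonyManager"],
    "drawMenu": ["android.widget.Button"],
}
print(rank_methods(app))
```

The highest ranked method becomes the target whose execution is later forced.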
### 5. Stimulating the graphical user interface
As discussed in the state of the art, many tools have addressed the problem of stimulating the graphical interface in order to obtain good coverage of an application. Unfortunately, most of the open source software cited in [10] is no longer supported. Thus, it becomes technically difficult to reuse these previous contributions to execute modern malware that uses recent versions of Android's APIs. For these reasons, we have chosen to write a new GUI runner, called the GroddDroid runner, and to keep the Monkey [14] as a point of comparison.
#### 5.1. Run by a monkey
The Monkey randomly hits the graphical interface and must be stopped arbitrarily, because there is no guarantee that all possible activities have been visited.
The Monkey is often combined with the generation of events like SMS or phone calls and with the starting of all possible activities and services. Using such techniques [16] helps the Monkey but cannot achieve good results, as we show at the end of this section.
---
5.2. Run by the Gorilla GroddDroid
The GroddDroid runner is based on uiautomator\textsuperscript{4}, a Python wrapper around the Google API for testing purposes. GroddDroid pushes the malware onto the smartphone and launches its main activity. For each displayed activity, GroddDroid collects the graphical elements that may trigger additional code. For now, we only collect Button objects, as our first objective is to trigger the maximum number of activities; we could also manipulate forms, radio buttons, etc. If clicking on a button leads to a new activity, GroddDroid analyzes it and repeats the same operation as before. Otherwise, it triggers the next element of the activity or goes back to the previous activity. GroddDroid also detects three special cases: dead-end activities, where nothing new can be activated, in which case GroddDroid generates the "Go back" event to return to the previous activity; crashes, after which the application is started again, GroddDroid returns to the activity where the crash occurred, and the graphical element that triggered the crash is blacklisted; and loops, where the current activity has already been explored and GroddDroid backtracks.
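The exploration behaviour just described (recursing into new activities, going back at dead ends, blacklisting crashing buttons, and detecting loops) can be sketched over a mock activity graph. The graph, the activity names and the CRASH marker are illustrative assumptions, not the uiautomator-based implementation.

```python
def explore(app, start):
    """Depth-first exploration of a mock activity graph.

    `app` maps an activity to its buttons; a button's target is the next
    activity, or "CRASH" for a button that makes the application crash.
    Visited activities are skipped (loop detection) and crashing buttons
    are blacklisted before the exploration resumes.
    """
    visited, blacklist, order = set(), set(), []

    def visit(activity):
        if activity in visited:            # loop: backtrack
            return
        visited.add(activity)
        order.append(activity)
        for button, target in app[activity].items():
            if (activity, button) in blacklist:
                continue
            if target == "CRASH":          # restart and blacklist the button
                blacklist.add((activity, button))
            elif target is not None:       # new activity: recurse, then go back
                visit(target)

    visit(start)
    return order, blacklist

app = {
    "Main":  {"play": "Game", "about": "About", "bad": "CRASH"},
    "Game":  {"back_to_main": "Main"},     # already-explored target: a loop
    "About": {},                           # dead-end activity
}
print(explore(app, "Main"))
```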
5.3. Code coverage comparison
Figure 2 compares the code coverage of GroddDroid with the Monkey [14] and A3E [7]. For this experiment, we used a large dataset of 100 malware samples, described later in Section 7.1. For each bar of the graph, we count the number of applications that have the same coverage ratio. On the top graph, we consider that a method has been covered if the runner enters the method. On the bottom graph, we consider that a branch is covered if the execution flow executes the first instruction of the branch.
For applications with low coverage (less than 15\%), GroddDroid and Monkey have similar results. This can be explained by the fact that some malware samples crash just after being started: we observed 23 crashes in our dataset of 100 malware. A3E has surprisingly lower performance than the other two. We believe that, because A3E was released several years ago, the tool cannot correctly handle more recent versions of Android applications; better results could probably be obtained with A3E if its source code were updated. For coverages greater than 20\%, GroddDroid is slightly better than Monkey.
Of course, GroddDroid cannot go above 80\%: it is well known that executing 100\% of the code is extremely difficult, as it would require generating all possible inputs and ensuring that no dead code is present. We give more precise results in Section 7.
\textsuperscript{4}https://github.com/xiaocong/uiautomator
6. Forcing malicious code to execute
In this section, we present the ability of GroddDroid to force the conditional branches identified as lying on an execution path leading to the instructions targeted by the algorithm of Section 4. To identify how the control flow should be modified, we build our analysis on a reference execution. First, we compute an execution path from an execution point belonging to the reference execution to the targeted suspicious code. Second, we modify the application bytecode in order to force this new path.
6.1. Control-Flow Tracer
We use Soot to instrument the application bytecode and add tracing information that allows us to know precisely which methods and conditional branches have been explored by the GroddDroid runner. We use the Log class of the Android API, whose static method Log.i prints information to a system log that is readable in real time. The Control-Flow Tracer inserts calls to the Log.i method with unique identifiers, called tags, at the beginning of each method and conditional branch of the application. For large applications, this means adding thousands of these calls, but experiments have shown that it can be done reliably on every tested application.
The GroddDroid runner executes the application once these calls are inserted in the bytecode. This execution forms the reference execution. The printed tags are collected and stored in the Log Collector (cf. Figure 1). Thus, GroddDroid obtains the precise list of branches that have and have not been executed.
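A simplified model of the tracer and log collector might look as follows. The tag format and method names are assumptions; the real instrumentation rewrites Jimple bytecode via Soot rather than operating on Python data.

```python
def instrument(methods):
    """Assign a unique tag to each method branch, standing in for the
    Log.i calls the Control-Flow Tracer inserts into the bytecode."""
    tags, counter = {}, 0
    for method, branches in methods.items():
        for branch in branches:
            tags[f"TAG{counter}"] = (method, branch)
            counter += 1
    return tags

def coverage(tags, log):
    """Split the instrumented branches into executed and unexecuted sets
    from the tags collected in the execution log."""
    executed = {tags[t] for t in log if t in tags}
    return executed, set(tags.values()) - executed

tags = instrument({"onCreate": ["b0", "b1"], "sendSms": ["b0"]})
executed, missing = coverage(tags, ["TAG0"])   # the run only hit onCreate/b0
print(missing)                                  # branches left to be forced
```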
In the following, we explain how we compute an execution path that reaches these parts of the bytecode. We suppose that we want to force a particular targeted method, e.g., the highest scored one, and we show how we modify the reference execution to force its execution.
6.2. Determining an execution path
Our algorithm that determines an execution path to a targeted method is based on the control flow graph of the application. We use control flow graphs (CFG) computed from the bytecode: in these graphs, nodes are instructions of a method and the directed edges between nodes represent the possible succession of instructions. It is relatively easy to compute the control flow graph of each method appearing in the bytecode. Unfortunately, the computation of the control flow graph of the whole application is trickier.
We take as input the CFGs of all methods. These graphs have one entry point and possibly several output points. Any point of a CFG $C_1$ can be connected to an entry point of another CFG $C_2$ if there exists an inter-procedural flow from $C_1$ to $C_2$, e.g., if a node of $C_1$ is an instruction that invokes the method whose CFG is $C_2$. In addition to direct invocations, there are also method calls that indirectly connect two CFGs. These method calls are specific to Android and are related to the creation of application components such as activities and services. For example, to display a new activity, developers must not instantiate an activity object themselves, but rather call the startActivity method of the Android API with an Intent object as argument. The system reacts to this API call by instantiating a new activity object and calling its method onCreate. As Grace et al. [15] reported, even though developers do not explicitly see the calls to onCreate, these system actions are well defined in the documentation. We exploit this knowledge of the well-defined semantics of these API calls to determine the implicit flows of the Android-specific behavior. Thus, we create a control flow graph for the whole application (called ACFG) from the union of the CFGs of its methods. We give an example of the reconstruction of an implicit flow in Figure 3. In an application com.app, the main activity starts a service. The call $r1.startService(r2)$ invokes a method of $r1$, which is a reference to this. Thus, we cannot see the direct link to the method onCreate of MyService, as it is called later by the system. When the service is created, the system also calls the method onStartCommand, where the service runs the functional code, e.g., in this example, the malicious code that sends SMS messages. Thus, we analyze such special invocations and create the two dotted edges in the ACFG to reflect these dependencies.
Once we have computed the ACFG, we can reconstruct the shortest execution path, from a targeted instruction, back to an entry point of the application, or to a method that was executed during the reference execution. This particular execution path becomes our targeted execution path.
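The shortest-path reconstruction above amounts to a plain breadth-first search over the ACFG (searching backwards from the target over reversed edges is equivalent). A minimal sketch, with the graph represented as an adjacency map of hypothetical node labels:

```java
import java.util.*;

// Sketch: shortest execution path from an entry point to a targeted
// node via breadth-first search over the ACFG. BFS guarantees the
// first time the target is reached, the path (read off the parent
// map) has the minimum number of edges.
class PathFinder {
    public static List<String> shortestPath(Map<String, List<String>> acfg,
                                            String entry, String target) {
        Map<String, String> parent = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(entry);
        parent.put(entry, null); // entry has no parent
        while (!queue.isEmpty()) {
            String node = queue.poll();
            if (node.equals(target)) {
                // Walk parents back to the entry point.
                LinkedList<String> path = new LinkedList<>();
                for (String n = target; n != null; n = parent.get(n))
                    path.addFirst(n);
                return path;
            }
            for (String next : acfg.getOrDefault(node, List.of()))
                if (!parent.containsKey(next)) {
                    parent.put(next, node);
                    queue.add(next);
                }
        }
        return List.of(); // target unreachable from this entry point
    }
}
```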
6.3. Forcing branches
We propose here to modify the bytecode of the application to force the execution of our targeted execution path. To achieve the forcing, we first collect the tags of conditional jumps in the targeted execution path. Then, we replace each conditional jump that may divert the execution from the targeted execution path with an unconditional jump, to force the desired branches.
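The replacement step can be sketched on a toy instruction list; the string-based `Instr` representation below is a stand-in for Jimple units (GroddDroid performs this rewriting on Soot's intermediate representation):

```java
import java.util.*;

// Toy sketch of branch forcing. Instructions are plain strings; a
// conditional jump looks like "if <cond> goto labelX". For every
// conditional jump whose outcome could divert execution from the
// targeted path, we rewrite it into the unconditional "goto labelX",
// so the desired branch is always taken.
class BranchForcer {
    public static List<String> force(List<String> instrs, Set<Integer> jumpsToForce) {
        List<String> out = new ArrayList<>(instrs);
        for (int i : jumpsToForce) {
            String instr = out.get(i);
            int gotoPos = instr.indexOf("goto");
            // "if <cond> goto labelX" becomes "goto labelX"
            if (instr.startsWith("if") && gotoPos >= 0)
                out.set(i, instr.substring(gotoPos));
        }
        return out;
    }
}
```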
For example, Listing 3 shows protection code that determines if the execution takes place on an emulator, and starts doing something suspicious otherwise. During a first execution on an emulator, only branch 1 would be explored. Listing 4 shows the same sample in Jimple. To force the other branch of the conditional jump, we replace it with an unconditional jump pointing to the first Unit of branch 2, as shown in Listing 5.
Our targeting algorithm was able to point out the code identified as malicious by the manual analysis. Table 3 details these first results. This first analysis is promising: except for one analysis that crashed, and one analysis (MobiDash) that ranked a legitimate method first but found other methods containing malicious code with secondary scores, the highest ranked methods correspond to the malicious behavior of the malware.
We have also evaluated the results provided by the targeting algorithm on a larger dataset. This second dataset has not been studied as extensively as the previous one, but it contains a higher diversity of malicious code. It contains one hundred malware samples obtained from AndroTotal. We evaluate how many methods are scored by the targeting algorithm and compute the distribution of the score values on this larger dataset. Results are represented graphically, for each malware numbered from 1 to 100 on the x axis, in Figure 4. For each malware we draw a square when a method is scored by the heuristic; the higher the score, the darker the associated color. This second experiment shows that the heuristic has computed a score greater than 0 for 83 of the 100 malware. Moreover, a malware has, on average, 1410 methods with a score greater than 0. In other words, less than one percent of bytecode methods are considered potentially malicious by our targeting algorithm. Furthermore, 35.82% of the scored methods have a score higher than 25, shown by the light gray squares in Figure 4. Only 15 malware present highly scored methods (greater than 150).
This second experiment allows us to conclude that our targeting algorithm identifies only a few methods in the malware code, which reduces the scope of further analysis.
Lastly, Figure 6 depicts the genome of the second dataset, as seen by our heuristic. Many malware use telephony information (IMEI, etc.) and also manipulate SMS. Use of the network is of course common, and cryptographic primitives are used in 27 cases.
**Table 3: Scoring results on the Kharon dataset**
<table>
<thead>
<tr>
<th>Malware</th>
<th>Score of highest ranked method</th>
<th>Successful Targeting</th>
<th>Behavior of most scored method</th>
</tr>
</thead>
<tbody>
<tr>
<td>BadNews</td>
<td>80</td>
<td>ok</td>
<td>gathers user information (phone number, IMEI, ...)</td>
</tr>
<tr>
<td>Cajino</td>
<td>200</td>
<td>ok</td>
<td>sends SMS with parameters from a C&C server</td>
</tr>
<tr>
<td>DroidKungFu</td>
<td>50</td>
<td>ok</td>
<td>starts a binary containing the exploit udev</td>
</tr>
<tr>
<td>MobiDash</td>
<td>147</td>
<td>wrong</td>
<td>gathers user information for legitimate use</td>
</tr>
<tr>
<td>SaveMe</td>
<td>100</td>
<td>ok</td>
<td>sends SMS with parameters from a C&C server</td>
</tr>
<tr>
<td>SimpleLocker</td>
<td>-</td>
<td>crash</td>
<td>-</td>
</tr>
<tr>
<td>WipeLocker</td>
<td>150</td>
<td>ok</td>
<td>sends SMS</td>
</tr>
</tbody>
</table>
**Listing 3: Sample code of a conditional jump**
```java
if (isOnEmulator())
return; // Branch 1
else
manager = SmsManager.getDefault(); // Branch 2
```
**Listing 4: Same sample code, in Jimple**
```java
$z0 = staticinvoke <DummyClass: boolean isOnEmulator()>();
if $z0 == 0 goto label1;
return; // Branch 1
label1: // Branch 2
$r1 = staticinvoke <SmsManager: SmsManager getDefault()>();
```
**Listing 5: Same sample code with forced control flow**
```java
$z0 = staticinvoke <DummyClass: boolean isOnEmulator()>();
goto label1; // conditional jump replaced by an unconditional jump
return; // Branch 1 (no longer reachable)
label1: // Branch 2
$r1 = staticinvoke <SmsManager: SmsManager getDefault()>();
```
Then, we create a new version of the application with this modified control-flow. This new version is a reduction of the original one since it offers a strict subset of the possible executions: all executions of the new version were possible in the original application. Finally, the modified APK with one or several control flow modifications is rebuilt and executed by GroddDroid. With a new run, the runner should collect new tags in the Log Collector, indicating that the previously unexplored branches have been executed, thus triggering the possibly malicious parts of the code.
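Checking that the forced run actually reached the previously unexplored branches amounts to comparing the tags expected along the targeted path against the tags the Log Collector recorded. A minimal sketch (tag names are illustrative):

```java
import java.util.*;

// Sketch: after the forced run, the branches that were planned but
// still not observed in the Log Collector are exactly the difference
// between the expected tag set and the collected tag set. An empty
// result means the targeted execution path was fully exercised.
class TagChecker {
    public static Set<String> missingTags(Set<String> expected, Set<String> collected) {
        Set<String> missing = new HashSet<>(expected);
        missing.removeAll(collected);
        return missing;
    }
}
```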
**7. Experiments**
**7.1. Experimenting the targeting algorithm**
To evaluate the soundness of the targeting algorithm presented in Section 4, we have used a small collection of seven malware samples, the Kharon dataset. Every malware in the dataset has been manually reversed in order to locate and understand its malicious code. More precisely, the malware in this dataset have been picked from the Genome Project [25] and Contagio mobile\(^3\) and are samples of BadNews (2013), a remote administration tool [19]; Cajino (2015), a spyware; DroidKungFu (2011), a remote administration tool [25]; MobiDash (2014), an aggressive adware; SaveMe (2015), a spyware; SimpleLocker (2014), a ransomware; and WipeLocker (2014), a data eraser.
We have run the targeting algorithm presented in Section 4.1 on these malware samples, to check whether it points out the code identified as malicious by the manual analysis.
\(^3\)http://contagiominidump.blogspot.fr/
7.2. Experimenting GroddDroid forcing
To conclude, we have forced the execution of the most scored method for each malware appearing in the two datasets. Our experiments have been done on a Nexus 5 smartphone under Android 4.4, connected to a quad-core PC with 4 GB of RAM. Before executing each malware, we reinstall a fresh operating system in order to remove any modifications that the previous malware may have made. On the large dataset, we obtain a coverage of 16.31% of the methods (10.07% of all branches). When the highest scored method is not executed, GroddDroid performs a second run and forces the required branches. We obtain an extra +0.19% of coverage for methods (+0.41% for branches). As expected, the increase is greater for branches, as GroddDroid forces control flow conditions. These results are better than the coverage obtained with Monkey (14.99% for methods, 9.35% for branches) or with A3E (1.46% for methods, 0.49% for branches). Figures 5 and 7 show the method and branch coverage ratios of GroddDroid (green) for the 100 malware, numbered on the x axis from 1 to 100. The extra bar on top (in black) represents the additional coverage obtained by GroddDroid when it forces branches.
If we consider only the suspicious methods identified by the heuristic of Section 4, the coverage of these methods reaches 24% without any forcing. When GroddDroid forces branches, the coverage of the suspicious methods gains an extra +4%. By comparison, Monkey executes 20% of the targeted methods. Thus, in total, GroddDroid succeeds in executing 28% of the suspicious methods, +8% compared to Monkey. Note that this additional +8% is reduced by the fact that 23 malware crash during the first seconds of execution. Naturally, for these ones, GroddDroid will never be able to reach the targeted method, and neither will Monkey or A3E. For a few malware samples, GroddDroid does not succeed in launching the application, because the malware has no main activity and is only triggered by a system event or should be started as a service. We plan to add this feature to GroddDroid in future work, in order to launch the required services or intents.
8. Conclusion
We have presented GroddDroid, a framework dedicated to the discovery and the automatic execution of malicious code, driven by static analysis of the application bytecode. The originality of GroddDroid is its ability to force branches in order to reach the suspicious code, despite the countermeasures that malware developers use to protect its triggering.
Experimental results on a well-studied set of malware samples showed that the targeting phase is accurate. For a dataset of 100 malware, GroddDroid succeeds in executing 16.31% of the methods on average. Using its ability to force branches, GroddDroid targets the suspicious methods and succeeds in executing an additional +0.19% of all methods. For the methods that are scored as the most suspicious, GroddDroid obtains an execution ratio of 28%.
2020 IEEE International Conference on Software Maintenance and Evolution (ICSME 2020)
Adelaide, Australia
28 September – 2 October 2020
Table of Contents
Message from the General Co-Chairs and the Program Co-Chairs xvii
Organizing Committee xix
Steering Committee xxii
Program Committee xxiii
Keynote Abstracts xxxiv
Evaluated Artifacts xxxvii
Research Track
Effects of Adopting Code Review Bots on Pull Requests to OSS Projects 1
Mairieli Wessel (University of São Paulo), Alexander Serebrenik (Eindhoven University of Technology), Igor Wiese (Universidade Tecnologica Federal do Paraná), Igor Steinmacher (Northern Arizona University), and Marco A. Gerosa (Northern Arizona University)
Can You Capture Information as You Intend to? A Case Study on Logging Practice in Industry 12
Guoping Rong (State Key Laboratory of Novel Software Technology, Software Institute, Nanjing University, China), Yangchen Xu (State Key Laboratory of Novel Software Technology, Software Institute, Nanjing University, China), Shenghui Gu (State Key Laboratory of Novel Software Technology, Software Institute, Nanjing University, China), He Zhang (State Key Laboratory of Novel Software Technology, Software Institute, Nanjing University, China), and Dong Shao (State Key Laboratory of Novel Software Technology, Software Institute, Nanjing University, China)
Haste Makes Waste: An Empirical Study of Fast Answers in Stack Overflow 23
Yao Lu (Key Laboratory of Software Engineering for Complex Systems, National University of Defense Technology, China), Xinjun Mao (Key Laboratory of Software Engineering for Complex Systems, National University of Defense Technology, China), Minghui Zhou (Peking University, China), Yang Zhang (Key Laboratory of Software Engineering for Complex Systems, National University of Defense Technology, China), Tao Wang (Key Laboratory of Software Engineering for Complex Systems, National University of Defense Technology, China), and Zude Li (Central South University, China)
Introducing Differential Privacy Mechanisms for Mobile App Analytics of Dynamic Content 267
Sufian Latif (The Ohio State University), Yu Hao (The Ohio State University), Hailong Zhang (Fordham University), Raef Bassily (The Ohio State University), and Atanas Rountev (The Ohio State University)
Defining a Software Maintainability Dataset: Collecting, Aggregating and Analysing Expert Evaluations of Software Maintainability 278
Markus Schnappinger (Technical University of Munich), Arnaud Fietzke (itestra GmbH), and Alexander Pretschner (Technical University of Munich)
Experiments with Interactive Fault Localization Using Simulated and Real Users 290
Ferenc Horváth (University of Szeged, Hungary), Árpád Beszédes (University of Szeged, Hungary), Béla Vancsics (University of Szeged, Hungary), Gergő Balogh (University of Szeged, Hungary), László Vidács (University of Szeged, Hungary), and Tibor Gyimóthy (University of Szeged, Hungary)
Shake It! Detecting Flaky Tests Caused by Concurrency with Shaker 301
Denini Silva (Federal University of Pernambuco), Leopoldo Teixeira (Federal University of Pernambuco), and Marcelo D'Amorim (Federal University of Pernambuco)
Studying Software Developer Expertise and Contributions in Stack Overflow and GitHub 312
Sri Lakshmi Vadlamani (Carleton University) and Olga Baysal (Carleton University)
Assessing the Characteristics of FOSS Contributions in Network Automation Projects 324
John Anderson (Clemson University), Igor Steinmacher (Northern Arizona University), and Paige Rodeghero (Clemson University)
Pizza versus Pinsa: on the Perception and Measurability of Unit Test Code Quality 336
Giovanni Grano (University of Zurich), Cristian De Iaco (University of Zurich), Fabio Palomba (University of Salerno), and Harald C. Gall (University of Zurich)
Evaluating Code Readability and Legibility: An Examination of Human-Centric Studies 348
Delano Oliveira (Federal University of Pernambuco), Reynde Bruno (Federal University of Pernambuco), Fernanda Madeiral (KTH Royal Institute of Technology), and Fernando Castor (Federal University of Pernambuco)
A Software Maintenance-Focused Process and Supporting Toolset for Academic Environments 360
Ryan Hardt (University of St. Thomas)
A Large-Scale Data Set and an Empirical Study of Docker Images Hosted on Docker Hub 371
Changyuan Lin (University of Alberta), Sarah Nadi (University of Alberta), and Hamzeh Khazaei (York University)
CounterFault: Value-Based Fault Localization by Modeling and Predicting Counterfactual Outcomes 382
Andy Podgurski (Case Western Reserve University) and Yiğit Küçük (Case Western Reserve University)
Commit-Aware Mutation Testing 394
Wei Ma (SnT, University of Luxembourg, Luxembourg), Thomas Laurent (Lero School of Computer Science, University College Dublin), Miloš Ojdanić (SnT, University of Luxembourg, Luxembourg), Thierry Titcheu Chekam (SnT, University of Luxembourg, Luxembourg), Anthony Ventresque (Lero School of Computer Science, University College Dublin), and Mike Papadakis (SnT, University of Luxembourg, Luxembourg)
Remote Pair Programming in Virtual Reality 406
James Dominic (Clemson University), Brock Tubre (Clemson University), Charles Ritter (Clemson University), Jada Houser (Clemson University), Colton Smith (Clemson University), and Paige Rodeghero (Clemson University)
A Cost-Effective Approach for Hyper-Parameter Tuning in Search-Based Test Case Generation 418
Shayan Zamani (University of Calgary) and Hadi Hemmati (University of Calgary)
It Takes a Village to Build a Robot: An Empirical Study of the ROS Ecosystem 430
Sophia Kolak (Columbia University), Afsoon Afzal (Carnegie Mellon University), Claire Le Goues (Carnegie Mellon University), Michael Hilton (Carnegie Mellon University), and Christopher Steven Timperley (Carnegie Mellon University)
How (Not) to Find Bugs: The Interplay between Merge Conflicts, Co-Changes, and Bugs 441
Luis Henrique Vieira Amaral (University of Brasilia (UnB)), Marcos Cesar de Oliveira (University of Brasilia (UnB)), Welder Luz (University of Brasilia (UnB)), José Fortes (University of Brasilia (UnB)), Rodrigo Bonifácio (University of Brasilia (UnB)), Daniel Alencar (University of Otago), Eduardo Monteiro (University of Brasilia (UnB)), Gustavo Pinto (Federal University of Pará (UFPA)), and David Lo (Singapore Management University)
Assessing Mock Classes: An Empirical Study 453
Gustavo Pereira (Federal University of Minas Gerais (UFMG)) and Andre Hora (Federal University of Minas Gerais (UFMG))
Automated Recording and Semantics-Aware Replaying of High-Speed Eye Tracking and Interaction Data to Support Cognitive Studies of Software Engineering Tasks 464
Vlas Zyrianov (University of Illinois at Urbana-Champaign), Drew T. Guarnera (Kent State University), Cole S. Peterson (University of Nebraska-Lincoln), Bonita Sharif (University of Nebraska-Lincoln), and Jonathan I. Maletic (Kent State University)
Characterizing Task-Relevant Information in Natural Language Software Artifacts 476
Arthur Marques (University of British Columbia), Nick C. Bradley (University of British Columbia), and Gail C. Murphy (University of British Columbia)
Improving Testing by Mimicking User Behavior 488
Qianqian Wang (Georgia Institute of Technology) and Alessandro Orso (Georgia Institute of Technology)
Expanding the Number of Reviewers in Open-Source Projects by Recommending Appropriate Developers 499
Aleksandr Chueshev (Sorbonne University/LIP6), Julia Lawall (Inria), Reda Bendraou (Sorbonne University/LIP6), and Tewfik Ziadi (Sorbonne University/LIP6)
How Does Modern Code Review Impact Software Design Degradation? An In-Depth Empirical Study 511
Anderson Uchôa (Pontifical Catholic University of Rio de Janeiro (PUC-Rio)), Caio Barbosa (Informatics Department, Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Brazil), William Oizumi (Informatics Department, Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Brazil), Publio Blenílio (Campus Quixadá, Federal University of Ceará (UFC), Brazil), Rafael Lima (Campus Quixadá, Federal University of Ceará (UFC), Brazil), Alessandro Garcia (Informatics Department, Pontifical Catholic University of Rio de Janeiro (PUC-Rio), Brazil), and Carla Bezerra (Campus Quixadá, Federal University of Ceará (UFC), Brazil)
Revisiting Test Smells in Automatically Generated Tests: Limitations, Pitfalls, and Opportunities 523
Annibale Panichella (Delft University of Technology), Sebastiano Panichella (Zurich University of Applied Science), Gordon Fraser (University of Passau), Anand Ashok Sawant (University of California Davis), and Vincent J. Hellendoorn (University of California Davis)
Lifting the Curtain on Merge Conflict Resolution: A Sensemaking Perspective 534
Caius Brindescu (Oregon State University), Yenifer Ramirez (Oregon State University), Anita Sarma (Oregon State University), and Jensen Carlos (Oregon State University)
On the Impact of Multi-Language Development in Machine Learning Frameworks 546
Manel Grichi (Polytechnique Montreal), Ellis E. Eghan (Polytechnique Montreal), and Bram Adams (Polytechnique Montreal)
Improving Automated GUI Exploration of Android Apps via Static Dependency Analysis 557
Wunan Guo (Shanghai Key Laboratory of Data Science, Fudan University, China), Liwei Shen (Shanghai Key Laboratory of Data Science, Fudan University, China), Ting Su (Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, China), Xin Peng (Shanghai Key Laboratory of Data Science, Fudan University, China), and Weiyang Xie (Shanghai Key Laboratory of Data Science, Fudan University, China)
On the Performance and Adoption of Search-Based Microservice Identification with toMicroservices 569
Luiz Carvalho (Pontifical Catholic University of Rio de Janeiro), Alessandro Garcia (Pontifical Catholic University of Rio de Janeiro), Thelma Elita Colanzi (State University of Maringá), Wesley K. G. Assunção (Federal University of Technology - Paraná), Juliana Alves Pereira (Pontifical Catholic University of Rio de Janeiro), Baldoino Fonseca (Federal University of Alagoas), Márcio Ribeiro (Federal University of Alagoas), Maria Julia de Lima (Tecgraf Institute, Pontifical Catholic University of Rio de Janeiro), and Carlos Lucena (Pontifical Catholic University of Rio de Janeiro)
An Empirical Study of i18n Collateral Changes and Bugs in GUIs of Android Apps
Camilo Escobar-Velásquez (Universidad de los Andes), Michael Osorio-Riaño (Universidad de los Andes), Juan Domínguez-Osorio (Universidad de los Andes), María Arevalo (Universidad de los Andes), and Mario Linare-Vásquez (Universidad de los Andes)
AOBTM: Adaptive Online Biterm Topic Modeling for Version Sensitive Short-Texts Analysis
Mohammad Abdul Hadi (The University of British Columbia) and Fatemeh H Fard (The University of British Columbia)
Why are Some Bugs Non-Reproducible?: An Empirical Investigation Using Data Fusion
Mohammad Masudur Rahman (Polytechnique Montreal), Foutse Khomh (Polytechnique Montreal), and Marco Castelluccio (Mozilla Corporation)
A^3IDENT: A Two-Phased Approach to Identify the Leading Authors of Android Apps
Wei Wang (Tianjin University, Tianjin, China), Guozhu Meng (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Haoyu Wang (Beijing University of Posts and Telecommunications, China), Kai Chen (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China), Weimin Ge (Tianjin University, Tianjin, China), and Xiaohong Li (Tianjin University, Tianjin, China)
Interest of Defect Technical Debt: An Exploratory Study on Apache Projects
Zengyang Li (Central China Normal University), Qinyi Yu (Central China Normal University), Peng Liang (Wuhan University), Ran Mo (Central China Normal University), and Chen Yang (IBO Technology (Shenzhen) Co., Ltd.)
CrossASR: Efficient Differential Testing of Automatic Speech Recognition via Text-to-Speech
Muhammad Hilmi Asyrofi (Singapore Management University), Ferdian Thung (Singapore Management University), David Lo (Singapore Management University), and Lingxiao Jiang (Singapore Management University)
Score-Based Automatic Detection and Resolution of Syntactic Ambiguity in Natural Language Requirements
Mohamed Osama (Deakin University), Aya Zaki Ismail (Deakin University), Mohamed Abdelrazek (Deakin University), John Grundy (Monash University), and Amani Ibrahim (Deakin University)
New and Emerging Results (NIER) Track
Moderate Detection and Removal of Omnipresent Modules in Software Clustering
Keisuke Yano (Fujitsu Laboratories) and Akihiko Matsuo (Fujitsu Laboratories)
Improving Log-Based Anomaly Detection with Component-Aware Analysis
Kun Yin (Chongqing University), Meng Yan (Chongqing University), Ling Xu (Chongqing University), Zhou Xu (Chongqing University), Zhao Li (Chongqing University), Dan Yang (Chongqing University), and Xiaohong Zhang (Chongqing University)
Who (Self) Admits Technical Debt? 672
Gianmarco Fucci (University of Sannio, Italy), Fiorella Zampetti (University of Sannio, Italy), Alexander Serebrenik (Eindhoven University of Technology, The Netherlands), and Massimiliano Di Penta (University of Sannio, Italy)
Investigating the Reproducibility of NPM Packages 677
Pronnoy Goswami (Virginia Tech), Saksham Gupta (Virginia Tech), Zhiyuan Li (Virginia Tech), Na Meng (Virginia Tech), and Daphne Yao (Virginia Tech)
On Package Freshness in Linux Distributions 682
Damien Legay (University of Mons), Alexandre Decan (University of Mons), and Tom Mens (University of Mons)
Fuzzing to Estimate Gas Costs of Ethereum Contracts 687
Daniel Soto (University of Chile, Chile), Alexandre Bergel (University of Chile, Chile), and Alejandro Hevia (University of Chile, Chile)
Regression Testing of Massively Multiplayer Online Role-Playing Games 692
Yuechen Wu (Fuxi AI Lab, NetEase, Inc.), Yingfeng Chen (Fuxi AI Lab, NetEase, Inc.), Xiaofei Xie (Nanyang Technological University), Bing Yu (Kyushu University), Changjie Fan (Fuxi AI Lab, NetEase, Inc.), and Lei Ma (Kyushu University)
SiblingClassTestDetector: Finding Untested Sibling Functions 697
Qian Liang (University of Waterloo) and Patrick Lam (University of Waterloo)
SBFL-Suitability: A Software Characteristic for Fault Localization 702
Yui Sasaki (Osaka University), Yoshiki Higo (Osaka University), Shinsuke Matsumoto (Osaka University), and Shinji Kusumoto (Osaka University)
Examining the Work Experience of Programmers with Visual Impairments 707
Earl Huff (Clemson University), Kwajo Boateng (Clemson University), Makayla Moster (Clemson University), Paige Rodeghero (Clemson University), and Julian Brinkley (Clemson University)
Using Symbolic Execution to Analyze Linux KBuild Makefiles 712
ThanhVu Nguyen (University of Nebraska-Lincoln) and KimHao Nguyen (University of Nebraska-Lincoln)
Few-Shot Guided Mix for DNN Repairing 717
Xuhong Ren (Tianjin University of Technology, China), Bing Yu (Kyushu University, Japan), Hua Qi (Kyushu University, Japan), Felix Juefei-Xu (Alibaba Group, USA), Zhuo Li (Kyushu University, Japan), Wanli Xue (Tianjin University of Technology, China), Lei Ma (Kyushu University, Japan), and Jianjun Zhao (Kyushu University, Japan)
Industry Track
On the Need for Automatic Knowledge Management in Modern Collaboration Tools to Improve Software Maintenance 722
Vipin Balachandran (VMware, Palo Alto, USA)
EWIDL: Single-Source Web API Documentation Management System 723
Michał Michalski (Samsung R&D Institute Poland), Piotr Kosko (Samsung R&D Institute Poland), David Juszczak (Samsung R&D Institute Poland), and Hobum Kwon (Samsung Electronics)
Efficient Bug Triage for Industrial Environments 727
Wei Zhang (Adobe Inc)
Celal Ziftci (Google Inc.) and Diego Cavalcanti (Google Inc.)
From 6.2 to 0.15 Seconds - An Industrial Case Study on Mobile Web Performance 746
Jasper van Riet (Vrije Universiteit Amsterdam), Flavia Paganelli (30MHz), and Ivano Malavolta (Vrije Universiteit Amsterdam)
Incremental Type Migration Using Type Algebra 756
Hyrum K. Wright (Google)
Improving Bug Localization by Mining Crash Reports: An Industrial Study 766
Marcos Medeiros (Federal University of Rio Grande do Norte), Uirá Kulesza (Federal University of Rio Grande do Norte), Rodrigo Bonifácio (University of Brasilia), Eiji Adachi (Federal University of Rio Grande do Norte), and Roberta Coelho (Federal University of Rio Grande do Norte)
Late Breaking Ideas Track
Toward a Definition of Cognitive-Driven Development 776
Alberto Luiz Oliveira Tavares de Souza (Zup Innovation) and Victor Hugo Santiago Costa Pinto (Zup Innovation)
A Blessing in Disguise? Assessing the Relationship between Code Smells and Sustainability 779
Gemma Catolino (Delft University of Technology)
Reducing Accidental Clones Using Instant Clone Search in Automatic Code Review 781
Vipin Balachandran (VMware)
Exploring the Challenges of Cloud Migrations during a Global Pandemic 784
Brock Tubre (Clemson University) and Paige Rodeghero (Clemson University)
Towards a New Test Case Prioritization Approach Based on Fuzzy Clustering Analysis 786
Andreea Vescan (Babes-Bolyai University) and Camelia Şerban (Babes-Bolyai University)
Robin: A Voice Controlled Virtual Teammate for Software Developers and Teams 789
Bruno da Silva (California Polytechnic State University, San Luis Obispo), Chloe Hebert (California Polytechnic State University, San Luis Obispo), Abhishu Rawka (California Polytechnic State University, San Luis Obispo), and Siriwan Sereesathien (California Polytechnic State University, San Luis Obispo)
Exploring Bluetooth Communication Protocols in Internet-of-Things Software Development 792
Tri Minh Triet Pham (Concordia University) and Jinqiu Yang (Concordia University)
Refactoring Recommendations Based on the Optimization of Socio-Technical Congruence
Manuel De Stefano (University of Salerno), Fabiano Pecorelli (University of Salerno), Damian Andrew Tamburri (Jheronimus Academy of Data Science), Fabio Palomba (University of Salerno), and Andrea De Lucia (University of Salerno)
Practitioners’ Insights on Machine-Learning Software Engineering Design Patterns: A Preliminary Study
Hironori Washizaki (Waseda University / National Institute of Informatics / System Information / eXmotion), Hironori Takeuchi (Musashi University), Foutse Khomh (Polytechnique Montréal), Naotake Natori (AISIN SEIKI Co., Ltd.), Takuo Doi (Lifematics Inc.), and Satoshi Okuda (Japan Advanced Institute of Science and Technology)
Graph Neural Network-Based Vulnerability Prediction
Qi Feng (National University of Defense Technology), Chendong Feng (National University of Defense Technology), and Weijiang Hong (National University of Defense Technology)
Tool Demonstrations
Guilherme Lacerda (Unisinos, UFRGS), Fabio Petrillo (UQAC), and Marcelo S. Pimenta (UFRGS)
Teddy: Automatic Recommendation of Pythonic Idiom Usage for Pull-Based Software Projects
Purit Phan-udom (Mahidol University), Naruedon Watanakul (Mahidol University), Tattiya Sakulniwat (Mahidol University), Chaiyong Ragkhitwetsagul (Mahidol University), Thanwadee Sunetnanta (Mahidol University), Morakot Choetkirtikul (Mahidol University), and Raula Gaikovina Kula (Nara Institute of Science and Technology (NAIST))
JCoffee: Using Compiler Feedback to Make Partial Code Snippets Compilable
Piyush Gupta (Indraprastha Institute of Information Technology Delhi), Nikita Mehrotra (Indraprastha Institute of Information Technology Delhi), and Rahul Purandare (Indraprastha Institute of Information Technology Delhi)
A Toolset to Support a Software Maintenance Process in Academic Environments
Ryan Hardt (University of St. Thomas)
QScored: An Open Platform for Code Quality Ranking and Visualization
Vishvajeet Thakur (Himachal Pradesh University), Marouane Kessentini (University of Michigan), and Tushar Sharma (Siemens Corporate Technology)
WebRTS: A Dynamic Regression Test Selection Tool for Java Web Applications
Zhenyue Long (GuangDong Power Grid, GuangDong), Zeliu Ao (University of Chinese Academy of Sciences, China), Guoquan Wu (University of Chinese Academy of Sciences), Wei Chen (University of Chinese Academy of Sciences, China), and Jun Wei (University of Chinese Academy of Sciences, China)
Kaya: A Testing Framework for Blockchain-Based Decentralized Applications
Zhenhao Wu (Peking University, China), Jiashuo Zhang (Peking University, China), Jianbo Gao (Peking University, China), Yue Li (Peking University, China), Qingshan Li (Peking University, China), Zhi Guan (Peking University, China), and Zhong Chen (Peking University, China)
Doctoral Symposium
Post Proposal (Late Pre-Doctoral Track)
Automatic Support for Multi-domain Model Management
Wesley Torres (Eindhoven University of Technology), Mark G. J. van den Brand (Eindhoven University of Technology), and Alexander Serebrenik (Eindhoven University of Technology)
Verifying and Testing Concurrent Programs Using Constraint Solver Based Approaches
Dhriti Khanna (IIIT-Delhi, Delhi, India), Rahul Purandare (IIIT-Delhi), and Subodh Sharma (IIIT-Delhi)
Integration of Program Slicing with Cognitive Complexity for Defect Prediction
Basma S. Alqadi (Imam Muhammad Ibn Saud Islamic University, Kent State University) and Jonathan I. Maletic (Kent State University)
Debugging Declarative Models in Alloy
Guolong Zheng (University of Nebraska-Lincoln), Hamid Bagheri (University of Nebraska-Lincoln), and ThanhVu Nguyen (University of Nebraska-Lincoln)
Post Doctoral Track
From Transient Information to Persistent Documentation: Enhancing Software Documentation
Felipe Ebert (Eindhoven University of Technology)
Registered Reports
Mobile App Energy Consumption: A Study of Known Energy Issues in Mobile Applications and their Classification Schemes - Summary Plan
Ali Alotaibi (University of Southern California), James Clause (University of Delaware), and William G.H. Halfond (University of Southern California)
Newcomer Candidate: Characterizing Contributions of a Novice Developer to GitHub
Ifraz Rehman (Nara Institute of Science and Technology), Dong Wang (Nara Institute of Science and Technology), Raula Gaikovina Kula (Nara Institute of Science and Technology), Takashi Ishio (Nara Institute of Science and Technology), and Kenichi Matsumoto (Nara Institute of Science and Technology)
Automatic Identification of Rollback Edit with Reasons in Stack Overflow Q&A Site 856
Saikat Mondal (University of Saskatchewan, Canada), Gias Uddin (University of Calgary, Canada), and Chanchal K. Roy (University of Saskatchewan, Canada)
The Making of Accessible Android Applications: An Empirical Study on the State of the Practice 857
Marianna Di Gregorio (HCI-UsE Lab - University of Salerno, Italy), Dario Di Nucci (University of Tilburg / Jheronimus Academy of Data Science, The Netherlands), Fabio Palomba (SeSa Lab - University of Salerno, Italy), and Giuliana Vitiello (HCI-UsE Lab - University of Salerno, Italy)
DocGen2
Leveraging Textual and Non-Textual Features for Documentation Decluttering 862
Giuseppe Colavito (University of Bari), Pierpaolo Basile (University of Bari), and Nicole Novielli (University of Bari)
Source Code Based On-Demand Class Documentation Generation 864
Mingwei Liu (Fudan University), Xin Peng (Fudan University), Xiujie Meng (Fudan University), Huanjun Xu (Fudan University), Shuangshuang Xing (Fudan University), Xin Wang (Fudan University), Yang Liu (Fudan University), and Gang Lv (Fudan University)
Learning Based and Context Aware Non-Informative Comment Detection 866
Mingwei Liu (Fudan University), Yanjun Yang (Fudan University), Xin Peng (Fudan University), Chong Wang (Fudan University), Chengyuan Zhao (Fudan University), Xin Wang (Fudan University), and Shuangshuang Xing (Fudan University)
Junxiao Han (Zhejiang University), Shuiguang Deng (Zhejiang University), David Lo (Singapore Management University), Chen Zhi (Zhejiang University), Jianwei Yin (Zhejiang University), and Xin Xia (Monash University)
Author Index 879
Removal of group scheme
11/10/2015 07:57 PM - Mark Abraham
We can do this readily once we're sure we have no tests that need the group scheme. Regression tests have been ported, TPI tests have been disabled, if we find any others we can discuss.
At some future time, hopefully soon, we will
- decide how to read tables from grompp or mdrun for the Verlet scheme (some work in Gerrit now)
- got GPU and CPU kernels working for user tables (some work in Gerrit now, none for CPU)
- got no-PBC simulations working for nxnxm (in Gerrit now)
- test particle insertion working for nbnxm
- FEP kernel ported to nbnxm-style pair lists
Anything else?
We removed AdResS long ago.
Some suggested patches that might make sense to implement this are found in the checklist.
- remove SIMD group scheme kernels
- change handling of cutoff-scheme in grompp, so that mdp files for group scheme are rejected with a useful error message
- make cutoff-scheme optional in mdp field (permit users to "choose" only Verlet, but ignore it when they do)
- remove mdrun options for table input
- add fatal error for pbc = no
- add fatal error for TPI
- change handling of t_inputrec.cutoff_scheme so mdrun issues a fatal error for group-scheme tpr
- remove t_forcerec.cutoff_scheme, along with ecuts enum and all code paths called by ecuts_GROUP
- clean up t_forcerec fields no longer used
- clean up Ewald code no longer used
- move t_inputrec.cutoff_scheme to end of t_inputrec, like other removed features
- update docs to remove references to old cutoff schemes (e.g. rlist)
- stop grompp doing anything with charge groups (and update docs to note that such topology fields are now ignored)
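The grompp-related items in the checklist above (reject group-scheme .mdp files with a useful error, make cutoff-scheme an optional field that can only "choose" Verlet) could be sketched as a small validation routine. This is a purely hypothetical illustration; none of these names match the real GROMACS sources.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Hypothetical sketch of the suggested grompp behaviour: the
// cutoff-scheme field is optional, only Verlet is accepted, and a
// group-scheme value is rejected with a useful error message.
std::string validateCutoffScheme(const std::string& mdpValue)
{
    if (mdpValue.empty() || mdpValue == "Verlet" || mdpValue == "verlet")
    {
        // Users may "choose" Verlet, but it is the only option anyway.
        return "Verlet";
    }
    if (mdpValue == "group" || mdpValue == "Group")
    {
        throw std::invalid_argument(
            "The group cutoff scheme has been removed; please use "
            "cutoff-scheme = Verlet (or omit the field entirely).");
    }
    throw std::invalid_argument("Unknown cutoff-scheme: " + mdpValue);
}
```

The same shape of check would apply to the t_inputrec handling in mdrun, which the checklist asks to turn into a fatal error for group-scheme .tpr files.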
Subtasks:
- Task #2422: write C kernel for tables in Verlet scheme
Related issues:
- Related to GROMACS - Feature #1837: Design of new table classes
- Related to GROMACS - Feature #1347: future of tables
- Related to GROMACS - Feature #1666: new approach for Verlet-scheme kernel generation
- Related to GROMACS - Task #1971: Removing buggy features vs. keeping workflows
- Related to GROMACS - Feature #2931: Tables in Verlet kernels
Associated revisions
- Revision 85d918db - 11/15/2015 08:18 PM - Mark Abraham
Remove AdResS
This feature will disappear with the group scheme, so we might as well get it out of the way first. Doing this removal on its own might help re-implement some time, if someone was keen.
Removed bVir, bPress, bSurf fields of t_mdebin, because all MD algorithms now support such calculations.
gmx grompp now issues a fatal error if the main adress .mdp option is on, and otherwise ignores the obsolete fields (like we do with other .mdp options we've removed).
mdrun can read old .tpr files, but issues a fatal error if AdResS was active in them.
gmx dump and gmx compare ignore all AdResS related fields.
Other tools can still read such .tpr files for their other content.
Removed Sebastian Fritsch from GROMACS 2016 contributor list, since he only worked on AdResS features. Christoph Junghans made other contributions that are still useful, and so remains.
Removed obsolete literature references
Also fixed some incorrect doxygen of init_forcerec().
Part of #1852
Change-Id: I22fa0fe480148aeda0ace194646a5ec2f3d20a8c
Revision 65e33d97 - 02/21/2016 10:30 PM - Mark Abraham
Separate table construction
Construction of tables for the group scheme, pair interactions and dispersion correction are now separated. The resulting tables are never re-used for something else. This uses slightly more memory, but makes the logic rather more simple. Some of the tables are now held by reference by their owners, rather than by value, which might improve cache locality a little.
With this change, we can implement the table support for the Verlet scheme without getting involved with the group-scheme code, and will have an easier time removing the group scheme.
Refs #1666, #1852
Change-Id: I8ca608f0e41b02723e6080b04d9e7f049900
Revision d89dbd07 - 11/20/2018 05:32 PM - Paul Bauer
Change shell minimization nm calculation to Verlet
To make sure that shell minimization code works with the Verlet scheme the test is moved to use that.
Part of #1852
Change-Id: I7c387a75438d2b6d8f975df83331a2008798e59
Revision 7ca9f08c - 02/04/2019 09:10 PM - Mark Abraham
Remove SIMD support from group scheme
It might take a bit longer to craft the patches that remove the rest of the group scheme, but we may as well speed up our compilation times first.
Refs #1852
Change-Id: I082e40e04678c744d9da8119333210878294a021
Revision ce17e81d - 04/03/2019 05:36 PM - Mark Abraham
Initial deactivation of group scheme.
mdrun now gives a fatal error with a group-scheme .tpr, so anybody using one won't get a segfault.
do_force now only has one implementation, which suits those working on improved force calculations.
More removal of inactive code will follow later. Noted TODO to fix the release notes properly in such a commit.
Refs #1852
Change-Id: I3b13135565951f4d7f872dd3b8518860eccf9db0
Revision 2d983f41 - 04/16/2019 01:50 PM - Szilárd Páll
Group scheme related cleanup
Removed:
- SR force calculation invocation
- LR correction invocations
- SR free energy component accumulation
Minor cleanup.
Refs #1852
Change-Id: I4e22986279039a0f49a5a8be3c447f4115308492
Revision b62fccc55 - 06/30/2019 11:13 PM - Kevin Boyd
Remove group scheme checks from runner
refs #1852
Change-Id: I906fc7c063694fbbd17128ee0d162259040306
History
#1 - 11/11/2015 12:06 AM - Mark Abraham
We're not likely to have any kind of twin-range / MTS support by the time we ship GROMACS 2016, so there's a bunch of old .tprs that can no longer be run. I don't think it is wise to run pre-4.6 .tpr files that happen to use a single-range scheme that happens to fit the requirements of the Verlet scheme, because any such simulations were semantically different (no buffer) and can't be compared with new ones. Defaulting them to a no-buffer version of the Verlet scheme is not terrible, but such simulations are arguably wrong, and they still always get a potential shift and so aren't readily comparable. So, I think there is a serious case for mdrun in GROMACS 2016 to refuse to run all pre-4.6 .tpr files. I see this very much as a step in the direction of "correct by default," and I'd certainly put that objective before "forward compatible .tpr files." (Of very much secondary concern is that this lets us remove small amounts of code that supports reading such files.)
Gerrit received a related patchset '1' for Issue #1852.
Uploader: Mark Abraham (mark.j.abraham@gmail.com)
Change-Id: I22fa0fe480148aed0ace194646a5ec2f3d20a8c
Gerrit URL: https://gerrit.gromacs.org/5311
#3 - 02/18/2016 11:16 PM - Gerrit Code Review Bot
Gerrit received a related patchset '8' for Issue #1852.
Uploader: Mark Abraham (mark.j.abraham@gmail.com)
Change-Id: I8ca668f0e41b02723e680b80b04d9e7f048900
Gerrit URL: https://gerrit.gromacs.org/5132
#4 - 02/26/2016 11:55 AM - Mark Abraham
- Related to Feature #1837: Design of new table classes added
#5 - 02/26/2016 11:55 AM - Mark Abraham
- Related to Feature #1347: future of tables added
#6 - 02/26/2016 11:56 AM - Mark Abraham
- Related to Feature #1666: new approach for Verlet-scheme kernel generation added
#7 - 06/01/2016 01:59 PM - Mark Abraham
- Target version changed from 2016 to 2018
#8 - 07/11/2016 10:43 PM - Mark Abraham
- Target version changed from 2018 to 2016
Mark Abraham wrote:
We're not likely to have any kind of twin-range / MTS support by the time we ship GROMACS 2016, so there's a bunch of old .tprs that can no longer be run. I don't think it is wise to run pre-4.6 .tpr files that happen to use a single-range scheme that happens to fit the requirements of the Verlet scheme, because any such simulations were semantically different (no buffer) and can't be compared with new ones. Defaulting them to a no-buffer version of the Verlet scheme is not terrible, but such simulations are arguably wrong, and they still always get a potential shift and so aren't readily comparable. So, I think there is a serious case for **mdrun in GROMACS 2016 to refuse to run all pre-4.6 .tpr files**. I see this very much as a step in the direction of "correct by default," and I'd certainly put that objective before "forward compatible .tpr files." (Of very much secondary concern is that this lets us remove small amounts of code that supports reading such files.)
Note to self - find such a .tpr (from an old redmine?) and see what happens now.
#9 - 07/28/2016 05:31 PM - Mark Abraham
- Target version changed from 2016 to 2018
Mark Abraham wrote:
Note to self - find such a .tpr (from an old redmine?) and see what happens now.
Ran with the old regressiontests/complex/fe_test and got
Fatal error:
Twin-range simulations are no longer supported
which is fine for 2016.
By the time we remove the group scheme, then there will be no way to reliably run a group-scheme .tpr. I don't think we should write code to detect whether there's a buffer and whether its size is consistent with the current default buffers, and then remove charge groups and unilaterally apply a potential shift, etc. So in practice, .tpr files with versions before those of 4.6 will be unsupported once we remove the group scheme.
#10 - 03/15/2017 05:39 PM - Mark Abraham
- Target version changed from 2018 to future
unlikely to happen for 2017
#11 - 03/15/2017 06:09 PM - Roland Schulz
I strongly suggest we remove the group kernels right after the 2017 release.
#12 - 04/10/2018 04:00 PM - Mark Abraham
The Verlet scheme does not currently support energy group exclusions, though I suspect this is not a hard thing to implement, given the other kinds of exclusions currently supported.
#13 - 05/20/2018 10:43 AM - Roland Schulz
I don't think we should wait on that. Wouldn't it have been added after 2 years if it was an essential feature? Users can always use the old version if they want to use it before it is added back (if it isn't added back before 2019 is released).
#14 - 05/20/2018 07:28 PM - Mark Abraham
Roland Schulz wrote:
I don't think we should wait on that. Wouldn't it have been added after 2 years if it was an essential feature? Users can always use the old version if they want to use it before it is added back (if it isn't added back before 2019 is released).
At some point we are likely to have to accept that we won't make all things work. For example, IIRC membed depends on such exclusions. Given the range of alternative approaches for doing this (including using old GROMACS for membed), the relative ease of reimplementing it once we have more API like functionality working, and the amount of preliminary work needed now to be able to produce test cases that will show that the new implementation on Verlet exclusion stuff does all the same things that the old one does, the effort doesn't make much sense.
It's nice having been a community code that accepted lots of contributions, but we can't let that be an anchor around our collective neck.
#15 - 10/01/2018 08:37 AM - Mark Abraham
- Description updated
TPI is only supported with the group scheme
#16 - 10/01/2018 08:38 AM - Mark Abraham
- Related to Task #1971: Removing buggy features vs. keeping workflows added
#17 - 10/01/2018 08:47 AM - Roland Schulz
Is it going to be OK to remove them right after the 2019 branch is created?
#18 - 10/01/2018 08:50 AM - Erik Lindahl
Let's wait until we make the release, but then it's fine. The reason for waiting is that experience tells me we'll have a ton of patches during the beta phase of release-2019 that also need to be backported, so we don't want to have huge divergence in the branches already during the beta.
#19 - 10/01/2018 02:04 PM - Mark Abraham
I'm happy with changes (in master) during the beta phase that don't involve a lot of code movement (e.g. enable c++14, stop group scheme kernel testing, update cmake version and use better implementations), but much less happy with general refactoring with many-LOC-but-minor changes that make for merge complexity, or something like removing handling of charge groups from DD code.
#20 - 11/20/2018 05:33 PM - Gerrit Code Review Bot
Gerrit received a related patchset '2' for Issue #1852.
Uploader: Paul Bauer (paul.bauer.q@gmail.com)
Change-Id: regressiontests~release-2018~I7c3977a75438d2b6d8f97f5df83331a2008798e59
Gerrit URL: https://gerrit.gromacs.org/8732
#21 - 01/12/2019 04:03 PM - Gerrit Code Review Bot
Gerrit received a related patchset '1' for Issue #1852.
Uploader: Kevin Boyd (kevin.boyd@uconn.edu)
Change-Id: gromacs~master~I906ffc7c063694fbbd17128ee0d1e62259040306
Gerrit URL: https://gerrit.gromacs.org/8964
#22 - 01/12/2019 07:06 PM - Mark Abraham
I think that in the master branch we might remove the SIMD and interaction-specific kernels, just leaving the generic one, so that we can have it available in case we want it while testing the new Verlet tables? Then again, it's ok to test that against the 2019 branch.
#23 - 01/28/2019 02:56 PM - Mark Abraham
- Description updated
https://gerrit.gromacs.org/#/c/8946/ disabled TPI testing until TPI has been ported to run with the Verlet scheme.
We've decided that we can't have all these big re-implement tasks blocking progress here, so I have edited the description accordingly.
#24 - 01/28/2019 03:20 PM - Mark Abraham
- Description updated
Suggested subtasks now in the description. Some of them make sense to do in order from the top, some can start now.
We do want old .tpr files to be readable (e.g. as input for analysis tools) but never able to be used for computing forces.
Note that the free-energy kernel in src/gromacs/nonbonded is still used from the Verlet scheme for FEP, so it and supporting headers for data structures will have to remain for now.
#25 - 01/28/2019 03:56 PM - Gerrit Code Review Bot
Gerrit received a related patchset '1' for Issue #1852.
Uploader: Mark Abraham (mark.j.abraham@gmail.com)
Change-Id: gromacs~master~I082e40e04678c744d9da8119333210878294a021
Gerrit URL: https://gerrit.gromacs.org/9052
#26 - 02/12/2019 05:12 PM - Mark Abraham
- Description updated
removed checklist, were no longer using the redmine feature
#27 - 03/27/2019 05:23 PM - Mark Abraham
Just to be clear - we have made a decision to remove the group scheme for 2020. Ideally we will find time to replace e.g. table support for Verlet, but we can't have this legacy code in the way of multiple projects.
#28 - 04/03/2019 04:27 PM - Gerrit Code Review Bot
Gerrit received a related patchset '1' for Issue #1852.
Uploader: Mark Abraham (mark.j.abraham@gmail.com)
Change-Id: gromacs~master~I3b1313565951f4d7872ddf3b8518860eccf0b
Gerrit URL: https://gerrit.gromacs.org/9395
#29 - 04/29/2019 11:31 AM - Szilárd Páll
- Related to Feature #2931: Tables in Verlet kernels added
#30 - 09/27/2019 03:13 PM - Erik Marklund
Just saw the release notes for 2020 beta, which led me to find this thread/issue. The description at the top of this issue says "got no-PBC simulations working for nxnxm (in Gerrit now)". What is the status for that? I can't find it anywhere. Can someone please point me to whatever work is being done to get no-PBC working again? I'd be happy to help in some way because that functionality is important for our future and current work.
#31 - 02/28/2020 08:55 AM - Mark Abraham
Erik Marklund wrote:
> Just saw the release notes for 2020 beta, which led me to find this thread/issue. The description at the top of this issue says "got no-PBC simulations working for nxnxm (in Gerrit now)". What is the status for that? I can't find it anywhere. Can someone please point me to whatever work is being done to get no-PBC working again? I'd be happy to help in some way because that functionality is important for our future and current work.
That was some work Berk had done (IIRC) and was on Gerrit but private. I'd be happy to help bring it forward if Berk can share it!
#32 - 03/03/2020 02:27 PM - David van der Spoel
I cannot seem to find anything on gerrit, maybe because it is private?
#33 - 03/03/2020 09:03 PM - Berk Hess
No, I have had plans but there is no code.
But I think that the only thing needed is a suitable estimate of the extent of the search grid. Not all atoms need to fall within the grid bounds, atoms outside can be (are automatically) put into the nearest cell. But for reasonable performance not too many atoms should be outside the grid and the grid should not be overly large.
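The estimate Berk describes could look roughly like the following: take the bounding box of the coordinates, pad it a little so most atoms land inside, and clamp any stragglers into the nearest cell. This is a minimal sketch under that assumption; the names and the margin choice are illustrative, not GROMACS code.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <vector>

// Illustrative search-grid extent for a no-PBC system: the padded
// bounding box of all coordinates. Atoms that drift outside it later
// are simply assigned to the nearest cell rather than lost.
struct GridExtent
{
    std::array<double, 3> lower;
    std::array<double, 3> upper;
};

GridExtent estimateSearchGrid(const std::vector<std::array<double, 3>>& x,
                              double margin)
{
    GridExtent g;
    for (int d = 0; d < 3; ++d)
    {
        g.lower[d] = 1e30;
        g.upper[d] = -1e30;
    }
    for (const auto& r : x)
    {
        for (int d = 0; d < 3; ++d)
        {
            g.lower[d] = std::min(g.lower[d], r[d]);
            g.upper[d] = std::max(g.upper[d], r[d]);
        }
    }
    for (int d = 0; d < 3; ++d)
    {
        g.lower[d] -= margin;
        g.upper[d] += margin;
    }
    return g;
}

// Atoms outside the extent are put into the nearest cell by clamping
// the computed cell index to the valid range.
int cellIndex(double coord, double lower, double cellSize, int numCells)
{
    int i = static_cast<int>((coord - lower) / cellSize);
    return std::clamp(i, 0, numCells - 1);
}
```

The open question in the comment, how large to make the margin so that not too many atoms end up outside while the grid stays reasonably small, is exactly the tuning this sketch leaves out.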
Grouping and joining transformations in the data extraction process
Marcin Gorawski*, Paweł Marks
Institute of Computer Science, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland
Abstract
In this paper we present a method of describing ETL processes (Extraction, Transformation and Loading) using graphs. We focus on implementation aspects such as division of a whole process into threads, communication and data exchange between threads, deadlock prevention. Methods of processing of large data sets using insufficient memory resources are also presented upon examples of joining and grouping nodes. Our solution is compared to the efficiency of the OS-level virtual memory in a few tests. Their results are presented and discussed.
1. Introduction
Nowadays data warehouses gather tens of gigabytes of data. Before loading into the warehouse, the data is often read from many various sources. These sources can differ in data format, so proper data transformations must be applied to make the data uniformly formatted. In consecutive steps the data set is filtered, grouped, joined, aggregated and finally loaded to a destination, which can be one or more warehouse tables. The whole process of reading, transforming and loading data is called the data extraction process (ETL).
The transformations used in the ETL process differ in complexity. Some are simple (e.g. filtration, projection), whereas others are long-running and require a lot of operational memory (e.g. grouping, joining). However, the common feature of the transformations is that each one contains at least one input and one output. This allows us to describe the extraction process using a graph whose nodes correspond to objects performing operations on tuples, and whose edges define data flow paths.
Most commercial tools, like Oracle WB, do not consider the internal structure of transformations or the graph architecture of ETL processes. Exceptions are the research works [1,2], where the authors describe the ETL ARKTOS (ARKTOS II) tool. It
*Corresponding author: e-mail address: Marcin.Gorawski@polsl.pl
can (graphically) model and execute practical ETL scenarios, providing primitive expressions that bring control over typical tasks using a declarative language. Work [3] presents advanced research on prototypes containing the AJAX data cleaning tool.
To optimize the ETL process, a dedicated extraction application is often designed, adjusted to the requirements of a particular data warehouse system. Based on the authors’ experiences [4,5], a decision was made to build a developmental ETL environment using JavaBeans components. A similar approach was proposed in the meantime in work [6]: a J2EE architecture with the ETL and ETLLet container was presented there, providing efficient ways of executing, controlling and monitoring ETL process tasks for the continuous data propagation case.
Further speeding up of the ETL process forced us to give up the JavaBeans platform. The ETL-DR environment [7] is a successor to ETL/JB and DR/JB [8]. It is a set of Java object classes used by a designer to build extraction applications, analogous to the JavaBeans components in the DR/JB environment. However, object properties are saved in an external configuration file, which is read by an environment manager object; this relieves us from recompiling the application each time the extraction parameters change. In comparison to ETL/JB and DR/JB, we significantly improved the processing efficiency and the computational complexity of the most important transformations: grouping and joining. The possibility of storing data on disk was added for the case when the data set requires much more memory than is available.
In the following sections we present in detail a method of describing ETL processes using graphs and we show how this description influences the implementation. The problems resulting from the graph usage are also discussed and the methods of data processing using insufficient memory resources are presented.
2. Extraction graph
Operations performed during the extraction process can be divided into three groups:
– reading source data,
– data transformations,
– writing data to a destination.
Fig. 1. One of the simplest extraction graphs. Node E is an extractor, node \( T \) is a transformation and node \( I \) is an inserter
Nodes belonging to the above-mentioned operation groups are, respectively: extractors (E), transformations (T) and inserters (I). From the graph point of view, extractors have only outputs, transformations have both inputs and outputs, whereas inserters have inputs only. By connecting inputs to outputs we create a connection net that defines the data flow paths (Fig. 1). Data flows inside a node in one direction only, from the inputs to the outputs, never in the opposite direction. It is also assumed that the connection net contains no closed loops, i.e., no path of the graph leads back to a node already visited. Such a net of nodes and connections is a directed acyclic graph (DAG).
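The acyclicity requirement on the connection net can be checked mechanically. A minimal sketch in Python (the node and edge representation is our own, not the ETL-DR API): topological processing via Kahn's algorithm succeeds exactly when the graph is a DAG.

```python
from collections import defaultdict

def is_acyclic(nodes, edges):
    """Kahn's algorithm: True iff the connection net contains no closed loop."""
    indegree = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for src, dst in edges:                 # edge: data flows src -> dst
        succ[src].append(dst)
        indegree[dst] += 1
    ready = [n for n in nodes if indegree[n] == 0]   # extractors start here
    visited = 0
    while ready:
        n = ready.pop()
        visited += 1
        for m in succ[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    return visited == len(nodes)           # all nodes ordered => no cycle

# The E -> T -> I graph of Fig. 1 is a valid extraction graph:
print(is_acyclic(["E", "T", "I"], [("E", "T"), ("T", "I")]))  # True
```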
3. ETL-DR data extraction environment
ETL-DR is our research environment designed in Java. It uses the extraction graph idea presented above to describe extraction processes. During processing, each graph node is associated with a thread that is an instance of an extractor, a transformation, or an inserter.
Available components are:
1. Extractors
- FileExtractor (FE) – reads tuples from a source file,
- DBExtractor (DE) – reads tuples from a database,
2. Transformations
- AggregationTransformation (AgT) – aggregates a specified attribute,
- FilterTransformation (FiT) – filters the stream of tuples,
- FunctionTransformation (FuT) – user-definable tuple transformation,
- GeneratorTransformation (GeT) – generates ID for each tuple,
- GroupTransformation (GrT) – grouping,
- JoinTransformation (JoT) – joining,
- MergeTransformation (MeT) – merges two streams of tuples,
- ProjectionTransformation (PrT) – projection,
- UnionTransformation (UnT) – union,
3. Inserters
- FileInserter (FI) – writes tuples to a destination file,
- DBInserter (DI) – writes tuples to a database table via JDBC interface,
- OracleDBInserter (ODI) – writes tuples to a database using Oracle specific SQL*Loader,
4. Specials
- VMQueue (VMQ) – FIFO queue which stores data on a disk.
Most of the components process data on the fly: each tuple is transformed or analyzed as soon as it is received, with no need to gather the whole data set first. The exceptions are the joining node JoT, the grouping node GrT and the VMQ queue.
3.1. Implementation of graph nodes interconnections
In order to facilitate analysis of interconnections between the graph nodes we have to describe the structure of inputs and outputs of the ETL-DR extraction graph nodes. Each node has a unique ID. Each node input contains ID of a source node assigned by the graph designer, and an automatically assigned number of an output channel of the source node. A node output is a multichannel FIFO buffer with the number of channels equal to the number of inputs connected to the node (Fig. 2). When a node produces output tuples, it puts them into its output, where they are grouped into tuple packets. Upper limit of the packet size is defined by the designer. Packets are gathered in queues, separately for each output channel. The queue size is also limited to avoid unnecessary memory consumption.
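The multichannel output buffer described above can be sketched as follows (class and method names are our own; packet size and queue limit are illustrative):

```python
from collections import deque

class NodeOutput:
    """Multichannel FIFO output: tuples are copied to every channel and
    grouped into packets; each channel queue has a bounded length."""
    def __init__(self, n_channels, packet_size, queue_limit):
        self.packet_size = packet_size
        self.queue_limit = queue_limit
        self.queues = [deque() for _ in range(n_channels)]
        self.open = [[] for _ in range(n_channels)]   # packet being filled

    def put(self, tup):
        # Refuse (producer must halt) if any consumer queue is full.
        if any(len(q) >= self.queue_limit for q in self.queues):
            return False
        for ch, packet in enumerate(self.open):
            packet.append(tup)
            if len(packet) == self.packet_size:
                self.queues[ch].append(list(packet))
                packet.clear()
        return True

    def get_packet(self, channel):
        q = self.queues[channel]
        return q.popleft() if q else None
```

A producing node that gets `False` from `put()` is halted until some `get_packet()` call frees space; this blocking behaviour is exactly what makes the deadlock discussed in Sect. 3.2 possible.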

3.2. Data exchange between nodes and a risk of deadlock
Let us analyze a case of processing performed by a part of the graph presented in Fig. 3a. The function node FuT(11) produces tuples with attributes (eID, date, transactionsPerDay), and the grouping node GrT(12) computes an average number of transactions for each employee. This is similar to the SQL query below:
```sql
SELECT eID, AVG(transactionsPerDay) AS avgTPD
FROM GrTFuT
GROUP BY eID
```
The joining node JoT(13) performs an action defined by the following SQL query:
```sql
SELECT s1.eID, s1.date, s1.transactionsPerDay, s2.avgTPD
FROM JoTFuT s1, JoTGrT s2
WHERE s1.eID = s2.eID
```
Grouping and joining transformations in the data extraction process 139
Even simple operations such as grouping and joining are dangerous because they can cause a deadlock. This is a consequence of the data transfer method used between node threads.
The joining node works as follows: it receives tuples from the slave input and puts them into a temporary buffer, then it receives tuples from the primary input. Each tuple from the primary input is checked against the tuples in the temporary buffer according to the specified join condition. In the presented example, the slave input is the one connected to the grouping node, as it is quite likely that grouping decreases the size of the data set, so fewer tuples have to be kept in memory. Tuples generated by the function node are simultaneously gathered in both output channels of the node, for the nodes JoT(13) and GrT(12). The grouping node aggregates data all the time, but the joining node waits for the grouped data first and does not yet read anything from the function node. Once the output queue size limit is exceeded, the function node is halted until the queue size drops below the specified level. This way a deadlock occurs:
- the node FuT(11) waits until the node JoT(13) starts reading data from it,
- the node GrT(12) waits for the data from the node FuT(11),
- the node JoT(13) waits for the data from the node GrT(12).
Fig. 3. Typical deadlock prone graph nodes connections (a) and a way of deadlock avoidance by the use of VMQueue component (b)
To eliminate the cause of the deadlock we have to make sure that the data from the function node FuT(11) are fetched continuously without exceeding the queue size limit. To this end we created a special VMQueue component: a FIFO queue able to store data on disk. It reads tuples from its input regardless of whether they can be passed on. As long as tuples are fetched from the VMQ node continuously, it does nothing more than transfer data from input to output. Otherwise, it writes tuples to disk to avoid overfilling the output queue of its source node. Later, when the VMQueue destination resumes processing, the tuples are read from disk and sent to the queue output. Inserting a VMQueue node between FuT(11) and JoT(13) avoids the deadlock (Fig. 3b).
3.3. Formal definition of the deadlock prone graph nodes subset
A deadlock may occur if two or more data flow paths that split in one node of the graph meet again in another node. In other words, a given node \( X \) is connected with one of its direct or indirect source nodes by two or more paths. This lets us conclude that node \( X \) must have more than one input.
Let us represent a set of source nodes of the node \( X \) as \( \text{SourceNodes}(X) \), and a set of source nodes of the i-th input of \( X \) as \( \text{InputSourceNodes}(X,i) \). We can define:
- \( \text{InputSourceNodes}(X,i) = \text{SourceNodes}(X.\text{in}[i].\text{sourceID}) \cup \{X.\text{in}[i].\text{sourceID}\} \)
- \( \text{SourceNodes}(X) = \emptyset \) if \( X \) is an extractor,
- \( \text{SourceNodes}(X) = \bigcup_{i=1}^{n} \text{InputSourceNodes}(X,i) \) if \( X \) is a transformation or an inserter
- \( \text{CommonNodes}(X,i,j) = \text{InputSourceNodes}(X,i) \cap \text{InputSourceNodes}(X,j) \)
- \( \text{LastNode}(N) = \{X \in N : \text{SourceNodes}(X) = N \setminus \{X\}\} \)
If for each node \( X \) of an extraction graph, which is not an extractor, the following condition is satisfied:
\[
\forall_{i=1}^{n} \forall_{j=1}^{n} \; i \neq j \Rightarrow \text{CommonNodes}(X,i,j) = \emptyset
\]
then deadlock cannot occur. Otherwise a deadlock is possible and we should insert a VMQueue component into the graph to avoid hanging the application. Inserting a VMQueue node makes sense only behind the nodes from the \( \text{LastNode}(\text{CommonNodes}(X,i,j)) \) set, that is, the set of last nodes of the common part of the two data flow paths. In the example presented in the previous section it was the FuT(11) node (Fig. 3b).
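The condition above is easy to evaluate mechanically. In the sketch below (our own representation: a dict mapping each node ID to the list of its inputs' source IDs), `SourceNodes` and `CommonNodes` follow the definitions directly:

```python
def source_nodes(graph, x, _memo=None):
    """SourceNodes(X): all direct and indirect sources of node x.
    `graph` maps node id -> list of source ids (empty for extractors)."""
    if _memo is None:
        _memo = {}
    if x not in _memo:
        acc = set()
        for src in graph[x]:
            acc |= source_nodes(graph, src, _memo) | {src}
        _memo[x] = acc
    return _memo[x]

def deadlock_prone(graph, x):
    """True iff CommonNodes(X, i, j) is non-empty for some pair of inputs,
    i.e. two data flow paths into x share a common source node."""
    inputs = [source_nodes(graph, s) | {s} for s in graph[x]]
    return any(inputs[i] & inputs[j]
               for i in range(len(inputs))
               for j in range(i + 1, len(inputs)))

# The graph of Fig. 3a: FuT(11) feeds both GrT(12) and JoT(13).
g = {"E": [], "FuT": ["E"], "GrT": ["FuT"], "JoT": ["FuT", "GrT"]}
print(deadlock_prone(g, "JoT"))   # True -> a VMQueue is needed
```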
3.4. Temporary data buffering on disk
During an extraction process a large number of tuples is processed. When they need to be buffered, there is a problem of selection of the right place for the buffer. Keeping them in memory is impossible because the size of the data set is usually much bigger than that of the available RAM. The only solution is storing the data on a disk. Two approaches are possible: virtual memory supported by the operating system or storing implemented on the application level in algorithms used in transformation nodes. In our ETL-DR environment the nodes using application-level virtual memory are: VMQueue, GroupTransformation and JoinTransformation.
**VMQueue Component.** As presented in Sect. 3.2, the VMQueue component is a FIFO queue able to store buffered data on disk. Its task is to ensure that data is read from its source as it arrives, even if the node receiving data from the VMQueue is not running. In such a case tuples are stored in a disk file rather than put into the output buffer; later, when possible, the tuples are read from the file and passed on. Because the disk file is accessed sequentially, this solution is more efficient than OS-level virtual memory.
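This behaviour reduces to a few lines of Python in the following sketch (not the ETL-DR code; the real component also bounds its memory during refill, which this sketch skips for brevity):

```python
import pickle
import tempfile
from collections import deque

class VMQueue:
    """FIFO queue that accepts tuples unconditionally, spilling to a
    sequentially written disk file once the in-memory part is full."""
    def __init__(self, mem_limit):
        self.mem_limit = mem_limit
        self.mem = deque()
        self.spill = tempfile.TemporaryFile()
        self.spilled = 0

    def put(self, tup):
        if self.spilled == 0 and len(self.mem) < self.mem_limit:
            self.mem.append(tup)
        else:                              # preserve FIFO order on disk
            pickle.dump(tup, self.spill)
            self.spilled += 1

    def get(self):
        if not self.mem and self.spilled:  # refill from the spill file
            self.spill.seek(0)
            for _ in range(self.spilled):
                self.mem.append(pickle.load(self.spill))
            self.spill.seek(0)
            self.spill.truncate()
            self.spilled = 0
        return self.mem.popleft() if self.mem else None
```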
**Group Transformation Component.** A grouping component can work in one of three modes:
1. input tuples are sorted according to the grouping attribute values,
2. tuples are not sorted, grouping in memory,
3. tuples are not sorted, external grouping.
```
procedure Group()
Begin
List fileList;
While Input.hasTuples() do
Tuple T = Input.getTuple();
If not HM.contains(Attributes(T)) then
HM.put(Attributes(T), Aggregates(T));
End if
Aggregates AG = HM.get(Attributes(T));
AG.doAggregate(T);
If HM.size() > SIZELIMIT then
fileList.add(WriteToFile(HM));
HM.clear();
End if
End while
AggrSource as = getSource(fileList, HM);
Aggregates AG = null;
While as.hasNext() do
If AG == null then
AG = as.next();
Else
Aggregates newAG = as.next();
If (newAG.attr == AG.attr) then
AG.aggregate(newAG);
Else
ProduceOutputTuple(AG);
AG = newAG;
End if
End if
End while
ProduceOutputTuple(AG);
End
```
Fig. 4. External grouping algorithm
In case 1) aggregates are computed as the tuples come, and memory usage is very low. In case 2) each new combination of the grouping attributes is saved in a hash table together with its associated aggregates. If such a combination appears again during processing, it is located and the aggregates are updated. The number of entries in the hash table at the end of processing equals the number of tuples produced. Both cases 1) and 2) use only RAM.
Case 3) combines features of cases 1) and 2). First, the data set is gathered in the hash table and aggregates are computed (Fig. 4). When the number of entries in the table exceeds the specified limit, the content of the table is written to an external file, sorted by the grouping attribute values. Then the hash table is cleared and processing continues. This cycle repeats until the input tuple stream ends. Then the data integration process is run: tuples are read from the previously created files and the final aggregate values are computed. This is very similar to case 1) processing, except that data comes from external files instead of the node input.
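The scheme of Fig. 4 can be condensed into a short sketch. Here the sorted runs are kept in Python lists instead of disk files, and the aggregate is a plain sum; everything else (spill on overflow, merge of sorted runs with combination of equal keys) follows the algorithm:

```python
import heapq

def external_group(tuples, mem_limit):
    """Group (key, value) tuples by key, summing values; spill the hash
    table as a sorted run whenever it exceeds mem_limit entries."""
    runs, table = [], {}
    for key, value in tuples:
        table[key] = table.get(key, 0) + value      # update the aggregate
        if len(table) > mem_limit:                  # spill a sorted run
            runs.append(sorted(table.items()))
            table = {}
    runs.append(sorted(table.items()))
    # Integration phase: merge the runs, combining aggregates of equal keys.
    out, cur = [], None
    for key, value in heapq.merge(*runs):
        if cur is not None and cur[0] == key:
            cur[1] += value
        else:
            if cur is not None:
                out.append(tuple(cur))
            cur = [key, value]
    if cur is not None:
        out.append(tuple(cur))
    return out
```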
**JoinTransformation Component.** A joining node works according to the algorithm presented in Fig. 5. The first step is collecting tuples from the slave input. They can be loaded into a temporary associative array or written to a temporary disk file. Before writing to the file, tuples are sorted according to the joining attributes using an external version of the standard merge sort algorithm: tuples are gathered in memory; when the in-memory limit is exceeded, they are sorted and written to a file. Subsequent portions of the data set are treated in the same way. Finally, tuples from all the generated sorted files are merged into one big sorted file. Sorting lets us locate any tuple in the external file in \( O(\log n) \) time using binary search.
```
procedure Join()
Begin
While Input(2).hasTuples() do
Tuple T = Input(2).getTuple();
HM.put(Attributes(T), T);
End while
While Input(1).hasTuples() do
Tuple T = Input(1).getTuple();
Tuples[] TT = HM.get(Attributes(T));
For each JT in TT do
Tuple O = Join(T, JT);
ProduceOutputTuple(O);
End for
End while
End
```
Fig. 5. General joining algorithm
Additional indexing structure located in memory also decreases searching time, by reducing the number of accesses to the file. The index holds locations of the accessed tuples, which enables narrowing down the searching range when accessing consecutive tuples.
The second phase is the same, no matter if the temporary buffer is located in memory or on a disk. Only the implementation of the HM (HashMap) object changes in the algorithm presented in Fig. 5. Each tuple from the primary input is checked if it can be joined with tuples in the temporary buffer according to the specified join condition.
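The in-memory variant of Fig. 5 fits in a few lines (function names are our own; `key()` stands for extraction of the joining attributes):

```python
from collections import defaultdict

def hash_join(primary, slave, key):
    """Phase 1: buffer the slave stream in an associative array keyed by the
    joining attributes.  Phase 2: probe it with each primary tuple and emit
    the concatenation of every matching pair."""
    buffered = defaultdict(list)
    for s in slave:
        buffered[key(s)].append(s)
    out = []
    for p in primary:
        for s in buffered.get(key(p), []):
            out.append(p + s)
    return out
```

When the slave side is spilled to a sorted disk file instead, phase 2 stays identical; only the lookup behind `buffered.get` changes to a binary search on the file.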
4. External processing tests
For the tests we used data files that forced the Java Virtual Machine to use much more memory than was physically available. Tests were performed on a computer with an AMD Athlon 2000 processor running Windows XP Professional. During the tests we varied the size of the available RAM.
4.1. Grouping test
Grouping was tested on the extraction graph containing an extractor $FE$, a grouping node $GrT$ and an inserter $I$ (Fig. 6). The extractor reads a tuple stream with attributes ($eID, date, value$), in which the transaction values were saved for each employee $eID$ and for each day of his work. The number of employee transactions per day varied from 1 to 20. The processing can be described by the following SQL query:
```sql
SELECT eID, date, sum(value) as sumVal, count(*) as trCount
FROM GrT
GROUP BY eID, date
```
The processing time was measured depending on the number of input tuples (10, 15, 20 and 25 million) and the type of processing. The result chart contains the total processing time (TT) and the moment of loading the first tuple into the destination, the so-called Critical Time (CT). During all tests using external grouping (Ext), the JVM was assigned only 100MB of RAM. During grouping in memory, we examined two cases: JVM memory set with some margin (Normal) and with the minimal possible amount of RAM (Hard) that guaranteed successful completion of the task. The obtained results are shown in Fig. 7.
The test computer contained 384MB of physical RAM; when the JVM used virtual memory, it was assigned 450MB and 550MB for 10 and 15 million tuples respectively during the Normal test, and 300MB and 425MB during the Hard test.
As can be seen, the most efficient processing method is definitely the one using application-level data storing. Its processing time ranges from 129 s to 322 s depending on the number of input tuples. Using OS-level virtual memory makes the whole process take much more time. Only for 10 million input tuples and strongly limited JVM memory, which resulted in very low usage of the virtual memory, did we obtain results slightly better than for built-in data storing. However, for 15 million tuples the processing takes an extremely long time (the line going rapidly outside the chart). The main reasons for such low efficiency of virtual memory are the random memory accesses caused by updating aggregates in temporary buffers and by the Java garbage collector. Application-level storing accesses data files sequentially, and as a result is much more efficient.
We did not finish the OS-level virtual memory tests for 20 and 25 million tuples because they needed an extremely long time (several hours). Our goal was only to show that application-level buffering can be much better than OS-level buffering.
4.2. Joining test
The joining test is based on the extraction graph shown in Fig. 8. The extractors read the same number of tuples: \( FE_1 \) reads tuples with attributes (eID, date, depID), describing where each employee was working each day, whereas \( FE_2 \) reads the set of tuples produced in the previous test (eID, date, sumVal, trCount). The joining attributes are (eID, date), and processing times were measured for 10, 15 and 20 million input tuples from each extractor.
During the test the computer was equipped with 256MB RAM; the JVM was assigned 100MB when joining with disk-based data storing was used, and 400MB and 600MB respectively for 10 and 15 million tuples when using virtual memory. In this test we can still observe the benefits of application-level data storing, but the difference compared to OS virtual memory is not as big as in the grouping test, because this time the external file is accessed randomly, not sequentially. The obtained results are presented in Fig. 9.

Fig. 8. Joining test extraction graph

Fig. 9. Processing times measured during joining test. TT is a total processing time, whereas CT denotes a moment when the first output tuple is produced (Critical Time)
4.3. Real extraction test
We also performed a real extraction test. The ETL process generates a star schema data warehouse containing a fact table and two dimensions. In this test, both grouping and joining nodes appear in the extraction graph and they run concurrently: when the grouping node GrT(2) produces output tuples, the joining node JoT(30) puts them into its internal buffer (memory or a disk file). This test lets us examine the behavior of the buffering techniques when more than one node requires a lot of memory resources.
The size of the input data set was 300MB. The JVM required 475MB RAM to complete the task using virtual memory, and only 100MB when using application-level data storing. The computer had 256MB RAM. The ETL process using data storing took only 26 minutes, whereas when using virtual memory it needed 3 hours to complete only 10% of the whole task (the whole processing could take even 30 hours). Continuing the test made no sense, because we could already conclude that in this case the efficiency of the virtual memory was extremely low.

**Fig. 10.** The main part of the extraction graph generating star schema data warehouse. Path FE(1)-FI(32) generates fact table, whereas path FE(1)-FI(5) is responsible for producing one of the dimension tables. Extractor FE(1) reads 300MB data file.
In our opinion the obtained results stem from random accesses to the VM swap file. When many nodes keep a lot of data in virtual memory and access it randomly (because each node runs as an independent thread), the swap file has to be read and written very often at various locations. This does not happen with application-level buffering, where the external files are accessed sequentially whenever possible (depending on the algorithm used).
5. Conclusions
This paper presents a concept of describing extraction processes using graphs, the meaning of graph nodes and the graph edges in the extraction process. We focused on a few implementation aspects like interconnections between nodes and the possibility of deadlock occurrence when particular graph structures are used. A method of avoiding deadlocks was also presented and it was described
by mathematical formulas. Next we introduced algorithms for external data queuing, grouping and joining.
Although not tested in this paper, the presented data queuing is an efficient method of avoiding the deadlocks that may occur in our ETL-DR extraction environment due to the data transfer method we used. The grouping transformation can process data sets of any size; the only limitation is the available temporary disk space. It can exploit additional tuple stream properties, such as sorted order according to the values of the grouping attributes. The joining transformation can also process an unlimited number of tuples. It can store its slave-input tuples in disk files in sorted order and then access any tuple in a file in \( O(\log n) \) time.
Our research shows that the virtual memory offered by operating systems is not always an efficient solution. Dedicated application-level algorithms for storing data in external files are more efficient due to the elimination of random disk accesses, which are the weakest point of OS virtual memory. This weakness is especially pronounced in Java applications. A typical JVM prefers allocating new memory blocks to freeing unnecessary ones as soon as possible. This may be very efficient when only physical RAM is in use, but when the JVM enters the virtual memory area and the garbage collector tries to recover unused memory blocks from it, the efficiency of the whole application drops dramatically.
References
Adding Symmetry Reduction to UPPAAL*
Martijn Hendriks¹, Gerd Behrmann², Kim Larsen², Peter Niebert³**, and Frits Vaandrager¹
¹ Nijmeegs Instituut voor Informatica en Informatiekunde, University of Nijmegen, The Netherlands
{martijnh,fvaan}@cs.kun.nl
² Department of Computing Science, Aalborg University, Denmark
{behrmann,kgl}@cs.auc.dk
³ Laboratoire d’Informatique Fondamentale, CMI, Université de Provence, France
peter.niebert@lif.univ-mrs.fr
Abstract. We describe a prototype extension of the real-time model checking tool UPPAAL with symmetry reduction. The symmetric data type $\text{scalarset}$, which is also used in the Mur$\phi$ model checker, was added to UPPAAL's system description language to support the easy static detection of symmetries. Our prototype tool uses state swaps, described and proven sound earlier by Hendriks, to reduce the time and memory consumption of UPPAAL. Moreover, the reduction strategy is canonical, which means that the symmetries are used optimally. For all examples that we experimented with (both academic toy examples and industrial cases), we obtained a drastic reduction of both computation time and memory usage, exponential in the size of the scalar sets used.
1 Introduction
Model checking is a semi-automated technique for the validation and verification of all kinds of systems [8]. The approach requires the construction of a model of the system and the definition of a specification for the system. A model checking tool then computes whether the model satisfies its specification. Nowadays, model checkers are available for many application areas, e.g., hardware systems [10, 22], finite-state distributed systems [17], and timed and hybrid systems [21, 27, 25, 16].
Despite the fact that model checkers are relatively easy to use compared to manual verification techniques or theorem provers, they are not being applied on a large scale. An important reason for this is that they must cope with the state
* Supported by the European Community Project IST-2001-35304 (AMETIST), http://ametist.cs.utwente.nl.
** Peter Niebert suggested the method for efficient computation of canonical representatives at an AMETIST project meeting, and was therefore invited to join the list of authors after acceptance of the paper.
space explosion problem, which is the problem of the exponential growth of the
state space as models become larger. This growth often renders the mechanical
verification of realistic systems practically impossible: there just is not enough
time or memory available. As a consequence, much research has been directed
at finding techniques to fight the state space explosion. One such a technique is
the exploitation of behavioral symmetries [18, 23, 20, 19, 12, 7]. The exploitation
of full symmetries can be particularly profitable, since its gain can approach a
factorial magnitude.
There are many timed systems which clearly exhibit full symmetry, e.g., Fish
cher’s mutual exclusion protocol [1], the CSMA/CD protocol [24, 27], industrial
audio/video protocols [13], and distributed algorithms, for instance [4].
Motivated by these examples, the work presented in [14] describes how Up
Paal, a model checker for networks of timed automata [21, 3, 2], can be enhanced
with symmetry reduction. The present paper puts this work to practice: a proto
type of UPPaAL with symmetry reduction has been implemented. The symmetric
data type scalarset, which was introduced in the MuR^> model checker [10], was
added to UPPaAL’s system description language to support the easy static detec
tion of symmetries. Furthermore, the state swaps described and proven sound
in [14] are optimally used to reduce the space and time consumption of the
model checking algorithm. Run-time data is reported for the examples men
tioned above, showing that symmetry reduction in a timed setting can be very
effective.
Related work. Symmetry reduction is a well-known technique to reduce the
resource requirements for model checking algorithms, and it has been success
fully implemented in model checkers such as MuR^> [10, 19], SMV [22], and SPIN
[17, 6]. As far as we know, the only model checker for timed systems that exploits
symmetry is RED [25, 26]. The symmetry reduction technique used in RED, how
ever, gives an over approximation of the reachable state space (this is called the
anomaly of image false reachability by the authors). Therefore, RED can only
be used to ensure that a state is not reachable when it is run with symmetry
reduction, whereas symmetry enhanced UPPaAL can be used to ensure that a
state is reachable, or that it is not reachable.
Contribution. We have added symmetry reduction as used within MuR^>,
a well-established technique to combat the state space explosion problem, to
the real-time model checking tool UPPaAL. For researchers familiar with model
checking it will come as no surprise that this combination can be made and
indeed leads to a significant gain in performance. Still, the effort required to
actually add symmetry reduction to UPPaAL turned out to be substantial.
The soundness of the symmetry reduction technique that we developed for
UPPaAL does not follow trivially from the work of Ip and Dill [19] since the de
scription languages of UPPaAL and MuR^>, from which symmetries are extracted
automatically, are quite different. In fact, the proof that symmetry reduction for
UPPaAL is sound takes up more than 20 pages in [14].
The main theoretical contribution of our work is an efficient algorithm for the computation of a canonical representative. This is not trivial due to UPPAAL's symbolic representation of sets of clock valuations.
Many timed systems exhibit symmetries that can be exploited by our methods. For all examples that we experimented with, we obtained a drastic reduction of both computation time and memory usage, exponential in the size of the scalar sets used.
Outline. Section 2 presents a very brief summary of model checking and symmetry reduction in general, while Sections 3 and 4 introduce symmetry reduction for the UPPAAL model checker in particular. In Section 5, we present run-time data of UPPAAL's performance with and without symmetry reduction, and Section 6 summarizes and draws conclusions.
A full version of the present paper including proofs of lemma 1 and of theorem 2 is available as [15].
2 Model Checking and Symmetry Reduction
This section briefly summarizes the theory of symmetry presented in [19], which is reused in a timed setting since (i) it has proven to be quite successful, and (ii) it is designed for reachability analysis, which is the main purpose of the UPPAAL model checker. We simplify (and in fact generalize) the presentation of [19] using the concept of bisimulations.
In general, a transition system is a tuple $(Q, Q_0, \Delta)$, where $Q$ is a set of states, $Q_0 \subseteq Q$ is a set of initial states, and $\Delta \subseteq Q \times Q$ is a transition relation between states. Figure 1 depicts a general forward reachability algorithm which, under the assumption that $Q$ is finite, computes whether there exists a reachable state $q$ that satisfies some given property $\phi$ (denoted by $q \models \phi$).
```
(1)  passed := ∅
(2)  waiting := Q0
(3)  while waiting ≠ ∅ do
(4)    get q from waiting
(5)    if q ⊨ φ then return YES
(6)    else if q ∉ passed then
(7)      add q to passed
(8)      waiting := waiting ∪ { q' ∈ Q | (q, q') ∈ Δ }
(9)    fi
(10) od
(11) return NO
```
Fig. 1. A general forward reachability analysis algorithm.
Due to the state space explosion problem, the number of states of a transition system frequently gets too big for the above algorithm to be practical. We would
like to exploit structural properties of transition systems (in particular symmetries) to improve its performance. Here the well-known notion of bisimulation comes in naturally:
**Definition 1 (Bisimulation).** A bisimulation on some transition system, say $(Q, Q_0, \Delta)$, is a relation $R \subseteq Q \times Q$ such that, for all $(q, q') \in R$,
1. $q \in Q_0$ if and only if $q' \in Q_0$,
2. if $(q, r) \in \Delta$ then there exists an $r'$ such that $(q', r') \in \Delta$ and $(r, r') \in R$,
3. if $(q', r') \in \Delta$ then there exists an $r$ such that $(q, r) \in \Delta$ and $(r, r') \in R$.
Suppose that, before starting the reachability analysis of a transition system, we know that a certain equivalence relation $\approx$ is a bisimulation and respects the predicate $\phi$ in the sense that either all states in an equivalence class satisfy $\phi$ or none of them does. Then, when doing reachability analysis, it suffices to store and explore only a single element of each equivalence class. To implement the state space exploration, a representative function $\theta$ may be used that converts a state to a representative of the equivalence class of that state:
$$\forall q \in Q \ (q \approx \theta(q)) \tag{1}$$
Using $\theta$, we may improve the algorithm in Figure 1 by replacing lines 2 and 8, respectively, by:
(2) $\text{waiting} := \{ \theta(q) \mid q \in Q_0 \}$
(8) $\text{waiting} := \text{waiting} \cup \{ \theta(q') \mid (q, q') \in \Delta \}$
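Putting Figure 1 and the adjusted lines (2) and (8) together, the following is a minimal sketch in Python (not UPPAAL code; it assumes a finite system with hashable states, a `successors` function giving the Δ-successors of a state, and `theta` defaulting to the identity):

```python
from collections import deque

def reachable(initial, successors, phi, theta=lambda q: q):
    """Forward reachability of Fig. 1 with the adjusted lines (2) and (8):
    initial and successor states are mapped through the representative
    function theta before being put on the waiting list."""
    passed = set()
    waiting = deque(theta(q) for q in initial)        # adjusted line (2)
    while waiting:
        q = waiting.popleft()
        if phi(q):
            return True                               # "return YES"
        if q not in passed:
            passed.add(q)
            waiting.extend(theta(r) for r in successors(q))  # adjusted line (8)
    return False                                      # "return NO"
```

With `theta` left as the identity this is exactly the unreduced algorithm; plugging in a good representative function shrinks both the waiting list and the passed set.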
It can easily be shown that the adjusted algorithm remains correct: for all (finite) transition systems the outcomes of the original and the adjusted algorithm are equal. If the representative function is “good”, which means that many equivalent states are projected onto the same representative, then the number of states to explore, and consequently the size of the passed set, may decrease dramatically. However, in order to apply the approach, the following two problems need to be solved:
- A suitable bisimulation equivalence that respects $\phi$ needs to be statically derived from the system description.
- An appropriate representative function $\theta$ needs to be constructed that satisfies formula (1). Ideally, $\theta$ satisfies $q \approx q' \Rightarrow \theta(q) = \theta(q')$, in which case it is called canonical.
In this paper, we use symmetries to solve these problems. As in [19], the notion of automorphism is used to characterize symmetry within a transition system. This is a bijection on the set of states that (viewed as a relation) is a bisimulation. Phrased alternatively:
**Definition 2 (Automorphism).** An automorphism on a transition system $(Q, Q_0, \Delta)$ is a bijection $h : Q \rightarrow Q$ such that
1. \( q \in Q_0 \) if and only if \( h(q) \in Q_0 \) for all \( q \in Q \), and
2. \( (q, q') \in \Delta \) if and only if \( (h(q), h(q')) \in \Delta \) for all \( q, q' \in Q \).
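For a small explicit-state transition system, Definition 2 can be checked by brute force. The sketch below is illustrative Python (not part of UPPAAL); it represents \(h\) as a dict and \(\Delta\) as a set of pairs:

```python
def is_automorphism(h, Q, Q0, Delta):
    """Check Definition 2: h must be a bijection on Q that preserves
    initial states and the transition relation in both directions."""
    if set(h.keys()) != set(Q) or set(h.values()) != set(Q):
        return False  # not a bijection on Q
    if any((q in Q0) != (h[q] in Q0) for q in Q):
        return False  # violates condition 1
    # condition 2: (q, r) in Delta iff (h(q), h(r)) in Delta
    return all(((q, r) in Delta) == ((h[q], h[r]) in Delta)
               for q in Q for r in Q)
```

For instance, in a three-state system where states 1 and 2 play interchangeable roles, transposing 1 and 2 is an automorphism, while any map that moves the initial state out of \(Q_0\) is not.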
Let \( H \) be a set of automorphisms, let \( \text{id} \) be the identity function on states, and let \( G(H) \) be the closure of \( H \cup \{\text{id}\} \) under inverse and composition. It can be shown that \( G(H) \) is a group, and it induces a bisimulation equivalence relation \( \approx \) on the set of states as follows:
\[
q \approx q' \iff \exists h \in G(H) \ (h(q) = q') \tag{2}
\]
We introduce a symmetric data type to let the user explicitly point out the symmetries in the model. Simple static checks can ensure that the symmetry that is pointed out is not broken. Our approach to the second problem of coming up with good representative functions consists of “sorting the state” w.r.t. some ordering relation on states using the automorphisms. For instance, given a state \( q \) and a set of automorphisms, find the smallest state \( q' \) that can be obtained by repeatedly applying automorphisms and their inverses to \( q \). It is clear that such a \( \theta \) satisfies the correctness formula (1), since it is constructed from the automorphisms only.
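The "sorting" idea can be made concrete on tiny examples: starting from \(q\), exhaustively apply the given automorphisms until the orbit is closed, and take the smallest element. This brute-force sketch (a hypothetical Python helper, exponentially expensive in general, and assuming the given set of automorphisms is closed under inverse, as transpositions are) satisfies formula (1) by construction:

```python
def representative(q, automorphisms):
    """Smallest state in the orbit of q under the given automorphisms
    (each a function on states). Brute force: only for tiny orbits;
    the real tool uses the efficient algorithm of Section 4.2."""
    seen = {q}
    frontier = [q]
    while frontier:
        nxt = []
        for p in frontier:
            for h in automorphisms:
                r = h(p)
                if r not in seen:   # close the orbit breadth-first
                    seen.add(r)
                    nxt.append(r)
        frontier = nxt
    return min(seen)
```

For example, with states modeled as tuples and automorphisms that transpose two positions, every permuted variant of a state is mapped to the same minimum, so the function is canonical on such examples.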
3 Adding Scalarsets to UPPAAL
The tool UPPAAL is a model checker for networks of timed automata extended with discrete variables (bounded integers, arrays) and blocking, binary synchronization as well as non-blocking broadcast communication (see for instance [21]).
In the remainder of this section we illustrate by an example UPPAAL's description language extended with a scalarset type constructor allowing symmetric data types to be syntactically indicated. Our extension is based on the notion of scalarset, first introduced by Ip and Dill in the finite-state model checking tool Murφ [10, 19], and on the C-like syntax to be introduced in the forthcoming version 4.0 of UPPAAL.
To illustrate our symmetry extension of UPPAAL we consider Fischer's mutual exclusion protocol. This protocol consists of \( n \) processes, identical up to their unique process identifiers. The purpose of the protocol is to ensure mutual exclusion on the critical sections of the processes. This is accomplished by letting each process write its identifier (\( \text{pid} \)) in a global variable (\( \text{id} \)) before entering its critical section. If after some given lower time bound (say 2) \( \text{id} \) still contains the \( \text{pid} \) of the process, then it may enter its critical section.
A scalarset of size \( n \) may be considered as the subrange \( \{0, 1, \ldots, n-1\} \) of the natural numbers. Thus, the \( n \) process identifiers in the protocol can be modeled using a scalarset with size \( n \). In addition to the global variable \( \text{id} \), we use the array \( \text{active} \) to keep track of all active locations of the processes\(^4\).
Global declarations are the following:
\(^4\) This array is actually redundant and not present in the standard formulations of the protocol. However, it is useful for showing important aspects of our extension.
```
typedef scalarset[3] proc_id;  // a scalarset type with size 3
proc_id id;                    // declaration of a proc_id variable
bool set;                      // declaration of a boolean
int active[proc_id];           // declaration of an array
                               // indexed by proc_id
```
The first line defines proc_id to be a scalarset type of size 3, and the second line declares id to be a variable over this type. Thus scalarset is in our extension viewed as a type constructor. In the last line we show a declaration of an array indexed by elements of the scalarset proc_id.
At this point the only thing missing is the declaration of the actual processes in the system. In the description language of UPPAAL, processes are obtained as instances of parameterized process templates. In general, templates may contain several different parameters (e.g. bounded integers, clocks, and channels). In our extension we allow in addition the use of scalarsets as parameters. In the case of Fischer’s protocol the processes of the system are given as instances of the template depicted in Figure 2. The template has one local clock, x, and no local
process Fischer (const proc_id pid)
Fig. 2. The template for Fischer’s protocol.
variables. Note that the header of the template defines a (constant) scalarset parameter pid of type proc_id. Access to the critical section cs is governed by suitable updates and tests of the global scalarset variable id together with upper and lower time bounds on when to proceed from requesting access (req) and from waiting for access (wait), respectively. Note that all transitions update the array active to reflect the current active location of the process. The instantiation of this template and the declaration of all three processes in the system can be done as follows:
```
FischerProcs = forall i in proc_id : Fischer(i);
system FischerProcs;
```
The `forall` construct iterates over all elements of a declared scalarset type. In this case the iteration is over `proc_id` and a set of instances of the template `Fischer` is constructed and bound to `FischerProcs`. In the second line the final system is defined to be precisely this set.
4 Using Scalarsets for Symmetry Reduction
As a preliminary to this section we briefly mention the state representation of `UPPAAL`. A state is a tuple \((l, v, Z)\), where \(l\) is the location vector, \(v\) is the integer variable valuation, and \(Z\) is a zone, which is a convex set of clock valuations that can efficiently be represented by a difference bounded matrix (DBM) [5,9].
4.1 Extraction of Automorphisms
This subsection is a very brief summary of [14], to which we refer for further details. The new syntax described in the previous section enables us to derive the following information from a system description:
- A set \(\Omega\) of scalarset types.
- For each \(\alpha \in \Omega\): (i) a set \(V_\alpha\) of variables of type \(\alpha\), and (ii) a set \(D_\alpha\) of pairs \((a, n)\) where \(a\) is an array and \(n\) is a dimension of \(a\) that must be indexed by variables of type \(\alpha\) to ensure soundness. We assume that arrays that are indexed by scalarsets do not contain elements of scalarsets. The reason is that this would make computation of a canonical representative as hard as testing for graph isomorphism.
- A partial mapping \(\gamma : P \times \Omega \rightarrow \mathbb{N}\) that gives for each process \(p\) and scalarset \(\alpha\) the element of \(\alpha\) with which \(p\) is instantiated. This mapping is defined by quantification over scalarsets in the process definition section.
This information enables us to derive so-called state swaps. Let \(Q\) be the set of states of some `UPPAAL` model, and let \(\alpha\) be a scalarset type in the model with size \(n\). A state swap \(\text{swap}_{i,j}^{\alpha} : Q \rightarrow Q\) can be defined for all \(0 \leq i < j < n\), and consists of two parts:
- The multiple process swap swaps the contributions to the state of all pairs of processes \(p\) and \(p'\) if they originate from the same template and \(\gamma(p, \alpha) = i, \gamma(p', \alpha) = j\) and \(\gamma(p, \beta) = \gamma(p', \beta)\) for all \(\beta \neq \alpha \in \Omega\). Swapping such a pair of symmetric processes consists of interchanging the active locations and the values of the local variables and clocks (note that this is not a problem since the processes originate from the same template).
- The data swap swaps array entries \(i\) and \(j\) of all dimensions that are indexed by scalarset \(\alpha\) (these are given by the set \(D_\alpha\)). Moreover, it swaps the value \(i\) with the value \(j\) for all variables in \(V_\alpha\).
Consider the instance of Fischer’s mutual exclusion protocol (as described in the previous section) with three processes. There are three swap functions: \( \text{swap}_{0,1}^{\text{proc\_id}} \), \( \text{swap}_{0,2}^{\text{proc\_id}} \), and \( \text{swap}_{1,2}^{\text{proc\_id}} \).
Now consider the following state of the model (the active location of the \( i \)-th process is given by \( l_i \) and the local clock of this process is given by \( x_i \)):
\[
l : l_0 = \text{idle}, \ l_1 = \text{wait}, \ l_2 = \text{cs} \\
v : \text{id} = 2, \ \text{set} = 1 \\
Z : x_0 = 4, \ x_1 = 3, \ x_2 = 2.5 \\
\text{active} : \text{active}[0] = 0, \ \text{active}[1] = 2, \ \text{active}[2] = 3
\]
When we apply \( \text{swap}_{0,2}^{\text{proc\_id}} \) to this state, the result is the following state:
\[
l : l_0 = \text{cs}, \ l_1 = \text{wait}, \ l_2 = \text{idle} \\
v : \text{id} = 0, \ \text{set} = 1 \\
Z : x_0 = 2.5, \ x_1 = 3, \ x_2 = 4 \\
\text{active} : \text{active}[0] = 3, \ \text{active}[1] = 2, \ \text{active}[2] = 0
\]
The process swap swaps \( l_0 \) with \( l_2 \), and \( x_0 \) with \( x_2 \). The data swap first changes the value of the variable \( \text{id} \) from 2 to 0, since \( \text{id} \in V_{\text{proc\_id}} \), and then swaps the values of \( \text{active}[0] \) and \( \text{active}[2] \). Applying \( \text{swap}_{1,2}^{\text{proc\_id}} \) to this state gives the following state:
\[
l : l_0 = \text{cs}, \ l_1 = \text{idle}, \ l_2 = \text{wait} \\
v : \text{id} = 0, \ \text{set} = 1 \\
Z : x_0 = 2.5, \ x_1 = 4, \ x_2 = 3 \\
\text{active} : \text{active}[0] = 3, \ \text{active}[1] = 0, \ \text{active}[2] = 2
\]
Note that this swap does not change the value of \( \text{id} \), since the scalarset elements 1 and 2 are interchanged and \( \text{id} \) contains scalarset element 0.
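The worked example above can be reproduced by an executable sketch of the state swap (illustrative Python; the tuple layout of the state and the function name are ours, not UPPAAL's):

```python
def state_swap(i, j, state):
    """swap_{i,j} for the scalarset proc_id on the Fischer example.
    state = (l, id, set, x, active): location vector, scalarset
    variable id, boolean set, local clock values, and the array
    indexed by proc_id."""
    l, id_, set_, x, active = (list(state[0]), state[1], state[2],
                               list(state[3]), list(state[4]))
    # multiple process swap: interchange locations and local clocks
    l[i], l[j] = l[j], l[i]
    x[i], x[j] = x[j], x[i]
    # data swap: entries i and j of arrays indexed by proc_id ...
    active[i], active[j] = active[j], active[i]
    # ... and the values i <-> j for variables of type proc_id
    id_ = j if id_ == i else (i if id_ == j else id_)
    return (tuple(l), id_, set_, tuple(x), tuple(active))
```

Applying `state_swap(0, 2, ...)` and then `state_swap(1, 2, ...)` to the first state above yields exactly the two states shown.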
A number of syntactic checks have been identified that ensure that the symmetry suggested by the scalarsets is not broken. These checks are very similar to those originally identified for the Murφ verification system [19]. For instance, it is not allowed to use variables of a scalarset type for arithmetical operations such as addition. The next soundness theorem has been proven in [14]:
**Theorem 1 (Soundness).** Every state swap is an automorphism.
As a result, the representative function \( \theta \) can be implemented by minimization of the state using the state swaps. Note that every state swap resembles a transposition of the state. Hence, the equivalence classes induced by the state swaps originating from a scalarset with size \( n \) consist of at most \( n! \) states. The maximal theoretical gain that can be achieved using this set of automorphisms is therefore in the order of a factor \( n! \).
### 4.2 Computation of Representatives
The representative of a state is defined as the minimal element of the symmetry class of that state w.r.t. a total order \( \prec \) on the symmetry class. In general,
the DBM representation of zones renders an efficient canonical minimization algorithm impossible, since minimization of a general DBM for any given total order using state swaps is at least as difficult as testing for graph isomorphism for strongly regular graphs [14]. If we assume, however, that the timed automaton that is analyzed resets its clocks to zero only, then the zones (DBMs) that are generated by the forward state space exploration satisfy the nice diagonal property. This property informally means that the individual clocks can always be ordered using the order in which they were reset. To formalize this, three binary relations on the set of clocks parameterized by a zone \( Z \) are defined:
\[
\begin{align*}
x \leq_Z y & \iff \forall \nu \in Z \; \nu(x) \leq \nu(y) \\
x \equiv_Z y & \iff \forall \nu \in Z \; \nu(x) = \nu(y) \\
x <_Z y & \iff (x \leq_Z y \land \neg(x \equiv_Z y))
\end{align*}
\]
The diagonal property is then defined as follows.
**Lemma 1 (Diagonal Property).** Consider the state space exploration algorithm described in figure 6 of [21]. Assume that the clocks are reset to the value 0 only. For all states \((l, v, Z)\) stored in the waiting and passed list and for all clocks \(x\) and \(y\) it holds that either \(x <_Z y\), or \(x \equiv_Z y\), or \(y <_Z x\).
Using the reset order on clocks and the diagonal property, we can define a total order, say \(\prec\), on all states within a symmetry class whose minimal element can be computed efficiently. To this end we first assume a fixed indexing of the set of clocks \(X\): a bijection \(\rho : X \to \{1, 2, \ldots, |X|\}\). Now note that \(\equiv_Z\) is an equivalence relation that partitions \(X\) in \(P = \{X_1, X_2, \ldots, X_n\}\). We define a relation on the cells of \(P\) as follows:
\[
X_i \leq X_j \iff (\forall x \in X_i, y \in X_j \; x \leq_Z y)
\]
Clearly this is a total order on \(P\). Let \(X_i\) be a cell of \(P\). The code of \(X_i\), denoted by \(C^*(X_i)\), then is the lexicographically sorted sequence of the indices of the clocks in \(X_i\) (the set \(\{\rho(x) | x \in X_i\}\)). The zone code of the zone which induced \(P\) is then defined as follows.
**Definition 3 (Zone code).** Let \(Z\) be a zone and let \(P = \{X_1, X_2, \ldots, X_n\}\) be the partitioning of the set of clocks \(X\) under \(\equiv\) such that \(i \leq j \Rightarrow X_i \leq X_j\) (we can assume this since \(\leq\) is a total order on \(P\)). The zone code of \(Z\), denoted by \(C(Z)\), is the sequence \((C^*(X_1), C^*(X_2), \ldots, C^*(X_n))\).
Note that every zone has exactly one zone code since the indices of equivalent clocks are sorted. Moreover, zone codes can lexicographically be ordered, since they are sequences of number sequences. This order is then used in the following way to define a total order on the states in a symmetry class (the orders on the location vectors and variable valuations are just the lexicographical order on sequences of numbers):
\[
(l, v, Z) \prec (l', v', Z') \iff (l < l') \lor (l = l' \land v < v') \lor (l = l' \land v = v' \land C(Z) < C(Z')) \tag{7}
\]
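Definition 3 can be illustrated on a degenerate zone consisting of a single clock valuation, where \(x \equiv_Z y\) simply means \(\nu(x) = \nu(y)\) and the cells are ordered by value. The `zone_code` helper below is an illustrative Python sketch of ours (real zones are DBMs, not single points):

```python
def zone_code(nu, rho):
    """C(Z) for the degenerate zone Z = {nu}: group clocks with equal
    value into cells, order the cells by value, and list in each cell
    the lexicographically sorted indices rho(x) of its clocks."""
    cells = {}
    for clock, value in nu.items():
        cells.setdefault(value, []).append(rho[clock])
    # cells sorted by value give the total order on the partition
    return tuple(tuple(sorted(ixs)) for _, ixs in sorted(cells.items()))
```

The result is a sequence of number sequences, which can be compared lexicographically as required by the order \(\prec\).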
We minimize the state w.r.t. the order of equation (7) using the state swaps by applying the bubble-sort algorithm to it, see Figure 3. It is clear that this representative computation satisfies the soundness equation (1), since states are transformed using the state swaps only, which are automorphisms by Theorem 1. We note that \(\text{swap}_{j,j+1}^{\alpha}(q)\) is not computed explicitly for the comparison in the fourth line of the algorithm; using the statically derived \(\gamma\), \(D_{\alpha}\) and \(V_{\alpha}\) (see Section 4.1) we are able to tell whether swapping results in a smaller state.
\[
\begin{align*}
(1)\ & \text{for all } \alpha \in \Omega \text{ do} \\
(2)\ & \quad \text{for } i := 1 \text{ to } |\alpha| - 1 \text{ do} \\
(3)\ & \quad\quad \text{for } j := 1 \text{ to } |\alpha| - i \text{ do} \\
(4)\ & \quad\quad\quad \text{if } \text{swap}_{j,j+1}^{\alpha}(q) \prec q \text{ then } q := \text{swap}_{j,j+1}^{\alpha}(q) \text{ fi} \\
(5)\ & \quad\quad \text{od} \\
(6)\ & \quad \text{od} \\
(7)\ & \text{od}
\end{align*}
\]
Fig. 3. Minimization of state \(q\) using the bubble-sort algorithm. The size of scalarset type \(\alpha\) is denoted by \(|\alpha|\).
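For states that can be compared directly (here: plain tuples under Python's lexicographic order), the bubble-sort minimization of Figure 3 can be sketched as follows, with `swap(j, j+1, q)` an adjacent transposition acting on the state. This is illustrative Python; a real state swap also permutes arrays and scalarset variables as in Section 4.1:

```python
def minimize(q, swap, n):
    """Bubble-sort minimization of Fig. 3 for a single scalarset of
    size n: whenever the adjacent swap yields a smaller state w.r.t.
    the total order, keep the swapped state."""
    for i in range(n - 1):
        for j in range(n - 1 - i):
            r = swap(j, j + 1, q)
            if r < q:   # the comparison of line (4)
                q = r
    return q
```

Because every permuted variant of a state is sorted to the same minimum, the result is canonical on these examples, mirroring Theorem 2.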
The following theorem states the main technical contribution of our work. Informally, it means that the detected symmetries are optimally used.
**Theorem 2 (Canonical Representative).** The algorithm in Figure 3 computes a canonical representative.
Note that we assumed that arrays that are indexed by scalarsets do not contain elements of scalarsets. Otherwise, computation of a canonical representative is as hard as graph isomorphism, but this is entirely due to the discrete part of the model, and not to the clock part.
### 5 Experimental Results
This section presents and discusses experimental data that was obtained by the UPPAAL prototype on a dual Athlon 2000+ machine with 3 GB of RAM. The measurements were done using the tool `memtime`, for which a link can be found at the UPPAAL website [http://www.uppaal.com/](http://www.uppaal.com/).
In order to demonstrate the effectiveness of symmetry reduction, the resource requirements for checking the correctness of Fischer’s mutual exclusion protocol were measured as a function of the number of processes for both regular UPPAAL and the prototype, see Figure 4. A conservative extrapolation of the data shows
that the verification of the protocol for 20 processes without symmetry reduction would take 115 days and 1000 GB of memory, whereas this verification can be done within approximately one second using less than 10 MB of memory with symmetry reduction.
Similar results have been obtained for the CSMA/CD protocol ([24, 27]) and for the timeout task of a distributed agreement algorithm\(^5\) [4]. To be more precise, regular UPPAAL’s limit for the CSMA/CD protocol is approximately 10 processes, while the prototype can easily handle 50 processes. Similarly, the prototype can easily handle 30 processes for the model of the timeout task, whereas regular UPPAAL can only handle 6.
Besides the three models discussed above, we also investigated the gain of symmetry reduction for two more complex models. First, we experimented with the previously mentioned agreement algorithm, of which we are unable to verify an interesting instance even with symmetry reduction due to the size of the state space. Nevertheless, symmetry reduction showed a very significant improvement. Second, we experimented with a model of Bang & Olufsen’s audio/video protocol [13]. The mentioned paper describes how UPPAAL is used to find a bug in the
---
5 Models of the agreement algorithm and its timeout task are available through the URL http://www.cs.kun.nl/~martijnh/
protocol, and it describes the verification of the corrected protocol for two (symmetric) senders. Naturally, we added another sender (verification of the model for three senders was impossible at the time of the first verification attempt) and we found another bug, whose source and implications we are investigating at the time of this writing. Table 1 shows run-time data for these models.
Table 1. Time and memory consumption of the verifications of the agreement algorithm and of Bang & Olufsen’s audio/video protocol with two and three senders. The exact parameters of the agreement model are the following: \( n = 2, f = 1, \text{ones} = 0, c_1 = 1, c_2 = 2 \) and \( d \) varied (its value is given in parentheses). Furthermore, the measurements were done for the verification of the agreement invariant only. Three verification runs were measured for each model and the best one w.r.t. time is shown.
<table>
<thead>
<tr>
<th rowspan="2">Model</th>
<th colspan="2">Time [s]</th>
<th colspan="2">Memory [MB]</th>
</tr>
<tr>
<th>No reduction</th>
<th>Reduction</th>
<th>No reduction</th>
<th>Reduction</th>
</tr>
</thead>
<tbody>
<tr><td>Agreement (0)</td><td>1</td><td>3</td><td>33</td><td>45</td></tr>
<tr><td>Agreement (1)</td><td>21</td><td>16</td><td>294</td><td>180</td></tr>
<tr><td>Agreement (2)</td><td>80</td><td>23</td><td>905</td><td>245</td></tr>
<tr><td>Agreement (3)</td><td>231</td><td>32</td><td>2126</td><td>321</td></tr>
<tr><td>B&O (2)</td><td>2</td><td>1</td><td>16</td><td>10</td></tr>
<tr><td>B&O (3)</td><td>265</td><td>36</td><td>1109</td><td>181</td></tr>
</tbody>
</table>
6 Conclusions
The results we obtained with our prototype are clearly quite promising: with relatively limited changes/extensions of the UPPAAL code we obtain a rather drastic improvement of performance for systems with symmetry that can be expressed using scalarsets.
An obvious next step is to profile where computation time is spent, and in particular how much time goes into computing representatives. In the tool Design/CPN [18, 20, 11] (where symmetry reduction is a main reduction mechanism) there have been interesting prototype experiments with an implementation in which the (expensive) computations of representatives were launched as tasks to be solved in parallel with the main exploration algorithm.
The scalarset approach that we follow in this paper only allows one to express total symmetries. An obvious direction for future research will be to study how other types of symmetry (for instance as we see it in a token ring) can be exploited.
References
An Aspects Oriented Approach to Dynamically Manage Applications
Bernard Kaddour, Joël Quinqueton
LIRIS – Université Lyon 1
43 boulevard du 11 novembre 1918
F-69622 Villeurbanne cedex
bkaddour@liris.cnrs.fr
LIRMM
161 rue Ada
F-34392 Montpellier cedex 5
jq@lirmm.fr
Abstract
The emergence of middleware solutions and new services, even on small devices, calls for distributed management solutions adapted to these specificities, both in terms of software design and in terms of performance. We propose a management system in which these high-level and low-level management concerns are separated.
The high-level management part relies on message interception mechanisms which, coupled with Aspect Oriented Programming concepts, provide facilities for management applications to dynamically operate, enhance and manage JAVA based applications. The management is transparent for the application, which does not need to be modified to support management operations, as we take advantage of both the JAVA introspection mechanisms and the facilities offered by some aspect frameworks.
1. Introduction
For some years now, smart phones and personal digital assistants have been widely available and used. These small devices still suffer from some limitations compared with high-end fixed terminals, such as lower CPU performance or smaller memory size, but even today they already have enough processing capability to host complete operating systems - either Windows or Linux dedicated versions - and they appear more and more as autonomous embedded systems. Meanwhile, industrially accepted middleware solutions have appeared [8] [15] for new distributed applications. These middleware are now available for small devices [16], and even Multi-Agents Systems based solutions have been introduced in the research area [5].
Such new distributed applications, whose components may partially or totally be executed by small devices, imply new evolutive and flexible requirements if their management is to be effective.
Another point worth noting is the large acceptance of the JAVA language, both for J2EE [9] or Corba applications and even for the design of Corba frameworks themselves.
We concentrate on providing widely applicable solutions which use these particularities, and we conceptually separate the high and low parts of the management. The former takes advantage of the large utilization of JAVA for middleware applications and uses Aspect Oriented Programming concepts [12] to manage applications without invading the underlying middleware. This approach avoids modifications of already existing applications and eases the translation of the introduced mechanisms towards newly emerging middleware solutions [14]. The latter introduces monitoring tasks tightly related to the code of the managed JAVA entities.
In this article we concentrate on the high-level part of the management system; the low-level management part [11] is not developed here.
Section 2 presents the principles and main ideas behind our management system for distributed components. Section 3 discusses how managed objects are obtained and section 4 develops the entities used for management. Section 5 presents how they are layered. We conclude with future works and perspectives.
2. Key features of our architecture
Managing dispersed heterogeneous entities has already been investigated in the network community and standards have been defined [17][18], but their intrinsically centralized and frozen nature is very limiting [13]. Newer and more application-centric solutions have appeared [6][7][10], but they may not adequately address the evolutive aspect of management: management functions must be easy to place and re-use, easy to remove or to stop, and many management functions can simultaneously manage an entity.
The approach consists of a core management system - named the kernel of the management system, or KMS - where management applications can be deployed.
The KMS largely relies on the interception of incoming and outgoing messages exchanged by the managed application. It can then spy on and filter requests sent or results received by the components of the managed application, according to the requirements of the management functions.
Management applications are composed of management activities. Activities are in turn composed of management functions and filters. Management functions are not limited to collect data and can be parts of a complete distributed management application while filters can dynamically be linked with the components they have to interact with when deployed by the KMS.
The management system permits several management applications to simultaneously operate upon a whole application or only upon some of its components. For example, one could manage an application with logging management functions for profiling purposes even if another activity is already managing some of the application’s components for synchronization purposes. This simultaneously-multi-managed feature makes the management activities appear as enhancements of the managed application.
Some differences with JMX exist. From the JMX point of view, the proposed management system can be viewed as a set of linked and modified MBeanServers, each registering parts of activities differently, while activities appear as autonomous collections of MBeans inheriting from the KMS interfaces where all the necessary behaviors are implemented.
3. Requirements and strategies for our management
The introduced architecture deals with entities following the usual client-server model. As we do not expect to modify the source code of the managed components, management is achieved through (i) the external representation of the different elements and the interception of sent and received messages; and (ii) the possibility to directly read and modify some of the variables of the components we manage.
We first address the connection with the components we plan to manage, as this feature is generally not natively available.
3.1. Connection with the managed entities: Interceptors
We distinguish applications originally written in JAVA from the others, in particular from Corba applications. In the case of JAVA applications, we use the introspection possibilities the language offers and the facilities we have to access compiled byte-code. These facilities can be used to create Interceptors-wrappers or to modify the application with byte-code injection [2][4].
Some tools [1][2] currently go one step further, permitting operation upon JAVA code (either at source or byte-code level) with high-level concepts and high-level languages. The performance impact tends to remain acceptable [2].
These features give the possibility to consider JAVA applications as manageable entities.
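As a minimal illustration of these introspection facilities (plain JDK reflection, not a particular byte-code tool), the following sketch enumerates the public methods a generated Interceptor-wrapper would have to expose and intercept:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class Introspect {
    // Enumerate the public methods declared by a class: the operations a
    // generated Interceptor-wrapper would have to expose and intercept.
    public static List<String> publicMethods(Class<?> c) {
        List<String> names = new ArrayList<>();
        for (Method m : c.getDeclaredMethods()) {
            if (Modifier.isPublic(m.getModifiers())) {
                names.add(m.getName());
            }
        }
        Collections.sort(names);
        return names;
    }
}
```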
For Corba applications, the OMG provides a basic introspection facility: Portable Interceptors.
Interceptors can be used by third parties to spy on requests or modify messages exchanged between the middleware and the components. They are often used to enrich Corba with new features, ranging from synchronization to caching [3], but they induce performance penalties. They remain a handy solution as they require little or no modification of the original application code. We use them at request and message level (i) for interception purposes and (ii) as glue with the other parts of the system, which are mainly JAVA-centric.
The following sections focus on the representation of management activities and on the messages filtering system.
4. Major components of the system
The KMS is implemented as a daemon process to permit sub-activity exchanges between KMSs (currently over sockets). It inspects and registers the entities it is in charge of and interacts with AspectWerkz [2]. It is worth noting that the KMS provides JAVA classes and interfaces that management entities must implement or inherit from; this is how the KMS normally interacts with or controls a management entity.
4.1. Activities
The activity concept was first introduced in the Computer Supported Co-operative Work framework. We will continue to use this term although its meaning has been deeply altered.
The main parts of an activity are:
- **Roles**: We distinguish external roles corresponding to resources that other third parties may provide to the activity from internal roles corresponding to sub-activities of the current activity. Both of them are described by means of references towards interfaces.
- **sub-activities**: the greatest difference from other tools is their expected autonomy, as the activity may later ask the KMS to send them, as a whole, over the network and then deploy them.
- **constraints and preferences**: They are activated by the requests the activity receives. They can decide to allow, modify, or reject a request. The loading of a particular character font as a new application conforming to an editor interface is one such example.
- **internal tools**: a set of functions embedded in the activity for its own needs. These tools, usually inactive, can be triggered at any time by the arrival of a new element (e.g., via a preference) or for the activity's own needs. Particular tools are the incoming and outgoing filters acting upon messages received or emitted by the managed component.
- **monitoring expressions**: expressions the activity registers with the low-level management layer for direct code-monitoring tasks.
- the main body of the activity, composed of the methods the activity responds to and the set of its private variables. The security policy to apply to each received message and the termination of the activity are well-known methods that every activity must implement or delegate.
- **set of attributes** from which the activity can be designated. E.g., an edition activity can specify an octet-string attribute file which is the name of the file it processes.
An activity is encapsulated in a .jar file. It is the activity's responsibility to report to the management system the interface or set of services it can respond to, whereas the KMS will verify that the different parts of the activity conform to the expected JAVA interfaces.
Interceptors are particular activities as they are special parts of the KMS with high privileges.
4.2. An example of activity
To depict how an activity is made up, let us consider an application registered as «Service» and providing the «sub» and «add» operations. This application can be managed by a simple «PositiveAccount» activity (i) to count the number of operation requests received and (ii) to force the «sub» operation to return a positive or nil result.
```
PositiveAccount.jar activity ::= [
class Count {
    static int count = 0;
    public static int count() { return count; }
}
class Inc_count implements InFilter {
    public Object exec(JoinPoint joinPoint) {
        Count.count++;
        return joinPoint.proceedMod();
    }
}
class Positive implements OutFilter {
    public Object exec(JoinPoint joinPoint) {
        Object ret = joinPoint.proceedMod();
        if (((Integer) ret).intValue() < 0)
            ret = new Integer(0);
        return ret;
    }
}
class UneContrainte implements Constraints {
    public Object exec(Activity a) {
        if (a.haveCompatibleInterface("edit"))
            ...
    }
}
class PositiveAccount implements Activity {
    public int expressionReached(int e) {...}
    public int init() {
        Inc_count ic = new Inc_count();
        register_inFilter("Service", "add", ic);
        register_inFilter("Service", "sub", ic);
        Positive p = new Positive();
        register_outFilter("Service", "sub", p, MOD_PRIV);
        register_Constr(new UneContrainte(), ANY_IN);
        ...
        register(this, "Count");
        return 1;
    }
}
]
```
First the activity registers its filters, which have to operate around the «Service» managed application. Inc_count will count the number of «add» or «sub» requests the application receives, while a Positive instance will check and possibly modify the value returned by «sub» so that it is positive or nil. Finally the management activity registers itself as a «Count» activity. It can in turn be suspended or removed (and even itself be partially managed), and its filters moved or stopped by the KMS.
4.3. Contexts
When an activity A is deployed by the KMS, the KMS first ensures by introspection that the different parts of the activity conform to the expected ones: the outFilter class inherits from the KMS_outFilter class, the Activity from KMS_Activity, etc. Lastly, the KMS starts the init() method in a dedicated thread.
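This conformance check can be sketched with the same introspection mechanism; the interface names KmsActivity and KmsOutFilter below are illustrative stand-ins for the paper's KMS_Activity and KMS_outFilter, not the actual API:

```java
// Illustrative stand-ins for the KMS-provided interfaces; the real names
// in the paper are KMS_Activity and KMS_outFilter.
interface KmsActivity { int init(); }
interface KmsOutFilter { Object exec(Object joinPoint); }

class ConformanceChecker {
    // Before deploying an activity, the KMS can verify by introspection
    // that each registered part implements the expected interface.
    static boolean conforms(Class<?> part, Class<?> expected) {
        return expected.isAssignableFrom(part);
    }
}

// A filter that passes the check.
class GoodFilter implements KmsOutFilter {
    public Object exec(Object joinPoint) { return joinPoint; }
}
```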
The activity then creates its own objects and asks the KMS to register objects which interact with other parts of the system. Such objects are (i) filters that the KMS will connect to managed objects of the managed application and (ii) constraints & preferences which may help the activity to customize the environment.
Finally, the activity can use KMS services (such as communication services to send its sub-activities over the network towards another KMS) or require extra services provided by other activities.
The major difference between filters and constraints & preferences is the kind of tasks they are concerned with. Constraints and Preferences (C&P) are used by the activity to express its desires and restrictions. Once they are registered by the KMS on behalf of the activity, the KMS ensures that before providing a service (or an external activity) to A, it first checks it against the constraints and then against the preferences registered by activity A. It is up to the KMS to make as many retries as necessary before it sends the resulting service to A. This C&P mechanism gives the management system some capability to adapt environment resources to activities' wishes.
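The check-then-retry behavior can be sketched as follows; Constraint, Preference and the selection loop are hypothetical simplifications of the KMS mechanism, not its actual API:

```java
import java.util.List;

// Hypothetical simplification: a constraint vetoes a candidate service,
// a preference may adapt it. Neither name comes from the actual KMS API.
interface Constraint { boolean allows(String service); }
interface Preference { String adapt(String service); }

class CpChecker {
    // The KMS tries each candidate service in turn: constraints are
    // checked first, then preferences adapt the surviving candidate.
    static String select(List<String> candidates,
                         List<Constraint> cs, List<Preference> ps) {
        for (String s : candidates) {
            boolean ok = true;
            for (Constraint c : cs) {
                if (!c.allows(s)) { ok = false; break; }
            }
            if (!ok) continue;       // retry with the next candidate
            String adapted = s;
            for (Preference p : ps) adapted = p.adapt(adapted);
            return adapted;
        }
        return null;                 // no candidate satisfied the C&P
    }
}
```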
The set of tools, constraints, preferences, services provided to others represent the environment (named context) created by the deployed activity.
In turn, an activity must be deployed in an already existing context.
Two abstract contexts are introduced:
- `site-context` corresponds to the operating-system abstract activity. It is the main owner of the local resources (hardware, communication links, etc.) and it has the highest privileges. This context can offer services to its sub-contexts or act as a delegate for them. For example, a sub-context can delegate its security to the site-context security policies.
- `user-context` corresponds to a user environment in which the activities that the user later executes are deployed by default.
Deployment of management activities creates contexts that are then organized as a tree. Each context has a father-context and may have child-contexts. Child-contexts can be either contexts tied to sub-activities or contexts created by another activity explicitly deployed in the current context.
Constraints and Preferences (C&P) deal with the messages (or actions) received by the context, such as requests for the creation or the insertion of new contexts.
In particular, before being deployed in a context C, an activity has to conform to the constraints imposed by every context surrounding C, starting from the constraints of the site-context down to those of C.
This mechanism gives the management system the capability to adapt activities to the environment's wishes expressed by means of C&P.
On the other hand, Filters deal with sent or received messages (methods calls and return values) by the managed application. This is obtained transparently (from the managed object point of view) by using Aspect oriented programming concepts and tools.
5. The management of the methods & the filtering
5.1. Aspect oriented programming
This paradigm, mainly due to [12], aims to capture some cross-cutting concerns that are not properly handled by the object-oriented model.
It is worth noting that aspect-oriented programming (AOP) does not aim to replace the OO model, but rather to improve it and ease the software development work.
In practice, AOP complements OO programming by allowing the static OO model to be dynamically modified and improved with the inclusion of new code required to fulfill new requirements.
AOP has introduced several concepts for the addition or modification of existing code, mostly:
- **Join points.** These are well-defined points in the flow of a program. Method calls or returns, exception-handler entry points, and even field set or get operations are examples of join points.
- **Pointcuts.** They are mainly used to identify join points across different classes.
- **Advices.** These define both the code of the aspect and, combined with join points, when it has to be executed. Usual advices are *before* advices, where the aspect's code is executed before the join point, and *after* advices, where it is executed after the join point.
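The before/after advice concept can be illustrated without any AOP framework, e.g. with a JDK dynamic proxy (this is only an analogy, not how AspectWerkz weaves aspects):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Analogy only: before/after advice simulated with a JDK dynamic proxy.
interface Service { int add(int a, int b); }

class Advised {
    static final StringBuilder log = new StringBuilder();

    static Service wrap(Service target) {
        InvocationHandler h = (proxy, method, args) -> {
            log.append("before:").append(method.getName()).append(' '); // before advice
            Object ret = method.invoke(target, args);                   // the join point
            log.append("after:").append(method.getName()).append(' ');  // after advice
            return ret;
        };
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[] { Service.class }, h);
    }
}
```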
AOP frameworks are now available [1][2]. We prefer AspectWerkz (AW) [2] as it provides a so-called *online mode* to operate upon already compiled pieces of code (.class) and, most interestingly, has a useful API providing everything needed to dynamically manipulate aspects. Finally, AW (from WebLogic/BEA) intends in the mid-term to fully interact with JBoss, and a bringing together of AW and AspectJ is on the way.
We note that classes are AOP's main concern, individual objects being at first glance out of scope. It is to bypass this limitation, and to be able to select and operate upon class instances, that naming attributes have been introduced as part of our activities.
### 5.2. Layout of the filters
When launched under AW using:
```
aspectwerkz -Dkms.xml -cp kms app.jar
```
any method `m()` of `app.jar` will be executed after the *before* aspects known by `kms` have been executed, and the result of `m()` will be delivered after the *after* aspects known by `kms` have been executed.
Filters registered by the KMS on behalf of an activity are inserted into the in-filters or the out-filters chain according to the activity's request and to the interface they inherit from. From the managed object's point of view, each filter then appears as a *before aspect* if placed in the in-filters chain or as an *after aspect* if placed in the out-filters chain. Many filters can simultaneously be present in a chain, and these filters may have been required by one or several different activities. Filters placed by the KMS have the highest priority, followed by activities' filters, then sub-activities' filters, then sub-sub-activities' filters, and so on.
Connected to AW, the KMS is informed of the method calls and returns of each object. For a method call, depending on the caller and on the call parameters, the KMS selects from the in-filters the list of filters to apply. These filters are then sequentially executed using the AW API (mainly through the method `Object joinPoint.proceedMod()`, a modified version of the original `proceed()`). If desired, parameters can be altered using the AW API. Filters to select and apply to a method's return value follow the same approach.
To avoid incoherence in these chains, whether one or several different activities may manage the same object (e.g., a logging activity initiated by a user A and a synchronization activity required by a second user B) depends on the application and on the KMS's choices. Moreover, only filters registered with enhanced privileges, or filters registered by the KMS for its own needs, can modify or stop a message.
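The chain mechanics described above can be sketched as follows; this is a simplification in which plain unary operators stand in for real filters, which operate on AW JoinPoints:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Simplified filter chains: arguments flow through the in-filters in
// priority order, the target method runs, then the result flows through
// the out-filters. Real KMS filters operate on AW JoinPoints instead.
class FilterChain {
    private final List<UnaryOperator<Object>> inFilters = new ArrayList<>();
    private final List<UnaryOperator<Object>> outFilters = new ArrayList<>();

    void addIn(UnaryOperator<Object> f)  { inFilters.add(f); }
    void addOut(UnaryOperator<Object> f) { outFilters.add(f); }

    Object invoke(Object arg, UnaryOperator<Object> target) {
        for (UnaryOperator<Object> f : inFilters)  arg = f.apply(arg);
        Object ret = target.apply(arg);
        for (UnaryOperator<Object> f : outFilters) ret = f.apply(ret);
        return ret;
    }
}
```

With an out-filter clamping negative results to zero, a call whose target returns a negative value would be corrected to 0, mirroring the Positive filter of the earlier example.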
**Figure 2. Filters and messages paths.**
The left part of Fig. 2 depicts a managed object wrapped in a JVM/AW execution environment. A dashed line represents the execution path taken by an `m()` request. First the KMS in-filters are applied, then the activity's in-filters, before `m()` is effectively called. The returned result has to pass through the activity's out-filters and finally the KMS out-filters before being sent back to the requestor.
### 6. Future work and conclusion
The purpose of this work is to investigate the problem of management for distributed applications and to provide means to operate in a distributed and dynamic manner. We suggest using aspect concepts as a possible solution to allow an evolutive and decentralized high-level management scheme without modifying either the underlying middleware or the managed objects.
We plan to define virtual MIBs for the KMS, which will allow usual SNMP tools to interact with it and provide new facilities to manage software or even hardware components, but some problems remain as our interfaces directly compete with the SNMP OBJECT-TYPE macro.
7. References
MINERVA: A PORTABLE MACHINE LEARNING MICROSERVICE FRAMEWORK FOR TRADITIONAL ENTERPRISE SaaS APPLICATIONS
Venkata Duvvuri
Oracle Corp & Department of Technology Leadership and Innovation, Purdue University, IL, USA
ABSTRACT
In traditional SaaS enterprise applications, microservices are an essential ingredient for deploying machine learning (ML) models successfully. In general, microservices result in efficiencies in software service design, development, and delivery. As they become ubiquitous in the redesign of monolithic software, with the addition of machine learning, traditional applications are also becoming increasingly intelligent. Here, we propose a portable ML microservice framework, Minerva (microservices container for applied ML), as an efficient way to modularize and deploy intelligent microservices in traditional “legacy” SaaS application suites, especially in the enterprise domain. We identify and discuss the needs, challenges and architecture required to incorporate ML microservices in such applications. Minerva's design for optimal integration with legacy applications, using a microservices architecture leveraging lightweight infrastructure, accelerates deploying ML models in such applications.
KEYWORDS
Microservices, Enterprise SaaS applications, Machine Learning, Oracle Cloud Infrastructure, Docker
1. INTRODUCTION
Enterprise SaaS applications are typically delivered as a service [1] to the client who need not worry about network, servers, operating systems, storage and data security. SaaS applications are broadly classified as general use and enterprise. The former involves general use software, such as Google Apps, and the latter specific enterprise applications, such as Oracle CX. Microservices are an effective way to decompose building large and complex systems into smaller sub-systems with these sub-systems interoperating via light weight (e.g., REST - representational state transfer) protocols. Machine learning sub-systems are increasingly important for SaaS applications like Oracle CX, Oracle CRM etc. due to the need to integrate intelligent decision-making. Many SaaS applications were built a decade or two ago, on an older technology stack on top of legacy data centers. Typically, this stack involves running monolithic applications on a multi-tenant SaaS infrastructure [2] with a huge database (like Oracle RDBMS) at its core. Pooyan et al. [3] identified several general benefits of microservices like faster delivery, improved scalability, and greater autonomy, turning an idea on some product manager’s or other project member’s whiteboard into a feature running in production as quickly as possible. Typically, microservices are packaged and deployed in the cloud using lightweight container technologies [4], following industry proven DevOps practices [5] and supported by automated software delivery machinery.
The paper is organized as follows: Section 2 introduces the machine learning needs of enterprise SaaS applications, Section 3 discusses related work, Section 4 elaborates how Minerva addresses these challenges, Section 5 highlights the system architecture, Sections 6 and 7 compare and differentiate the Minerva platform with other platforms, Section 8 presents the trade-offs made while designing Minerva, Sections 9 and 10 present Minerva's implementation and rollout at Oracle, and finally Sections 11 and 12 conclude the paper with future research directions. Table 1 reflects the nomenclature used in this paper.
<table>
<thead>
<tr>
<th>Term</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>CRM</td>
<td>Customer Relationship Management</td>
</tr>
<tr>
<td>CX</td>
<td>Customer Experience</td>
</tr>
<tr>
<td>REST</td>
<td>Representational State Transfer Protocol</td>
</tr>
<tr>
<td>SaaS</td>
<td>Software as a Service</td>
</tr>
<tr>
<td>RDBMS</td>
<td>Relational Database Management System</td>
</tr>
<tr>
<td>DevOps</td>
<td>Development Operations</td>
</tr>
<tr>
<td>CPU</td>
<td>Central Processing Unit</td>
</tr>
<tr>
<td>RAM</td>
<td>Random Access Memory</td>
</tr>
<tr>
<td>ML</td>
<td>Machine Learning</td>
</tr>
<tr>
<td>B2B</td>
<td>Business to Business</td>
</tr>
</tbody>
</table>
2. **SaaS Application Machine Learning Needs**
Machine learning models are designed by data scientists on sample datasets extracted for testing and experimentation. Typically, model development involves a fair understanding of the underlying optimization and statistical algorithms. A variety of programming languages, like R, Python etc., can be used along with various open source libraries for implementation. The typical tech-stack for development of these models is different from the host SaaS application stack.
The following additional requirements are not obvious, but they are crucial to implement machine learning in SaaS applications. We highlight the technical requirements arising out of them. Sections 6 and 7 showcase how Minerva achieves these technical requirements and also suits the business needs, while other platforms fall short.
2.1. **Technical Requirements**
- The need for *reusability* of the ML sub-system to serve numerous and diverse models to the host apps.
- The need for *decentralized* data governance and pre-processing to help with feature engineering required for machine learning models.
- The need for *scalability*, both horizontally (more machines) and vertically (additional CPU, RAM, etc.).
- The need for real time or near real time (*online*) performance in predictions to serve intelligence in host applications.
- The need to accommodate long batch/*offline* training.
- The need to secure the sensitive data exchange between the feature-processing sub-system and the machine learning processing sub-system.
- The need to build the ML microservice with best-of-breed polyglot modelling libraries, independent of the tech stack of the legacy system.
2.2. Business Requirements
Executives from SaaS product companies have established an aggressive 3-to-5-year horizon for cloud migration plans [6] for their customers. Some have an even longer sunset plan to stay competitive. Thus, companies like Oracle (and others) need an interim solution that fulfils the intelligence needs of traditional SaaS applications, especially in the B2B enterprise domain. Additionally, we capture the following business requirements that necessitate an innovative solution, different from typical cloud machine learning solutions [7]:
• The need to serve intelligence in numerous traditional SaaS applications (e.g. SAP, Oracle etc.) in a lightweight fashion without much impact to the underlying systems or infrastructure. This market is roughly $20B revenue annually.
• The need to have ML “algorithm/model run near the data” as opposed to “move the data to algorithm/model”. This arises due to the legal rules of various states/countries making it difficult for applications to migrate data outside their datacenters.
• The need to have an “interim” machine learning solution that is compatible with “legacy” SaaS applications, at least for some time to come, due to delays and inertia in adopting recent cloud computing techniques [7].
In a traditional enterprise SaaS Application and/or suite the following requirements are of premium importance: independence of the microservices programming stack, reusability, lightweight ML platform resource needs, algorithm running in proximity to data and compatibility with existing architecture without much impact.
3. RELATED WORK
Machine learning microservices have evolved from leveraging virtualization environments [8] to containerized approaches [9]. Ignacio et al. [8] adopted a Bring Your Own Learner approach in the FCUBE project, where various machine learning predictive algorithms can be run in a virtual machine environment [10]. They employ a plug-and-play approach, which is a basic tenet of our approach as well. However, they focus on ensemble models for predictive classification problems. Additionally, this approach is an offline approach and cannot be employed by traditional SaaS applications. Our solution generalizes to any kind of machine learning model. Fundamentally, our Bring Your Own Model or Algorithm (BYOMOA) approach allows for immense flexibility in the choice of model libraries employed. Secondly, we also treat the model-building exercise as a black box where the modeler obeys an established abstract interface of predict and train exposed by our machine learning microservice framework. Pasquale et al. [9] extended the FCUBE approach into a cCUBE microservices framework by containerizing services and adding orchestrators to help manage the compute units. While we adopt such containerization and task management as well, we offload management of the machine learning and data jobs either to out-of-box orchestrators or to the host application when it allows such orchestration. Unlike cCUBE, we relax the limitations by designing for any supervised, unsupervised or deep learning algorithm. Thus, we allow almost unlimited ML capabilities in traditional SaaS applications.
Recently, industry has recognized these issues by developing various ML platforms [11], [12], [14], [15] for ML lifecycle management. We borrow and extend their capabilities; in our case this became essential due to the value we need to deliver to the SaaS application suite. Similar to Databricks' MLflow [11], we allow polyglot library capabilities by developing generic REST APIs that can use any ML library or algorithm. However, we offer a different deployment/microservices model by parceling both the model (M) and the algorithm (A) into simpler project modules/files within the container image, instead of packaging models in complicated deployment repositories as in [11]. This allows for portability and enables a wider “lift and shift” strategy, which is the core value we deliver to SaaS application suites. Other recent ML platforms like Facebook's FBLearner [12], Uber's Michelangelo [13] and Google's TFX [14] have tried to solve the problem, but within their own ecosystems. We concur with Zaharia et al. [11] that this limits ML developers to specific algorithms and libraries, decreasing their ability to experiment and preventing them from using new libraries or models.
Finally, the models have been traditionally integrated into products using hardcoded or embedded stacks. In the hardcoded approach the models are trained, and the model is captured as a mathematical or statistical function capable of being written directly into the host application during prediction process. In the embedded approach, the model is recoded into host application programming language. These two approaches suffer from the following deficiencies:
- Slower development process due to sequential or near waterfall development.
- Additional translation needed into the host application tech stack and/or programming language.
- Limited number of modelling libraries available in the host programming stack.
- Errors cropping up in the model translation efforts.
- No reuse of the model or code by related products in the traditional SaaS application.
4. ADDRESSING THE CHALLENGE
In our approach we devise a machine learning (ML) microservice sub-system “framework” (Minerva) within the ecosystem of a suite of traditional “legacy” SaaS applications. To address the needs of connected intelligence in such applications, we establish well-defined REST service contracts with the various sub-systems. Due to the varied nature of interactions among the sub-systems, we evolve the REST contracts into a “consumer contracts REST pattern” as suggested by Ian [15]. The machine learning model and/or its code is a black box that can be plugged into Minerva by adopting the well-defined predict and train abstractions provided by the framework.

To support monitoring and operations we allow for tiered logging interfaces whose output is ingested into the overall host application ecosystem by mounting a shared file system. Thus, the same support operations that monitor the rest of the legacy ecosystem also monitor the machine learning intelligence. A continuous deployment (CD) framework facilitates agile development of Minerva in parallel with host application development. This eliminates a sequential development process and accelerates putting features into production. The cadence with data pre-processing jobs is orchestrated outside Minerva, and these data jobs can themselves leverage database libraries already developed. Additionally, the independence of the microservices allows both the model and the framework to be developed in programming languages and libraries other than the “legacy” application’s, adding flexibility to choose custom or advanced libraries for modelling. Organizationally, the microservice architecture allows a separate team to be responsible for this ML intelligence. This “engineering less” approach stems from the conceptual adoption of ML microservices into the legacy development and deployment process.

Another important contribution of our work is to allow for reuse and/or portability of the same machine learning model and/or algorithm code (A) by other related products (RP) in the parent suite organization. These products can now contribute (“lift and shift strategy”) to mutually beneficial ML algorithms or models, enabled by Minerva’s standard interfaces. These interfaces are served by a container that holds the model, its code (algorithm) and the framework. The ease of adoption of Minerva in each RP is achieved by separating RP and framework configurations during deployment. A given model can evolve via schema versioning in service payload contracts, as evinced by Ian [15]. Another important contribution of the framework is the ability of host applications to train and predict both online and offline. Finally, this containerized approach allows both vertical (CPU, RAM etc. of a node) and horizontal (spawning multiple nodes) scaling, albeit manually.
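The predict and train abstractions described above can be sketched as a minimal Python interface; the class and method names here are illustrative assumptions, not Minerva's actual API:

```python
from abc import ABC, abstractmethod

class MinervaApp(ABC):
    """Hypothetical sketch of the black-box contract a model/algorithm
    adopts to plug into the framework (names are illustrative)."""

    @abstractmethod
    def train(self, payload: dict) -> dict:
        """Build and persist a model from the inputs named in the payload."""

    @abstractmethod
    def predict(self, payload: dict) -> dict:
        """Score inputs with the stored model and return results."""

class EchoApp(MinervaApp):
    # Trivial stand-in model used only to exercise the contract.
    def train(self, payload):
        return {"status": "trained", "model": payload.get("modelName")}

    def predict(self, payload):
        return {"status": "predicted", "inputs": payload.get("data")}
```

Any model honoring this contract becomes swappable behind the framework's REST endpoints without changes to the host application.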
5. **SYSTEM ARCHITECTURE**
The system architecture and integrations are shown in Figure 1. The legacy application is built as several sub-systems (UI, DB, CORE, PLATFORM, etc.) in a datacenter. The ML microservice is a Docker container (Minerva) implemented as per the proposal highlighted in Section 4. The Minerva container has three layers: core, abstraction and app. The Minerva core layer handles interaction with other systems, process management within the container, concurrency controls, security mechanisms and configurations. The Minerva abstraction stub wraps the algorithm/model and handles dynamic loading of projects (apps), versioning, exceptions and call-backs. The Minerva app layer implements the abstractions and codes the model, algorithm and logic using best-of-breed ML libraries.
Minerva interacts with the legacy sub-systems such as UI, CORE, etc. for predictions. A training request can be orchestrated once the data is made ready for machine learning by the data processing unit. An orchestrator can be part of the legacy application, but it can also be pulled outside the application if needed. The data needed for ML is extracted and pre-processed by the data processing unit. The legacy application has a shared storage space that the ML microservice can leverage for saving and logging. The ML microservice is self-contained with its own metadata.
6. PLATFORM COMPARISON
The Minerva platform is lightweight and primarily focuses on model/algorithm lift-and-shift enablement using a diverse set of ML libraries (TensorFlow, PyTorch, Keras, scikit-learn, etc.). In contrast, Kubeflow serves fewer ML libraries (notably TensorFlow and PyTorch) and thus is not completely polyglot with respect to the choice of open-source ML libraries. MLflow, though polyglot and similar, has other design limitations.
Table 2. Comparison of Minerva with various platforms
<table>
<thead>
<tr>
<th>Feature</th>
<th>Minerva</th>
<th>MLFlow</th>
<th>Kubeflow</th>
</tr>
</thead>
<tbody>
<tr>
<td>Job Tracking</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Resources/ML Monitoring</td>
<td></td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>Standard docker packaging</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Dynamic endpoints</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>API Standardization</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Easy lift and shift of Algorithm</td>
<td>X</td>
<td>X</td>
<td></td>
</tr>
<tr>
<td>Easy lift and shift of model</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Microservices architecture (FaaS - Function as a Service)</td>
<td>X</td>
<td>X</td>
<td></td>
</tr>
<tr>
<td>Real Time Serving (REST)</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Batch processing (train)</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Library Polyglot</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>ML Pipelines</td>
<td></td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>Model Visualization</td>
<td></td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>Experimentation support</td>
<td></td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Open Framework</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Model versioning</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Concurrency control</td>
<td></td>
<td></td>
<td>X</td>
</tr>
</tbody>
</table>
Secondly, Minerva focuses on ease of endpoint configurability, which helps customize the multitude of consumer-driven REST APIs that interface with the various legacy application subsystems. These endpoint APIs are themselves standardized (for training) or templatized (for predictions), with version as a key attribute. Because of the polyglot capabilities, the different models can be built with the latest ML libraries in evolving versions, taking advantage of recent research. Thus, Minerva lets the legacy application invoke multiple model revisions and swap them as necessary. MLflow has strengths in ML lifecycle management but offers little help integrating with SaaS applications, especially with respect to API configurability, versioning or standardization. Although MLflow has reusable projects, Minerva achieves algorithm lift and shift with simple modifications to a single (or a few) pluggable Python project (app) files/modules that are invoked dynamically by the framework. Thus, Minerva’s approach to algorithm reuse and customization differs from MLflow’s heavy-duty approach of external algorithm development, which involves separate, newer repositories conforming to complex templates. Kubeflow has a nice set of complementary off-the-shelf capabilities, such as pipelines and scalability, which, when added to the Minerva platform, will enhance the overall robustness and maintenance of ML artifacts in SaaS applications. This integration with Kubeflow is illustrated in Section 11 and earmarked for future studies of Minerva on the OCI (Oracle Cloud Infrastructure) [16] platform. Finally, while scalability is handled in Minerva by using more and bigger containers, albeit replicated manually, MLflow’s Spark [17]-based techniques for compute and scalability do not fit legacy applications without adding elaborate infrastructure. We find that scalability is not a severe requirement in legacy applications, considering the typical datasets in enterprise software.
7. **Value Delivered to Legacy Applications**
Table 3 shows the pros and cons of Minerva adoption in legacy applications when compared to MLflow and Kubeflow. Minerva integrates optimally with legacy subsystems, allows for democratization of algorithm development in a suite of products, leverages lightweight infrastructure and is compatible with legacy architectures. We discuss these tenets below with comparisons.
7.1. **Democratization**
Minerva is designed for optimal portability of algorithms in a suite of applications by leveraging the microservices architecture, standardizing APIs and enabling *lift and shift* mechanisms. While MLflow comes close, it is primarily designed for ML lifecycle management and does not address techniques to simplify the *lift and shift* needs of algorithms/models across applications in a suite. For this, MLflow relies on building and importing a new code repository, while Minerva achieves it by simply modifying the pluggable algorithm project (app) file(s) in a Docker image. The ingestion of new algorithm code is thus lightweight and seamless, because the framework loads project modules dynamically, minimizing the time to customize algorithms. In this way, Minerva allows for democratization of model/algorithm development by several teams in the organization; MLflow, lacking such configurability, makes this difficult. Secondly, one can directly import pretrained models into a new application in the suite, and thus need not train a new model per se for that application. In all, Minerva is designed for an easier *lift and shift* strategy, which is a key enabler for the democratization of algorithm development in an application suite.
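The dynamic loading of a pluggable project (app) module can be sketched with Python's standard `importlib`; the file name and `predict` function below are hypothetical, standing in for an actual app file swapped via "lift and shift":

```python
import importlib.util
import pathlib
import tempfile

def load_app(path):
    """Load a pluggable project (app) module from a file path at runtime,
    roughly as a framework might; a sketch, not Minerva's actual loader."""
    spec = importlib.util.spec_from_file_location(pathlib.Path(path).stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

# Drop a minimal app file on disk and load it, simulating the swap of a
# single algorithm module inside a Docker image.
app_src = "def predict(payload):\n    return {'ok': True, 'n': len(payload)}\n"
with tempfile.TemporaryDirectory() as d:
    app_path = pathlib.Path(d) / "clv_app.py"
    app_path.write_text(app_src)
    app = load_app(app_path)
    result = app.predict({"a": 1, "b": 2})
```

Because the module is resolved by path at call time, replacing the file replaces the algorithm without rebuilding the surrounding framework.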
<table>
<thead>
<tr>
<th>Feature</th>
<th>Minerva</th>
<th>MLFlow</th>
<th>Kubeflow</th>
</tr>
</thead>
<tbody>
<tr>
<td>Decentralized Microservices Architecture</td>
<td>X</td>
<td>X</td>
<td>X</td>
</tr>
<tr>
<td>Easier Integrations</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Lightweight Infrastructure</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Scalable Training</td>
<td>X</td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>Legacy Compatibility</td>
<td>X</td>
<td></td>
<td>X</td>
</tr>
<tr>
<td>ML Democratization</td>
<td>X</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
7.2. **Integration**
Integration is another differentiator in Minerva. With its standardized, self-documenting Swagger endpoints [19], Minerva can add, modify or delete endpoints at will. Thus, designing the multitude of consumer-driven payloads [11] for the various legacy application subsystems becomes standard and easy. Additionally, Minerva parses and validates payloads dynamically and supports API versioning and identifiers that can be used to respond to calling subsystems. MLflow does not offer such configurability and hence is harder to adopt or adapt when integrating with the many legacy application subsystems.
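A minimal sketch of such dynamic payload validation with a version check might look as follows; the required-field list and the accepted version set are illustrative assumptions, not Minerva's actual rules:

```python
def validate_payload(payload, required=("accountId", "modelName", "actionType", "version")):
    """Sketch of consumer-driven payload validation with API versioning;
    field names beyond the Figure 2 example are assumptions."""
    missing = [f for f in required if f not in payload]
    if missing:
        raise ValueError("missing fields: %s" % missing)
    if payload["version"] not in {"v1", "v2"}:   # assumed version scheme
        raise ValueError("unsupported API version: %s" % payload["version"])
    return payload

# A payload carrying identifiers the framework can echo back to the
# calling subsystem in its response.
validated = validate_payload(
    {"accountId": 1234, "modelName": "model1", "actionType": "train", "version": "v1"}
)
```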
7.3. Compatibility
Kubeflow [18] can serve some needs of SaaS applications once they are migrated to cloud infrastructure, due to its deep roots in Kubernetes, a virtual cloud operating system. But Kubeflow cannot be readily accessed or ingested by legacy applications, as they would need an upgrade to the newer cloud infrastructure. Minerva not only works within the existing infrastructure with minimal computing requirements, but also does not need to move the data out of the legacy application’s datacenter. It is compatible with legacy architectures and can easily integrate into them due to the microservices architecture.
Finally, the industry has a variety of cloud ML platforms such as Amazon AWS [20], Microsoft Azure [21] and Google Cloud [22]. They have ML microservices enabled via REST endpoints, but legacy applications cannot readily leverage them because of the unwillingness to export data out and transfer it into the respective cloud ecosystems. Additionally, the legacy applications may not migrate to these cloud platforms for some time to come, as pointed out in Section 2.
In all, configurability, containerization, lightweight infrastructure and standardization make Minerva quite portable as compared to MLFlow and Kubeflow.
8. Design Trade-Offs
Minerva does not offer elastic scalability as yet; this was traded off given the inherently smaller datasets of enterprise SaaS applications. Instead, Minerva relies on lighter infrastructure and, to alleviate scalability issues, provides concurrency controls via throttling mechanisms for every ML project.
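A per-project throttle of the kind described could be sketched with a counting semaphore; this is illustrative, assuming a simple reject-when-busy policy rather than Minerva's actual implementation:

```python
import threading

class ProjectThrottle:
    """Sketch of a per-ML-project concurrency control: at most
    max_concurrent requests run at once; excess calls are rejected."""
    def __init__(self, max_concurrent=2):
        self._sem = threading.BoundedSemaphore(max_concurrent)

    def run(self, fn, *args):
        if not self._sem.acquire(blocking=False):
            return {"status": "throttled"}   # caller should retry later
        try:
            return fn(*args)
        finally:
            self._sem.release()
```

One such throttle per project keeps a heavy training job in one app from starving prediction calls in another.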
Minerva has little to no support for ML pipelines. It has a rudimentary orchestrator leveraging some built-in support in Oracle CX products. This simpler approach was enough to support the workflows built so far. However, Kubeflow [18] has better support for ML pipelines, which Minerva could leverage (see Section 11) as the supported workflows become increasingly complex.
Minerva has primarily been designed to address deficiencies and delays in deploying ML. Hence, it leverages experimentation done outside the platform. With the addition of Kubeflow’s experimentation platform, Minerva could become more complete.
Finally, model explanation, visualization and performance monitoring are needed features that Minerva lacks. This trade-off was made as a build-vs-buy decision: observing Kubeflow’s capabilities, there was no need to build these. Section 11 highlights that these, along with other features, can be brought into Minerva via mutually symbiotic integrations with one or more such platforms.
9. Case Study
Minerva was successfully adopted for at least four ML projects (apps) to build connected intelligence in the Oracle CX product suite. The first ML project (app) was implemented in one Oracle calendar release (a three-month timeframe); the next three ML projects (apps) were adopted and integrated in the next release (the subsequent three months). Thus, deploying ML intelligence features into production was three times faster than traditional approaches with similar resources.
Consumer-driven payloads form the foundation of the interaction with the sub-systems. We illustrate the case study with a few important ones. Figure 2 illustrates the design and implementation of a generic train payload for batch-training any model. This example specifically trains the Customer Lifetime Value (CLV) model within the CX product suite. Various job-tracking information is passed in the top section of the JSON payload. The framework captures job statuses, which help in monitoring and debugging issues. Also, a polling API exposes the statuses of the ML jobs to the orchestrator. The training happens at a pre-determined frequency per account (a.k.a. customer), triggered by the host application using an orchestrator. The data is made available by the data processing unit in the CLV_MODEL_INPUT table. The CLV algorithm builds and stores the ML model in the ML_MODEL_STORED table in a database. The database ensures model security, versioning and fault tolerance. An asynchronous predict can be triggered similarly by changing the action type to predict. This generates the results in CLV_MODEL_OUTPUT for each account. The UI then displays the results to the end customers.
While the above use case has batch training and batch predictions, Minerva is not limited to these cases. It can do online predictions, as well as hybrid predictions, where acknowledgements are synchronous and final predictions are batched (asynchronous) and reported in a call-back. Figure 3 illustrates an online prediction of a new subject line (text) for consumption by the Ad/Message designer in the CX product. This new endpoint can be created easily through Swagger configs, and processing of its data attribute is handled by the downstream project (app) code/algorithm. The framework parses the remaining attributes to handle API versioning and synchronous replies within a required SLA (service level agreement).
```
{
"accountId": 1234,
"workFlowExecId": 1,
"workFlowTaskId": "?",
"workFlowName": "CLV",
"modelName": "model1",
"actionType": "train",
"modelInputPayload": {
"tableNames": ["CLV_MODEL_INPUT"],
"dbNames": ["DB1"]
},
"modelStoredTableName": {
"tableNames": ["ML_MODEL_STORED"],
"dbNames": ["DB1"]
},
"modelPredictedTableName": {
"tableNames": ["CLV_MODEL_OUTPUT"],
"dbNames": ["DB1"]
},
"serverName": "server",
"dataSource": {
"dbUser": "db_service2",
"dbURL": "user2",
"dbPassword": "pwd2",
"dbName": "pwd2"
}
}
```
Figure 2. Standardized payload to train ML model in Minerva
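As a sketch of how the framework might consume the payload above, the following parses an abbreviated form of the Figure 2 JSON and dispatches on `actionType`; the routing function and status strings are assumptions for illustration:

```python
import json

# The Figure 2 payload, abbreviated to the fields used below.
raw = """{
  "accountId": 1234,
  "workFlowName": "CLV",
  "modelName": "model1",
  "actionType": "train",
  "modelInputPayload": {"tableNames": ["CLV_MODEL_INPUT"], "dbNames": ["DB1"]}
}"""

def route(payload):
    """Pull job-tracking fields for status logging and dispatch on
    actionType; names are illustrative, not Minerva's actual code."""
    job = {k: payload[k] for k in ("accountId", "workFlowName", "modelName")}
    if payload["actionType"] == "train":
        job["tables"] = payload["modelInputPayload"]["tableNames"]
        job["status"] = "TRAIN_SUBMITTED"
    else:
        job["status"] = "PREDICT_SUBMITTED"
    return job

job = route(json.loads(raw))
```

The captured `job` record is what a polling API could expose to the orchestrator for monitoring.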
10. ORGANIZATION PERSPECTIVE
Organizationally, Minerva architecture allows for a separate data science(s) team to be responsible for crafting the algorithm, platform and the model. Not only one team, but several ML teams can contribute to the modelling activity, thus democratizing machine learning development. The infrastructure needed to run Minerva can be owned by separate engineering teams. This allows for data scientists to be responsible primarily for modelling and building ML pipelines. The data scientists can monitor their own pipelines and evaluate their own models. Additionally, the data science team can build a blueprint that serves as a deployment framework for engineering to deploy Minerva in many applications in a suite. This clever separation of responsibilities between the engineering and data scientists leverages their respective strengths. In many cases, international deployment teams can be made responsible for rolling out the blueprint to production further adding cost benefits.
11. FUTURE WORK
SOA (service-oriented architecture) [26] has continuously evolved since its initial days, with adoption growing in cloud eco-systems. Minerva can work seamlessly in a new cloud infrastructure when applications migrate, especially Oracle Cloud Infrastructure (OCI) [23]. Notably, several other features can be enabled alongside the Minerva platform in OCI. Specifically, the OCI data science module has capabilities to scale Minerva using Oracle Machine Learning (OML) [24], which leverages the compute power of the Oracle Autonomous Database [25]. Additionally, more extensions to Minerva are possible due to native capabilities in OCI like Kubernetes [25] that can help build capabilities such as dashboard monitoring, orchestration and account-level debugging. Moreover, Minerva can draw strengths from OCI’s Kubernetes pipeline engine, Kubeflow [19], its elastic machine scaling, its GPU compute power and its native load-balancing capabilities. When the legacy applications migrate to cloud infrastructure (OCI or similar), one can leverage Minerva in an even better form. The first version of Minerva rides out the interim period when legacy applications cannot yet migrate to the cloud infrastructure.
12. CONCLUSION
Although microservices have conceptually existed since the days of SOA (service-oriented architecture), they have recently been adopted by various product organizations to reorganize monolithic SaaS applications. ML intelligence is a recent initiative to make these applications smart. Traditionally, the models built by data scientists have been integrated into products using either offline or embedded methods. We suggest a portable BYOMOA ML framework to allow for modelling flexibility, reusable ML use cases, agile development, tech-stack independence and faster deployments. This configurable and reusable Minerva framework is suited for legacy applications which can neither immediately migrate to cloud infrastructures nor send their data outside their legacy datacenters for consumption by recent cloud ML platforms.
ACKNOWLEDGEMENTS
Thanks to my wife and daughter for their encouragement.
Thanks to Oracle CX Marketing data science team members for their support.
Thanks to Dr. Michael Dyrenfurth - Purdue University, Dr. Nisha Talagala – Pyxeda, Dr. Sanjay Saigal - UC Davis and Dr. Kameshwar Yadavalli - Ostendo for their reviews.
REFERENCES
130 Computer Science & Information Technology (CS & IT)
Venkata Duvvuri
Venkata Duvvuri is a doctoral student at Department of Technology Leadership and Innovation at Purdue University. Additionally, he is an Architect level Consulting Member of Technical Staff - Data Scientist at Oracle corporation in Redwood City, CA, USA. He loves teaching and is an adjunct faculty at Northeastern University. He has held several leadership positions in data science at various companies. He holds a Master’s degree in computer science from University of Massachusetts – Amherst and an MBA from University of California - Davis.
A framework for feeding Linked Data to Complex Event Processing engines
© 2010 The Authors
Version: Accepted Manuscript
A Framework for Feeding Linked Data to Complex Event Processing Engines
Dong Liu, Carlos Pedrinaci, and John Domingue
Knowledge Media Institute, The Open University
Walton Hall, Milton Keynes, MK7 6AA, UK
{d.liu,c.pedrinaci,j.b.domingue}@open.ac.uk
Abstract. A huge volume of Linked Data has been published on the Web, yet is not processable by Complex Event Processing (CEP) or Event Stream Processing (ESP) engines. This paper presents a framework to bridge this gap, under which Linked Data are first translated into events conforming to a lightweight ontology, and then fed to CEP engines. The event processing results will also be published back onto the Web of Data. In this way, CEP engines are connected to the Web of Data, and the ontological reasoning is integrated with event processing. Finally, the implementation method and a case study of the framework are presented.
Keywords: Linked Data, Complex Event Processing, ontology mapping, rule-based reasoning.
1 Introduction
With the development of the Semantic Web, a huge volume of Linked Data has been published on the Web. On the other hand, with the rise of Complex Event Processing (CEP) and Event Stream Processing (ESP) [9], steps have been made towards their integration with semantic technologies, i.e. Semantic CEP [5, 15]. However, Linked Data is not processable by CEP or ESP engines for several reasons: i) the existing engines, such as Drools Fusion\(^1\) and Esper\(^2\), are object-oriented and lack the capability of accessing the Web of Linked Data; ii) although there are a set of tools for generating Java objects from RDF statements\(^3\), it is still difficult for CEP engines to manipulate Linked Data, because of the heterogeneity of schemas and ontologies defined by independent data providers; iii) semantic repositories cannot perform temporal reasoning over RDF triples. Extensions to RDF and SPARQL have been made to address these issues [14, 3], but they are not readily realizable, as they require modifying the semantic repositories or query execution engines.
\(^1\) http://www.jboss.org/drools/drools-fusion.html
\(^2\) http://esper.codehaus.org
\(^3\) http://semanticweb.org/wiki/Tripresso
This paper presents a more practical approach: Linked Data are imported from external sources by being transformed into events conforming to EVO-Core, a lightweight but generic ontology. CEP engines process such events with the support of a Java API generated for manipulating the EVO-Core ontology. The results of event processing will be RDF-ified again and published on the Web of Data. Under this framework, the integration of ontological reasoning and CEP is achieved without any modifications to the RDF and SPARQL standards.
To demonstrate the workflow of the proposed framework, the development of a simple analytical system for user logs in iServe is used as a running example in this paper. iServe [12] is a platform for publishing Semantic Web Service (SWS) descriptions as Linked Data. It is notable that all the data in iServe, including logs of the creation and removal of services, are pure RDF. Therefore, the underlying repository of iServe can be regarded as an external data source having its own schema. In addition, Drools Fusion is used as the example event processing engine, due to its ability to deal with both event streams and clouds.
The rest of this paper is organized as follows: Section 2 reviews the recent work on Semantic CEP. Section 3 summarizes the workflow of the proposed framework at both design time and runtime. Section 4 and Section 5 respectively discuss two critical issues regarding the framework: the event modelling and generation. Section 6 sketches the architecture of the implemented prototype. Section 7 demonstrates the use of the framework. Finally, Section 8 concludes the paper and introduces our future research objectives.
2 Related Work
Earlier research on semantic event modelling is presented in [1], which proposes an approach to reveal the semantics of events by means of classification, aggregation, generalization and association. As a result, a knowledge representation scheme for events is developed to describe complex events, especially the relationships between them. However, the paper only presents theoretical work, and does not touch upon the processing of semantic events.
SQL-like and algebra-based event languages are designed to specify the semantics of events [4, 6]. Nevertheless, they also lack solid support from event processing engines. With advances in Semantic Web technologies, more practical solutions to Semantic CEP have been proposed [14, 3]. In [14], the authors present a formal extension to the RDF data model, called Time-Annotated RDF (TA-RDF). The main idea is to attach a timestamp to each group of RDF triples. The authors of [3] extend the standard SPARQL query language by adding four binary temporal operators: SEQ, EQUALS, OPTIONAL-SEQ, and EQUALS-OPTIONAL, so that Semantic CEP can be done by executing Event Processing SPARQL (EP-SPARQL) queries. Obviously, both of these solutions require modifications to and optimizations of SPARQL query engines and RDF repositories. In contrast, the framework proposed in this paper is based on standard semantic modelling and query languages, i.e. RDFS and SPARQL, as well as a mature and well-used CEP engine.
3 Workflow
At design time, the work to build up an event processor consuming Linked Data includes three stages listed below. Additionally, it also involves some trivial tasks such as configuring the connectors to RDF repositories, setting the options of the CEP engine, etc.
- **Event (Stream) Modelling:** Define domain-specific events and split them into different streams. Write SPARQL `CONSTRUCT` queries, which are executed at runtime to produce event streams.
- **Code Generation:** Automatically generate a Java API for manipulating events and the event processing results, using the code generator provided by RDFReactor. Some auxiliary coding work may also need to be done manually, e.g. translating instances of Java `Calendar` into time in milliseconds.
- **Rule Definition:** Define rules for event processing. If needed, develop helper functions for, for instance, accessing external SPARQL endpoints on the Web of Data, publishing rule-based reasoning results as Linked Data, etc.
In brief, the results of the work done at design time comprise: i) an application-oriented event model, ii) Java libraries for manipulating the event model and processing results, and iii) the specification of rules for event processing. All of these are inputs to the runtime modules, which work following the flow illustrated by Fig. 1. Event streams are formed by executing SPARQL `CONSTRUCT` queries against certain sets of Linked Data on a regular basis, and, when necessary, SPARQL `DESCRIBE` queries may also be executed to make a snapshot of the concerned entities. If an RDF triple store like BigOWLIM\(^4\) offers a notification mechanism, the data transformation will be performed when notified by the triple store. With the runtime support of RDFReactor\(^5\) and RDF2Go\(^6\), the generated event streams are sent to Drools Fusion in the form of Java objects. Drools Fusion performs rule-based reasoning, as well as temporal reasoning, over the received Java objects, then RDF-ifies the results and saves them into the assigned semantic repository by invoking the Java APIs generated at design time.
\(^4\) http://www.ontotext.com/owlim/big
\(^5\) http://semanticweb.org/wiki/RDFReactor
\(^6\) http://semanticweb.org/wiki/RDF2Go
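The runtime flow above can be sketched, under heavy simplification, as a polling loop; the real system executes SPARQL `CONSTRUCT` queries and feeds RDFReactor-generated Java objects to Drools Fusion, so both callables below are stand-ins:

```python
import time

def poll_events(run_construct, on_event, interval_s=0.01, rounds=3):
    """Sketch of the runtime flow: execute a CONSTRUCT-style query on a
    schedule, turn each result binding into an event object, and hand it
    to the CEP engine (both callables are hypothetical stand-ins)."""
    for _ in range(rounds):
        for binding in run_construct():
            on_event({"timestamp": binding["ts"], "concerns": binding["uri"]})
        time.sleep(interval_s)

# Simulate one query result repeated over two polling rounds.
received = []
fake_results = [{"ts": 1, "uri": "http://iserve.example/svc/1"}]
poll_events(lambda: fake_results, received.append, rounds=2)
```

A notification mechanism from the triple store, when available, would replace the fixed polling interval.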
4 Event Model
The proposed framework aims at bringing together Linked Data and rule-based CEP engines, so the following requirements and issues are taken into account when building the conceptual model of events.
- **Usability**: This is made up of two aspects: i) following the Linked Data principles, especially the ability to be interlinked to RDF triples on the Web of Data; ii) ease of being fed into CEP engines such as Drools Fusion.
- **Extendibility**: The event model should be in the form of a generic ontology, rather than a domain specific one, and must be easy to apply to different application areas.
- **Expressiveness**: The model may be able to describe complex events and event streams, as well as the timing, causality, aggregation and hierarchy of events.
- **Simplicity**: Some applications powered by CEP engines, e.g. Business Activity Monitoring (BAM), are real-time or quasi real-time systems. Thus, light-weight semantics of the event model should minimize the impact of ontological reasoning on performance.
As visualized by Figure 2, the EVO-Core event ontology is defined in RDF Schema to fulfill the presented demands. It contains four key concepts:
- **Event**, is a concept on the highest level of abstraction and the common ancestor of AtomicEvent and ComplexEvent. A particular property, `timestamp`, is used to specify the time when the event happens, and `subEventOf` is for modelling the hierarchy of events. The values of the property `concerns` are external links to instances of `owl:Thing`.
- **AtomicEvent**, refers to an instantaneous occurrence of interest.
- **ComplexEvent**, may be built up from a set of other events that hold certain temporal relationships or satisfy constraint conditions on their attributes. The property causedBy captures causality among events.
Fig. 2. EVO-Core: a generic event ontology.
– **EventStream**, is a timely sequence of individual events that come from a data source. The property `inStream` associates events to streams that come into being by repeatedly executing SPARQL `CONSTRUCT` queries. Here, the queries are expressed using SPARQL Inferencing Notation (SPIN⁷), and stored as instances of `sp:Construct` associated with event streams via `generatedBy`. SPIN is essentially a set of vocabularies for writing SPARQL queries in RDF. This way, machines can carry out further reasoning over queries, such as checking their correctness. Similar to `subEventOf`, the property `subStreamOf` models the hierarchy of event streams.
Efforts have already been made to build the conceptual model of events, and relevant ontologies are found in [13, 11]. However, they are neither general purpose, nor suitable for being processed by CEP engines. The Event ontology originates from research in the digital music area, where an event “is regarded as an arbitrary classification of a space/time region, by a cognitive agent” [13].
From an artificial intelligence perspective, it is believed that an event may have five key features: a location, a time, active agents, factors and products. Thus, the Event ontology defines one property for each of the five features. However, at least in some cases, such as the iServe logging analysis, location might not be applicable. Because the range of the time property is an arbitrary temporal entity, i.e. either Instant or Interval as defined in the OWL Time Ontology [10], events cannot be sent directly to CEP engines like Drools Fusion, which can only handle timestamps in milliseconds. As for the other three properties, i.e. agent, factor and product, they can, if necessary, be defined as sub-properties of concerns in EVO-Core. In general, everything that induces or relates to the occurrence of an event can be a value of concerns, even the instances of Instant and Interval mentioned above. Another weakness of the Event ontology is the lack of a facility for expressing complex events and event streams. It provides only one property, called `sub_event`, to capture the hierarchy of events, and nothing for causal relationships among events.
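The timestamp mismatch noted above — arbitrary temporal entities versus engines that only accept millisecond values — is easy to bridge with a small conversion step. The sketch below (Python for illustration only; the framework itself does this in Java, and the helper name is hypothetical) converts an xsd:dateTime literal into epoch milliseconds:

```python
from datetime import datetime, timezone

def xsd_datetime_to_millis(literal):
    """Convert an xsd:dateTime literal (e.g. the value bound by
    time:inXSDDateTime) into epoch milliseconds, the only time format an
    engine such as Drools Fusion consumes directly. Hypothetical helper."""
    # fromisoformat() accepts fractional seconds; normalise a trailing 'Z'
    # to an explicit UTC offset for portability across Python versions.
    dt = datetime.fromisoformat(literal.replace("Z", "+00:00"))
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return round(dt.timestamp() * 1000)

millis = xsd_datetime_to_millis("2010-04-15T20:56:38.377Z")
```

The rounding step matters: multiplying a fractional POSIX timestamp by 1000 can land a hair below the intended integer, so truncation would occasionally be off by one millisecond.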
EVeNT Ontology (EVO) is another representative event model [11]. It is the outcome of our previous work, and also the cornerstone of EVO-Core. The difference between the two ontologies is that the target application area of EVO is Semantic Business Process Management (SBPM), especially Business Process Analysis (BPA). EVO extends the Core Ontology for Business Process Analysis (COBRA) with several concepts related to the states and transitions of process or activity instances, i.e. seven Process Monitoring Events and twelve Activity Monitoring Events, so as to track the running status of business activities. In short, EVO-Core is a generalized version of EVO, further enhanced by the ability to describe complex events and event streams.
5 Event Generation
As highlighted earlier, Linked Data are published by independent providers, following different schemas that are not designed for being processed by CEP engines.
⁷ [http://spinrdf.org/spin.html](http://spinrdf.org/spin.html)
Ontological mappings [8] between the schema of the Linked Data and the event model are therefore required. This section explains the process of mapping and translating Linked Data into events through an example from the iServe logging system [2]. Listing 1 shows part of the schema definition of iServe user logs.
```turtle
log:LogEntry a rdfs:Class ;
    log:hasDateTime time:Instant .
time:Instant a rdfs:Class .
time:inXSDDateTime rdfs:domain time:Instant ;
    rdfs:range xsd:dateTime .
log:Action a rdfs:Class .
log:ServiceRepositoryLogEntry a log:LogEntry .
log:ItemCreation rdfs:subClassOf log:Action ;
    log:createdItem log:Item .
log:ItemDeleting rdfs:subClassOf log:Action ;
    log:deletedItem log:Item .
```
Listing 1. RDF schema of iServe log entries.
Extensions are made to the EVO-Core ontology to describe iServe user behaviour. Two new concepts, ServiceCreated and ServiceDeleted, are defined as sub-classes of AtomicEvent. Moreover, two sub-properties of concerns, concernsAgent and concernsService, are added to the ontology. A ServiceCreated event happens when a new service is uploaded to iServe, whereas ServiceDeleted occurs when a service is removed by an iServe user. As the names imply, concernsAgent and concernsService respectively keep the user's FOAF ID and the URI of the service. Finally, LoggingSystemError is defined as a ComplexEvent, which can be caused not only by a wrong temporal relationship, i.e. a ServiceDeleted event occurring before the corresponding ServiceCreated event, but also by the absence of a corresponding ServiceCreated event for a ServiceDeleted event.
Formulae (1) and (2) formalize the morphism from the RDF schema of iServe logs to the extended EVO-Core ontology:
```plain
ServiceRepositoryLogEntry(l) ∧ hasAction(l, a) ∧ hasAgent(l, g) ∧ hasDateTime(l, i)
  ∧ inXSDDateTime(i, t) ∧ ItemCreation(a) ∧ createdItem(a, s)
  → ServiceCreated(e) ∧ concernsAgent(e, g) ∧ concernsService(e, s) ∧ timestamp(e, t)   (1)

ServiceRepositoryLogEntry(l) ∧ hasAction(l, a) ∧ hasAgent(l, g) ∧ hasDateTime(l, i)
  ∧ inXSDDateTime(i, t) ∧ ItemDeleting(a) ∧ deletedItem(a, s)
  → ServiceDeleted(e) ∧ concernsAgent(e, g) ∧ concernsService(e, s) ∧ timestamp(e, t)   (2)
```
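Read operationally, the morphism of formulae (1) and (2) is a pattern match over log data that emits a new event. The following Python sketch (dictionaries stand in for RDF triples; all names are illustrative, not the framework's API) mirrors the mapping of formula (1):

```python
# Illustrative mapping of formula (1): an ItemCreation log entry becomes a
# ServiceCreated event. Dicts stand in for RDF resources (hypothetical names).
def map_log_entry(entry):
    if entry.get("type") != "ServiceRepositoryLogEntry":
        return None
    action = entry.get("action", {})
    if action.get("type") != "ItemCreation":
        return None
    return {
        "type": "ServiceCreated",
        "concernsAgent": entry["agent"],            # bound by hasAgent(l, g)
        "concernsService": action["createdItem"],   # bound by createdItem(a, s)
        "timestamp": entry["dateTime"],             # bound via inXSDDateTime(i, t)
    }

log = {
    "type": "ServiceRepositoryLogEntry",
    "agent": "http://revyu.com/people/dong",
    "dateTime": "2010-04-15T20:56:16.707Z",
    "action": {"type": "ItemCreation", "createdItem": "VEHICLE_PRICE_SERVICE"},
}
event = map_log_entry(log)
```

In the framework itself this match-and-construct step is performed declaratively by the SPARQL CONSTRUCT query of Listing 2, not by hand-written code.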
Listing 2 elaborates the SPARQL query written according to formula (1). A similar query follows from the other mapping formula and is omitted due to space limitations.
```sparql
CONSTRUCT {
  _:v rdf:type ec:ServiceCreated ;
      ec:timestamp ?time ;
      ec:concernsAgent ?agent ;
      ec:concernsService ?service ;
      ec:inStream ec:iServeStream .
} WHERE {
  ?entry rdf:type log:ServiceRepositoryLogEntry ;
         log:hasAgent ?agent ;
         log:hasAction ?action ;
         log:hasDateTime ?instant .
  ?action rdf:type log:ItemCreation ;
          log:createdItem ?service .
  ?instant time:inXSDDateTime ?time .
}
```
Listing 2. An example of SPARQL query for event generation.
To enable reasoning over the SPARQL queries used for ontology translation, the query above is converted into SPIN syntax (shown in Listing 3) before being stored in an RDF repository. Both a tool\(^8\) and an online bi-directional converter\(^9\) between SPARQL and SPIN are available.
```turtle
_:b1 sp:varName "action"^^xsd:string .
_:b2 sp:varName "service"^^xsd:string .
_:b3 sp:varName "instant"^^xsd:string .
_:b4 sp:varName "agent"^^xsd:string .
_:b5 sp:varName "time"^^xsd:string .
_:b7 sp:varName "entry"^^xsd:string .

[ a sp:Construct ;
  sp:templates (
    [ sp:object ec:ServiceCreated ; sp:predicate rdf:type ; sp:subject _:b6 ]
    [ sp:object _:b5 ; sp:predicate ec:timestamp ; sp:subject _:b6 ]
    [ sp:object _:b4 ; sp:predicate ec:concernsAgent ; sp:subject _:b6 ]
    [ sp:object _:b2 ; sp:predicate ec:concernsService ; sp:subject _:b6 ]
    [ sp:object ec:iServeStream ; sp:predicate ec:inStream ; sp:subject _:b6 ] ) ;
  sp:where (
    [ sp:object log:ServiceRepositoryLogEntry ; sp:predicate rdf:type ; sp:subject _:b7 ]
    [ sp:object _:b4 ; sp:predicate log:hasAgent ; sp:subject _:b7 ]
    [ sp:object _:b1 ; sp:predicate log:hasAction ; sp:subject _:b7 ]
    [ sp:object log:ItemCreation ; sp:predicate rdf:type ; sp:subject _:b1 ]
    [ sp:object _:b2 ; sp:predicate log:createdItem ; sp:subject _:b1 ]
    [ sp:object _:b3 ; sp:predicate log:hasDateTime ; sp:subject _:b7 ]
    [ sp:object _:b5 ; sp:predicate time:inXSDDateTime ; sp:subject _:b3 ] ) ] .
```
Listing 3. SPARQL query in SPIN syntax.
Listing 4 outlines a log entry in iServe; executing the SPARQL query shown in Listing 2 against it yields the first event in Listing 5.
```turtle
log:logEntry1271364976707 a log:ServiceRepositoryLogEntry ;
    log:hasAction log:action1271364976707 ;
    log:hasAgent <http://revyu.com/people/dong> ;
    log:hasDateTime time:instant1271364976707 .

log:action1271364976707 a log:ItemCreation ;
    log:createdItem service:VEHICLE_PRICE_SERVICE .
```
Listing 4. An iServe log entry.
The other two events in Listing 5 are also generated from the iServe system logs, and serve as the examples of ServiceDeleted and LoggingSystemError.
```turtle
:event101307 a ec:ServiceCreated ;
    ec:concernsAgent <http://revyu.com/people/dong> ;
    ec:concernsService service:VEHICLE_PRICE_SERVICE ;
    ec:inStream ec:iServeStream .

:event107470 a ec:ServiceDeleted ;
    ec:timestamp "2010-04-15T20:56:38.377Z"^^xsd:dateTime ;
    ec:concernsAgent <http://revyu.com/people/dong> ;
    ec:concernsService service:VEHICLE_PRICE_SERVICE ;
    ec:inStream ec:iServeStream .
```
Listing 5. Events generated from the iServe logs.
\(^8\) http://www.topquadrant.com/products/SPIN.html
\(^9\) http://sparqlpedia.org/spinrdfconverter.html
With the help of RDFReactor for code generation and Drools' capability of manipulating Java objects, all that is needed to enable Drools Fusion to handle the ServiceCreated and ServiceDeleted events is to add the following declarations to the DRL file:
```drl
declare ServiceCreated
@role(event)
@timestamp(timestampInMills)
end
declare ServiceDeleted
@role(event)
@timestamp(timestampInMills)
end
```
Here, @role tells the CEP engine the role of the declared type, and @timestamp indicates which attribute supplies the occurrence time of each event.
### 6 Implementation
Fig. 3 depicts the overall architecture of the prototype developed as a proof of concept. BigOWLim serves as the repository for the RDF triples of events. RDF2Go provides a unified interface to various triple (and quad) stores, through which RDFReactor gains access to the repository. The event processor runs on top of the Java APIs generated for both the EVO-Core ontology and the analysis results. It consists of three components, namely the event generator, Drools Fusion and a timer, whose functionalities have been described earlier. Finally, the Linked Data provider, which is also implemented on top of the generated Java API, offers several interfaces for clients, including HTML, Linked Data and a SPARQL endpoint. End users can browse the event processing results with a plain HTML browser or with an RDF browser, both supported seamlessly by the server through content negotiation. Third-party applications can interact with the prototype through the SPARQL endpoint.
### 7 Case Study
This section presents a case study applying the proposed framework to the analysis of iServe logs. First, we answer the question: who are the top ten active users of iServe so far? Here, active users are those who own the most services in iServe. To this end, two rules (shown in Listing 6) are defined for processing ServiceCreated and ServiceDeleted events, respectively. Upon submission of a new service, the event generator puts an instance of ServiceCreated into the iServe Stream. As the reaction to this event, the event processor finds the user identified by the value of concernsAgent, increases
the number of services he/she uploaded by one, and updates the time of his/her last action on iServe. Correspondingly, when \texttt{ServiceDeleted} event happens, the number of uploaded service will decrease by one, and the last action time will update in the same way.
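The bookkeeping performed by these two rules can be simulated in a few lines. The following Python sketch is a hypothetical stand-in for the paper's Java LepHelper (names and data are illustrative, not the actual implementation):

```python
from collections import defaultdict

class ActivityTracker:
    """Hypothetical stand-in for LepHelper: tracks how many services each
    agent currently owns and the time of their last action on iServe."""

    def __init__(self):
        self.uploaded = defaultdict(int)
        self.last_action = {}

    def on_event(self, event):
        agent = event["concernsAgent"]
        if event["type"] == "ServiceCreated":
            self.uploaded[agent] += 1    # effect of rule "Service Created in iServe"
        elif event["type"] == "ServiceDeleted":
            self.uploaded[agent] -= 1    # effect of rule "Service Deleted in iServe"
        self.last_action[agent] = event["timestamp"]

tracker = ActivityTracker()
tracker.on_event({"type": "ServiceCreated", "concernsAgent": "dong", "timestamp": 1})
tracker.on_event({"type": "ServiceCreated", "concernsAgent": "dong", "timestamp": 2})
tracker.on_event({"type": "ServiceDeleted", "concernsAgent": "dong", "timestamp": 3})
```

After these three events the agent owns one service and the last action time is that of the deletion, which is exactly the state the Drools rules persist via LepHelper.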
```drl
rule "Service Created in iServe"
when
    $e : ServiceCreated( ) from entry-point "iServe Stream"
then
    String agent = $e.getAllConcernsAgent().next().toString();
    LepHelper.get().increaseUploadedServiceNumber(agent);
    LepHelper.get().updateLastActionTime(agent, $e.getAllTimestamp().next());
end

rule "Service Deleted in iServe"
when
    $e : ServiceDeleted( ) from entry-point "iServe Stream"
then
    String agent = $e.getAllConcernsAgent().next().toString();
    LepHelper.get().decreaseUploadedServiceNumber(agent);
    LepHelper.get().updateLastActionTime(agent, $e.getAllTimestamp().next());
end

rule "Logging System Error Detecting"
when
    $e : ServiceDeleted( ) from entry-point "iServe Stream"
    and ( $e1 : ServiceCreated( this after $e &&
              concernsService == $e.concernsService ) from entry-point "iServe Stream"
          or not( $e1 : ServiceCreated( concernsService == $e.concernsService )
              from entry-point "iServe Stream" ) )
then
    LepHelper.get().createLoggingSystemErrorEvent($e, $e1);
end
```
Listing 6. Event processing rules definition.
Secondly, in order to guarantee the accuracy of the analysis results and to detect errors in the logging system, we define the third rule of Listing 6. It detects the complex event LoggingSystemError, which occurs when a ServiceDeleted event has no corresponding ServiceCreated event, or when a ServiceDeleted event happens before the ServiceCreated event of the same service. Note that, for ease of understanding, Drools rule attributes, e.g. no-loop, salience, lock-on-active, etc., are omitted from Listing 6. In addition, LepHelper is a Java class wrapping various methods for accessing SPARQL endpoints, invoking the EVO-Core API, and persisting the analysis results as RDF triples.
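Outside of a CEP engine, the error condition that the third rule captures amounts to a simple check per ServiceDeleted event: was the same service created beforehand? A Python sketch (hypothetical names, for illustration only) over a time-ordered event list:

```python
def detect_logging_errors(events):
    """Flag a ServiceDeleted event as a LoggingSystemError when no
    ServiceCreated event for the same service precedes it, covering both
    the missing-creation and out-of-order cases of the Drools rule."""
    created = set()   # services seen as created so far
    errors = []
    for e in sorted(events, key=lambda e: e["timestamp"]):
        if e["type"] == "ServiceCreated":
            created.add(e["concernsService"])
        elif e["type"] == "ServiceDeleted" and e["concernsService"] not in created:
            errors.append(e)
    return errors

stream = [
    {"type": "ServiceDeleted", "concernsService": "svcA", "timestamp": 5},
    {"type": "ServiceCreated", "concernsService": "svcA", "timestamp": 9},  # created after deletion
    {"type": "ServiceCreated", "concernsService": "svcB", "timestamp": 1},
    {"type": "ServiceDeleted", "concernsService": "svcB", "timestamp": 2},  # consistent
]
bad = detect_logging_errors(stream)
```

The Drools version expresses the same condition declaratively with the temporal operator `after` and negation (`not`), evaluated incrementally as events arrive rather than over a sorted batch.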
The schema below is a simple vocabulary for describing the analysis results:
```turtle
:Agent rdf:type rdfs:Class .
:uploadedServiceNumber a rdf:Property ;
    rdfs:domain :Agent ;
    rdfs:range xsd:nonNegativeInteger .
:lastActionTime a rdf:Property ;
    rdfs:domain :Agent ;
    rdfs:range xsd:dateTime .
```
Based on this schema, a SPARQL SELECT query retrieving the top ten users ordered by the number of services they have uploaded to iServe is straightforward:
```sparql
SELECT ?agent ?number ?time WHERE {
  ?agent a ia:Agent ;
         ia:uploadedServiceNumber ?number ;
         ia:lastActionTime ?time .
}
ORDER BY DESC(?number) DESC(?time)
LIMIT 10
```
When the numbers are equal, users are further ordered by the time of their last access to iServe. The query results are displayed in Fig. 4. For privacy reasons, some of the FOAF IDs have been concealed.
**Fig. 4.** The screenshot of query result.
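The ordering semantics of the query — service count descending, ties broken by the most recent last-action time — can be sketched and checked in a few lines. This is an illustrative Python stand-in with hypothetical data, not part of the prototype:

```python
def top_active_users(agents, limit=10):
    """Order like the SELECT query: uploaded-service count descending,
    ties broken by last-action time, also descending."""
    return sorted(agents, key=lambda a: (-a["number"], -a["time"]))[:limit]

# Hypothetical sample data; times are epoch milliseconds.
agents = [
    {"agent": "a", "number": 3, "time": 100},
    {"agent": "b", "number": 5, "time": 50},
    {"agent": "c", "number": 3, "time": 200},
]
ranking = top_active_users(agents)
```

Agent "b" leads on count alone, while "c" beats "a" only on the tie-breaking last-action time, mirroring the two DESC clauses of the query.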
### 8 Conclusions and Future Work
In this paper, we present a practical way in which Linked Data can be fed into CEP engines. EVO-Core, a lightweight but generic ontology, is built to describe
events in RDF, and a SPARQL-based ontological mapping technique is adopted to transform Linked Data into events conforming to EVO-Core. We also introduce the workflow for developing an application equipped with the event processor. The development of a simple analytical system for iServe logs shows that the proposed framework is feasible.
Our future work will focus on the publication of analysis results according to SDMX-RDF [7]. We will also try to run a public registry for Linked Data sources, which can be used as the origin of events, together with the corresponding SPARQL queries for ontological translation.
Acknowledgements This work was funded by the EU project SOA4All (FP7-215219).
References
Csound on the Web
Victor LAZZARINI and Edward COSTELLO and Steven YI and John FITCH
Department of Music
National University of Ireland
Maynooth, Ireland,
{victor.lazzarini@nuim.ie, edwardcostello@gmail.com, stevenyi@gmail.com, jpff@codemist.co.uk }
Abstract
This paper reports on two approaches to provide a general-purpose audio programming support for web applications based on Csound. It reviews the current state of web audio development, and discusses some previous attempts at this. We then introduce a Javascript version of Csound that has been created using the Emscripten compiler, and discuss its features and limitations. In complement to this, we look at a Native Client implementation of Csound, which is a fully-functional version of Csound running in Chrome and Chromium browsers.
Keywords
Music Programming Languages; Web Applications;
1 Introduction
The web browser has become an increasingly viable platform for the creation and distribution of various types of media computing applications [Wyse and Subramanian, 2013]. It is no surprise that audio is an important part of these developments. For a good while now we have been interested in the possibilities of deployment of client-side Csound-based applications, in addition to the already existing server-side capabilities of the system. Such scenarios would be ideal for various uses of Csound. For instance, in Education, we could see the easy deployment of Computer Music training software for all levels, from secondary schools to third-level institutions. For the researcher, web applications can provide an easy means of creating prototypes and demonstrations. Composers and media artists can also benefit from the wide reach of the internet to create portable works of art. In summary, given the right conditions, Csound can provide a solid and robust general-purpose audio development environment for a variety of uses. In this paper, we report on the progress towards supporting these conditions.
2 Audio Technologies for the Web
The current state of audio systems for worldwide web applications is primarily based upon three technologies: Java\(^1\), Adobe Flash\(^2\), and HTML5 Web Audio\(^3\). Of the three, Java is the oldest. Applications using Java are deployed via the web either as Applets\(^4\) or via Java Web Start\(^5\). Java as a platform for web applications has lost popularity since its introduction, primarily due to historically sluggish start-up times as well as concerns over security breaches. Also of concern is that major browser vendors have either completely disabled Applet loading or disabled them by default, and that NPAPI plugin support, with which the Java plugin for browsers is implemented, is planned to be dropped in future browser versions\(^6\). While Java sees strong support on the server-side and desktop, its future as a web-deployed application is tenuous at best and difficult to recommend for future audio system development.
Adobe Flash as a platform has seen large-scale support across platforms and across browsers. Numerous large-scale applications have been developed such as AudioTool\(^7\), Patchwork\(^8\), and Noteflight\(^9\). Flash developers can choose to deploy to the web using the Flash plugin, as well as use Adobe Air\(^10\) to deploy to desktop and mobile devices. While these applications demonstrate what can be developed for the web
\(^{1}\)http://java.oracle.com
\(^{2}\)http://www.adobe.com/products/flashruntime.html
\(^{3}\)http://www.w3.org/TR/webaudio/
\(^{4}\)http://docs.oracle.com/javase/tutorial/deployment/applet/index.html
\(^{5}\)http://docs.oracle.com/javase/tutorial/deployment/webstart/index.html
\(^{6}\)http://blog.chromium.org/2013/09/saying-goodbye-to-our-old-friend-npapi.html
\(^{7}\)http://www.audiotool.com/
\(^{8}\)http://www.patchwork-synth.com
\(^{9}\)http://www.noteflight.com
\(^{10}\)http://www.adobe.com/products/air.html
using Flash, the Flash platform itself has a number of drawbacks. The primary tools for Flash development are closed-source, commercial applications that are unavailable on Linux, though open source Flash compilers and IDEs do exist. There has been a backlash against Flash in browsers, most famously by Steve Jobs and Apple, and the technology stack as a whole has seen limited development with the growing popularity of HTML5. At this time, Flash may be a viable platform for building audio applications, but the uncertain future makes it difficult to recommend.
Finally, HTML5 Web Audio is the most recent of technologies for web audio applications. Examples include the “Recreating the sounds of the BBC Radiophonic Workshop using the Web Audio API” site, Gibberish, and WebPd. Unlike Java or Flash, which are implemented as browser plug-ins, the Web Audio API is a W3C proposed standard that is implemented by the browser itself. Having built-in support for Audio removes the security issues and concerns over the future of plug-ins that affect Java and Flash. However, the Web Audio API has limitations that will be explored further below in the section on Emscripten.
3 Csound-based Web Application Design
Csound is a music synthesis system that has roots in the very earliest history of computer music. Csound use in Desktop and Mobile applications has been discussed previously in Lazzarini et al., 2012b, Yi and Lazzarini, 2012, and Lazzarini et al., 2012a.
Prior to the technologies presented in this paper, Csound-based web applications have employed Csound mostly on the server-side. For example, NetCsound allows sending a CSD file to the server, where it would render the project to disk and email the user a link to the rendered file when complete. Another use of Csound on the server is Oeyvind Brandtsegg’s VLBI Music, where Csound is running on the server and publishes its audio output to an audio stream that end users can listen to. A similar architecture is found in Johannes and Toshihiro, 2013. Since version 6.02, Csound also includes a built-in server, that can be activated through an option on start up. The server is able to receive code directly through UDP connections and compile them on the fly.
Using Csound server-side has both positives and negatives that should be evaluated against a project's requirements. It can be appropriate if the project's design calls for a single audio stream/Csound instance that is shared by all listeners. In this case, users might interact with the audio system over the web, at the expense of network latency. Using multiple realtime Csound instances, as would be the case with one instance per user, would certainly be taxing for a single server and would require careful resource limiting. Multiple non-realtime Csound instances, as in the case of NetCsound, may be scheduled and batch processed with fewer problems than realtime systems, though resource management is still a concern.
An early project to employ client-side audio computation by Csound was described in Casey and Smaragdis, 1996, where a sound and music description system was proposed for the rendering of network-supplied data streams. A possibly more flexible way to use Csound in client-side applications, however, is to use the web browser as a platform. Two attempts at this have been made in the past. The first was the now-defunct ActiveX Csound (also known as AXCsound), which allowed embedding Csound into a webpage as an ActiveX Object. This technology is no longer maintained and was only available for use on Windows with Internet Explorer. A second attempt was made in the Mobile Csound Project Lazzarini et al., 2012b, where a proof-of-concept Csound-based application was developed with Java and deployed using Java Web Start, achieving client-side Csound use via the browser. However, the technology required special permissions to run on the client side and required Java to be installed. Due to those issues and the unsure future of Java over the web,
the solution was not further explored.
The two systems described in this paper are browser-based solutions that run on the client side. Both share the following benefits:
- Csound has a large array of signal processing opcodes made immediately available to web-based projects.
- They are compiled using the same source code as is used for the desktop and mobile version of Csound. They only require recompiling to keep them in sync with the latest Csound features and bug fixes.
- Csound code that runs with these browser solutions can also be used on other platforms. Audio systems developed using Csound code are thus cross-platform across the web, desktop, mobile, and embedded systems (e.g. Raspberry Pi, BeagleBone; discussed in [Batchelor and Wignall, 2013]). Developers can reuse the audio code from their web-based projects elsewhere, and vice versa.
4 Emscripten
Emscripten is a project created by Alon Zakai at the Mozilla Foundation that compiles the assembly language used by the LLVM compiler into Javascript [Zakai, 2011]. When used in combination with LLVM's Clang frontend, Emscripten allows applications written in C/C++ or languages that use C/C++ runtimes to be run directly in web browsers. This eliminates the need for browser plugins and takes full advantage of web standards that are already in common use.
In order to generate Javascript from C/C++ source code, the codebase is first compiled into LLVM assembly language using LLVM's Clang frontend. Emscripten translates the resulting LLVM assembly language into Javascript, specifically an optimised subset of Javascript called asm.js. The asm.js subset is intended as a low-level target language for compilers and allows a number of optimisations which are not possible with standard Javascript. Code semantics which differ between Javascript and LLVM assembly can be emulated when accurate code is required; Emscripten has built-in methods to check for arithmetic overflow, signing issues and rounding errors. If emulation is not required, code can be translated without semantic emulation in order to achieve the best execution performance [Zakai, 2011].
Implementations of the C and C++ runtime libraries have been created for applications compiled with Emscripten. These allow programs written in C/C++ to transparently perform common tasks such as using the file system, allocating memory and printing to the console. Emscripten allows a virtual filesystem to be created using its FS library, which is used by Emscripten’s libc and libcxx for file I/O. Files can be added or removed from the virtual filesystem using Javascript helper functions. It is also possible to directly call C functions from Javascript using Emscripten. These functions must first be named at compile time so they are not optimised out of the resulting compiled Javascript code. The required functions are then wrapped using Emscripten’s cwrap function, and assigned to a Javascript function name. The cwrap function allows many Javascript variables to be used transparently as arguments to C functions, such as passing Javascript strings to functions which require the C languages const char array type.
Although Emscripten can successfully compile a large section of C/C++ code there are still a number of limitations to this approach due to limitations within the Javascript language and runtime. As Javascript doesn’t support threading, Emscripten is unable to compile codebases that make use of threads. Some concurrency is possible using web workers, but they do not share state. It is also not possible to directly implement 64-bit integers in Javascript as all numbers are represented using 64-bit doubles. This results in a risk of rounding errors being introduced to the compiled Javascript when performing arithmetic operations with 64-bit integers [Zakai, 2011].
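The 64-bit integer caveat can be demonstrated directly: IEEE-754 doubles, Javascript's only number type, represent integers exactly only up to 2^53. The same effect is shown here in Python, whose float is also a 64-bit double:

```python
# Doubles carry a 53-bit significand, so integers above 2**53 lose precision.
LIMIT = 2 ** 53

assert float(LIMIT) == LIMIT                # 2**53 itself is exact
assert float(LIMIT + 1) == float(LIMIT)     # 2**53 + 1 rounds back down: off by one
assert float(LIMIT + 2) == LIMIT + 2        # even values in this range stay representable
```

This is precisely the rounding risk Emscripten faces when emulating 64-bit integer arithmetic on top of Javascript numbers.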
4.1 CsoundEmscripten
CsoundEmscripten is an implementation of the Csound language in Javascript using the Emscripten compiler. A working example of CsoundEmscripten can be found at http://eddyc.github.io/CsoundEmscripten/. The compiled Csound library and CsoundObj Javascript class can be found at https://github.com/eddyc/CsoundEmscripten/.
\[^{20}\]https://asmjs.org/spec/latest/
\[^{21}\]https://github.com/kripken/emscripten/wiki/Filesystem-API
\[^{22}\]https://github.com/kripken/emscripten/wiki/Interacting-with-code
CsoundEmscripten consists of three main modules:
- The Csound library compiled to Javascript using Emscripten.
- A structure and associated functions written in C named CsoundObj implemented on top of the Csound library that is compiled to Javascript using Emscripten.
- A handwritten Javascript class also named CsoundObj that contains the public interface to CsoundEmscripten. The Javascript class both wraps the compiled CsoundObj structure and associated functions, and connects the Csound library to the Web Audio API.
4.1.1 Wrapping the Csound C API for use with Javascript
In order to simplify the interface between the Csound C API and the Javascript class containing the CsoundEmscripten public interface, a structure named CsoundObj and a number of functions which use this structure were created. The structure contains a reference to the current instance of Csound, a reference to Csound’s input and output buffer, and Csound’s 0dBFS value. Some of the functions that use this structure are:
- CsoundObj_new() - This function allocates and returns an instance of the CsoundObj structure. It also initialises an instance of Csound and disables Csound’s default handling of sound I/O, allowing Csound’s input and output buffers to be used directly.
- CsoundObj_compileCSD(self, filePath, samplerate, controlrate, buffersize) - This function is used to compile CSD files. It takes as its arguments: a pointer to the CsoundObj structure self, the address of a CSD file given by filePath, a specified sample rate given by samplerate, a specified control rate given by controlrate and a buffer size given by buffersize. The CSD file at the given address is compiled using these arguments.
- CsoundObj_process(self, inNumberFrames, inputBuffer, outputBuffer) - This function copies audio samples to Csound’s input buffer and copies samples from Csound’s output buffer. It takes as its arguments: a pointer to the CsoundObj structure self, an integer inNumberFrames specifying the number of samples to be copied, a pointer to a buffer containing the input samples named inputBuffer and a pointer to a destination buffer to copy the output samples named outputBuffer.
Each of the other functions that use the CsoundObj structure simply wrap existing functions present in the Csound C API. The relevant functions are:
- csoundGetKsmps(csound) - This function takes as its argument a pointer to an instance of Csound and returns the number of specified audio frames per control sample.
- csoundGetNchnls(csound) - This function takes as its argument a pointer to an instance of Csound and returns the number of specified audio output channels.
- csoundGetNchnlsInput(csound) - This function takes as its argument a pointer to an instance of Csound and returns the number of specified audio input channels.
- csoundStop(csound) - This function takes as its argument a pointer to an instance of Csound and stops the current performance pass.
- csoundReset(csound) - This function takes as its argument a pointer to an instance of Csound and resets its internal memory and state in preparation for a new performance.
- csoundSetControlChannel(csound, name, val) - This function takes as its arguments: a pointer to an instance of Csound, a string given by name, and a number given by val; it sets the numerical value of the Csound control channel specified by the string name.
The CsoundObj structure and associated functions are compiled to Javascript using Emscripten and added to the compiled Csound Javascript library. Although this is not necessary, keeping the compiled CsoundObj structure and functions in the same file as the Csound library makes it more convenient when including CsoundEmscripten within web pages.
4.1.2 The CsoundEmscripten Javascript interface
The last component of CsoundEmscripten is the CsoundObj Javascript class. This class provides the public interface for interacting with the compiled Csound library. As well as allocating an instance of Csound this class provides methods for controlling performance and setting the values of Csound’s control channels. Additionally, this class interfaces with the Web Audio API, providing Csound with samples from the audio input bus and copying samples from Csound to the audio output bus. Audio I/O and the Csound process are performed in Javascript using the Web Audio API’s ScriptProcessorNode. This node allows direct access to input and output samples in Javascript, enabling audio processing and synthesis using the Csound library.
Csound can be used in any webpage by creating an instance of CsoundObj and calling the available public methods in Javascript. The methods available in the CsoundObj class are:
- **compileCSD(fileName)** This method takes as its argument the address of a CSD file fileName and compiles it for performance. The CSD file must be present in Emscripten’s virtual filesystem. This method calls the compiled C function CsoundObj_compileCSD. It also creates a ScriptProcessorNode instance for audio I/O.
- **enableAudioInput()** This method enables audio input to the web browser. When called, it triggers a permissions dialogue in the host web browser requesting permission to allow audio input. If permission is granted, audio input is available for the running Csound instance.
- **startAudioCallback()** This method connects the ScriptProcessorNode to the audio output and, if required, the audio input. The ScriptProcessorNode’s audio processing callback is also started. During each callback, if required, audio samples from the ScriptProcessorNode’s input are copied into Csound’s input buffer and any new values for Csound’s software channels are set. Csound’s csoundPerformKsmps() function is called and any output samples are copied into the ScriptProcessorNode’s output buffer.
- **stopAudioCallback()** This method disconnects the current running ScriptProcessorNode and stops the audio process callback. If required this method also disconnects any audio inputs.
- **addControlChannel(name, initialValue)** This method adds an object to a Javascript array that is used to update Csound’s named channel values. Each object contains a string value given by name, a float value given by initialValue and additionally a boolean value indicating whether the float value has been updated.
- **setControlChannelValue(name, value)** This method sets a named control channel given by the string name to the specified number given by the value argument.
- **getControlChannelValue(name)** This method returns the current value of a named control channel given by the string name.
4.1.3 Limitations
Using CsoundEmscripten, it is possible to add Csound’s audio processing and synthesis capabilities to any web browser that supports the Web Audio API. Unfortunately this approach of bringing Csound to the web comes with a number of drawbacks.
Although Javascript engines are constantly improving in speed and efficiency, running Csound entirely in Javascript is a processor intensive task on modern systems. This is especially troublesome when trying to run even moderately complex CSD files on mobile computing devices.
Another limitation is due to the design of the ScriptProcessorNode part of the Web Audio API. Unfortunately, the ScriptProcessorNode runs on the main thread. This can result in audio glitching when another process on the main thread, such as the UI, causes a delay in audio processing. As part of the W3C’s Web Audio Spec review it has been suggested that the ScriptProcessorNode be moved off of the main thread. There has also been a resolution by the Web Audio API developers that they will make it possible to use the ScriptProcessorNode with web workers. Hopefully a future version of the Web Audio API will make the ScriptProcessorNode more capable of running the kind of complex audio processing and synthesis allowed by the Csound library. This version of Csound also does not support plugins, making some opcodes unavailable. Additionally, MIDI I/O is not currently supported. This is not due to technical limitations of Emscripten; rather, it was not implemented because of the current lack of support for the WebMIDI standard in Mozilla Firefox and in the Webkit library.

---

23 https://github.com/w3ctag/spec-reviews/blob/master/2013/07/WebAudio.md#issue-scriptprocessornode-is-unfit-for-purpose-section-15

24 https://www.w3.org/Bugs/Public/show_bug.cgi?id=17415#c94
5 Beyond Web Audio: Creating Audio Applications with PNaCl
As an alternative to the development of audio applications for web deployment in pure Javascript, it is possible to take advantage of the Native Client (NaCl) platform. This allows the use of C and C++ code to create components that are accessible to client-side Javascript, and run natively inside the browser. NaCl is described as a sandboxing technology, as it provides a safe environment for code to be executed, in an OS-independent manner [Yee et al., 2009] [Sehr et al., 2010]. This is not completely unlike the use of Java with the Java Webstart Technology (JAWS), which has been discussed elsewhere in relation to Csound [Lazzarini et al., 2012b].
There are two basic toolchains in NaCl: native/gcc and PNaCl [Donovan et al., 2010]. While the former produces architecture-dependent code (arm, x86, etc.), the latter is completely independent of any existing architecture. NaCl is currently only supported by the Chrome and Chromium browsers. Since version 31, Chrome enables PNaCl by default, allowing applications created with that technology to work completely out-of-the-box. While PNaCl modules can be served from anywhere in the open web, native-toolchain NaCl applications and extensions can only be installed from Google’s Chrome Web Store.
5.1 The Pepper Plugin API
An integral part of NaCl is the Pepper Plugin API (PPAPI, or just Pepper). It offers various services, of which interfacing with Javascript and accessing the audio device are particularly relevant to our ends. All of the toolchains also include support for parts of the standard C library (e.g. stdio), and, very importantly for Csound, the pthread library. However, absent from the PNaCl toolchain are dlopen() and friends, which means no dynamic loading is available there.
Javascript client-side code is responsible for requesting the loading of a NaCl module. Once the module is loaded, execution is controlled through Javascript event listeners and message passing. A postMessage() method is used by Pepper to allow communication from Javascript to the PNaCl module, triggering a message handler in the C/C++ side. In the opposite direction, a message event is issued when C/C++ code calls the equivalent PostMessage() function.
Audio output is well supported in Pepper with a mid-latency callback mechanism (ca. 10-11 ms, 512 frames at 44.1 or 48 kHz sampling rate). Its performance appears to be very uniform across the various platforms. The Audio API design is very straightforward, although the library is a little rigid in terms of parameters: it supports only stereo at one of the two sampling rates mentioned above. Audio input is not yet available in the production release, but support can already be seen in the development repository.
The most complex part of NaCl is access to local files. In short, there is no open access to the client disk, only to sandboxed filesystems. It is possible to mount a server filesystem (through httpfs), a memory filesystem (memfs), as well as local temporary or permanent filesystems (html5fs). For these to be useful, they can only be mounted and accessed through the NaCl module, which means that any copying of data from the user disk into these partitions has to be mediated by code written in the NaCl module. For instance, it is possible to take advantage of the HTML5 file tag to get data from NaCl into a Javascript blob so that it can be saved to the user’s disk. It is also possible to copy a file from disk into the sandbox using the URLReader service supplied by Pepper.
5.2 PNaCl
The PNaCl toolchain compiles code down to a portable bytecode executable (called a pexe). When this is delivered to the browser, an ahead-of-time compiler is used to translate the code into native form. A web application using PNaCl will contain three basic components: the pexe binary, a manifest file describing it, and a client-side script in JS, which loads and allows interaction with the module via the Pepper messaging.
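The manifest file mentioned above tells the browser how to load the pexe. The exact contents of the Csound manifest are not reproduced in this paper; a typical PNaCl manifest has the following shape (the filename csound.pexe here matches the package described in the next section):

```json
{
  "program": {
    "portable": {
      "pnacl-translate": {
        "url": "csound.pexe"
      }
    }
  }
}
```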
5.3 Csound for PNaCl
A fully functional implementation of Csound for Portable Native Clients is available from http://vlazzarini.github.io. The package is composed of three elements: the Javascript module (csound.js), the manifest file (csound.nmf), and the pexe binary (csound.pexe). The source for the PNaCl component is also available from that site (csound.cpp). It depends on the Csound and Libsndfile libraries compiled for PNaCl and the NaCl SDK. A Makefile for PNaCl exists in the Csound 6 sources.
5.3.1 The Javascript interface
Users of Csound for PNaCl will only interact with the services offered by the Javascript module. Typically an application written in HTML5 will require the following elements to use it:
- the csound.js script
- a reference to the module using a div tag with id="engine"
- a script containing the code to control Csound.
The script will contain calls to methods in csound.js, such as:
- csound.Play() - starts performance
- csound.PlayCsd(s) - starts performance from a CSD file s, which can be in ./http/(ORIGIN server) or ./local/(local sandbox).
- csound.RenderCsd(s) - renders a CSD file s, which can be in ./http/(ORIGIN server) or ./local/(local sandbox), with no RT audio output. The “finished render” message is issued on completion.
- csound.Pause() - pauses performance
- csound.CompileOrc(s) - compiles the Csound code in the string s
- csound.ReadScore(s) - reads the score in the string s (with preprocessing support)
- csound.Event(s) - sends in the line events contained in the string s (no preprocessing)
- csound.SetChannel(name, value) - sends the control channel name the value value, both arguments being strings.
As it starts, the PNaCl module will call a moduleDidLoad() function, if it exists. This can be defined in the application script. The following callbacks are also definable:
- function handleMessage(message): called when there are messages from Csound (the PNaCl module). The string message.data contains the message.
- function attachListeners(): this is called when listeners for different events are to be attached.
In addition to Csound-specific controls, the module also includes a number of filesystem facilities, to allow the manipulation of resources in the server and in the sandbox:
- csound.CopyToLocal(src, dest) - copies the file src in the ORIGIN directory to the local file dest, which can be accessed at ./local/dest. The “Complete” message is issued on completion.
- csound.CopyUrlToLocal(url,dest) - copies the url url to the local file dest, which can be accessed at ./local/dest. Currently only ORIGIN and CORS urls are allowed remotely, but local files can also be passed if encoded as urls with the webkitURL.createObjectURL() Javascript method. The “Complete” message is issued on completion.
- csound.RequestFileFromLocal(src) - requests the data from the local file src. The “Complete” message is issued on completion.
- csound.GetFileData() - returns the most recently requested file data as an ArrayObject.
A series of examples demonstrating this API is provided on GitHub. In particular, an introductory example is found at http://vlazzarini.github.io/minimal.html.
5.3.2 Limitations
The following limitations apply to the current release of Csound for PNaCl:
- no realtime audio input (not supported yet in Pepper/NaCl)
- no MIDI in the NaCl module. However, it might be possible to implement MIDI in Javascript (through WebMIDI), and using the csound.js functions, send control data to Csound, and respond to the various channel messages.
- no plugins, as PNaCl does not support dlopen() and friends. This means some Csound opcodes are not available, as they reside in plugin libraries. It might be possible to add some of these opcodes statically to the Csound PNaCl library in the future.
6 Conclusions
In this paper we reviewed the current state of support for the development of web-based audio and music applications. As part of this, we explored two approaches in deploying Csound as an engine for general-purpose media software. The first consisted of a Javascript version created with the help of the Emscripten compiler, and the second a native C/C++ port for the Native Client platform, using the Portable Native Client toolchain. The first has the advantage of enjoying widespread support by a variety of browsers, but is not yet fully deployable. On the other hand, the second approach, while at the moment only running on Chrome and Chromium browsers, is a robust and ready-for-production version of Csound.
7 Acknowledgements
This research was partly funded by the Program of Research in Third Level Institutions (PRTLI 5) of the Higher Education Authority (HEA) of Ireland, through the Digital Arts and Humanities programme.
References
An efficient method for computing Ulam numbers
Philip Gibbs
The Ulam numbers form an increasing sequence beginning 1,2 such that each subsequent number can be uniquely represented as the sum of two smaller Ulam numbers. An algorithm is described and implemented in Java to compute the first billion Ulam numbers.
Introduction
At first sight the Ulam numbers appear to be pseudo-random. If this were the case their asymptotic density would be expected to fade towards zero at high numbers as the number of possibilities for forming sums from previous numbers increases. In actuality the density decreases at first but settles around a distribution where about one in 13.5 numbers are in the sequence.
A closer inspection shows that the numbers fall into dense clumps occurring about every 22 integers with sparser breaks between. Steinerberger performed a Fourier analysis on the sequence using the first 10 million numbers and found a clear signal with an angular frequency given by $\alpha = 2.5714474995$ [1]. This corresponds to a wavelength of $\lambda = \frac{2\pi}{\alpha} = 2.443443 \ldots$ which is approximately $22/9$ so the clumping apparently repeats about every nine wavelengths.
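The relation between the measured angular frequency and the quoted wavelength, and its closeness to $22/9$, can be checked directly:

```javascript
// Check lambda = 2*pi/alpha against the values quoted above.
const alpha = 2.5714474995;          // Steinerberger's measured frequency
const lambda = 2 * Math.PI / alpha;  // ≈ 2.443443
console.log(lambda);
console.log(Math.abs(lambda - 22 / 9)); // ≈ 0.001: lambda is close to 22/9
```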
When the frequencies of Ulam numbers are plotted against their residues modulo $\lambda$ they are found to have a non-uniform distribution concentrated in two peaks in the middle third of the wavelength. Figure 1 shows the distribution plotted from the first billion Ulam numbers counted in 1200 bins and normalised so that the vertical axis shows the probability for a large positive integer with a given residue to be Ulam. Ulam numbers whose residue falls outside this middle range are outliers. The outliers are relatively rare. There are only 1828 outliers in the first billion Ulam numbers and empirically the number of outliers less than a given Ulam number is less than its cube root for sufficiently large numbers. However they are important since almost all Ulam numbers are formed from a sum including one outlier and they control the shape of the distribution [2].
Computing the Ulam Numbers
The most straightforward way to compute the Ulam numbers $a_n$ is to build up the sequence from the start, testing each subsequent positive integer $t$ to see if it is the sum of two previous Ulam numbers. This can be done by simply taking previous Ulam numbers $a_n < \frac{1}{2} t$ and checking to see if $t - a_n$ is in the list of Ulam numbers so far constructed. The search for each number $t$ can be stopped once two sums have been found, but if $t$ is an Ulam number the search will have to continue until all possible sums have been checked. If we assume that the Ulam numbers have a constant positive density then the computation time for the first $n$ numbers using this method is $O(n^2)$.
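The brute-force method just described can be sketched as follows (a Javascript illustration of the $O(n^2)$ approach; the paper's own implementation is the Java code in the Annex):

```javascript
// Brute-force construction of the Ulam sequence: t is appended iff it
// has exactly one representation t = a + b with a < b both already in
// the sequence.
function ulamSequence(n) {
  const seq = [1, 2];           // sequence is built in increasing order
  const members = new Set(seq); // fast membership test
  for (let t = 3; seq.length < n; t++) {
    let sums = 0;
    for (const a of seq) {
      if (2 * a >= t) break;          // enforce a < t - a: count each pair once
      if (members.has(t - a)) sums++;
      if (sums > 1) break;            // two sums found: t is not Ulam
    }
    if (sums === 1) {
      seq.push(t);
      members.add(t);
    }
  }
  return seq;
}

console.log(ulamSequence(10)); // the first ten Ulam numbers
```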
To compute the first billion Ulam numbers in a reasonable timeframe a more efficient method is required. An alternative for testing the number $t$ is to sort the smaller numbers $a_n$ according to their residue $r_n$ modulo $\lambda$. If $t$ itself has a residue $r$ modulo $\lambda$ and if $a_n + a_m = t$ then $r_n + r_m = r$ or $r + \lambda$. This means
that one of the residues $r_n$ or $r_m$ must be in one of the ranges $0 < r_k < \frac{1}{2} r$ or $\frac{1}{2} (r + \lambda) < r_k < \lambda$. Therefore it is only necessary to test smaller Ulam numbers $a_k$ whose residue $r_k$ lies in these ranges to see if they form a sum.
This search method works for any value of $\lambda$ but if the $\lambda$ we use is (close to) the recently discovered natural wavelength of the Ulam sequence then the number of Ulam numbers with residues in these ranges will be much less than half the number of previous Ulam numbers. If $r$ lies in the central third of the range as is the case for most Ulam numbers then we only need to test the subset of the outliers which lie in these ranges to determine if $t$ is Ulam. If $r$ lies outside the central third there will be a section from the denser portion of the distribution in the ranges, but in this case we usually find two sums very quickly and can rule out the possibility that $t$ is Ulam. In practice we have found that only about 5 tests are required on average to determine whether a number is Ulam using this method.
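The residue-restricted search can be illustrated with a small sketch (the Ulam values are hard-coded known terms of the sequence, and lambda is the wavelength quoted in the Results section):

```javascript
// For a candidate t with residue r = t mod lambda, any pair a + b = t
// satisfies (a mod lambda) + (b mod lambda) = r or r + lambda, so at
// least one member of the pair has residue in [0, r/2] or
// [(r + lambda)/2, lambda). It therefore suffices to scan only those.
const lambda = 2.44344296778474;
const mod = (x) => x - lambda * Math.floor(x / lambda);

// A few known Ulam numbers, hard-coded for the demonstration.
const ulam = [1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47];
const members = new Set(ulam);

function findPairViaResidues(t) {
  const r = mod(t);
  for (const a of ulam) {
    if (a >= t) break;
    const ra = mod(a);
    if (ra > r / 2 && ra < (r + lambda) / 2) continue; // outside both ranges
    const b = t - a;
    if (b !== a && members.has(b)) return [a, b]; // a sum was found
  }
  return null;
}

console.log(findPairViaResidues(16)); // finds 16 = 3 + 13 via the ranges
```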
The efficiency of the algorithm therefore depends on the ability to maintain a list of the previous Ulam numbers sorted by their residues that can be rapidly traversed from either end. When each new Ulam number is found it must be inserted in the list. To find the correct place to insert it quickly we can use a binary search or maintain an index, and once the correct place is found, a linked-list structure allows rapid insertion, avoiding the need to shift up all the subsequent entries in the list.
This can be done by using built-in data structures such as Treemaps in Java but for simplicity and transparency we have used custom structures based on ordinary arrays. The Java code used is shown in the Annex below.
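As a minimal illustration of the array-based linked structure (a Javascript sketch mirroring the annex code's nx/pv arrays, with index 0 as a sentinel; the annex's binary search/index is replaced here by a linear scan for brevity):

```javascript
// Doubly linked list over array indices, ordered by residue:
// nx[i] points to the index with the next-larger residue,
// pv[i] to the next-smaller one; node 0 is a sentinel.
const lambda = 2.44344296778474;
const mod = (x) => x - lambda * Math.floor(x / lambda);

const a = [0];   // a[i] = i-th stored Ulam number (a[0] is the sentinel)
const nx = [0];  // successor in residue order
const pv = [0];  // predecessor in residue order

function insert(value) {
  const n = a.length;
  a.push(value);
  // Find the first node with a larger residue (linear scan here).
  let j = nx[0];
  while (j !== 0 && mod(a[j]) < mod(value)) j = nx[j];
  // Splice n between pv[j] and j without shifting any entries.
  const p = pv[j];
  nx[p] = n; pv[n] = p;
  nx[n] = j; pv[j] = n;
}

[1, 2, 3, 4, 6, 8].forEach(insert);

// Traverse from the smallest residue upward.
const order = [];
for (let i = nx[0]; i !== 0; i = nx[i]) order.push(a[i]);
console.log(order); // the six numbers sorted by residue mod lambda
```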
With this implementation the running time to compute the first billion Ulam numbers is less than one hour on an ordinary PC. The limiting factor which makes it hard to go to higher numbers is memory space rather than computation speed. With some space optimisations the program ran on a machine with 16 Gigabytes of RAM, and this would need to be extended in proportion to the number to be calculated.
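A rough estimate from the array declarations in the annex shows why memory dominates (assuming 4-byte Java ints; the exact in-memory layout may differ):

```javascript
// Memory footprint of the main arrays in the annex for maxn = 1e9.
const maxn = 1e9;
const bytes =
    4 * (maxn + 1) +  // a[]  : the Ulam numbers
    4 * (maxn + 1) +  // nx[] : next-by-residue links
    4 * (maxn + 1) +  // pv[] : previous-by-residue links
    4 * (maxn / 2) +  // k[]  : membership flags (packed bits)
    4 * (maxn / 100); // index[]
console.log(bytes / 1e9); // ≈ 14 GB, consistent with the 16 GB machine
```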
Results
For the purposes of comparison we provide a table of example Ulam numbers:
<table>
<thead>
<tr>
<th>$n$</th>
<th>$a_n$</th>
</tr>
</thead>
<tbody>
<tr>
<td>100,000</td>
<td>1,351,223</td>
</tr>
<tr>
<td>1,000,000</td>
<td>13,509,072</td>
</tr>
<tr>
<td>10,000,000</td>
<td>135,160,791</td>
</tr>
<tr>
<td>100,000,000</td>
<td>1,351,856,726</td>
</tr>
<tr>
<td>158,311,381</td>
<td>2,140,095,565</td>
</tr>
<tr>
<td>200,000,000</td>
<td>2,703,579,147</td>
</tr>
<tr>
<td>317,670,407</td>
<td>4,294,217,754</td>
</tr>
<tr>
<td>500,000,000</td>
<td>6,758,780,604</td>
</tr>
<tr>
<td>1,000,000,000</td>
<td>13,517,631,473</td>
</tr>
</tbody>
</table>
It should be mentioned that the Ulam number for $n = 100,000,000$ agrees with an independently calculated value noted in the Online Encyclopaedia of Integer Sequences computed by Jud McCranie (sequence A002858). However the values for $n = 158,311,381$ and $n = 317,670,407$ are in disagreement. Therefore these numbers should not be relied on until a third independent implementation has verified the numbers.
The value for the wavelength is computed to be $\lambda = 2.44344296778474$ with the corresponding frequency $\alpha = 2.57144749847630$. The largest gap in the first billion Ulam numbers was found to be $966291200 - 966290117 = 1083$.
References
Annex

public class Ulam {
static int maxn = 1000000000;
static int a[] = new int[maxn+1]; // list of ulam numbers
static int nx[] = new int[maxn+1]; // next when ordered by residue
static int pv[] = new int[maxn+1]; // previous when ordered by residue
static int k[] = new int[maxn/2]; // true for ulam numbers (packed bits)
static int nindex = maxn/100;
static int index[] = new int[nindex];
static int nbin = 12000; // bin Ulam numbers by residue
static int bins[] = new int[nbin]; // should be multiple of 3 to separate outliers
static long kk1=0;
static long kk2=0;
static long kk3=0;
static long kk4=0;
static long kk5=0;
static double lamda = 2.44344296778474;
static double step = 13.517831473;
public static void main(String[] args) {
double alpha = 2.0*Math.PI/lamda;
double lamdarun = lamda;
System.out.println("lamda = "+lamda);
System.out.println("alpha = "+alpha);
initUlam();
// initialise index
for(int i=0; i<nindex; i++) {
index[i] = 0;
}
pv[0] = 0; // index to number with largest residue
nx[0] = 0; // index to number with smallest residue
setUlam(0,0); // not really an ulam number
setUlam(1,1);
setlinks(1);
setUlam(2,2);
setlinks(2);
int n = 2;
int nol = 1;
int nor = 0;
long bestgap = 0;
for(long a0 = 3; n < maxn; a0++) {
// search for a sum in residue order from both ends
double rd0 = mod(a0, lamda) / lamda;
boolean more = true;
int kount2 = 0;
boolean ulam = false;
if(rd0 < 0.24 || rd0 > 0.80) { // to mind the gap use the brute search
int j = n; // better to start from larger end
long aj = getUlam(j);
while(more && 2*aj > a0) {
kount2++;
long a1 = aj;
long a2 = a0 - a1;
kk3++;
if(isUlam(a2)) {
if(ulam) { // found more than one sum
ulam = false;
more = false;
} else {
ulam = true;
}
}
j--;
aj = getUlam(j);
}
more = false;
}
long a1x = 0;
int kount0 = 0;
int i = nx[0]; // start with smallest residue
long ai = getUlam(i);
double rdi = mod(ai, lamda)/lamda;
while(2*rdi <= rd0+0.00000002 && more && i != 0) {
kount0++;
long a2 = a0-ai;
kk1++;
if(isUlam(a2) && ai != a2 && a2 != a1x) { // pair adds up
if(ulam) { // a0 already had a sum
more = false; // found two so can stop
ulam = false;
} else { // otherwise note first sum
ulam = true;
a1x = ai; // note this to check against double counting
}
}
i = nx[i]; // jump to next smallest residue
ai = getUlam(i);
rdi = mod(ai, lamda)/lamda;
}
int kount1 = 0;
i = pv[0]; // now work back from the largest residue
ai = getUlam(i);
rdi = mod(ai, lamda)/lamda;
while(2*(1.0 - rdi) <= (1.0 - rd0) + 0.00000002 && more && i != 0) {
kount1++;
long a2 = a0 - ai;
kk2++;
if(isUlam(a2) && ai != a2 && a2 != a1x) { // pair adds up
if(ulam) { // already had a sum
more = false; // found two so can stop
ulam = false;
} else { // otherwise note first sum
ulam = true;
a1x = ai; // note this to check against double counting
}
}
i = pv[i]; // jump to next largest residue
ai = getUlam(i);
rdi = mod(ai,lamda)/lamda;
}
if(ulam) {
n++;
setUlam(a0,n);
double z = mod(a0, lamda)/lamda;
setlinks(n);
long d = (long)(a0/lamda);
double p = 0.0;
if(z < 1.0/3.0) {
nor++;
p = a0/(d+1.0/3.0);
}
if(z > 2.0/3.0) {
nol++;
p = a0/(d+2.0/3.0);
}
if(z > 2.0/3.0 || z < 1.0/3.0) {
lamdarun = (lamdarun*9.0+p)/10.0;
System.out.println(nor+" "+nol+" "+n+" "+a0+" "+z+" "+p+" "+lamdarun);
System.err.println(n+" "+a0+" kk: "+(kk1/a0)+" "+(kk2/a0)+" "+(kk3/a0)+" "+(kk4/a0)+" "+(kk5/a0));
}
if(n==100 || n==1000 || n==10000 || n==100000 || n==1000000 ||
n==5000000 || n%100000000 == 0 || n==158311381 || n==317670407) {
System.out.println("a["+n+"] = "+a0);
}
long a1 = getUlam(n-1);
long gap = a0-a1;
if(gap > bestgap) {
bestgap = gap;
System.out.println(n+" "+a0+" - "+a1+" = "+gap+" is bigger gap");
}
// build distribution by residue in bins
int ibin = (int)(z*nbin);
bins[ibin]++;
}
} // end of loop over candidates a0
System.out.println("a["+n+"] = "+getUlam(n));
System.out.println("biggest gap was "+bestgap);
double density = ((double) n)/((double) getUlam(n));
System.out.println("density = "+density);
System.out.println("step = "+(1.0/density));
System.out.println("\n");
System.out.println("bin frequencies:");
for(int ibin=0; ibin<nbin; ibin++) {
System.out.println(ibin+","+bins[ibin]);
}
checklinks(n);
}
public static double mod(long x, double m) {
double dx = x;
double z = dx/m;
long iz = (long) z;
z -= iz;
z *= m;
return z;
}
public static void setlinks(int n) {
// set the next and previous links in ordering by residue
// use an index to find a starting point with a lower residue
double rdn = mod(getUlam(n), lamda)/lamda;
int j = (int)(nindex*rdn);
int pvi = index[j];
boolean more = true;
while(more) {
kk4++;
int i = nx[pvi];
long ai = getUlam(i);
double rdi = mod(ai,lamda)/lamda;
if(i == 0) {
more = false;
} else if(rdi < rdn) {
pvi = i;
} else {
more = false;
}
}
int nxi = nx[pvi];
pv[n] = pvi;
nx[pvi] = n;
nx[n] = nxi;
pv[nxi] = n;
// update index
j++;
double rdi = 0;
if(j < nindex) rdi = mod(getUlam(index[j]), lamda)/lamda;
while(j < nindex && rdi < rdn) {
kk5++;
index[j] = n;
j++;
if(j < nindex) rdi = mod(getUlam(index[j]), lamda)/lamda;
}
}
static void checklinks(int n) {
int pvi = 0; int m = 0;
while(nx[pvi] != 0) {
if(pv[nx[pvi]] != pvi) System.err.println("links are inconsistent at "+ pvi+" -> "+nx[pvi]+" <- "+pv[nx[pvi]]);
if(nx[pv[pvi]] != pvi) System.err.println("links are inconsistent at "+ pvi+" <- "+pv[pvi]+" -> "+nx[pv[pvi]]);
double rdi = mod(getUlam(pvi), lamda)/lamda;
double rdn = mod(getUlam(nx[pvi]), lamda)/lamda;
if(rdi > rdn) System.err.println("links are not ordered at "+ pvi+","+nx[pvi]+" "+rdi+" > "+rdn);
m++;
pvi = nx[pvi];
}
if(m != n) System.err.println("link list is wrong length "+m+" != "+n);
System.err.println("links check complete");
}
// booleans flagging the ulam numbers are packed to save space
static int pow2[] = {1,2,4,8,16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,
1<<16,1<<17,1<<18,1<<19,1<<20,1<<21,1<<22,
1<<23,1<<24,1<<25,1<<26,1<<27,1<<28,1<<29};
public static boolean isUlam(long a0) {
long i30 = 30;
int m30 = (int) (a0%i30);
int d30 = (int) (a0/i30);
boolean ulam = ((k[d30] & pow2[m30]) > 0);
return ulam;
}
public static void setUlam(long a0, int n) {
long i30 = 30;
int m30 = (int) (a0%i30);
int d30 = (int) (a0/i30);
k[d30] |= pow2[m30];
double dn = (double) n;
long ground = (long) (dn*step);
int an = (int) (a0-ground);
a[n] = an;
}
public static long getUlam(int n) {
double dn = (double) n;
long ground = (long) (dn*step);
long a0 = ground+a[n];
return a0;
}
public static void initUlam() {
for(int i=0; i<k.length; i++) k[i] = 0;
for(int i=0; i<bins.length; i++) bins[i] = 0;
}
}
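The packed-bit membership test used above (isUlam/setUlam, 30 flags per int) can be illustrated in isolation. This is only a sketch: the array size and the sample values are made up, and the class and method names are not from the original program.

```java
public class PackedBits {
    // Minimal sketch of the 30-per-int bit packing from the listing above:
    // bit (a0 % 30) of word (a0 / 30) marks whether a0 is flagged.
    static int[] k = new int[100]; // enough words for values up to 3000
    static int[] pow2 = new int[30];
    static { pow2[0] = 1; for (int i = 1; i < 30; i++) pow2[i] = pow2[i - 1] << 1; }

    static void set(long a0) { k[(int)(a0 / 30)] |= pow2[(int)(a0 % 30)]; }

    static boolean get(long a0) { return (k[(int)(a0 / 30)] & pow2[(int)(a0 % 30)]) != 0; }

    public static void main(String[] args) {
        long[] flagged = {1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26};
        for (long u : flagged) set(u);
        // membership queries run in O(1) with ~1 bit of storage per value
        System.out.println(get(13) + " " + get(14));
    }
}
```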
Flow-Based Specification of Time Design Requirements
Sabah Al-Fedaghi
Computer Engineering Department
Kuwait University
Kuwait
Abstract—This paper focuses on design requirements in real-time systems where information is processed to produce a response within a specified time. Nowadays, computer control applications embedded in chips have grown in significance in many aspects of human life. These systems need a high level of reliability to gain the trust of users. Ensuring correctness in the early stages of the design process is especially a major challenge in these systems. Faulty requirements lead to errors in the final product that have to be fixed later, often at a high cost. A crucial step in this process is modeling the intended system. This paper explores the potential of flow-based modeling in expressing design requirements in real-time systems that include time constraints and synchronization. The main emphasized problem is how to represent time. The objective is to assist real-time system requirement engineers, in an early state of the development, to express the timing behavior of the developed system. Several known examples are modeled and the results point to the viability of the flow-based representation in comparison with such time specifications as state-based and line-based modeling.
Keywords—design requirements; conceptual model; time constraints; model-based systems engineering; requirements specification
I. INTRODUCTION
The product development life cycle in the engineering domain aims at achieving, among other goals, a design process with complete and precise specifications that satisfy all requirements. Requirements are descriptions of functions, features, and goals of the product. The requirements describe ‘what’ the intended product should do, but the ‘how’ is specified as design requirements during the design phase, where measurability and verifiability are of utmost importance. Design requirements (the focus of this paper) include the specifications that the intended product must meet in order to pass the acceptance test. Specifications consist of information that controls the creation of the intended product.
Early assurance of the correctness of design requirements is a major challenge in any system. Faulty design requirements lead to errors in the final product that have to be fixed later, often at a high cost. Reoccurring causes of failures include:
- Inadequate definitions and modifications of specifications
- Faulty interpretation and understanding
- Not meeting customer requirements
- Design not meeting manufacturing requirements
- Difficulties in specifying technical requirements
- Difficulties in interpreting and understanding specifications [1]
There are various methods for specifying real-time systems. For example, prototyping tools can be used by the designer and user to view the product in the development stage [2]. However, prototyping is a phase that comes after the specifications. If prototyping has produced unsatisfactory results, then the designer may have to re-specify the requirements. There are also formal specifications of real-time systems that should enable the system designer to verify mathematically that a system meets timing constraints. However, formal methods are still limited as a verification tool, especially for software systems, not to mention the complexity introduced by timing. Various specification languages for real-time systems with timing constraints can be expressed within the specifications (e.g., [3]), “but at the cost of restricting other features” [4].
The specifications of design requirements are usually formulated in a mix of English, tables, graphs, screen shots, and unified modeling language (UML) diagrams. According to Palshikar [5], design requirements are examined in terms of:
- accurate reflection of the users’ requirements
- clarity, unambiguity, and understandability
- flexibility and feasibility for the engineers
- easily defined acceptance test cases
- an abstract and high-level manner of writing, away from design, implementation, and technology platforms
“Despite some help from modeling tools such as UML, the problem of ensuring the quality of requirements remains. The process is heavily manual and time-consuming, involving reviews and sometimes-partial prototyping. Using multiple notations (such as those in UML) introduces additional problems” [5].
Additionally, this paper is concerned with design requirements in real-time systems where information is processed to produce a response within a specified time. A real-time system interacts with the environment within certain timing constraints and the requirement specifications for such a
system must include representation of timing which can guarantee meeting these constraints. The notion of time is an important element in such systems, especially if critical features (e.g., safety) are functionally required. The problems here are how to represent time, how to capture causality behavior, and how to integrate functional and timing activities [6].
Embedded systems where the software is completely encapsulated by the hardware that it controls are often real-time systems. An embedded system is a system that interacts continuously with its physical sphere via sensors and actuators. Nowadays, computer control applications embedded in chips have grown in significance in many aspects of human life (e.g., medicine, mobile phones, and vending machines). These embedded systems need a high level of reliability to gain the trust of users. Ensuring correctness in the early stage of the design process is especially a major challenge in these systems.
A crucial step in this process is modeling the intended system. Model-based design has been introduced as the method to deal with the design process where the requirements are specified in a systematic way before continuing with the design and implementation phases. A great deal of attention has focused on this, such as interest in the unified modeling language with its graphical notation, which is used for documentation, communication, and requirement capture, as well as being an abstraction base for implementation details. This paper explores the potential of the flow-based modeling [7–12] in expressing design requirements in real-time systems that include time constraints and synchronization.
This paper focuses on the representation of timing constraints. The objective is to assist real-time system requirement engineers, at an early state of the development, to express the timing behavior of the developed system. Representation here refers to humans’ and machines’ representation of knowledge for the purpose of communication and understanding and analyzing the embedded semantics (e.g., diagrams, formal notations). Representation is usually associated with reasoning (e.g., the computational understanding of human-level cognitive abilities). This concentrates on the representation aspect that can be used for manual or computation analysis, as in problem solving in artificial intelligence.
In preparation to recast the representation of several known design problems in terms of flow-based modeling, and to make this paper self-contained, the next section briefly reviews published materials describing the flow-based model. Several features of the model will be further illustrated.
II. FLOWTHING MODEL
The flowthing model (FM) is a uniform method for representing “things that flow,” called flowthings. Flow in FM refers to the exclusive (i.e., being in one and only one) transformation among six states (also called stages): transfer (input/output), process, creation, release, arrival, and acceptance, as shown in Fig. 1. We will use receive as a combined stage of arrive and accept whenever arriving flowthings are always accepted.

A flowthing has the capability of being created, released, transferred, arrived, accepted, and processed while flowing within and between “units” called spheres. A flow system (referred to as flowsystem) is a system with six stages and transformations (edges) between them. In FM, flows can be controlled by the progress (sequence) of the stream of events (creation, release, transfer, transfer within the next sphere, release, reception, ...) or by a triggering (denoted by a dashed arrow) that can initiate a new flow. Spheres and subspheres are the environments of the flowthing, such as a company, a computer, and a person. A sphere can include the sphere of a flowsystem that includes the transfer stage. Triggering is the transformation from one flow to another, e.g., a flow of electricity triggers a flow of air.
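As a rough sketch (not part of the FM literature), the six stages could be encoded as an enum with a transition table; the particular transitions below are invented for illustration and are not a faithful transcription of Fig. 1:

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

public class FlowSystem {
    // Hypothetical encoding of the six FM stages. Stage names follow the
    // paper; the transition table is an illustrative guess, not Fig. 1.
    enum Stage { CREATE, RELEASE, TRANSFER, ARRIVE, ACCEPT, PROCESS }

    static final Map<Stage, EnumSet<Stage>> NEXT = new EnumMap<>(Stage.class);
    static {
        NEXT.put(Stage.CREATE,  EnumSet.of(Stage.RELEASE, Stage.PROCESS));
        NEXT.put(Stage.RELEASE, EnumSet.of(Stage.TRANSFER));
        NEXT.put(Stage.TRANSFER, EnumSet.of(Stage.ARRIVE, Stage.TRANSFER));
        NEXT.put(Stage.ARRIVE,  EnumSet.of(Stage.ACCEPT)); // "receive" = arrive then accept
        NEXT.put(Stage.ACCEPT,  EnumSet.of(Stage.PROCESS, Stage.RELEASE));
        NEXT.put(Stage.PROCESS, EnumSet.of(Stage.RELEASE, Stage.CREATE));
    }

    // a flowthing may move between two stages only if the table allows it
    static boolean legal(Stage from, Stage to) { return NEXT.get(from).contains(to); }

    public static void main(String[] args) {
        System.out.println(legal(Stage.ARRIVE, Stage.ACCEPT));
        System.out.println(legal(Stage.ARRIVE, Stage.CREATE));
    }
}
```

A checker like this could be used to validate that a stream of stage events for one flowthing stays inside the model.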
Example: In studying a “successful” model checking for verifying requirements, Palshikar [5] used a simple pumping control system that transfers water from a source tank A into sink tank B using a pump P as shown in Fig. 2. Each tank has two level-meters to detect whether their levels are empty or full. The tank level is ok if it is neither empty nor full.
Initially, both tanks are empty. The pump is to be switched on as soon as the water level in tank A reaches ok (from empty), provided that tank B is not full. The pump remains turned on as long as tank A is not empty and as long as tank B is not full. The pump is to be switched off as soon as either tank A becomes empty or tank B becomes full. The system should not attempt to switch the pump off (on) if it is already off (on). [5]
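The quoted switching rules can be transcribed almost directly. The `Level` enum and `nextCommand` are illustrative names (not from [5]); a `null` return models "do not attempt to switch the pump off (on) if it is already off (on)":

```java
public class PumpControl {
    enum Level { EMPTY, OK, FULL }

    // Returns TRUE to switch the pump on, FALSE to switch it off, or null
    // when no command should be issued because the pump is already in the
    // required state.
    static Boolean nextCommand(Level tankA, Level tankB, boolean pumpOn) {
        // pump should run iff tank A is not empty and tank B is not full
        boolean shouldRun = tankA != Level.EMPTY && tankB != Level.FULL;
        if (shouldRun == pumpOn) return null; // already in the right state
        return shouldRun;
    }

    public static void main(String[] args) {
        System.out.println(nextCommand(Level.OK, Level.EMPTY, false)); // switch on
        System.out.println(nextCommand(Level.EMPTY, Level.OK, true));  // switch off
        System.out.println(nextCommand(Level.OK, Level.OK, true));     // no command
    }
}
```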
Fig. 2. A simple pumping control system (redrawn from [5])
A finite state machine (FSM) approach is utilized as an abstract notation for defining requirements and design. Fig. 3 shows the FM representation of this pumping control system. It embeds some assumptions, which are illustrated in Fig. 4.
Fig. 3. FM representation of the pumping control system
Fig. 4. Illustration of assumptions in FM representation
In Fig. 4, it is assumed that water flows in tank A with the transfer of this flow controlled within that tank system. The water does not flow to tank B (and therefore tank B is drawn above tank A). Accordingly, the pump is installed between the two tanks to push the water toward tank B.
Tank A is a flowsystem with transfer, receive, process, and release stages. The transfer stage is drawn twice to simplify the drawing. The gate valve controls the transfer to tank A. As soon as the valve is opened the water is received in the tank. The process in tank A involves measuring the amount of water and, accordingly, the valve is opened or closed. At the bottom of tank A there is no control, hence release and transfer to the pump is immediate. This is analogous to passengers that proceed immediately to a waiting airplane after finishing passport processing. Imagine that this passport checking is on one end of the boarding bridge while the airplane is at the other end of the bridge. In this case, the bridge would correlate to the release stage as part of the airport system. Moving from the bridge end to the airplane door would be a flow between two transfer stages. Accordingly, in tank A’s control system, the flow from the tank to the pipe (see the figure) leading to the pump is a flow between two transfer stages. It is possible to include each pipe in Fig. 3 as a flowsystem with transfer, release, and transfer stages. However, this is not shown in the Fig. 3.
The pump is similarly a flowsystem. The process stage involves pushing/not-pushing the water toward tank B. There is no need for valves because the water cannot flow to tank B without pushing.
There seems to be incompleteness in Palshikar [5]’s original description of this system in a case where both tanks are full. In this case, the valve to tank A is closed and the pump is off forever. Accordingly, an outlet has been added in the flowsystem in tank B.
Switching the description to Fig. 3, the water flows in (circle 1 in the figure) to be processed (circle 2, measuring its water level) and accordingly opens or closes the valve (3). Also, the processing triggers (4) the control flowsystem of tank A to send a signal (5) about the current level of water: empty, okay, or full to the pump control system. On the other hand, tank B also sends (6) such a signal. These signals are processed in the pump control flowsystem (7) to turn on/off the pump (8), which results in the stoppage or flow of the water to tank B (8).
The next section applies FM to the method known time representations in order to compare the two methods side by side.
III. TIME AND FM
Time requirements play a central role in understanding and designing systems. Timing is typically incorporated after tasks and software architectures are defined, when holistic scheduling algorithms and expected worst-case execution times are analyzed [13]. This paper does not involve such a detailed level of description; rather, it is concerned with a very high level of requirements specifications, e.g., the level of UML use-case, sequence, and activity diagrams. Accordingly, this section relates time to its representation in FM.
Philosophically, time can be conceptualized as a fourth-dimensional phenomenon. Such a conceptualization is inspired by Edwin Abbott’s Flatland:
Dr. Abbott pictures intelligent beings whose whole experience is confined to a plane, or other space of two dimensions, who have no faculties by which they can become conscious of anything outside that space and no means of moving off the surface on which they live. He then asks the reader, who has consciousness of the third dimension, to imagine a sphere descending upon the plane of Flatland and passing through it. How will the inhabitants regard this phenomenon? […]
Their experience will be that of a circular obstacle gradually expanding or growing, and then contracting, and they will attribute to growth in time what the external observer in three dimensions assigns to motion in the third dimension. If there is motion of our three-dimensional space relative to the fourth dimension, all the changes we experience and assign to the flow of time will be due simply to this movement, the whole of the future as well as the past always existing in the fourth dimension. (Italics added.) [14]
The sphere (ball) is seen as constantly changing, and the whole change from birth to disappearance is the “life-time” of the sphere. Applying this to the 3-dimensional world, time must then be a 4th dimension.
Strachan [15]’s conceptualization of the same phenomenon is as follows:
Let’s imagine a miniature world which is a cube. Now suppose that one of the faces of the cube—say the bottom face—is a little 2-dimensional world, a Flatland, inhabited by creatures called “Toodies” (2-D) . . .
Since the Toodies’ Flatland is infinitely thin . . . , then an infinite number of Flatlands could be stacked into the cube . . .
But let us now suppose that a Toody is subjected to some force which can lift him up the 3rd (up/down) dimension of the cube. So he is propelled out of his own paper-thin world, the bottom face of the cube, right up through the cube to its top face. As he does so, he will pass through all the 2-dimensional ‘paper’ Flatlands which lie in between. Since the whole cube exists, then all of these Flatlands exist, even though they won’t exist for Toody until he reaches them. So they lie in Toody’s future.
But change occurs, and can only occur, in time. So his movement in this 3rd (up/down) space dimension will seem like the passage of time to Toody: it is his time dimension. (Italics added.)
A. Time as spheres
Accordingly, from the FM point of view, these “flatlands” are flowthings that flow through time spheres: past1, past2, . . . , now, future1, future2, . . . In this case, time is modeled as spheres. All of these spheres are projections of different times on flatlands. A UML representation of this modeling of time is shown in Fig. 5, which includes slices of time with processes happening in them. Fig. 6 shows the corresponding FM representation.
Fig. 5. Sample of UML representation of time (from [16])
Fig. 6. Time spheres with “flatlands” flow through them
Fig. 7. Cylinder is used instead of a cube to illustrate time flows through “Flatlands”
B. Time as flowthings
Alternatively, time can be conceptualized as a flowthing that flows through “flatlands.” In this case, Strachan [15]’s cube (though we prefer to use a cylinder instead of a cube; see Fig. 7) passes through all the 2-dimensional Flatlands, accomplishing the same result. In this case, time in FM is a flowthing that can be released, transferred, received, and processed. In each flowsystem, it is processed to count its passing, as will be illustrated in the next section. In FM, time is something that flows continuously from a fourth-dimension sphere to any other sphere, as shown in Fig. 8. If this is of relevance to flows or triggering in that sphere, it is represented by a flowsystem. This conceptualization of time as a flowthing will be utilized in the discussions in the next sections.
IV. LINEAR TIME DIAGRAMS
Timing diagrams “focus on conditions changing within and among lifelines along a linear time axis … on time of events causing changes in the modeled conditions of the lifelines” [17]. They utilize the notions of lifeline, state or condition timeline, destruction event, duration constraint, and time constraint. Timelines are one of the simplest means of representing the flow of events. In UML 2, timing diagrams are a special form of sequence diagrams where the axes are reversed and the lifelines are shown in separate compartments arranged vertically. These diagrams “aren’t the most popular” [18].
According to the Web site [17], time duration constraint refers to the duration used to determine whether the constraint is satisfied. It is an association between a duration interval and the constructs that it constrains. For example, that ice should melt into water in 1 to 6 minutes can be represented as shown in Fig. 9. From the conceptual point of view, lining (putting in one category) ice, melting, and water is a categorical mix. Ice and water can be categorized as “states” of H$_2$O, but melting is certainly not. Also, it seems that H$_2$O is another name for water. Fig. 10 shows the FM representation.
There are three subspheres: time, ice, and water. The units of time are continuously received (1) and ignored. They are processed (2) as soon as the melting (a kind of process (3)) starts in the ice sphere until “counting” 6 units of time. When the ice starts melting (3), it triggers (4) the counting (processing (2)) of time. When the melting ends (5), the time is ignored again (6) and water is generated (7). The model reflects that time always flows through systems, and thus time constraint is awareness of this flow and alignment of events with the flowing time.
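The pattern described here — time flows continuously but is only counted while an event of interest is in progress — can be sketched as follows. The tick values, bound [1, 6], and all names are illustrative:

```java
public class MeltTimer {
    // Ticks arrive continuously; they are counted only during the melting
    // interval [meltStart, meltEnd), then checked against the 1-to-6-unit
    // duration constraint, mirroring the triggering in Fig. 10.
    static boolean withinConstraint(int meltStart, int meltEnd) {
        int counted = 0;
        for (int tick = 0; tick < 20; tick++) {
            if (tick >= meltStart && tick < meltEnd) counted++; // melting in progress
            // all other ticks arrive at the time subsphere and are ignored
        }
        return counted >= 1 && counted <= 6;
    }

    public static void main(String[] args) {
        System.out.println(withinConstraint(3, 8));  // melts in 5 units: within bound
        System.out.println(withinConstraint(3, 12)); // melts in 9 units: violates bound
    }
}
```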
Fig. 9. Representation of how ice should melt into water in 1 to 6 minutes (from [17])
In addition, a time constraint is a time expression used to determine whether the constraint is satisfied. “All traces where the constraints are violated are negative traces, i.e., if they occur, the system is considered as failed” [17]. Fig. 11 is given as a representation of this constraint. It involves two states: sleep and awake. At the change from sleep to awake, the time period {5:40 a.m., 6 a.m.} passes to accomplish this change. The state (sleep or awake) is represented by a horizontal line: no line, no state. The change from one state to another is represented by a vertical line that connects the horizontal lines. The delay that corresponds to the change is represented by the diagonal line and the text {5:40 a.m., 6 a.m.} at the point of beginning the awake state.
Semantically, {5:40 a.m., 6 a.m.} is the “length” of sleep. Accordingly, the diagonal line and {5:40 a.m., 6 a.m.} look like a comment and not a modeling of the situation. If it is not a comment, then the representation is misleading because it gives the impression of a three-dimensional representation. Also,
there is no indication of "failure" as mentioned in the given constraint. This example shows the limitations of the line representation of time.
Fig. 12 shows the corresponding FM representation. These are the spheres: time, sleep, awake, and the logical join. The clock performs the following:
- At 5:40 a.m., it triggers sleeping
- At 6:00 a.m., it triggers awaking
- At 6:00 a.m. it also triggers checking whether the awaking occurs
Time is generated by the clock and received by the sphere of the time in the system (circle 1). This sphere is the part of the total system that deals with time. The clock sends continuous signals, say 12:00, 12:01, 12:02, . . . , and these data arrive and are received and processed (2). This processing involves the recognition of 5:40 a.m. and 6:00 a.m. If it is 5:40 a.m. then this triggers the person to enter into sleeping (3). He/she is processed (absorbed) into sleeping (4). If it is 6:00 a.m., then this triggers:
- The release (5) of the person from sleeping to awaking (6)
- The checking of whether the person has arrived to the awaking state (7). If this is the case then this triggers success (For simplification sake, success is reported instead of failure; accordingly, the recipient of the report assumes failure if success does not arrive.)
Note that the horizontal joint bar can be represented in FM as shown in Fig. 13.
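The clock-triggered flows described for Fig. 12 could be sketched as below. The state strings, method names, and the minute-based clock are all assumptions made for the example; as in the text, only success is reported, so an absent report implies failure:

```java
public class SleepMonitor {
    static String state = "none";

    // Each clock signal is received and processed; recognized times trigger
    // state changes, and 6:00 a.m. additionally triggers the success check.
    static String onTick(int minutesSinceMidnight) {
        if (minutesSinceMidnight == 5 * 60 + 40) state = "sleep"; // 5:40 a.m. triggers sleeping
        if (minutesSinceMidnight == 6 * 60) {                     // 6:00 a.m. triggers awaking
            state = "awake";
            if (state.equals("awake")) return "success";          // check that awaking occurred
        }
        return null; // all other signals are ignored
    }

    public static void main(String[] args) {
        String report = null;
        for (int m = 5 * 60; m <= 6 * 60; m++) {
            String r = onTick(m);
            if (r != null) report = r;
        }
        System.out.println(state + " " + report);
    }
}
```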
V. TIMING AND REAL-TIME SYSTEMS
Coffee machines have been used as a well-known example of modeling real-time systems using such languages as Uppaal and UML (e.g., [19–23]). In this section, we investigate the specification of design requirements for the coffee machine problem in the context of Uppaal, as it is described in many publications and course materials.
The coffee machine problem involves modeling the behavior of a system with three elements: a machine, person, and an observer. The person repeatedly inserts coins to receive coffee, after which he/she produces a publication. There is time delay after each such action. The machine takes some time for brewing the coffee.
It also takes a timeout if the brewed coffee is not taken before a certain upper time limit. The observer complains if more than 8 time units elapse between two consecutive publications.
In modeling the coffee machine in FM, we find that to complete the conceptual picture and flows, we need additional items (spheres) in addition to person, machine, and observer. One interesting aspect of FM description is the systematic application of the same generic stages for entities, subentities, and spheres. This repeatability of application creates specifications that are more complete. It is also possible to simplify the depiction by reducing the level of description in several ways. As an introduction, before giving the complete FM representation, Fig. 14 shows a brief description of the “waves” of flow and the new additional spheres.
In the figure, coin flow (A) triggers the coffee (B) and cup (C—a new sphere with an important role that will be explained later) flows as well as the flow of “counted time units” (D). As was mentioned previously, time flows continuously, but it is ignored until certain events (e.g., the arrival of coins) trigger counting units of time. Accordingly, in the figure with the passing of the coffee preparation period, the coffee and the cup flow to the “filled cup compartment” (E) and start the “fill cup” flow (F). In this case, time is also counted (G), and if the filled cup does not flow (i.e., it is taken from the compartment), then this triggers dispensing. The flow of the filled cup outside the compartment (H) is supposed to trigger the flow of the coffee to the person (I—e.g., being drank). This in turn triggers producing publications (J) that flow to the observer (K). Upon the arrival of publications the observer starts counting time (L) and complains start to flow out (M) if time reaches its maximum without receiving new publications.
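The observer's rule — complain whenever more than 8 time units elapse between consecutive publications — is simple enough to transcribe directly. The tick values are made up for the example:

```java
public class PublicationObserver {
    // Counts complaints: one per gap of more than 8 time units between
    // consecutive publication arrival ticks (stream L/M in the description).
    static int complaints(int[] publicationTicks) {
        int count = 0;
        for (int i = 1; i < publicationTicks.length; i++) {
            if (publicationTicks[i] - publicationTicks[i - 1] > 8) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // gaps are 5, 7, and 13 units: only the last one draws a complaint
        System.out.println(complaints(new int[]{0, 5, 12, 25}));
    }
}
```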
The completeness and continuity of events (technical and physical) are grounds for the validity of the model. Consider the state-based modeling of the machine given by Anderson [19–20] and represented in Fig. 15.

The coffee machine accepts a coin and then delays for some time (above, 6 time units). It then sets a timeout timer and either (to the right) dispenses coffee, or (to the left) times out and then dispenses coffee. The extra state on the left exists because Uppaal does not allow both guards and synchronizing elements to appear on the same transition.
Note that this model assumes that the coffee flows outside the machine immediately after the brewing process, just as water flows out of a pipe. This means the coffee does not wait to be taken outside the machine. The flow-based FM representation (see Fig. 16) forces the introduction of a container for the coffee because there is waiting time. Thus, the items of cup and filled cup (cup+coffee) are necessary to convert the flowing coffee from the state of liquidity (which makes its flow outside the machine compulsory) to the state of “handle-ability” (a thing that stands by itself waiting to be picked up). From the “state” perspective, Fig. 17 shows the two methods of conceptualization. On the left, the model is not based on flows; hence the conceptualization is represented by conceptual jumps from one state to another. On the right side, the FM model is cast in terms of state jumps. In the figure, the two triggering arrows that come from outside the machine sphere change the waiting state.
Fig. 15. Automata for machine (redrawn from [19])
Fig. 16. Flows in the coffee machine problem
Fig. 17. The coffee problem described in terms of states
The point here is that the flow-based conceptualization “forces” continuity and completeness of the narration of events, thus identifying items (e.g., cups) and processes (e.g., waiting liquid).
Fig. 18 shows the complete FM representation of the coffee machine problem. We start with inserting the coins (circle 1). Creation here (in the figure) means the appearance of coins in the episode, just as a new character makes its first appearance in a stage play.
The coins flow to the machine (2) where they are received and trigger three events:
- Displaying “in process” to the user. Initially, we assume it displays “ready” (3).
- Triggering the time counter (4)
- Triggering preparing the coffee (5)
The machine is continuously receiving time units; however, the triggering makes it “pay attention” and count these time units. Note that the time sphere is represented by a clock picture for illustrative purposes, but it is really the flowsystem that creates time units. Also, it is possible to detail the coffee sphere by drawing flowsystems for the coffee powder and water separately to be processed and make coffee.
At the end of the coffee preparation time, the cup is dropped (6) and the coffee is released (7); this happens in the machine compartment subsphere to create the filled cup (8). Creating the filled cup and releasing it trigger waiting time (9) to pick up the cup and display that (10). If the person takes out the filled cup (11), this triggers displaying “ready” and triggers (12) the flow of coffee to the person (13).
Note that, in general, the filled cup sphere includes three subspheres: the filled cup (cup+coffee), coffee, and cup (see Fig. 19). In any sphere, we can focus on any of its subspheres. Accordingly, when the person removes the filled cup outside the compartment the “attention” (matters of interest) is on the filled cup and the coffee subspheres (flowsystems (11 and 12)), but the cup by itself is of no interest.
Continuing the flows, when the coffee is received by the person (14), he/she drinks it to trigger creation of publications (15) that flow to the observer (16), which in turn triggers initializing a waiting time period for the next publication (17). If no publication arrives, this triggers creation of a complaint (18). We assume that initially the waiting timing here is set to zero.
Returning to releasing the filled cup that triggers waiting time (9); if the waiting time is over (19), then this triggers (20) checking whether the filled cup has been already removed (21); if not, the filled cup is dispensed with (22).
VI. CONCLUSION
Methodologies of time representation can be based on states, UML, Petri nets, and other types of diagrams. Each has its own advantages and weaknesses, especially with regard to having the features of understandability and simplicity. This paper proposed a flow-based representation that is based on the notion of flow with a focus on exploring the representation of time. The new methodology was demonstrated through sample timing-related problems.

FM can serve as an early system understanding and communication medium among stakeholders, including those without technical knowledge, and can facilitate agreement between clients/users and designers. Additionally, it can be used as a base for system development and the design phase. The resultant FM representation avoids ambiguous textual language and heterogeneous diagramming. Of course, FM is still not as well developed as established diagram-oriented modeling methodologies such as UML. Its weaknesses in terms of expressivity and complexity have to be studied further in different applications. Nevertheless, comparing FM diagrams side by side with other types of modeling techniques reveals that FM is a promising and viable modeling tool.
We are currently exploring further time representation in FM, especially its relation to the actual design phase.
REFERENCES
http://lib.ugent.be/fulltxt/RUG01/000970662/RUG01-000970662_2010_0001_AC.pdf
FAQ
• Where does the parent process come from?
• Why fork? To create a child process.
• Why the child process is a duplicate of the parent?
• Why fork( ) in child returns a 0? Isn’t the child an exact duplicate? Is the child’s PID 0?
• How does a child know who is its parent? How does the parent know the child’s PID?
• The OS does process scheduling. Is OS machine language or higher level?
• **Address space**
– Child duplicate of parent
– Child has a program loaded into it
• **UNIX examples**
– `fork()` system call creates new process
– `exec()` system call used after a `fork()` to replace the process’ memory space with a new program
fork() to create a child process
• fork() creates a copy of the process
• Return value from fork(): integer
– When > 0:
• Running in (original) Parent process
• return value is pid of new child
– When = 0:
• Running in new Child process
– When < 0:
• Error! Perhaps exceeds resource constraints. sets errno (a global variable in errno.h)
• Running in original process
• All of the state of original process duplicated in both Parent and Child!
– Memory, File Descriptors (next topic), etc…
Process Management System Calls
• UNIX fork – system call to create a copy of the current process, and start it running
– No arguments!
• UNIX exec – system call to change the program being run by the current process. Several variations.
• UNIX wait – system call to wait for a process to finish
• Details: see man pages
Notes:
```c
pid_t pid = getpid();  /* get current process's PID */
waitpid(cid, 0, 0);    /* wait for my child to terminate */
exit(0);               /* quit */
kill(cid, SIGKILL);    /* kill child */
```
UNIX Process Management
```c
main () {
...
}
```
C Program Forking Separate Process
```c
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>
int main()
{
pid_t pid;
/* fork a child process */
pid = fork();
if (pid < 0) { /* error occurred */
fprintf(stderr, "Fork Failed");
return 1;
}
else if (pid == 0) { /* child process */
execlp("/bin/ls", "ls", NULL);
}
else { /* parent process */
/* parent will wait for the child to complete */
wait(NULL);
printf("Child Complete");
}
return 0;
}
```
<sys/types.h> definitions of derived types
<unistd.h> POSIX API
execlp(3) - Linux man page
http://linux.die.net/man/3/execlp
```c
#include <sys/types.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>
int main()
{
    pid_t cid;
    /* fork a child process */
    cid = fork();
    if (cid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed\n");
        return 1;
    }
    else if (cid == 0) { /* child process */
        printf("I am the child %d, my PID is %d\n", cid, getpid());
        char *args[] = {"ls", NULL};
        execvp("/bin/ls", args);   /* execvp takes an argument vector */
    }
    else { /* parent process */
        /* parent will wait for the child to complete */
        printf("I am the parent with PID %d, my parent is %d, my child is %d\n", getpid(), getppid(), cid);
        wait(NULL);
        printf("Child Complete\n");
    }
    return 0;
}
```
Ys-MacBook-Air:ch3 ymalaiya$ ./newproc-posix_m
I am the parent with PID 494, my parent is 485, my child is 496
I am the child 0, my PID is 496
DateClient.java newproc-posix_m
Child Complete
Ys-MacBook-Air:ch3 ymalaiya$
• wait()/waitpid() allow the caller to suspend execution until a child's status is available
• Process status availability
– Generally after termination
– Or if process is stopped
• pid_t waitpid(pid_t pid, int *status, int options);
• The value of pid can be:
– -1: wait for any child process
– 0: wait for any child process whose process group ID equals the caller's
– > 0: wait for the child whose process ID equals the value of pid
– < -1: wait for any child whose process group ID equals the absolute value of pid
• Status: where status info needs to be saved
Process Termination
• Process executes last statement and then asks the operating system to delete it using the `exit()` system call.
– Returns status data from child to parent (via `wait()`)
– Process’ resources are deallocated by operating system
• Parent may terminate the execution of children processes using the `abort()` system call. Some reasons for doing so:
– Child has exceeded allocated resources
– Task assigned to child is no longer required
– The parent is exiting and the operating system does not allow a child to continue if its parent terminates
Process Termination
- Some operating systems do not allow a child to exist if its parent has terminated. If a process terminates, then all its children must also be terminated.
- **cascading termination.** All children, grandchildren, etc. are terminated.
- The termination is initiated by the operating system.
- The parent process may wait for termination of a child process by using the `wait()` system call. The call returns status information and the pid of the terminated process:
```c
pid = wait(&status);
```
- If no parent waiting (did not invoke `wait()`) process is a zombie
- If parent terminated without invoking `wait`, process is an orphan (it is still running, reclaimed by init)
Multiprocess Architecture – Chrome Browser
- Early web browsers ran as single process
- If one web site causes trouble, entire browser can hang or crash
- Google Chrome Browser is multiprocess with 3 different types of processes:
- **Browser** process manages user interface, disk and network I/O
- **Renderer** process renders web pages, deals with HTML, JavaScript. A new renderer is created for each website opened
- Runs in **sandbox** restricting disk and network I/O, minimizing effect of security exploits
- **Plug-in** process for each type of plug-in
Each tab represents a separate process
Multitasking
Cooperating Processes
- **Independent** process cannot affect or be affected by the execution of another process.
- **Cooperating** process can affect or be affected by the execution of another process.
- Advantages of process cooperation:
- Information sharing
- Computation speed-up
- Modularity
- Convenience
Interprocess Communication
- Processes within a system may be *independent* or *cooperating*
- Cooperating process can affect or be affected by other processes, including sharing data
- Reasons for cooperating processes:
- Information sharing
- Computation speedup
- Modularity
- Convenience
- Cooperating processes need *interprocess communication (IPC)*
- Two models of IPC
- Shared memory
- Message passing
Communications Models
(a) Message passing. (b) shared memory.
FAQ
• What is process control block PCB? Where is it? Is it stack? Typical size?
– Data structure stored in kernel’s memory space, perhaps as structs in a linked list.
• Context switch time?: depends on PCB size, cache & TLB
• Scheduler: hw or sw? part of kernel
• How the scheduler chooses? Details coming up soon
• Parent & child processes: difference between them? Is it necessary to have a tree structure?
• Is it exec() that makes child run a different process?
• Does the parent process always wait() for a child to finish?
• Difference?: wait(int *wstatus) ex: wait(NULL)
– waitpid(pid_t pid, int *wstatus, int options); see man pages
Producer-Consumer Problem
• Paradigm for cooperating processes, *producer* process produces information that is consumed by a *consumer* process
– *unbounded-buffer* places no practical limit on the size of the buffer
– *bounded-buffer* assumes that there is a fixed buffer size
Bounded-Buffer – Shared-Memory Solution
- Shared data
```c
#define BUFFER_SIZE 10
typedef struct {
...
} item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
```
- in points to the **next free position** in the buffer
- out points to the **first full position** in the buffer.
- Buffer is empty when `in == out`;
- Buffer is full when `((in + 1) % BUFFER_SIZE) == out`. (Circular buffer)
- This scheme can only use `BUFFER_SIZE-1` elements
```c
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
```
```c
item next_consumed;
while (true) {
    while (in == out)
        ; /* do nothing -- buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
```
(Figure: circular buffer of slots 0–7 with in and out pointers)
Interprocess Communication – Shared Memory
• An area of memory shared among the processes that wish to communicate
• The communication is under the control of the user processes, not the operating system.
• The major issue is to provide a mechanism that will allow the user processes to synchronize their actions when they access shared memory.
– Synchronization is discussed in great detail in Chapter 5.
• Example soon.
Interprocess Communication – Message Passing
• Mechanism for processes to communicate and to synchronize their actions
• Message system – processes communicate with each other without resorting to shared variables
• IPC facility provides two operations:
– send(message)
– receive(message)
• The message size is either fixed or variable
• If processes \( P \) and \( Q \) wish to communicate, they need to:
– Establish a *communication link* between them
– Exchange messages via send/receive
• Implementation issues:
– How are links established?
– Can a link be associated with more than two processes?
– How many links can there be between every pair of communicating processes?
– What is the capacity of a link?
– Is the size of a message that the link can accommodate fixed or variable?
– Is a link unidirectional or bi-directional?
• Implementation of communication link
– Physical:
• Shared memory
• Hardware bus
• Network
– Logical: Options (details next)
• Direct (process to process) or indirect (mail box)
• Synchronous (blocking) or asynchronous (non-blocking)
• Automatic or explicit buffering
Direct Communication
• Processes must name each other explicitly:
– send\((P, \text{message})\) – send a message to process P
– receive\((Q, \text{message})\) – receive a message from process Q
• Properties of communication link
– Links are established automatically
– A link is associated with exactly one pair of communicating processes
– Between each pair there exists exactly one link
– The link may be unidirectional, but is usually bi-directional
Indirect Communication
• Messages are directed and received from mailboxes (also referred to as ports)
– Each mailbox has a unique id
– Processes can communicate only if they share a mailbox
• Properties of communication link
– Link established only if processes share a common mailbox
– A link may be associated with many processes
– Each pair of processes may share several communication links
– Link may be unidirectional or bi-directional
Indirect Communication
- Operations
- create a new mailbox (port)
- send and receive messages through mailbox
- destroy a mailbox
- Primitives are defined as:
- `send(A, message)` – send a message to mailbox A
- `receive(A, message)` – receive a message from mailbox A
Indirect Communication
• Mailbox sharing
– $P_1$, $P_2$, and $P_3$ share mailbox A
– $P_1$ sends; $P_2$ and $P_3$ receive
– Who gets the message?
• Possible Solutions
– Allow a link to be associated with at most two processes
– Allow only one process at a time to execute a receive operation
– Allow the system to select arbitrarily the receiver. Sender is notified who the receiver was.
Synchronization (blocking or not)
- Message passing may be either blocking or non-blocking
- **Blocking** is termed **synchronous**
- **Blocking send** -- sender is blocked until message is received
- **Blocking receive** -- receiver is blocked until a message is available
- **Non-blocking** is termed **asynchronous**
- **Non-blocking send** -- sender sends message and continues
- **Non-blocking receive** -- the receiver receives:
- A valid message, or
- Null message
Different combinations possible
- If both send and receive are blocking, we have a **rendezvous**.
- Producer-Consumer problem: Easy if both block
Buffering
- Queue of messages attached to the link.
- implemented in one of three ways
1. Zero capacity – no messages are queued on a link; sender must wait for receiver (rendezvous)
2. Bounded capacity – finite length of $n$ messages; sender must wait if queue is full
3. Unbounded capacity – infinite length; sender never waits
Examples of IPC Systems - POSIX
- Older scheme (System V) using `shmget()`, `shmat()`, `shmdt()`, `shmctl()`
- POSIX Shared Memory
- Process first creates shared memory segment
```c
shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
```
- Returns file descriptor (int) which identifies the file
- Also used to open an existing segment to share it
- Set the size of the object
```c
ftruncate(shm_fd, 4096);
```
- Map the shared memory segment in the address space of the process
```c
ptr = mmap(0, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, shm_fd, 0);
```
- Now the process could write to the shared memory
```c
sprintf(ptr, "Writing to shared memory");
```
Examples of IPC Systems - POSIX
■ POSIX Shared Memory
● Other process opens the shared memory object by name
```c
shm_fd = shm_open(name, O_RDONLY, 0666);
```
● Returns file descriptor (int) which identifies the file
● Map the shared memory object
```c
ptr = mmap(0, SIZE, PROT_READ, MAP_SHARED, shm_fd, 0);
```
● Now the process can read from the shared memory object
```c
printf("%s", (char *)ptr);
```
● Remove the shared memory object
```c
shm_unlink(name);
```
```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>
int main()
{
    /* the size (in bytes) of shared memory object */
    const int SIZE = 4096;
    /* name of the shared memory object */
    const char *name = "OS";
    /* strings written to shared memory */
    const char *message_0 = "Hello";
    const char *message_1 = "World!";
    /* shared memory file descriptor */
    int shm_fd;
    /* pointer to shared memory object */
    char *ptr;
    /* create the shared memory object */
    shm_fd = shm_open(name, O_CREAT | O_RDWR, 0666);
    /* configure the size of the shared memory object */
    ftruncate(shm_fd, SIZE);
    /* memory map the shared memory object */
    ptr = (char *) mmap(0, SIZE, PROT_WRITE, MAP_SHARED, shm_fd, 0);
    /* write to the shared memory object */
    sprintf(ptr, "%s", message_0);
    ptr += strlen(message_0);
    sprintf(ptr, "%s", message_1);
    ptr += strlen(message_1);
    return 0;
}
```
```c
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
int main()
{
/* the size (in bytes) of shared memory object */
const int SIZE = 4096;
/* name of the shared memory object */
const char *name = "OS";
/* shared memory file descriptor */
int shm_fd;
/* pointer to shared memory object */
void *ptr;
/* open the shared memory object */
shm_fd = shm_open(name, O_RDONLY, 0666);
/* memory map the shared memory object */
ptr = mmap(0, SIZE, PROT_READ, MAP_SHARED, shm_fd, 0);
/* read from the shared memory object */
printf("%s", (char *)ptr);
/* remove the shared memory object */
shm_unlink(name);
return 0;
}
```
Communications in Client-Server Systems
- Sockets
- Remote Procedure Calls
- Pipes
- Remote Method Invocation (Java)
Sockets
- A **socket** is defined as an endpoint for communication
- Concatenation of IP address and **port** – a number included at start of message packet to differentiate network services on a host
- The socket **161.25.19.8:1625** refers to port **1625** on host **161.25.19.8**
- Communication takes place between a pair of sockets
- All ports below 1024 are **well known**, used for standard services
- Special IP address **127.0.0.1 (loopback)** to refer to system on which process is running
Socket Communication
- **CS457 Computer Networks and the Internet**
- Host $X$ (146.86.5.20)
- Socket (146.86.5.20:1625)
- Web server (161.25.19.8)
- Socket (161.25.19.8:80)
80: HTTP (well known)
Pipes
- Acts as a conduit allowing two processes to communicate
- One of the first IPC implementation mechanisms
Pipes
• Conduit allowing two processes to communicate
• Issues:
– Is communication unidirectional or bidirectional?
– If bidirectional, is it **half-duplex** (one way at a time) or **full-duplex** (both directions simultaneously)?
– Must there exist a relationship (i.e., **parent-child**) between the communicating processes?
– Can the pipes be used over a network?
Pipes
• Command line:
– Set up pipe between commands
`ls | more`
Output of `ls` is delivered as input to `more`
• **Ordinary (“anonymous”) pipes** — Typically, a parent process creates a pipe and uses it to communicate with a child process that it created. Cannot be accessed from outside the process that created it.
• **Named pipes (“FIFO”)** — can be accessed without a parent-child relationship.
Ordinary Pipes
- Ordinary Pipes allow communication in standard producer-consumer style
- Producer writes to one end (the write-end of the pipe)
- Consumer reads from the other end (the read-end of the pipe)
- Ordinary pipes are therefore unidirectional (half duplex)
- **Require parent-child relationship** between communicating processes
- **pipe (int fd[])** to create pipe, fd[0] is the read-end, fd[1] is the write-end
- Windows calls these **anonymous pipes**
Ordinary Pipes
- Pipe is a special type of file.
- Inherited by the child
- Must close unused portions of the pipe
UNIX pipe example
```c
#define READ_END  0
#define WRITE_END 1

int fd[2];
pid_t pid;

/* create the pipe */
if (pipe(fd) == -1) {
    fprintf(stderr, "Pipe failed");
    return 1;
}

/* fork a child process */
pid = fork();

/* parent process: */
/* close the unused end of the pipe */
close(fd[READ_END]);

/* write to the pipe */
write(fd[WRITE_END], write_msg, strlen(write_msg)+1);

/* close the write end of the pipe */
close(fd[WRITE_END]);
```
Child inherits the pipe
```c
/* child process: */
/* close the unused end of the pipe */
close(fd[WRITE_END]);

/* read from the pipe */
read(fd[READ_END], read_msg, BUFFER_SIZE);
printf("child read %s\n", read_msg);

/* close the read end of the pipe */
close(fd[READ_END]);
```
Named Pipes
- Named Pipes (termed FIFO) are more powerful than ordinary pipes
- Communication is bidirectional
- No parent-child relationship is necessary between the communicating processes
- Several processes can use the named pipe for communication
- Provided on both UNIX and Windows systems
"google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 19129, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 19129, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 19129, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 19129, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 19129, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 19129, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 19129, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 19129, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 19129, null]], "pdf_page_numbers": [[0, 4747, 1], [4747, 9451, 2], [9451, 12198, 3], [12198, 17017, 4], [17017, 19129, 5]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 19129, 0.0]]}
|
olmocr_science_pdfs
|
2024-12-01
|
2024-12-01
|
59e4a3ef47694b8eaf1ff3d6805319cf62de91a3
|
Pragmatic Software Testing Education
Mauricio Aniche
Delft University of Technology
The Netherlands
m.f.aniche@tudelft.nl
Felienne Hermans
Delft University of Technology
The Netherlands
f.f.j.hermans@tudelft.nl
Arie van Deursen
Delft University of Technology
The Netherlands
arie.vandeursen@tudelft.nl
ABSTRACT
Software testing is an important topic in software engineering education, and yet highly challenging from an educational perspective: students are required to learn several testing techniques, to be able to distinguish the right technique to apply, to evaluate the quality of their test suites, and to write maintainable test code. In this paper, we describe how we have been adding a pragmatic perspective to our software testing course, and explore students’ common mistakes, hard topics to learn, favourite learning activities, and challenges they face. To that aim, we analyze the feedback reports that our team of Teaching Assistants gave to the 230 students of our 2016-2017 software testing course at Delft University of Technology. We also survey 84 students and seven of our teaching assistants on their perceptions. Our results help educators not only to propose pragmatic software testing courses in their faculties, but also to understand the challenges that students face when taking software testing courses.
CCS CONCEPTS
• Applied computing → Education; • Software and its engineering → Software verification and validation;
KEYWORDS
software testing education, software engineering education, computer science education.
1 INTRODUCTION
Every software developer should be aware of the (high) impact that malfunctioning software can have on our society. We have seen huge losses in the financial market [30], and even researchers withdrawing their papers [33], all caused by software bugs. Making sure software works is perhaps the greatest responsibility of a software developer. Luckily, over the years, software testing has moved from being considered an activity for ‘less skilled’ software engineers to being one of the most important skills an engineer should have.
The act of inspecting large and complex code bases to find bugs is not a trivial task in the real world: engineers need to have a broad understanding of different practices that vary from simple manual exploratory testing, where a human tries to find bugs manually by interacting with the system, to advanced bleeding-edge testing techniques, such as automated testing and automated test generation, where engineers program machines to test their system.
Companies such as Facebook [12], Google [41], and Microsoft [35] take testing seriously and require their engineers to master such techniques. Surveys have shown that developers understand the importance of testing-related training [15] and yet many of them still lack formal testing education [6, 34].
Indeed, educating a student in the art of software testing is challenging, for both students and educators. From the educator’s perspective, it is hard to keep a testing course up-to-date with the novelties of the field as well as to come up with exercises that are realistic [14]. Due to the importance of the topic, educators have been experimenting with the introduction of testing earlier in Computer Science programs [17, 19–21, 23, 27], introducing a test-first approach in CS courses [9, 10, 22], developing tools focused on software testing education [11, 38], and proposing more complete postgraduate courses focused on testing [39]. Educators also face the fact that some testing topics are not conceptually straightforward, not easy to demonstrate and generalize, and are not all available in a single textbook [40].
This paper has a twofold goal. First, to present how we have been teaching pragmatic software testing to the first year CS students at Delft University of Technology. Second, we explore students’ common mistakes, hard topics to learn, favourite learning activities, and challenges they face when learning pragmatic software testing.
To this aim, we analyzed the 1,993 quotes from the feedback reports that we, as teachers and teaching assistants, gave to each of the 230 students of the 2017 edition of the Software Quality and Testing course, which is taught in the first year of our Computer Science bachelor. In addition, we performed a survey with 84 students, which we augmented by also surveying seven of our TAs. The main contributions of this paper are:
• A proposal for a pragmatic software testing course based on nine key principles that can be taught for computer science students, including building a test mindset and interaction with practitioners (Section 3).
• An empirical analysis of the students’ most common mistakes (Section 6.1), their perceptions on the most difficult topics in software testing (Section 6.2), and the importance of different teaching activities (Section 6.3) when learning pragmatic software testing.
2 RELATED WORK
Software Testing is an important part of any Software Engineering program [2, 8, 26, 42], and by itself poses several other challenges to educators. Unfortunately, the topic still does not receive its deserved attention in several CS programs. Wong [42] argues that many engineers are not well trained in software testing because most CS programs offer ST as an elective course. Clarke et al. [8] also point to the fact that, due to the large number of topics to be covered in a Software Engineering program, little attention is given to Software Testing. Astigarraga et al. [2] show that most CS programs tend to emphasize development at the expense of testing as a formal engineering discipline. Lemos et al. [26] show that software testing education can improve code reliability in terms of correctness; however, the authors also argue that university instructors tend to lack the very knowledge that would help students increase their programming skills toward more reliable code.
Educators have been suggesting different approaches for introducing testing in a CS curriculum: from students submitting their assignments together with test plans or sets [16, 17, 21], performing black-box testing on software seeded with errors [21, 24, 31], and students testing each other's programs [36], to suggesting that students use a test-first approach at the very beginning of the program [9, 10, 22, 27]. Many of these authors even suggest that tests should be incorporated into the Computer Science and Software Engineering curricula, not only as an elective discipline, but throughout the curriculum. More specifically, Jones [23] suggests that students need to see the practice of software testing as part of the educational experience and that each core course in the curriculum should impart one or more testing experiences.
In addition, educators have proposed tools that are solely focused on software testing education. Elbaum et al. [11] propose BugHunt. BugHunt is a tool that contains four different lessons on software testing (terminology, black box, white box, efficiency in testing). 79% of the students in their experiment agreed that BugHunt added significant value to the material presented in the lecture(s) on software testing, and 61% of the students agreed that BugHunt could replace the classes on testing. Spacco and Pugh propose Marmoset [38], a tool to help incentivize students to test their software. Marmoset’s innovative element is that if a submission passes all of the public test cases, then students are given the opportunity to test their code against a test suite that is not publicly disclosed.
3 PRAGMATIC SOFTWARE TESTING EDUCATION
The Software Testing and Quality Engineering course at Delft University of Technology covers several different aspects of software testing, ranging from topics in the ISTQB industry certification [5] to software testing automation, as well as the future of testing by means of selected research papers.
The course is currently a compulsory part of the 4th quarter of the first year of the Computer Science bachelor. The course corresponds to 5 ECTS (140 hours). Students have two lectures of 1.5 hours plus 4 hours of labwork a week. As a pre-requisite, students should have at least basic knowledge of the Java programming language.
The teaching team is currently composed of two teachers and a number of teaching assistants (TAs). The number of TAs varies, as our university has a policy of 1 TA per 30 students. Teachers are responsible for the course design, lectures, creating and assessing multiple choice exams, and they have the overall responsibility of the course. TAs are responsible for helping students, grading all labwork deliverables, and for giving concrete and specific feedback on what students can improve.
Learning goals. At the end of the course, students (1) are able to create unit, integration, and system tests using current existing tools (e.g., JUnit, Mockito) that successfully test complex software systems, (2) are able to derive test cases that deal with exceptional, corner, and bad weather cases by performing several different techniques (i.e., boundary analysis, state-based testing, decision tables), (3) are able to measure and reflect on the effectiveness of the developed test suites by means of different test adequacy metrics (e.g., line and branch code coverage, MC/DC), (4) are able to reflect on limitations of current testing techniques, when and when not to apply them in a given context, and to design testable software systems, and (5) are able to write maintainable test code by avoiding well-known test code smells (e.g., Assertion Roulette, Slow or Obscure Tests).
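To make goal (2) concrete, the sketch below shows what boundary analysis boils down to: testing the two values that sit immediately on either side of a boundary. The discount rule and all names are hypothetical, and plain Java asserts stand in for the JUnit style used in the course.

```java
// Hypothetical system under test: a store grants a bulk discount
// from 10 items onward, so the boundary sits between 9 and 10.
public class BoundaryDemo {
    // Returns true when the bulk discount applies (assumed rule: quantity >= 10).
    static boolean bulkDiscount(int quantity) {
        return quantity >= 10;
    }

    public static void main(String[] args) {
        // Boundary analysis: test the two values right on either side
        // of the boundary, not arbitrary in-range values.
        if (bulkDiscount(9)) throw new AssertionError("9 is the off point: no discount");
        if (!bulkDiscount(10)) throw new AssertionError("10 is the on point: discount");
        System.out.println("boundary tests passed");
    }
}
```

The same two-value pattern then carries over directly to JUnit test methods in the labwork.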
Program. The course covers software quality attributes, maintainability and testability, manual and exploratory testing, automated testing, devops, test adequacy, model-based testing, state-based testing, decision tables, reviews and inspections, design-by-contract, embedded system testing, test-driven design, unit versus integration testing, mocks and stubs. More specifically:
- Week 1: Introduction to software testing, fault vs failure, principles of testing, (un)decidability, introduction to JUnit, introduction to labwork.
- Week 2: Life cycle, validation vs verification, V-model, code reviews. Functional testing, partition testing, boundary testing, and domain testing.
- Week 3: Structural testing, adequacy criteria, code coverage. Unit vs integration vs system testing, mock objects, and test-driven development.
- Week 4: State-based testing, model-based testing, and decision tables.
- Week 6: Security testing. Search-based software testing.
- Week 7: Guest lectures from industry.
Key elements. To achieve a pragmatic software testing course, we have devised and currently follow some key elements:
Theory applied in the lecture. We put our efforts into developing lectures where students can see theory being applied to practice. Our lectures often have the following structure: we present a (buggy) code implementation (initially on slides, and later in the IDE); we discuss where the bug is; we explore, at a conceptual level, a systematic approach to detect the bug; and we apply the approach to a set of concrete examples. In other words, we do not only focus on explaining abstract ideas, but on concretely showing how to apply them to different real-world problems, using real-world tools, like JUnit, Mockito, and Cucumber.
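As a hypothetical illustration of this lecture flow (not taken from the actual course material), the sketch below shows a small off-by-one bug and the boundary-minded test that exposes it:

```java
// Hypothetical lecture example: a buggy implementation and the test
// that exposes it. The buggy version looped with `i < s.length() - 1`,
// silently skipping the last character of the input.
public class LectureDemo {
    // Fixed version: counts occurrences of ch over the whole string.
    static int count(String s, char ch) {
        int n = 0;
        for (int i = 0; i < s.length(); i++) {  // buggy version stopped one short
            if (s.charAt(i) == ch) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // A test that puts the match in the last position catches the bug.
        if (count("abca", 'a') != 2) throw new AssertionError("last char must be counted");
        if (count("", 'a') != 0) throw new AssertionError("empty string has no matches");
        System.out.println("ok");
    }
}
```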
Real-world pragmatic discussions. Software testing is a challenging activity to perform in practice. This means that developers often make trade-offs in deciding what and how much to test. Engineering questions that arise when complex software systems are being tested, such as "how much should I test?", "how should I test a mobile application that communicates with a web server?", and "should I use mocks to test this application?", are often discussed in the classroom so that students see how to extrapolate from our often small exercises to their future real life as developers.
Build a testing mindset. Software testing is not seen as an important task by many students. A software testing course should inspire students to think about testing whenever they implement any piece of code. In our testing course, we aim to achieve such a testing mindset by (1) showing how testing can be a creative activity, requiring strong developers, by means of several live coding sessions and rich pragmatic discussions, (2) demonstrating not only the usefulness of any testing technique we teach, but also how they are applied, as well as what trade-offs such techniques have in the real-world, (3) bringing guest lecturers who talk about the importance of software testing for their companies.
Software testing automation. The software engineering industry has long been advocating the automation of any software testing activity [12, 35, 41]. However, some software testing courses still focus on writing test case specifications solely as documents, and do not discuss how to automate them. In our course, for all the theoretical and systematic test design techniques we present, from functional testing to structural testing, from unit to system-level tests, students later express the resulting test cases in the form of automated tests. Mastering tools such as JUnit and Mockito, standard tools for test automation in Java, is a clear learning goal of our course. The importance of automation also strongly appears in our labwork, which we discuss next.
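As a dependency-free illustration of what such automated tests look like, the sketch below hand-rolls the mock-object idea that Mockito automates with its `mock` and `verify` calls (all class and method names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Dependency-free sketch of the mock-object idea (the course itself
// uses Mockito); all names are hypothetical.
public class MockDemo {
    interface MailServer {
        void send(String to, String body);
    }

    // Hand-rolled test double that records interactions so the test can
    // verify them afterwards -- what Mockito's `verify` automates.
    static class RecordingMailServer implements MailServer {
        final List<String> sentTo = new ArrayList<>();
        public void send(String to, String body) { sentTo.add(to); }
    }

    // System under test: notifies a customer about an invoice.
    static class InvoiceSender {
        private final MailServer mail;
        InvoiceSender(MailServer mail) { this.mail = mail; }
        void bill(String customer) { mail.send(customer, "Your invoice is ready."); }
    }

    public static void main(String[] args) {
        RecordingMailServer mail = new RecordingMailServer();
        new InvoiceSender(mail).bill("alice@example.com");
        // Interaction-based assertion: exactly one mail, to the right address.
        if (!mail.sentTo.equals(List.of("alice@example.com")))
            throw new AssertionError("expected one mail to alice@example.com");
        System.out.println("mock test passed");
    }
}
```

The design point students practice is that `InvoiceSender` depends on the `MailServer` interface, not on a concrete server, which is what makes the double injectable.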
A hands-on labwork. We see the labwork as an important learning method. In our course, by means of a practical labwork assignment, students apply a selection of techniques to a 5k lines of code game written in Java, namely, JPacMan. The labwork contains a set of 50 exercises in which students are able to exercise all the techniques we teach. It is important to notice that students not only generate test cases on the paper, but also automate them. A great amount of their work is in actually producing automated JUnit test cases.
In the following, we present the main deliverables of our labwork. The complete assignment can be found in our online appendix [1].
- **Part 0 (Pre-requisites).** Clone the project from Github, configure the project in your IDE, write your first JUnit test, run coverage analysis.
- **Part 1.** Write a smoke test, functional black-box testing, boundary tests, reflect on test understandability and best practices.
- **Part 2.** White-box testing, mock objects, calculate code coverage and apply structural testing, use decision tables for complex scenarios, reflect on how to reduce test complexity and how to avoid flaky tests.
- **Part 3.** Apply state-based testing, test reusability, refactor and reflect on test smells.
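As a hint of what the state-based part of the labwork involves, the sketch below tests the classic turnstile state machine rather than JPacMan (the machine and names are hypothetical, with plain Java asserts standing in for JUnit):

```java
// Hypothetical state-based testing sketch: a turnstile with two states.
public class TurnstileDemo {
    enum State { LOCKED, UNLOCKED }

    static State onCoin(State s) { return State.UNLOCKED; }  // a coin unlocks
    static State onPush(State s) { return State.LOCKED; }    // a push (re)locks

    public static void main(String[] args) {
        // A state-based test walks the transitions and checks each target state.
        State s = State.LOCKED;
        s = onCoin(s);
        if (s != State.UNLOCKED) throw new AssertionError("coin must unlock");
        s = onPush(s);
        if (s != State.LOCKED) throw new AssertionError("push must lock again");
        System.out.println("state-based tests passed");
    }
}
```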
Test code quality matters. Due to the importance of automated testing activities, software testers will deal with large test codebases. Empirical research has indeed shown that test code smells often occur in software systems, and that their presence has a strong negative impact on the maintainability of the affected classes [3]. We often reinforce the importance of refactoring test code and of keeping it free of smells. For any test code we write during live coding sessions, we make sure it is as free of smells as possible. Test smell catalogues, such as the one proposed by Meszaros [32], are deeply discussed in a dedicated lecture.
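A minimal, hypothetical illustration of one such smell, Assertion Roulette, and its fix:

```java
// Sketch of the Assertion Roulette smell (all names hypothetical). When
// several unexplained assertions share a test, a failure does not say
// which expectation broke; descriptive messages (or one test per
// expectation) fix that.
public class SmellDemo {
    // Parses a score string such as "3-2" into {home, away}.
    static int[] parseScore(String s) {
        String[] parts = s.split("-");
        return new int[] { Integer.parseInt(parts[0]), Integer.parseInt(parts[1]) };
    }

    public static void main(String[] args) {
        int[] score = parseScore("3-2");
        // Smelly:   assert score[0] == 3; assert score[1] == 2;  // which one failed?
        // Refactored: every assertion carries a message naming the expectation.
        if (score[0] != 3) throw new AssertionError("home goals should be 3");
        if (score[1] != 2) throw new AssertionError("away goals should be 2");
        System.out.println("no roulette: each failure is self-explanatory");
    }
}
```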
Design systems for testability. Designing software in such a way that it eases testability is a common practice among practitioners [13, 18, 29]. This requires us to not only discuss software testing in our course, but software architecture and design principles of testable software systems, such as dependency inversion [28], observability and controllability, in an entire dedicated lecture for the topic. Questions like "Do I need to test this behavior via an unit or a system test?", “How can I test my mobile application?” are extensively discussed not only through the eyes of software testing, but also to the eyes of software design.
Mixture of pragmatic and theoretical books. The two books we use as textbooks in the course are the “Foundations of software testing: ISTQB certification” [5], which gives students a solid foundation about testing theory, and the "Pragmatic Unit Testing in Java 8 with JUnit" [25], which gives students concrete and practical examples on how to use testing tools, like JUnit. We believe both complement each other and both are important for students who will soon become a software tester.
Interaction with practitioners. We strongly encourage students' interaction with practitioners throughout our course. Having guest lectures from industry practitioners helps us show the pragmatic side of software testing. Guests focus their lectures on how they apply software testing at their companies, the tools they use with their pros and cons, and the mistakes and challenges they face. In the 2017 edition, we also experimented with Ask-Me-Anything (AMA) sessions, where we called experts from all over the world via Skype and students had 15 minutes to ask any software-testing-related questions.
Grading. We currently use the following formula to grade our students: 0.25 * labwork + 0.75 * exam. The labwork (as we explain below) is composed of 4 deliverables, each graded by our TAs in the range [0..10]. We then average the grades of the four deliverables, which together compose the labwork component of the grade. At the end of the course, we give a 40-question multiple choice exam. Students may take a resit 6 weeks later if they did not pass the first time. We also offer an optional midterm exam for students who want to practice beforehand.
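A small worked example of the grading formula (the grades below are made up):

```java
// Worked example of the course grading formula; the student numbers are invented.
public class GradeDemo {
    // final grade = 0.25 * labwork + 0.75 * exam, where labwork is the
    // average of the four deliverable grades, each in [0..10].
    static double finalGrade(double[] deliverables, double exam) {
        double sum = 0;
        for (double d : deliverables) sum += d;
        double labwork = sum / deliverables.length;
        return 0.25 * labwork + 0.75 * exam;
    }

    public static void main(String[] args) {
        // Deliverables 8, 7, 9, 8 average to a labwork grade of 8.0;
        // with an exam grade of 6.0: 0.25 * 8.0 + 0.75 * 6.0 = 6.5.
        System.out.println(finalGrade(new double[] {8, 7, 9, 8}, 6.0));  // 6.5
    }
}
```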
4 RESEARCH METHODOLOGY
The goal of this study is to provide a better understanding of the difficulties and challenges that students face when learning pragmatic software testing.
To that aim, we analyze the data from 230 students of the 2016-2017 edition of our software testing course. We propose three research questions:
**RQ1.** What common mistakes do students make when learning software testing?
**RQ2.** Which software testing topics do students find hardest to learn?
**RQ3.** Which teaching methods do students find most helpful?
To answer our research questions, we collect and analyze data from three different sources: the feedback reports that TAs give to students throughout the course, a survey with students, and a survey with the TAs, both performed after the course. We characterize the participants in Section 5. In the following, we detail the three parts of our methodology.
**Manual content analysis on the feedback.** As we explain in Section 3, students work on and produce four deliverables during the course. After each deliverable, our team of TAs manually reads students’ reports, source code, and tests, and with the help of a rubric, provides them with rich qualitative feedback.
This feedback usually contains several quotes that touch on a mix of different topics, such as mistakes they made in the exercises, tips on how to improve their existing work, issues on the written report, and even compliments for their good work. The language of such feedback reports is usually informal, as we do not give constraints to TAs on how the feedback should be.
We analyze the content of all feedback reports. To that aim, we first filter out any feedback that is not directly related to software testing (e.g., comments on exercises that were not done, or compliments). We then follow an iterative process, derived from standard qualitative data analysis procedures [37]: (1) we assign a code for each quote in the feedback; the code summarizes the essence of the quote, (2) if a quote does not belong to any existing codes, we introduce a new code, (3) each quote has just a single code; if a quote tackles two different problems, we split the original quote into two quotes, (4) to assign the correct code to a quote, we used our knowledge of the testing course, labwork, and the existing rubrics. We assigned 40 different codes to a total of 1,993 quotes. As a next step, we started an iterative merging process to derive the final themes, by grouping similar codes into higher-level themes, e.g., the theme “maintainability of test code” contains quotes from the “test quality”, and “test duplication” codes. We ended up with eight themes that we present in the Results (Section 6).
**Survey with students.** With the goal of capturing their perceptions on learning software testing, we asked students to answer a questionnaire that contained both open and closed questions at the end of the course.
The survey contains a total of 18 questions, none of which are required. The two closed questions of the survey asked students about the difficulty of learning and putting into practice the concepts and techniques we taught, and about the importance of the different activities we used throughout the course. In these questions, students had to choose from a five-point Likert scale, ranging from strongly disagree to strongly agree (see Figures 2 and 3). The open questions were mostly focused on understanding the students’ main challenges, difficulties, and suggestions for improvements to our testing course. We apply qualitative techniques to analyze the results of each open question individually, similarly to our analysis of the feedback reports. The full survey as well as the full code book can be found in our online appendix [1].
We did not make answering the survey compulsory for the students. We received 84 complete answers out of the 230 students.
**Survey with Teaching Assistants.** Our TAs support students throughout the course, by answering their questions, supporting their work during the lab, and by grading their assignments. As a consequence of such intense contact with students, TAs obtain a good perspective on the challenges of teaching software testing.
We also performed a similar survey with TAs, focusing on what they perceive as challenges for students. The survey contained the same two closed questions from the students' survey (challenges when applying software testing, and the importance of the different activities). In the open questions, we focused on asking about the common mistakes students make during the lab, as well as their perceptions of the challenges that students face.
We shared the survey internally at the end of our course. We also did not make answering the survey compulsory for TAs. At the end, we received 7 complete answers out of the 10 TAs.
5 CHARACTERIZATION OF THE PARTICIPANTS
**Students.** 66 students identify themselves as male, 8 as female, and 10 preferred not to answer. 89.3% of the students are between 18 and 24 years old, five are between 25 and 34, and four are 17 or younger. Only three students were international students. In terms of Java knowledge, on a scale from 1 to 10, 9.5% of students rate their knowledge between 9 and 10, and 72% of them consider themselves between 7 and 8. Only 4 students consider themselves 5 or below.
Thanks to the introduction to JUnit that students receive during their very first course on programming, most of them already had some knowledge of software testing prior to our course. In fact, as we show in Figure 1, before the course starts, on a scale from 1 to 10, 39% of them consider themselves between 6 and 8, 44% between 4 and 5, and only 16% between 1 and 3. No student considered herself a 9 or 10. Students considered that their knowledge increased after the course. All of them rated their knowledge after the course as 6 or greater; 39% of them ranked themselves with an 8, and 14.6% with a 9. Two students ranked themselves with a 10.
Teaching Assistants. All TAs are between 18 and 24 years old, one of them being female. They all ranked their Java knowledge between 8 and 10, and their software testing knowledge between 7 and 8. Four of them are TAs for the first time in our course; the other three TAs are performing this role for the third year in a row.
6 RESULTS
6.1 RQ1: What common mistakes do students make when learning software testing?
We characterize the labwork feedback in eight different themes (ordered by their frequency): test coverage, maintainability of test code, understanding testing concepts, boundary testing, state-based testing, assertions, mock objects, and tools.
Test coverage (416 times, 20.87%). Students commonly either miss tests, i.e., they do not provide all the expected tests for a given piece of code, or they write tests that are not totally correct, e.g., the test does not actually test the piece of code, or the test exercises the wrong class. In addition, we also observed cases (14) where the student actually “overtested” (i.e., wrote tests for more cases than required).
Maintainability of test code (407 times, 20.42%). Students often need advice on how to write maintainable test code. More specifically: general test quality advice, such as better naming and reducing excessive complexity (247); code duplication and lack of reusability (69); tests that could be split in two (31); and better usage of test cleanup features, such as JUnit’s Before and After (47).
Understanding testing concepts (306 times, 15.35%). Students provide incomplete answers or have difficulties when it comes to questions that involve testing concepts and ideas, such as what flaky tests are about, advantages and disadvantages of unit and system tests, and the importance of removing test smells.
Boundary testing (258 times, 12.95%). Students often miss all the tests required to cover a boundary (142). As we also ask them to first build a decision table and then derive the tests, we also see that they often miss elements in the table (50) and generate tables that are not fully correct (46).
State-based testing (247 times, 12.39%). When it comes to state-based testing, students often miss or create wrong states or events (56) and transitions (72), or develop non-clear or not legible state machines (68).
Assertions (158 times, 7.93%). Most feedback related to assertions focus on missing assertions, i.e., the student forgot to assert one or more expected result, and on assertions that are wrong or should not exist in that test.
Mock Objects (117 times, 5.87%). Students required some feedback on how to use mock objects. More specifically, on how to properly verify interactions with mock objects (i.e., Mockito’s ‘verify’ method) and to explain when one should mock an object.
Tools (84 times, 4.21%). Students sometimes do not use the tools properly. More specifically to our course, students commonly use JUnit 4 features instead of JUnit 5, do not correctly use AssertJ’s fluent API, and make wrong use of Cucumber features.
TAs' perspective. Overall, the observations of the TAs match what we observed in the labwork analysis. In terms of testing best practices, TAs mentioned helping students write maintainable test code. According to one TA, students often write tests that contain unnecessary code and weird interactions with the class under test. In addition, according to one TA, students do not clearly see how to reuse test code. Another TA mentioned that a common question is how to properly test exceptions. Finally, a TA also observed that students often write tests that do not actually exercise any production code (in this case, JUnit still shows a green bar, giving the student a false impression of success).
6.2 RQ2: Which software testing topics do students find hardest to learn?
In Figure 2, we show, based on the survey data, how students and TAs perceive the difficulty of each of the topics we teach.
Most students consider using the JUnit framework (Q1), as well as thinking in terms of the Arrange-Act-Assert pattern that structures any unit test (Q2), easy to learn. In fact, 76% and 73% of students consider it easy or very easy to learn JUnit and to use the AAA pattern, respectively. These perceptions are also shared by TAs and match the RQ1 results, as the amount of feedback related to bad tool usage is small (4.21%).
Interestingly, applying MC/DC (Modified Condition/Decision Coverage) [7] criteria to test complicated conditions (Q7) was considered hard or very hard by 49% of the students; this is the hardest topic among all of them. However, it seems that other coverage criteria are easier to learn, as only 16% of students considered structural testing hard (Q6).
Applying software testing in a pragmatic way was, as expected, considered hard by students. Deciding how much testing is enough (Q14) is considered a hard topic by 42% of students (the second-hardest topic). TAs agree and even perceive this topic as harder than the students do. This result also matches our findings in RQ1, where test coverage is the most prominent topic in feedback. In addition, writing the minimum set of tests that gives confidence (Q18) is considered hard by 25% of students and neutral by 40%. Choosing the right level of testing (e.g., unit, integration, or system tests) is not considered easy by all of them: only 29% consider it easy, while 71% of TAs perceive it as a hard topic for students. Following testing best practices (Q10), on the other hand, was considered important by 72% of participants; 38% are neutral, and 30% do not consider them important.
Moreover, different interactions during the lecture are also considered important by students. Teachers performing live coding (Q3) and discussions and interactions during the lecture (Q4) are considered important by 75% and 65% of students, respectively. We conjecture that discussions and live coding are moments in which students have the opportunity to discuss the topics they consider hard, such as how much testing is enough, which test level to use, and test code best practices (as seen in RQ1 and RQ2).
On the other hand, the two books we use as textbooks in the course are not considered fundamental by students. More specifically, 31% of students find the ISTQB book [5] not important and 36% are neutral (Q6), whereas 29% of them find the PragProg book [25] not important and 51% are neutral (Q5). Reading related papers (Q9) is also considered not important by 35% of them.
6.4 Limitations of our study
The qualitative analysis of the open questions in the survey was manually conducted by the first author of this paper. The analysis, therefore, could be biased towards the views of the authors. To mitigate the threat, we make all the data available for inspection in our online appendix [1].
TAs were responsible for giving feedback to students throughout the study. Although we instructed all TAs on how to grade and what kind of feedback to give (they all follow the same rubrics), different TAs have different personalities. In practice, we observed that some TAs provided more feedback than others. While we believe this had little impact on the percentages of each theme in RQ1, and we do not expect any other theme would have emerged.
In terms of generalizability, although we analyzed the behavior of 230 students, we do not claim that our results are complete and/or generalizable. Furthermore, most students were Dutch (we only had 3 international students answering our survey), which may introduce cultural bias to our results. We urge researchers to perform replications of this study in different countries and universities.
7 CONCLUSIONS
Software testing is a vital discipline in any Software Engineering curriculum. However, the topic poses several challenges to educators and to students. In this paper, we proposed a pragmatic software testing curriculum and explored students’ common mistakes, hard topics to learn, favorite learning activities, important learning outcomes, and challenges they face when studying software testing.
Researchers and educators agree that software testing education is fundamental not only to industry, but also to research. We hope this paper helps the community to improve even more the quality of their software testing courses. As Bertolino [4] states in her paper on the achievements, challenges, and dreams on software testing research: “While it is research that can advance the state of the art, it is only by awareness and adoption of those results by the next-coming generation of testers that we can also advance the state of practice. Education must be continuing, to keep the pace with the advances in testing technology”.
ACKNOWLEDGMENTS
We thank all the students and teaching assistants that followed our course in the last years.
REFERENCES
AS112 Redirection Using DNAME
Abstract
AS112 provides a mechanism for handling reverse lookups on IP addresses that are not unique (e.g., RFC 1918 addresses). This document describes modifications to the deployment and use of AS112 infrastructure that will allow zones to be added and dropped much more easily, using DNAME resource records.
This approach makes it possible for any DNS zone administrator to sink traffic relating to parts of the global DNS namespace under their control to the AS112 infrastructure without coordination with the operators of AS112 infrastructure.
Status of This Memo
This document is not an Internet Standards Track specification; it is published for informational purposes.
This document is a product of the Internet Engineering Task Force (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741.
Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at http://www.rfc-editor.org/info/rfc7535.
1. Introduction
Many sites connected to the Internet make use of IPv4 addresses that are not globally unique. Examples are the addresses designated in [RFC1918] for private use within individual sites.
Devices in such environments may occasionally originate Domain Name System (DNS) queries (so-called "reverse lookups") corresponding to those private-use addresses. Since the addresses concerned have only local significance, it is good practice for site administrators to ensure that such queries are answered locally. However, it is not uncommon for such queries to follow the normal delegation path in the public DNS instead of being answered within the site.
It is not possible for public DNS servers to give useful answers to such queries. In addition, due to the wide deployment of private-use addresses and the continuing growth of the Internet, the volume of such queries is large and growing. The AS112 project aims to provide a distributed sink for such queries in order to reduce the load on the IN-ADDR.ARPA authoritative servers. The AS112 project is named after the Autonomous System Number (ASN) that was assigned to it.
Prior to implementation of this technique, the AS112 project did not accommodate the addition and removal of DNS zones elegantly. Since additional zones of definitively local significance are known to exist, this presents a problem. This document describes modifications to the deployment and use of AS112 infrastructure that will allow zones to be added and dropped much more easily.
The AS112 project is described in detail in [RFC7534].
The AS112 nameservers (PRISONER.IANA.ORG, BLACKHOLE-1.IANA.ORG, and BLACKHOLE-2.IANA.ORG) are required to answer authoritatively for each and every zone that is delegated to them. If a zone is delegated to AS112 nameservers without those nameservers being configured ahead of time to answer authoritatively for that zone, there is a detrimental impact on clients following referrals for queries within that zone. This misconfiguration is colloquially known as a "lame delegation".
AS112 nameserver operators are only loosely coordinated, and hence adding support for a new zone (or, correspondingly, removing support for a zone that is no longer delegated to the AS112 nameservers) is difficult to accomplish with accuracy. Testing AS112 nameservers remotely to see whether they are configured to answer authoritatively for a particular zone is similarly challenging, since AS112 nodes are distributed using anycast [RFC4786].
This document defines a more flexible approach for sinking queries on AS112 infrastructure that can be deployed alongside unmodified, existing AS112 nodes. Instead of delegating additional zones directly to AS112 nameservers, DNAME [RFC6672] redirection is used. This approach has the advantage that query traffic for arbitrary parts of the namespace can be directed to AS112 servers without those servers having to be reconfigured every time a zone is added or removed.
This approach makes it possible for any DNS zone administrator to sink traffic relating to parts of the global DNS namespace under their control to the AS112 infrastructure without coordination with the operators of AS112 infrastructure.
2. Design Overview
A new zone, EMPTY.AS112.ARPA, is delegated to a single nameserver BLACKHOLE.AS112.ARPA (IPv4 address 192.31.196.1, IPv6 address 2001:4:112::1).
The IPv4 address 192.31.196.1 has been selected from the prefix assigned by the IANA such that the address is coverable by a single IPv4 /24 prefix, and that no other address covered by that prefix is in use. The IPv6 address 2001:4:112::1 has been similarly assigned such that no other address within a covering /48 is in use. This addressing plan accommodates the anycast distribution of the BLACKHOLE.AS112.ARPA service using a single IPv4 service prefix and a single IPv6 service prefix. See [RFC4786] for more discussion of anycast service distribution; see Section 8 for the specific actions completed by IANA per this document.
Some or all of the existing AS112 nodes should be extended to support these new nameserver addresses and to host the EMPTY.AS112.ARPA zone. See [RFC7534] for revised guidance to AS112 server operators.
Each part of the DNS namespace for which it is desirable to sink queries at AS112 nameservers should be redirected to the EMPTY.AS112.ARPA zone using DNAME [RFC6672]. See Section 3.2 for guidance to zone administrators.
3. AS112 Operations
3.1. Extensions to Support DNAME Redirection
Guidance to operators of AS112 nodes is extended to include configuration of the 192.31.196.1 and 2001:4:112::1 addresses, the corresponding announcement of covering routes for those addresses, and hosting of the EMPTY.AS112.ARPA zone.
IPv4-only AS112 nodes should only configure the 192.31.196.1 nameserver address; IPv6-only AS112 nodes should only configure the 2001:4:112::1 nameserver address.
It is only necessary for a single AS112 server operator to implement these extensions for this mechanism to function as intended. It is beneficial if many more than one AS112 server operator makes these changes, however, since that provides for greater distribution and capacity for the nameservers serving the EMPTY.AS112.ARPA zone. It is not necessary for all AS112 server operators to make these changes for the mechanism to be viable.
Detailed instructions for the implementation of these extensions are included in [RFC7534].
3.2. Redirection of Query Traffic to AS112 Servers
Once the EMPTY.AS112.ARPA zone has been deployed using the nameservers described in Section 3.1, redirections may be installed in the DNS namespace for queries that are intended to be answered by the AS112 infrastructure.
For example, reverse queries corresponding to TEST-NET-1 (192.0.2.0/24) [RFC5737] could be redirected to AS112 nameservers by installing a DNAME resource record in the 192.IN-ADDR.ARPA zone, as illustrated in Figure 1.
```
$ORIGIN 192.IN-ADDR.ARPA.
...
2.0 IN DNAME EMPTY.AS112.ARPA.
...
```
Figure 1
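The redirection in Figure 1 works by label substitution as defined in RFC 6672: the labels of the query name below the DNAME owner are prepended to the DNAME target. A minimal sketch of that substitution in plain Python (string manipulation only, not a full RFC 6672 implementation; it ignores escaped labels and name-length limits):

```python
def dname_substitute(qname, owner, target):
    """Synthesize the name a resolver looks up after following a DNAME
    at `owner` pointing to `target` (RFC 6672 substitution).
    Names are dotted strings; a trailing root dot is optional."""
    qname, owner, target = (n.rstrip(".").lower() for n in (qname, owner, target))
    suffix = "." + owner
    if not qname.endswith(suffix):
        raise ValueError("DNAME applies only to names strictly below its owner")
    prefix = qname[: -len(suffix)]   # the labels below the DNAME owner
    return prefix + "." + target

# The PTR name for 192.0.2.3 under the redirection of Figure 1:
print(dname_substitute("3.2.0.192.in-addr.arpa",
                       "2.0.192.in-addr.arpa",
                       "empty.as112.arpa"))
# → 3.empty.as112.arpa
```

Note that, per RFC 6672, the DNAME owner name itself is not redirected; only names strictly below it are, which is why the sketch raises an error for an exact match.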
There is no practical limit to the number of redirections that can be configured in this fashion. Redirection of a particular part of the namespace to EMPTY.AS112.ARPA can be removed at any time, under the control of the administrators of the corresponding part of the DNS namespace. No changes to deployed AS112 nodes incorporating the
extensions described in this document are required to support additional redirections. A list of possible candidates for AS112 redirection can be found in Section 5.
DNAME resource records deployed for this purpose can be signed with DNSSEC [RFC4033], providing a secure means of authenticating the legitimacy of each redirection.
4. Continuity of AS112 Operations
Existing guidance to AS112 server operators to accept and respond to queries directed at the PRISONER.IANA.ORG, BLACKHOLE-1.IANA.ORG, and BLACKHOLE-2.IANA.ORG nameservers should continue to be followed, and no changes to the delegation of existing zones hosted on AS112 servers should occur. These measures are intended to provide continuity of operations for zones currently delegated to AS112 servers and avoid any accidental client impact due to the changes proposed in this document.
Once it has become empirically and quantitatively clear that the EMPTY.AS112.ARPA zone is hosted at least as well as the existing, unmodified AS112 servers host 10.IN-ADDR.ARPA, the decision might be made to replace the delegation of those [RFC1918] zones with DNAME redirection. Once implemented, the PRISONER.IANA.ORG, BLACKHOLE-1.IANA.ORG, and BLACKHOLE-2.IANA.ORG nameservers could be retired. This document gives no such direction to the IANA, however.
5. Candidate Zones for AS112 Redirection
All zones listed in [RFC6303] are candidates for AS112 redirection.
Since no pre-provisioning is required on the part of AS112 operators to facilitate sinking of any name in the DNS namespace by AS112 infrastructure, this mechanism supports AS112 redirection by any zone owner in the DNS.
This document is simply concerned with provision of the AS112 redirection service and does not specify that any particular AS112 redirection be put in place.
6. DNAME Deployment Considerations
DNAME was specified years after the original implementations of [RFC1035], and hence universal deployment cannot be expected. [RFC6672] specifies a fallback mechanism that makes use of synthesised CNAME RRsets for this reason. The expectation that design choices in the DNAME specification ought to mitigate any lack of deployment is reviewed below. Experimental validation of those expectations is included in Appendix A.
It is a fundamental design requirement of AS112 service that responses be cached. We can safely declare DNAME support on the authoritative server to be a prerequisite for DNAME redirection, but the cases where individual elements in resolver chains do not support DNAME processing deserve closer examination.
The expected behaviour when a DNAME response is supplied to a resolver that does not support DNAME is that the accompanying, synthesised CNAME will be accepted and cached. Re-query frequency will be determined by the TTLs (Time to Live) returned by the DNAME-responding authoritative servers.
Resolution of the CNAME target is straightforward and functions exactly as the AS112 project has operated since it was deployed. The negative caching [RFC2308] of the CNAME target follows the parameters defined in the target zone, EMPTY.AS112.ARPA. This has the side effects that all redirected names ultimately landing on an AS112 node will be negatively cached with the same parameters, but this lack of flexibility seems non-controversial; the effect of reducing the negative cache TTL would be increased query volume on the AS112 node operator concerned, and hence controls seem well aligned with operation.
Validating resolvers (i.e., those requesting and processing DNSSEC [RFC4033] metadata) are required to implement DNAME and hence should not make use of synthesised CNAME RRs. The lack of signature over a received CNAME RR should hence not limit the ability to sign the (DNAME) redirection point, and for those (DNAME) signatures to be validated.
In the case where a recursive server implements DNAME but DNAME is not implemented in a stub resolver, CNAME synthesis will again provide a viable path.
DNAME support on AS112 nodes themselves is never required under this proposal.
7. IAB Statement Regarding This .ARPA Request
With the publication of this document, the IAB approves of the delegation of 'AS112' in the ARPA domain. Under [RFC3172], the IAB has requested that IANA delegate and provision "AS112.ARPA" as specified in this document. However, the IAB does not take any architectural or technical position about this specification.
8. IANA Considerations
8.1. Address Assignment
Per this document, IANA has assigned IPv4 and IPv6 number resources in conformance with Section 4 of [RFC2860].
The IANA has assigned one IPv4 /24 netblock and registered its use in the "IANA IPv4 Special-Purpose Address Registry" [RFC6890] as follows:
<table>
<thead>
<tr>
<th>Name</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Address Block</td>
<td>192.31.196.0/24</td>
</tr>
<tr>
<td>Name</td>
<td>AS112-v4</td>
</tr>
<tr>
<td>RFC</td>
<td>RFC 7535</td>
</tr>
<tr>
<td>Allocation Date</td>
<td>2014-12</td>
</tr>
<tr>
<td>Termination Date</td>
<td>N/A</td>
</tr>
<tr>
<td>Source</td>
<td>True</td>
</tr>
<tr>
<td>Destination</td>
<td>True</td>
</tr>
<tr>
<td>Forwardable</td>
<td>True</td>
</tr>
<tr>
<td>Global</td>
<td>True</td>
</tr>
<tr>
<td>Reserved-by-Protocol</td>
<td>False</td>
</tr>
</tbody>
</table>
IANA has assigned one IPv6 /48 netblock and registered its use in the "IANA IPv6 Special-Purpose Address Registry" [RFC6890] as follows:
<table>
<thead>
<tr>
<th>Name</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Address Block</td>
<td>2001:4:112::/48</td>
</tr>
<tr>
<td>Name</td>
<td>AS112-v6</td>
</tr>
<tr>
<td>RFC</td>
<td>RFC 7535</td>
</tr>
<tr>
<td>Allocation Date</td>
<td>2014-12</td>
</tr>
<tr>
<td>Termination Date</td>
<td>N/A</td>
</tr>
<tr>
<td>Source</td>
<td>True</td>
</tr>
<tr>
<td>Destination</td>
<td>True</td>
</tr>
<tr>
<td>Forwardable</td>
<td>True</td>
</tr>
<tr>
<td>Global</td>
<td>True</td>
</tr>
<tr>
<td>Reserved-by-Protocol</td>
<td>False</td>
</tr>
</tbody>
</table>
8.2. Hosting of AS112.ARPA
The IANA hosts and signs the zone AS112.ARPA using nameservers and DNSSEC signing infrastructure of their choosing, as shown in Figure 2. SOA RDATA may be adjusted by the IANA to suit their operational requirements.
```
$ORIGIN AS112.ARPA.
$TTL 3600
@          IN SOA BLACKHOLE.AS112.ARPA. NOC.DNS.ICANN.ORG. (
                      1        ; serial
                      10800    ; refresh
                      3600     ; retry
                      1209600  ; expire
                      3600 )   ; negative cache TTL
           NS     A.IANA-SERVERS.NET.
           NS     B.IANA-SERVERS.NET.
           NS     C.IANA-SERVERS.NET.
BLACKHOLE  A      192.31.196.1
           AAAA   2001:4:112::1
HOSTNAME   NS     BLACKHOLE
EMPTY      NS     BLACKHOLE
```
Figure 2
8.3. Delegation of AS112.ARPA
The IANA has arranged delegation from the ARPA zone according to normal IANA procedure for ARPA zone management, to the nameservers used in carrying out the direction in Section 8.2. The whois contact information for the new record is specified by the IAB under [RFC3172].
9. Security Considerations
This document presents no known additional security concerns to the Internet.
For security considerations relating to AS112 service in general, see [RFC7534].
10. References
10.1. Normative References
10.2. Informative References
Appendix A. Assessing Support for DNAME in the Real World
To measure the extent to which the DNAME construct is supported in the Internet, we have used an experimental technique to test the DNS resolvers used by end hosts and derive from the test a measurement of DNAME support within the Internet.
A.1. Methodology
The test was conducted by loading a user’s browser with four URLs to retrieve. The first three comprise the test setup, while the final URL communicates the result to the experiment controller. The URLs are:
```
A  http://a.<unique_string>.dname.example.com/1x1.png?a.<unique_string>.dname
B  http://b.dname.example.com/1x1.png?b.<unique_string>.dname
C  http://c.<unique_string>.target.example.net/1x1.png?c.<unique_string>.target
D  http://results.recorder.example.net/1x1.png?results.<unique_string>?za=<a_result>&zb=<b_result>&zc=<c_result>
```
The A URL is designed to test the end user’s capability to resolve a name that has never been seen before, so that the resolution of this domain name will reliably result in a query at the authoritative nameserver. This is intended to test the use of domain names where there is a dynamic component that also uses the DNAME construct.
The B URL is deliberately designed to be cached by caching resolvers that are used in the process of resolving the domain name.
The C URL is a control URL. This is a unique URL, similar to A, but does not refer to a DNAME structure.
The D URL uses a static cacheable domain name.
The <unique_string> value is common to the four URLs used in each individual instance of this test but varies from test to test. The result is that each end user is presented with a unique string.
The contents of the EXAMPLE.COM, TARGET.EXAMPLE.NET, and RECORDER.EXAMPLE.NET zones are shown in Figure 3.
```
$ORIGIN EXAMPLE.COM.
...
DNAME IN DNAME TARGET.EXAMPLE.NET.
...
$ORIGIN TARGET.EXAMPLE.NET.
...
B IN A 192.0.2.0
* IN A 192.0.2.0
...
$ORIGIN RECORDER.EXAMPLE.NET.
...
RESULTS IN A 192.0.2.0
...
```
Figure 3
The first three URLs (A, B, and C) are loaded as tasks into the user’s browser upon execution of the test’s script. The script starts a timer with each of these URLs to measure the elapsed time to fetch the URL. The script then waits for the three fetches to complete, or 10 seconds, whichever occurs first. The script then loads the results of the three timers into the GET arguments of the D URL and performs a fetch to pass these results back to the experiment’s server.
Logs on the web server reached at RESULTS.RECORDER.EXAMPLE.NET will include entries of the form shown in Figure 4. If any of the URLs fail to load within 10 seconds, the D URL will report the failure as a "null" timer value.
```
GET /1x1.png?results.<unique_string>?za=1822&zb=1674&zc=1582
GET /1x1.png?results.<unique_string>?za=null&zb=null&zc=161
```
Figure 4
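The timer values in Figure 4 are easy to extract mechanically. A small sketch of such a parser (the log format is only what Figure 4 shows; the field names za/zb/zc come from the D URL):

```python
import re

def parse_result_line(line):
    """Extract the (A, B, C) timers, in ms, from a result-recorder log line.
    A timer is None when the corresponding URL failed to load in time."""
    m = re.search(r"za=(\w+)&zb=(\w+)&zc=(\w+)", line)
    if m is None:
        return None
    return tuple(None if v == "null" else int(v) for v in m.groups())

print(parse_result_line("GET /1x1.png?results.abc123?za=1822&zb=1674&zc=1582"))
# → (1822, 1674, 1582)
```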
The script has been encoded in Adobe Flash with a simple image in the form of an online advertisement. An online advertisement network has been used to distribute the script. The script is invoked when the advertisement is presented in the end user’s browser or application and does not require the user to click on the supplied image in any way. The advertisement placement parameters were set to the broadest possible scope to sample users from across the entire Internet.
A.2. Results
The test was loaded into an advertisement distributed on 2013-10-10 and 2013-10-11.
<table>
<thead>
<tr>
<th></th>
<th>Count</th>
<th>Percentage</th>
</tr>
</thead>
<tbody>
<tr>
<td>Recorded Results:</td>
<td>338,478</td>
<td></td>
</tr>
<tr>
<td>A or B Loaded:</td>
<td>331,896</td>
<td>98.1%</td>
</tr>
<tr>
<td>A Fail and B Fail:</td>
<td>6,492</td>
<td>1.9%</td>
</tr>
<tr>
<td>A Fail and B Load:</td>
<td>4,249</td>
<td>1.3%</td>
</tr>
<tr>
<td>A Load and B Fail:</td>
<td>1,624</td>
<td>0.5%</td>
</tr>
<tr>
<td>C Fail:</td>
<td>9,355</td>
<td>2.8%</td>
</tr>
</tbody>
</table>
Table 1
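Each percentage in Table 1 is simply the row count divided by the 338,478 recorded results. A quick check reproducing the table's rounded figures:

```python
TOTAL = 338_478   # recorded results
counts = {
    "A or B Loaded": 331_896,
    "A Fail and B Fail": 6_492,
    "A Fail and B Load": 4_249,
    "A Load and B Fail": 1_624,
    "C Fail": 9_355,
}
percentages = {k: round(100 * v / TOTAL, 1) for k, v in counts.items()}
print(percentages)
# → {'A or B Loaded': 98.1, 'A Fail and B Fail': 1.9, 'A Fail and B Load': 1.3,
#    'A Load and B Fail': 0.5, 'C Fail': 2.8}
```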
These results indicate that at most 1.9% of tested clients use DNS resolvers that fail to resolve a domain name that contains a DNAME redirection. However, the failure rate of slightly lower than 3% for the control URL indicates that the failure rate for the DNAME construct lies within the bounds of error within the experimental framework. We conclude that there is no evidence of a consistent failure on the part of deployed DNS resolvers to correctly resolve a DNAME construct.
This experiment was conducted by Geoff Huston and George Michaelson.
Acknowledgements
The authors acknowledge the valuable contributions of Bob Harold and other participants in the DNSOP working group in the preparation of this document.
Authors’ Addresses
Joe Abley
Dyn, Inc.
103-186 Albert Street
London, ON N6A 1M1
Canada
Phone: +1 519 670 9327
EMail: jabley@dyn.com
Brian Dickson
Twitter, Inc.
EMail: bdickson@twitter.com
Warren Kumari
Google
1600 Amphitheatre Parkway
Mountain View, CA 94043
United States
EMail: warren@kumari.net
George Michaelson
APNIC
EMail: ggm@apnic.net
Another way to utilize Lemma 3.15 ($\Delta h_{ij}, \Delta v_{ij} \in \{-1, 0, 1\}$) is to use precomputed tables to process multiple matrix cells at a time.
- There are at most $3^m$ different columns. Thus there exists a deterministic automaton with $3^m$ states and $3^m$ transitions that can find all approximate occurrences in $O(n)$ time. However, the space and construction time of the automaton can be too big to be practical.
- There is a super-alphabet algorithm that processes $O(\log n)$ characters at a time and $O(\log^2 n)$ matrix cells at a time using lookup tables of size $O(n)$. This gives time complexity $O(mn/\log^2 n)$.
- A practical variant uses smaller lookup tables to compute multiple entries of a column at a time.
The following lemma shows the property used by the Baeza-Yates-Perleberg algorithm and proves that it satisfies the first condition.
**Lemma 3.23:** Let $P_1 P_2 \ldots P_{k+1} = P$ be a partitioning of the pattern $P$ into $k+1$ nonempty factors. Any string $S$ with $ed(P, S) \leq k$ contains $P_i$ as a factor for some $i \in [1..k+1]$.
**Proof.** Each single symbol edit operation can change at most one of the pattern factors $P_i$. Thus any set of at most $k$ edit operations leaves at least one of the factors untouched. \qed
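The lemma is easy to check on a concrete instance (pattern, partition and edits chosen for illustration). A Python sketch:

```python
def partition(P, k):
    """Split P into k+1 factors of near-equal length."""
    r = len(P) // (k + 1)
    cuts = [i * r for i in range(k + 1)] + [len(P)]
    return [P[cuts[i]:cuts[i + 1]] for i in range(k + 1)]

P, k = "approximate", 2
factors = partition(P, k)          # ['app', 'rox', 'imate']

# Apply k = 2 single-symbol edits: one substitution, one deletion.
S = list(P)
S[1] = "x"                         # substitution at position 1
del S[7]                           # deletion of the 'm'
S = "".join(S)                     # 'axproxiate'

# Lemma 3.23: S still contains at least one factor verbatim.
assert any(f in S for f in factors)
```

Here both edits miss the middle factor `rox`, which therefore survives untouched.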
Let us analyze the average case time complexity of the verification phase.
- The best pattern partitioning is as even as possible. Then each pattern factor has length at least $r = \lfloor m/(k+1) \rfloor$.
- The expected number of exact occurrences of a random string of length $r$ in a random text of length $n$ is at most $n/\sigma^r$.
- The expected total verification time is at most $O\left(\frac{m^2(k+1)n}{\sigma^r}\right) \leq O\left(\frac{m^3n}{\sigma^r}\right)$.
This is $O(n)$ if $r \geq 3\log_\sigma m$.
- The condition $r \geq 3\log_\sigma m$ is satisfied when $(k+1) \leq m/(3\log_\sigma m + 1)$.
**Theorem 3.24:** The average case time complexity of the Baeza-Yates-Perleberg algorithm is $O(n)$ when $k \leq m/(3\log_\sigma m + 1) - 1$.
**Baeza-Yates-Perleberg Filtering Algorithm**
A filtering algorithm for approximate string matching searches the text for factors having some property that satisfies the following conditions:
1. Every approximate occurrence of the pattern has this property.
2. Strings having this property are reasonably rare.
3. Text factors having this property can be found quickly.
Each text factor with the property is a potential occurrence, which is then verified for whether it is an actual approximate occurrence.
Filtering algorithms can achieve linear or even sublinear average case time complexity.
The algorithm has two phases:
**Filtration:** Search the text for exact occurrences of the pattern factors $P_i$. Using the Aho-Corasick algorithm this takes $O(n)$ time for a constant alphabet.
**Verification:** An area of length $O(m)$ surrounding each potential occurrence found in the filtration phase is searched using the standard dynamic programming algorithm in $O(m^2)$ time per potential occurrence.
The worst case time complexity is $O(m^2n)$, which can be reduced to $O(mn)$ by combining any overlapping areas to be searched.
Many variations of the algorithm have been suggested:
- The filtration can be done with a different multiple exact string matching algorithm.
- The verification time can be reduced using a technique called hierarchical verification.
- The pattern can be partitioned into fewer than $k+1$ pieces, which are searched allowing a small number of errors.
A lower bound on the average case time complexity is $\Omega(n(k + \log m)/m)$, and there exists a filtering algorithm matching this bound.
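The filtration and verification phases described above can be sketched directly (a naive `str.find` stands in for Aho–Corasick in the filtration phase, and the standard column-by-column dynamic programming is used for verification; function names are illustrative):

```python
def approx_ends(P, T, k):
    """Standard DP for approximate matching: report end positions j such
    that some factor of T ending at j is within edit distance k of P."""
    m = len(P)
    D = list(range(m + 1))           # DP column, D[i] for pattern prefix P[:i]
    ends = []
    for j, c in enumerate(T):
        prev_diag, D[0] = D[0], 0    # an occurrence may start at any position
        for i in range(1, m + 1):
            cur = min(D[i] + 1,                      # previous column, same row
                      D[i - 1] + 1,                  # current column, row above
                      prev_diag + (P[i - 1] != c))   # diagonal
            prev_diag, D[i] = D[i], cur
        if D[m] <= k:
            ends.append(j)
    return ends

def byp_search(P, T, k):
    """Baeza-Yates-Perleberg filter: partition P into k+1 factors, search
    them exactly, verify an O(m) area around each factor occurrence."""
    m = len(P)
    r = m // (k + 1)
    factors = [P[i * r:(i + 1) * r] for i in range(k)] + [P[k * r:]]
    ends = set()
    for f in factors:                # filtration phase
        pos = T.find(f)
        while pos != -1:
            lo, hi = max(0, pos - m - k), min(len(T), pos + m + k)
            ends.update(lo + e for e in approx_ends(P, T[lo:hi], k))
            pos = T.find(f, pos + 1)
    return sorted(ends)
```

For example, `byp_search("abc", "xxabcxx", 1)` reports, among others, end position 4 of the exact occurrence of `abc`.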
**4. Suffix Trees and Arrays**
Let $T = T[0..n]$ be the text. For $i \in [0..n]$, let $T_i$ denote the suffix $T[i..n]$. Furthermore, for any subset $C \subseteq [0..n]$, we write $T_C = \{ T_i | i \in C \}$. In particular, $T_{[0..n]}$ is the set of all suffixes of $T$.
Suffix tree and suffix array are search data structures for the set $T_{[0..n]}$:
- **Suffix tree** is a compact trie for $T_{[0..n]}$.
- **Suffix array** is an ordered array for $T_{[0..n]}$.
They support fast exact string matching on $T$:
- A pattern $P$ has an occurrence starting at position $i$ if and only if $P$ is a prefix of $T_i$.
- Thus we can find all occurrences of $P$ by a prefix search in $T_{[0..n]}$.
A data structure supporting fast string matching is called a text index.
There are numerous other applications too, as we will see later.
The set $T_{[0..n]}$ contains $|T_{[0..n]}| = n+1$ strings of total length $||T_{[0..n]}|| = \Theta(n^2)$. It is also possible that $\Sigma LCP(T_{[0..n]}) = \Theta(n^2)$, for example, when $T = a^n$ for a symbol $a$.
- A basic trie has $\Theta(n^2)$ nodes for most texts, which is too much.
- A compact trie with $O(n)$ nodes and an ordered array with $n+1$ entries have linear size.
- A compact ternary trie has $O(n)$ nodes too. However, the construction algorithms and some other algorithms we will see are not straightforward to adapt for it.
Even for a compact trie or an ordered array, we need a specialized construction algorithm, because any general construction algorithm would need $\Omega(\Sigma LCP(T_{[0..n]}))$ time.
As with tries, there are many possibilities for implementing the child operation. We again avoid this complication by assuming that $\sigma$ is constant. Then the size of the suffix tree is $O(n)$.
- There are exactly $n+1$ leaves and at most $n$ internal nodes.
- There are at most $2n$ edges. The edge labels are factors of the text and can be represented by pointers to the text.
Given the suffix tree of $T$, all occurrences of $P$ in $T$ can be found in time $O(|P| + occ)$, where occ is the number of occurrences.
Let $S_u$ denote the string represented by a node $u$. The suffix tree representation uses four functions:
- $child(u,c)$ is the child $v$ of node $u$ such that the label of the edge $(u,v)$ starts with the symbol $c$, and $\perp$ if $u$ has no such child.
- $parent(u)$ is the parent of $u$.
- $depth(u)$ is the length of $S_u$.
- $start(u)$ is the starting position of some occurrence of $S_u$ in $T$.
Then
- $S_u = T[start(u) \ldots start(u) + \text{depth}(u) - 1]$.
- $T[start(u) + \text{depth}(parent(u)) \ldots start(u) + \text{depth}(u) - 1]$ is the label of the edge $(parent(u),u)$.
Now we are ready to describe the construction algorithm.
**Algorithm 4.2: Brute force suffix tree construction**
**Input:** text $T[0..n]$ ($T[n] = \$)$
**Output:** suffix tree of $T$: root, child, parent, depth, start
1. create new node root; depth(root) $\leftarrow 0$
2. $u \leftarrow root$; $d \leftarrow 0$ // (u,d) is the active locus
3. for $i \leftarrow 0$ to $n$ do // insert suffix $T_i$
4. while $d = \text{depth}(u)$ and $child(u, T[i+d]) \neq \perp$ do
5. $u \leftarrow child(u, T[i+d])$; $d \leftarrow d+1$
6. while $d < \text{depth}(u)$ and $T[start(u) + d] = T[i+d]$ do $d \leftarrow d+1$
7. if $d < \text{depth}(u)$ then // (u,d) is in the middle of an edge
8. $u \leftarrow \text{CreateNode}(u,d)$
9. $\text{CreateLeaf}(i, u)$
10. $u \leftarrow root$; $d \leftarrow 0$
**CreateLeaf(i, u)** // Create leaf representing suffix $T_i$
1. create new leaf $w$
2. $\text{start}(w) \leftarrow i$; $\text{depth}(w) \leftarrow n - i + 1$
3. $\text{child}(u, T[i+d]) \leftarrow w$; $\text{parent}(w) \leftarrow u$ // Set $u$ as parent
4. return $w$
**Suffix Tree**
The suffix tree of a text $T$ is the compact trie of the set $T_{[0..n]}$ of all suffixes of $T$.
We assume that there is an extra character $\$ \notin \Sigma$ at the end of the text. That is, $T[n] = \$ \text{ and } T_i = T[i..n]$ for all $i \in [0..n]$. Then:
- No suffix is a prefix of another suffix, i.e., the set $T_{[0..n]}$ is prefix free.
- All nodes in the suffix tree representing a suffix are leaves.
This simplifies algorithms.
**Example 4.1:** $T = \text{banana}$.
[Figure: the suffix tree of banana$, a compact trie with seven leaves, one for each of the suffixes $, a$, ana$, anana$, banana$, na$ and nana$.]
**Brute Force Construction**
Let us now look at algorithms for constructing the suffix tree. We start with a brute force algorithm with time complexity $\Theta(\Sigma LCP(T_{[0..n]}))$. Later we will modify this algorithm to obtain a linear time complexity.
The idea is to add suffixes to the trie one at a time starting from the longest suffix. The insertion procedure is essentially the same as we saw in Algorithm 1.2 (insertion into trie) except it has been modified to work on a compact trie instead of a trie.
A **locus** in the suffix tree is a pair $(u,d)$ where $\text{depth}(\text{parent}(u)) < d \leq \text{depth}(u)$. It represents
- the uncompact trie node that would be at depth $d$ along the edge $(\text{parent}(u),u)$, and
- the corresponding string $S_{(u,d)} = T[start(u) \ldots start(u) + d - 1]$.
Every factor of $T$ is a prefix of a suffix and thus has a locus along the path from the root to the leaf representing that suffix.
During the construction, we sometimes need to create a new node at a locus in the middle of an edge, splitting the edge into two edges:

**CreateNode(u, d)**
1. $i \leftarrow \text{start}(u)$; $p \leftarrow \text{parent}(u)$
2. $\text{create new node } v$
3. $\text{start}(v) \leftarrow i$; $\text{depth}(v) \leftarrow d$
4. $\text{child}(v, T[i+d]) \leftarrow u$; $\text{parent}(u) \leftarrow v$
5. $\text{child}(p, T[i+\text{depth}(p)]) \leftarrow v$; $\text{parent}(v) \leftarrow p$
6. return $v$
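Putting Algorithm 4.2, CreateLeaf and CreateNode together, the brute force construction can be sketched in Python (a minimal illustration assuming a constant alphabet; a dictionary implements the child function):

```python
class Node:
    """Suffix tree node: start/depth define S_u; child maps the first
    symbol of each outgoing edge label to the corresponding child."""
    def __init__(self, start, depth):
        self.start, self.depth = start, depth
        self.child, self.parent = {}, None

def build_suffix_tree(T):
    """Brute force construction (Algorithm 4.2); T must end with '$'."""
    n = len(T) - 1                      # T = T[0..n] with T[n] = '$'
    root = Node(0, 0)

    def create_node(u, d):              # split edge (parent(u), u) at depth d
        i, p = u.start, u.parent
        v = Node(i, d)
        v.child[T[i + d]] = u
        u.parent = v
        p.child[T[i + p.depth]] = v
        v.parent = p
        return v

    def create_leaf(i, u, d):           # leaf for suffix T_i below locus (u, d)
        w = Node(i, n - i + 1)
        u.child[T[i + d]] = w
        w.parent = u

    for i in range(n + 1):              # insert suffix T_i
        u, d = root, 0
        while d == u.depth and T[i + d] in u.child:
            u = u.child[T[i + d]]
            d += 1
            while d < u.depth and T[u.start + d] == T[i + d]:
                d += 1
        if d < u.depth:                 # (u, d) is in the middle of an edge
            u = create_node(u, d)
        create_leaf(i, u, d)
    return root
```

For T = banana$ the resulting tree has seven leaves, one per suffix, and besides the root the internal nodes for a, ana and na.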
**Suffix Links**
The key to efficient suffix tree construction are suffix links:
- $\text{slink}(u)$ is the node $v$ such that $S_v$ is the longest proper suffix of $S_u$, i.e., if $S_u = T[i..j]$ then $S_v = T[i+1..j]$.
**Example 4.3:** The suffix tree of $T = \text{banana}$ with internal node suffix links.
[Figure: the suffix tree of banana$ with the suffix links of the internal nodes: ana $\to$ na $\to$ a $\to$ root.]
Suffix links are well defined for all nodes except the root.
**Lemma 4.4:** If the suffix tree of $T$ has a node $u$ representing $T[i..j]$ for any $0 \leq i < j \leq n$, then it has a node $v$ representing $T[i+1..j]$.
**Proof.** If $u$ is the leaf representing the suffix $T_i$, then $v$ is the leaf representing the suffix $T_{i+1}$.
If $u$ is an internal node, then it has two child edges with labels starting with different symbols, say $a$ and $b$, which means that $T[i..j]a$ and $T[i..j]b$ are both factors of $T$. Then, $T[i+1..j]a$ and $T[i+1..j]b$ are factors of $T$ too, and thus there must be a branching node $v$ representing $T[i+1..j]$. □
Suffix links are needed only for internal nodes. For root, we define $slink(root) = root$.
**Algorithm 4.5:** McCreight
**Input:** text $T[0..n]$ ($T[n] = \$$)
**Output:** suffix tree of $T$: root, child, parent, depth, start, slink
1. create new node root; depth(root) $\leftarrow 0$; slink(root) $\leftarrow$ root
2. $u \leftarrow root$; $d \leftarrow 0$ // (u,d) is the active locus
3. for $i \leftarrow 0$ to $n$ do // insert suffix $T_i$
4. while $d = \text{depth}(u)$ and $child(u, T[i+d]) \neq \perp$ do
5. $u \leftarrow child(u, T[i+d])$; $d \leftarrow d+1$
6. while $d < \text{depth}(u)$ and $T[start(u) + d] = T[i+d]$ do $d \leftarrow d+1$
7. if $d < \text{depth}(u)$ then // (u,d) is in the middle of an edge
8. $u \leftarrow \text{CreateNode}(u,d)$
9. $\text{CreateLeaf}(i, u)$
10. if $slink(u) = \perp$ then ComputeSlink($u$)
11. $u \leftarrow slink(u)$; $d \leftarrow \max(d - 1, 0)$

The only changes from the brute force algorithm are on lines (1), (10) and (11): instead of restarting from the root, the next suffix is inserted starting from $slink(u)$.

**ComputeSlink(u)** // Compute $slink(u)$ using $slink(parent(u))$
1. $d \leftarrow \text{depth}(u)$
2. $v \leftarrow slink(parent(u))$
3. while $\text{depth}(v) < d - 1$ do
4. $v \leftarrow child(v, T[start(u) + \text{depth}(v) + 1])$
5. if $\text{depth}(v) > d - 1$ then // the node does not exist yet
6. $v \leftarrow \text{CreateNode}(v, d - 1)$
7. $slink(u) \leftarrow v$

The creation of a new node on line (6) is never needed in a fully constructed suffix tree (Lemma 4.4), but it can be needed during the construction, because the node that will represent the suffix link target may not have been created yet.
**Theorem 4.6:** Let $T$ be a string of length $n$ over an alphabet of constant size. McCreight’s algorithm computes the suffix tree of $T$ in $O(n)$ time.
**Proof.** Insertion of a suffix $T_i$ takes constant time except in two points:
- The while loops on lines (4)–(6) traverse from the node $slink(u_i)$ to $u_{i+1}$, where $u_i$ denotes the node $u$ at the end of the insertion of $T_i$. Every round in these loops increments $d$. The only place where $d$ decreases is on line (11), and even then by one. Since $d$ can never exceed $n$, the total time on lines (4)–(6) is $O(n)$.
- The while loop on lines (3)–(4) during a call to ComputeSlink($u_i$) traverses from the node $slink(parent(u_i))$ to $slink(u_i)$. Let $d'_i$ be the depth of $parent(u_i)$. Clearly $d'_{i+1} \geq d'_i - 1$, and every round in the while loop increases $d'_{i+1}$. Since $d'_i$ can never be larger than $n$, the total time on lines (3)–(4) in ComputeSlink is $O(n)$. $\qed$
There are other linear time algorithms for suffix tree construction:
- Weiner’s algorithm was the first. It inserts the suffixes into the tree in the opposite order: $T_n, T_{n-1}, \ldots, T_0$.
- Ukkonen’s algorithm constructs suffix tree first for $T[0..1]$ then for $T[0..2]$, etc. The algorithm is structured differently, but performs essentially the same tree traversal as McCreight’s algorithm.
- All of the above are linear time only for constant alphabet size. Farach’s algorithm achieves linear time for an integer alphabet of polynomial size. The algorithm is complicated and impractical.
- Practical linear time construction for an integer alphabet is possible via suffix array.
Applications of Suffix Tree
Let us have a glimpse of the numerous applications of suffix trees.
Exact String Matching
As already mentioned earlier, given the suffix tree of the text, all occurrences of a pattern $P$ can be found in time $O(|P| + occ)$.
Even if we take into account the time for constructing the suffix tree, this is asymptotically as fast as Knuth–Morris–Pratt for a single pattern and Aho–Corasick for multiple patterns.
However, the primary use of suffix trees is in indexed string matching, where we can afford to spend a lot of time in preprocessing the text, but must then answer queries very quickly.
Approximate String Matching
Several approximate string matching algorithms achieving $O(kn)$ worst case time complexity are based on suffix trees.
Filtering algorithms that reduce approximate string matching to exact string matching such as partitioning the pattern into $k + 1$ factors, can use suffix trees in the filtering phase.
Another approach is to generate all strings in the $k$-neighborhood of the pattern, i.e., all strings within edit distance $k$ from the pattern and search for them in the suffix tree.
The best practical algorithms for indexed approximate string matching are hybrids of the last two approaches. For example, partition the pattern into $\ell \leq k + 1$ factors and find approximate occurrences of the factors with edit distance $\lfloor k/\ell \rfloor$ using the neighborhood method in the filtering phase.
Text Statistics
Suffix tree is useful for computing all kinds of statistics on the text. For example:
- Every locus in the suffix tree represents a factor of the text and, vice versa, every factor is represented by some locus. Thus the number of distinct factors in the text is exactly the number of distinct loci, which can be computed by a traversal of the suffix tree in $O(n)$ time even though the resulting value is typically $\Theta(n^2)$.
- The longest repeating factor of the text is the longest string that occurs at least twice in the text. It is represented by the deepest internal node in the suffix tree.
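On a small text, both statistics can be computed naively for comparison (quadratic-time sketches of what the suffix tree traversal computes in $O(n)$ time):

```python
def distinct_factors(T):
    """Number of distinct nonempty factors of T, computed naively; a
    suffix tree traversal counts the same loci in O(n) time."""
    return len({T[i:j] for i in range(len(T))
                       for j in range(i + 1, len(T) + 1)})

def longest_repeating_factor(T):
    """Longest string occurring at least twice in T (occurrences may
    overlap); in a suffix tree, the string of the deepest internal node."""
    best = ""
    for i in range(len(T)):
        for j in range(i + 1, len(T) + 1):
            f = T[i:j]
            if len(f) > len(best) and T.find(f, i + 1) != -1:
                best = f
    return best
```

For T = banana there are 15 distinct factors and the longest repeating factor is ana.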
AC Automaton for the Set of Suffixes
As already mentioned, a suffix tree with suffix links is essentially an Aho–Corasick automaton for the set of all suffixes.
- We saw that it is possible to follow suffix link / failure transition from any locus, not just from suffix tree nodes.
- Following such an implicit suffix link may take more than a constant time, but the total time during the scanning of a string with the automaton is linear in the length of the string. This can be shown with a similar argument as in the construction algorithm.
Thus suffix tree is asymptotically as fast to operate as the AC automaton, but needs much less space.
Generalized Suffix Tree
A generalized suffix tree of two strings $S$ and $T$ is the suffix tree of the string $S \cdot T$, where $\cdot$ is a symbol that does not occur elsewhere in $S$ and $T$.
Each leaf is marked as an $S$-leaf or a $T$-leaf according to the starting position of the suffix it represents. Using a depth first traversal, we determine for each internal node if its subtree contains only $S$-leaves, only $T$-leaves, or both. The deepest node that contains both represents the longest common factor of $S$ and $T$. It can be computed in linear time.
The generalized suffix tree can also be defined for more than two strings.
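For comparison, the longest common factor can also be computed by a simple $O(|S||T|)$ dynamic program (a sketch; the generalized suffix tree computes the same string in linear time):

```python
def longest_common_factor(S, T):
    """Dynamic program: cur[j] is the length of the longest common suffix
    of S[:i] and T[:j]; the maximum over all (i, j) marks the end of the
    longest common factor in S."""
    best_len, best_end = 0, 0
    prev = [0] * (len(T) + 1)
    for i in range(1, len(S) + 1):
        cur = [0] * (len(T) + 1)
        for j in range(1, len(T) + 1):
            if S[i - 1] == T[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return S[best_end - best_len:best_end]
```

For example, the longest common factor of banana and ananas is anana.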
Matching Statistics
The matching statistics of a string $S[0..n]$ with respect to a string $T$ is an array $MS[0..n]$, where $MS[i]$ is a pair $(k, p)$ such that:
1. the longest prefix of the suffix $S_i$ that is a factor of $T$ has length $k$, and
2. $T[p..p+k-1] = S[i..i+k-1]$, i.e., that prefix occurs at position $p$ in $T$.
Matching statistics can be computed by using the suffix tree of $T$ as an AC-automaton and scanning $S$ with it.
- If before reading $S[i]$ we are at the locus $(v, d)$ in the automaton, then $S[i-d..i-1] = T[j..j+d-1]$, where $j = \text{start}(v)$. If reading $S[i]$ causes a failure transition, then $MS[i-d] = (d, j)$.
- Following a failure transition decreases $d$ and thus increases $i - d$. Following a normal transition/edge increments both $i$ and $d$ by one, and thus $i - d$ stays the same. Thus all entries are computed.
From the matching statistics, we can easily compute the longest common factor of $S$ and $T$. Because we need the suffix tree only for $T$, this saves space compared to a generalized suffix tree.
Matching statistics are also used in some approximate string matching algorithms.
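A naive quadratic sketch of the definition (the suffix tree of $T$ computes the same array in linear time as described above):

```python
def matching_statistics(S, T):
    """MS[i] = (k, p): the longest prefix of the suffix S_i occurring in T
    has length k and occurs at position p in T (naive computation)."""
    MS = []
    for i in range(len(S)):
        k, p = 0, 0                      # the empty prefix occurs at position 0
        while i + k < len(S):
            q = T.find(S[i:i + k + 1])   # try to extend the prefix by one
            if q == -1:
                break
            k, p = k + 1, q
        MS.append((k, p))
    return MS
```

For example, the matching statistics of anab with respect to banana start with $(3, 1)$: the longest prefix of anab occurring in banana is ana, at position 1.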
Longest Palindrome
A palindrome is a string that is its own reverse. For example, saippuakauppias is a palindrome.
We can use the LCA preprocessed generalized suffix tree of a string $T$ and its reverse $T^R$ to find the longest palindrome in $T$ in linear time.
- Let $k_i$ be the length of the longest common extension of $T_{i+1}$ and $(T^R)_{n-i+1}$, which can be computed in constant time. Then $T[i-k_i..i+k_i]$ is the longest odd length palindrome with the middle at $i$.
- We can find the longest odd length palindrome by computing $k_i$ for all $i \in [0..n]$ in $O(n)$ time.
- The longest even length palindrome can be found similarly in $O(n)$ time. The longest palindrome overall is the longer of the two.
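The same scan can be written with naive symbol-by-symbol extension in place of the constant-time LCE queries (quadratic in the worst case, but it shows the structure of the linear-time method):

```python
def longest_palindrome(T):
    """Center expansion over the 2n-1 odd and even centers; each while-loop
    step is one symbol comparison, which an LCE query replaces with O(1)
    work in the suffix-tree-based method."""
    best = ""
    for center in range(2 * len(T) - 1):
        lo = center // 2                 # even center: odd length palindrome
        hi = lo + center % 2             # odd center: even length palindrome
        while lo >= 0 and hi < len(T) and T[lo] == T[hi]:
            lo, hi = lo - 1, hi + 1
        if hi - lo - 1 > len(best):
            best = T[lo + 1:hi]
    return best
```

For example, saippuakauppias is its own longest palindrome, and the longest palindrome in banana is anana.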
An Oracle White Paper
June 2013
Introduction to Java Platform, Enterprise Edition 7
Contents
- Executive Overview
- Introduction
- Introducing Java Platform, Enterprise Edition 7
  - Deliver Dynamic Scalable HTML5 Applications
  - Increased Developer Productivity
  - Meeting the Demands of the Enterprise
- Java EE Web Profile Enhancements
- GlassFish Server Open Source Edition 4.0
- Java EE 7 SDK
- Integrated Development Environments
- Conclusion
- Appendix 1: References
Executive Overview
Java Platform, Enterprise Edition 7 (Java EE 7) offers new features that enhance HTML5 support, increase developer productivity, and further improve how enterprise demands can be met. Java EE 7 developers will write less boilerplate code, have better support for the latest Web applications and frameworks, and gain access to enhanced scalability and richer, simpler functionality. Enterprises will benefit from new features that enable portable batch processing and improved scalability.
Introduction
Java EE initially evolved as an enterprise application deployment platform that focused on robustness, Web services, and ease of deployment. Continually shaped by feedback through the Java Community Process (JCP), Java EE represents a universal standard in enterprise IT, facilitating the development, deployment, and management of multi-tier, server-centric applications. Beginning with Java EE 5, focus shifted to increasing developer efficiency with the introduction of annotations, the Enterprise JavaBeans (EJB) 3.0 business component development model, new and updated Web services, and improvements to the persistence model. Java EE 6 further streamlined the development process and increased the flexibility of the platform, thus enabling it to better address lightweight Web applications. This is in part due to the introduction of the Web Profile as a subset of the Java EE specification targeted to Web applications. In addition, Java EE 6 embraced open source frameworks with hooks for more seamless integration, and began the process of pruning less relevant technologies.
Java EE 6 was particularly successful:
- As of May 2013, there have been over 50 million downloads of Java EE components, from Oracle and other industry vendors.
- It is the #1 choice for enterprise developers.
- It is the #1 application development platform.
- It has had the fastest adoption of any Java EE release with 18 compliant application server vendors.
Java EE 7 extends the benefits of Java EE 6 by leveraging the more transparent JCP process and community participation to deliver new and updated features, excelling in the areas expressed in Figure 1:
Java EE 7 enables developers to deliver HTML5 dynamic scalable applications. New to the platform, WebSockets reduce response time with low latency bi-directional data exchange while standard JSON support simplifies data parsing for portable applications. JAX-RS has been improved to deliver asynchronous, scalable, high performance RESTful Services. And much more.
Java EE 7 increases developer productivity in multiple ways. It offers a simplified application architecture with a cohesive integrated platform; increased efficiency with reduced boiler-plate code and broader use of annotations; and enhanced application portability with standard RESTful Web service client support.
Java EE 7 meets the most demanding enterprise requirements by breaking down batch jobs into manageable chunks for uninterrupted OLTP performance; easily defines multithreaded concurrent tasks for improved scalability; and delivers transactional applications with choice and flexibility.
Java EE 7 was developed with the most extensive Java community participation of any Java EE release, with vendors, organizations and individuals all contributing to Expert Groups. The Java community has been an engaged partner, offering reviews and feedback in the ongoing development of the specifications. The Java Community Process (JCP) has refined its processes in ways that have facilitated greater openness and more accessible participation among stakeholders. 19 Java User Groups (JUGs) throughout the world, ranging from North America to Europe, South America, and Asia, have participated in the Adopt-a-JSR program, reviewing platform proposals and developing several applications in an effort to explore, test, and create code samples for proposed features.
One of these applications is now a part of the Java EE 7 SDK, an all-in-one bundle with API documentation, samples, tutorials, and GlassFish Server Open Source Edition 4.0 that is used to teach developers how to become proficient with Java EE 7. While the SDK is a way to get started with Java EE, the platform is a multi-vendor and community technology with wide cross-industry investment represented by 19 Java EE 6 implementations from multiple vendors and many Java EE 7 implementations are forthcoming.
This White Paper provides a technical overview of Java EE 7 and specifies ways in which its new features and functionalities enable Java EE developers to work with greater efficiency and productivity.
Introducing Java Platform, Enterprise Edition 7
Since its inception, the Java EE platform has been targeted for offloading common infrastructure tasks through its container-based model and abstraction of resource access so developers can focus on business logic. In recent releases, the platform has considerably simplified the APIs for access to container services while broadening the range of services available. Java EE 7 continues this trend with enhanced simplification and productivity while further extending the range of the platform to encompass support for emerging Web technologies.
Deliver Dynamic Scalable HTML5 Applications
HTML5 is accelerating the ability of developers to create applications that are highly interactive and dynamic, alongside client-side technologies like JavaScript and CSS3. These applications can deliver live data feeds such as sport scores, stock news and quotes, application notifications, twitter and chat feeds, and more, all in a highly interactive manner. HTML5 enables these applications to be written once and render properly on a range of devices like smart phones, tablets, and desktops. These highly dynamic applications, combined with the ability to access them at any time from anywhere, are driving the need to scale the services that feed application data to the client. Java EE 7 lays the foundation for dynamic HTML5 applications with new JSRs like WebSockets and JSON Processing, along with updates to existing JSRs like JAX-RS 2.0, JavaServer Faces 2.2, and Servlet 3.1 NIO.
Low Latency Data Exchange Using the Java API for WebSocket 1.0
A growing number of Web applications rely on timely updates from central servers. The Java developer community has expressed considerable interest in WebSockets because they offer a solution to the inherent problems of latency and bi-directional communication that come with HTTP-based solutions like polling, long-polling and HTTP-streaming.
WebSockets seamlessly support low latency, bi-directional client-server data exchange over a single TCP connection. This can be exemplified by a whiteboard application, where multiple participants can be drawing on a shared whiteboard, seeing each other’s work simultaneously. The WebSocket API, at its most basic level, is an annotated plain old Java object (POJO) as shown here:
```java
@ServerEndpoint("/whiteboard")
public class WhiteboardServer {
@OnOpen
public void onOpen(...) {
}
@OnClose
public void onClose(...) {
}
@OnMessage
public void message(String message, ...) {
}
}
```
Defining an endpoint to a socket is as simple as specifying the URI with the `@ServerEndpoint` annotation. The annotation-based callback API responds to specific events, such as when a client connects, a message is received, and a client disconnects. The WebSocket API, at its most basic level, supports sending and receiving simple text and binary messages. The simplicity of the API enables developers to get started quickly.
Of course, feature-rich applications have more complex needs, and for those the WebSocket API supports programmatic endpoints that allow control of the protocol handshake and message exchange. In addition, WebSockets leverage the existing web container security model for authentication, authorization, and transport guarantee, so secure communication can be established with little effort.
**Simplify Data Parsing for Portable Applications with Java API for JSON Processing 1.0**
JSON (JavaScript Object Notation), a lightweight data-interchange format, is used by many popular Web services to invoke and return textual data. Many popular online services, like Twitter, Facebook, and Pinterest, expose RESTful services that exchange JSON objects. Prior to Java EE 7, Java applications have used different implementation libraries to produce and consume JSON from RESTful services. However, this is no longer the case.
With the Java API for JSON Processing 1.0, JSON processing is standardized into a single API so that applications that use JSON need not bundle 3rd party implementation libraries. As a result, applications will be smaller in size and more portable. However, the API includes support for plugging in any parser/generator implementation, so developers have the option to use the best implementation for the job at hand.
The Java API for JSON Processing will look familiar to JAXP developers. It can produce and consume JSON text in a streaming fashion similar to StAX's XMLStreamReader, a pull parser. The API also supports a Java Object Model representation of the JSON, similar to the DOM API, onto which a JSON-formatted object maps directly. Generating a JSON object using the API is more straightforward and less error prone than manually creating JSON text.
```java
// Stream a JSON object to the servlet response (field names and values
// are illustrative).
JsonGenerator jg = Json.createGenerator(response.getWriter());
jg.writeStartObject()
      .writeStartArray("phoneNumber")
          .writeStartObject()
              .write("type", "home")
              .write("number", "212 555-1234")
          .writeEnd()
      .writeEnd()
  .writeEnd()
  .close();
```
Scalable RESTful Services with Java API for RESTful Web Services 2.0 – JAX-RS 2.0
JAX-RS 2.0 adds asynchronous response processing, which is critical for scaling to meet the demands of data-hungry HTML5 clients. Asynchronous processing is a technique that enables a better and more efficient use of processing threads. On the server side, a thread that is processing a request should avoid blocking while waiting for an external task to complete so that other requests arriving at the server during that period of time can be attended. Accessing remote RESTful resources from Twitter or Facebook, for example, can now occur without blocking other clients while the request is being serviced.
Similarly, on the client side, a thread that issues a request will block while waiting for a response, impacting the performance of the application. The new JAX-RS 2.0 asynchronous client API enables the client's call to a RESTful service to execute in parallel with other client activity. The client can poll for a response or use a callback API to be notified of a response. The benefit of the asynchronous client API is that a client can invoke multiple backend services simultaneously, reducing the client's overall latency to the request originator.
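A sketch of the asynchronous client invocation (the target URL is hypothetical):

```java
// Invoke a RESTful resource asynchronously; the calling thread is not blocked
Client client = ClientBuilder.newClient();
Future<String> future = client.target("http://example.com/api/feed") // hypothetical URL
                              .request()
                              .async()
                              .get(String.class);
// ... perform other client work, then retrieve the response when needed:
String feed = future.get();
```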
To easily enhance a RESTful service, JAX-RS 2.0 developers can use Filters and Entity Interceptors. JAX-RS 2.0 filters are similar to Servlet filters. They execute before and after request and response processing, and are primarily used to modify or process incoming and outgoing request or response headers. Like Servlet filters, they can be chained together so multiple filters can inspect requests. The JAX-RS 2.0 Entity Interceptor API allows framework developers to intercept request and response processing, just as the Interceptor API does for Java methods. The Interceptors operate on the request and response message bodies instead of headers. This API allows framework developers to transparently add orthogonal concerns like authentication, caching, and encoding without polluting application code. Interceptors are intended to be applicable to any kind of JAX-RS service. Prior to JAX-RS 2.0, many JAX-RS providers like RESTEasy, Jersey, and Apache CXF wrote their own proprietary Filter and Entity Interceptor frameworks to deliver various features in their implementations. Developers can now utilize the standard APIs for these features, enabling more portable applications.
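A sketch of a globally registered request filter (the header inspection shown is illustrative):

```java
// A server-side filter, enabled for all resources via @Provider
@Provider
public class AuditFilter implements ContainerRequestFilter {
    @Override
    public void filter(ContainerRequestContext ctx) throws IOException {
        // Runs before the resource method; headers can be read or modified here
        String userAgent = ctx.getHeaderString("User-Agent");
    }
}
```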
Enhanced Ease of Development with JavaServer Faces 2.2
JavaServer Faces is the standard, component-oriented Java EE framework for building portable Web application user interfaces. It maximizes the productivity of Web application development for graphical IDEs, while simultaneously minimizing the complexity of maintenance of the Web application during its production lifetime. With this release, JSF adds support for HTML5.
JavaServer Faces 2.2 offers HTML5-friendly markup, enabling page authors to write “pure” HTML markup that can be viewed in an HTML tool, or simply rendered in a browser page as HTML-formatted code without any clunky XML markup. As illustrated below, any JSF attributes that are preceded by “jsf:” are ignored by the browser and passed on to the server.
```html
<button type="submit" jsf:id="submitbutton"
jsf:action="#{bean.action}"
>
Click Me!
</button>
```
JSF 2.2 includes a new feature called “pass-through elements”. HTML5 adds a series of new attributes for existing elements, like “tel”, “range”, and “date” types for input elements. Unfortunately, existing JSF components do not recognize these new attributes, so JSF applications would either ignore them and be unable to use them, or require proprietary, one-off workarounds. With pass-through elements, the JSF renderer ignores the attributes and instead just passes them to the HTML5-enabled browser, which renders them properly -- enabling existing JSF components to utilize HTML5 features.
JSF introduces a new pass-through namespace http://xmlns.jcp.org/jsf/passthrough that maps to the prefix “p:”. Any arbitrary name/value pair in a component can be prefixed with “p:” and passed through to the browser.
```xml
<h:inputText
value="#{bean.color}"
p:type="color" />
```
In this case, HTML5 “type=color” is passed through to the browser, without any interpretation by the JSF component.
Improved Request Processing with Servlet 3.1 NIO
HTML5 applications are inherently more dynamic, and drive many more requests to the server for information updates. In Java EE 6, Servlet Asynchronous I/O enabled many more concurrent requests by removing the “thread per request” limitation, enabling a thread to handle multiple concurrent requests. This can help deliver necessary data to an HTML5 client in a scalable manner. However, if the server can read data faster than the client can send it, perhaps due to a slow client connection, the thread will block until more data is available, therefore limiting scalability. With Servlet 3.1 NIO, reading data from a client is non-blocking when using the new event-driven API. When data is available, a Servlet thread can read and process just that data, and then move on to another request.
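A sketch of the non-blocking read pattern inside a servlet, assuming asynchronous processing has already been started on the request:

```java
// Register an event-driven listener; the container calls back when data arrives
final AsyncContext ac = request.startAsync();
final ServletInputStream in = request.getInputStream();
in.setReadListener(new ReadListener() {
    public void onDataAvailable() throws IOException {
        byte[] buf = new byte[4096];
        // Read only while data is ready; never block the container thread
        while (in.isReady() && in.read(buf) != -1) {
            // process the chunk just read
        }
    }
    public void onAllDataRead() throws IOException { ac.complete(); }
    public void onError(Throwable t) { ac.complete(); }
});
```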
Increased Developer Productivity
Beginning with Java EE 5, a tremendous amount of focus has been placed on developer productivity. This is important to Java developers because it makes Java EE more enjoyable to work with, and more importantly, helps meet management deadlines. Developer productivity is important to business because it can deliver new services in less time, and new services drive new revenue opportunities.
Java EE 7 delivers significant developer productivity improvements. First, it removes the amount of boilerplate code required to write core business logic. Next, the platform continues its convention over configuration approach to development by introducing more annotated POJOs that use little to no XML configuration. Last, the technologies that are delivered in Java EE 7 are more tightly integrated, offering a more seamless developer experience.
Reduce Boilerplate Code
Java EE 7 goes a long way towards reducing the amount of “boilerplate” code, which is a set of required steps in code that must be executed before the core business logic can run. The top three areas that reduce boilerplate code include default resources, JMS 2.0, and the JAX-RS client API.
Default resources is a new feature that requires the platform provider to pre-configure a default data source and a default JMS connection factory, for example, that maps to an underlying database and JMS runtime respectively. This option eliminates the need for developers to define these resources since they can rely on default resources. This provides a simpler out-of-the-box developer experience for building sample applications.
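A minimal sketch of relying on these defaults via the standard JNDI names (the field names are arbitrary):

```java
// Inject the platform's pre-configured defaults; no resource definition needed
@Resource(lookup = "java:comp/DefaultDataSource")
DataSource dataSource;

@Resource(lookup = "java:comp/DefaultJMSConnectionFactory")
ConnectionFactory connectionFactory;
```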
JMS has gone through significant improvements as well, and JMS 2.0 is the first update to the JMS 1.1 API since 2003. JMS is used in countless production deployments, and the fact that it has been meeting enterprise needs for ten years proves that it is a well-defined specification. The JMS 1.1 API was fairly verbose due to the Java SE and Java EE capabilities available at the time. For example, the JMS 1.1 API required 13 lines of boilerplate code just to send a message. However, modern features in Java SE and Java EE, combined with a refreshed API, have enabled JMS 2.0 to significantly simplify the API for developers. For example, JMS 2.0 introduces a JMSContext interface that reduces two separate JMS 1.1 classes down to a single interface; utilizes Java EE 7 default resources with the default connection factory; supports AutoCloseable; uses runtime exceptions; and chains method calls together. Together, all of these reduce the lines of code required to send a message down to one, as shown below.
```java
@Inject
JMSContext context;

@Resource(lookup = "java:global/jms/demoQueue")
Queue demoQueue;

public void sendMessage(String payload) {
    context.createProducer().send(demoQueue, payload);
}
```
JAX-RS 2.0 has not just gone through scalability improvements, it also addresses the most commonly requested feature: a client API. Many if not all JAX-RS 1.1 implementations provide some degree of client API support. However, until JAX-RS 2.0 they were all different, impacting application portability. To develop portable applications, developers would have to use a rather unproductive approach with the HttpURLConnection class, custom error checking, manage data bindings, and more. While the approach is not difficult, it requires a lot of boilerplate code just to invoke a RESTful service and get a response. As shown below, the JAX-RS 2.0 client API uses a builder pattern where developers can chain together method calls to build the RESTful client invocation, requiring only two lines of code!
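A sketch of that invocation, assuming a hypothetical target URL:

```java
// Build and invoke a RESTful client request with the JAX-RS 2.0 builder API
Client client = ClientBuilder.newClient();
String message = client.target("http://example.com/api/message") // hypothetical URL
                       .request(MediaType.TEXT_PLAIN)
                       .get(String.class);
```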
More Annotated POJOs
Thanks to annotations, Java EE has become more about programming with Java objects, and less about configuration. For example, beginning with Java EE 6 the web.xml became optional thanks to the ability to provide metadata with annotations, and annotated EJBs could be packaged with .war files.
Java EE 7 continues the move towards a POJO development model. As shown earlier, a WebSocket is represented by an annotated POJO, exposed externally at @ServerEndpoint and whose @OnOpen, @OnClose, and @OnMessage methods are called by the runtime when the respective event is fired.
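A minimal sketch of such an endpoint (the path and echo behavior are illustrative):

```java
// An annotated WebSocket endpoint POJO; the runtime calls the lifecycle methods
@ServerEndpoint("/echo")
public class EchoEndpoint {
    @OnOpen
    public void onOpen(Session session) { /* connection established */ }

    @OnMessage
    public void onMessage(String message, Session session) throws IOException {
        session.getBasicRemote().sendText("echo: " + message);
    }

    @OnClose
    public void onClose(Session session) { /* connection closed */ }
}
```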
JAX-RS 2.0 Interceptors and Filters are defined as POJOs, are annotated with @Provider, and implement specific interfaces. Interceptors and Filters annotated with @Provider are globally enabled for all resources by default – no configuration required. Applying Interceptors and Filters to specific resources is accomplished by using type-safe annotations similar to CDI Qualifiers.
With feedback from the Java EE developer community, CDI is now enabled by default and no longer requires beans.xml file just to use CDI. Developers can simply use @Inject to inject virtually any Java object with no configuration required. This includes the new @JMSDestinationDefinition and @MailSessionDefinition resource annotations, which enable developers to specify resource metadata in source code, simplifying the DevOps experience.
Cohesive, Integrated Platform
Java EE 6 introduced Managed Beans 1.0 as the first step towards aligning EJBs, JSF Managed Beans, and CDI beans. With Java EE 7, managed bean alignment continues. For example, JSF Managed Beans has begun the pruning process in favor of CDI Beans.
Java EE 7 also brings the ease of use of EJB container managed transactions to the platform as a whole, using a more general solution based on CDI interceptors so that these can be used by CDI managed beans and other Java EE components. Applying the @Transactional annotation to any CDI bean or a method of any managed component makes that method transactional.
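A minimal sketch, assuming a hypothetical OrderService CDI bean and JPA entity:

```java
// Annotating a CDI bean makes its methods run in container-managed transactions
@Transactional
public class OrderService {
    @PersistenceContext
    EntityManager em;

    public void placeOrder(Order order) {
        em.persist(order); // commits or rolls back with the interceptor-managed transaction
    }
}
```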
Bean Validation is more widespread in Java EE 7, and can now be used for method-level validation, including both built-in and custom constraints. Constraints can be applied to method parameters as well as return values. Constraints can also use the Java EE Expression Language for flexible rendering and string formatting of constraint violations.
Bean validation also extends to JAX-RS 2.0. Constraining annotations can be specified in public constructor parameters, method parameters, fields and bean properties. In addition, they can also
decorate resource classes, entity parameters, and resource methods. For example, constraints can be applied to JAX-RS method parameters to validate form data submitted through @POST and @PUT.
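A sketch of constraints applied to JAX-RS method parameters (the resource class and form fields are hypothetical):

```java
// Form data submitted via @POST is validated before the method body runs
@Path("/accounts")
public class AccountResource {
    @POST
    public Response create(@NotNull @FormParam("name") String name,
                           @Min(1) @FormParam("months") int months) {
        // parameters arrive here only if all constraints passed
        return Response.ok().build();
    }
}
```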
Simplifying Java EE by Pruning Old Technologies
While many new features have been added in Java EE 7, others have been made optional as older features have been replaced by simpler ones or have simply been removed. Java EE 6 introduced a formal process for deprecating obsolete technologies, and targeted features for eventual pruning -- Java EE Management (JSR-77); Application Deployment (JSR-88); JAXR, for interfacing with UDDI registries (JSR-93); JAX-RPC, for XML-based RPC (JSR-101); and EJB 2.x Container Managed Persistence, which is effectively replaced by the Java Persistence API (JSR-338). These specifications, while removed from the current release, remain optional for vendors in the event that demand for them persists among customers. They will, however, be removed in Java EE 8.
Meeting the Demands of the Enterprise
Java EE has been addressing enterprise demands for over a decade with connectivity to backend systems using the Java Connector Architecture, support for transactions with the Java Transaction Service, and communication between many IT systems using the Java Message Service. Today, enterprises want to leverage their developers’ existing Java skills to write batch applications that use a standard API and are portable across multiple runtimes.
Enterprises also need to build highly scalable applications to meet higher demand for their services and to also drive higher utilization of existing assets. Concurrency Utilities for Java EE enable developers to write scalable applications so that they cleanly integrate with the Java EE runtime in a secure, reliable manner.
Greater Efficiency with Batch Applications for the Java Platform
While the vast majority of Java EE applications are online user-driven systems, there is an expanding class of server-side applications that require batch processing -- especially with a renewed need for off-line analytics and ETL (extract, transform, load) tasks. These batch-oriented applications are best suited for non-interactive, bulk-oriented and long-running tasks that are computationally intensive, can execute sequentially or parallel, and may be initiated ad hoc or through scheduling. Batch processing also effectively utilizes computing resources by shifting processing times to when resources are typically idle.
Previously, no standard Java programming model existed for batch applications. Batch Applications for the Java Platform provides such a model and creates a lingua franca for well understood batch processing concerns such as jobs, steps, repositories, the reader-processor-writer pattern, chunking, checkpoints, parallel processing, flow, split, transactions, retries, sequencing and partitioning.
As illustrated in Figure 2, a **job** represents a series of closely related **steps** that, taken together, perform a discrete business process. Steps may be executed in sequence or in parallel. Steps may also be optional, with the decision to execute or skip them conditioned on the outcome of prior steps in the same **workflow**. Steps can be **checkpointed** and retried if needed and are generally transactional. A **repository** stores information about the current jobs, such as the last time a job executed. Jobs can be listed, started, stopped, paused and cancelled through the **operator**. The operator is typically invoked in a scheduled or ad-hoc fashion. The entire batch process is put together through a **Job Specification Language (JSL)** written in XML.
Although the JSR codifies these robust concepts, the programming model is kept very simple as can be seen in the following example:
```xml
<step id="sendStatements">
    <chunk>
        <reader ref="accountReader" />
        <processor ref="accountProcessor" />
        <writer ref="emailWriter" />
    </chunk>
</step>
```
```java
@Named("accountReader")
public class AccountReader implements ItemReader {
    public Account readItem() {
        // Read the next account using JPA
    }
}

@Named("accountProcessor")
public class AccountProcessor implements ItemProcessor {
    public Statement processItem(Account account) {
        // Process the account, return a Statement
    }
}

@Named("emailWriter")
public class EmailWriter implements ItemWriter {
    public void writeItems(List<Statement> statements) {
        // Use JavaMail to send the statements by email
    }
}
```
The example illustrates a simple step to send emailed bank statements as a batch process. The step is composed of a reader, processor, and writer. The **reader** reads the next account, conceivably from a database using JPA. The **processor** processes the account and creates a corresponding statement. The writer conceivably uses JavaMail to email the set of statements. Each chunk is executed within a transaction, and there is also automatic checkpointing.
The JSR allows for batch applications that may execute in either a Java SE or Java EE environment. There may be different qualities of service among each environment -- for example more robust JTA transactions in a Java EE environment.
**Simplified Concurrency and Enhanced Portability with Concurrency Utilities for Java EE**
Java EE has long provided a rich set of features geared towards asynchronous, parallel, and background task execution, such as EJB `@Asynchronous`, Servlet `async`, and JAX-RS `async`. However, in some cases developers need to use lower-level concurrency facilities such as the ones provided by Java SE threads. Java EE discourages the direct use of underlying JVM threads because the container cannot manage important quality-of-service metrics such as reliability, scalability, and security in unmanaged threads. In the past, these challenges were addressed by vendor-specific services that allowed low-level threads to be executed in a managed fashion through the container. The Concurrency Utilities for Java EE standardizes this concept by providing managed versions of the Java SE threading APIs available in the java.util.concurrent package, such as ExecutorService.

As Figure 3 and the following code demonstrate, the API allows developers to write tasks that implement the java.lang.Runnable or java.util.concurrent.Callable interfaces and submit them to a managed executor service. The managed executor guarantees scalability by utilizing the container thread pool, and runs the tasks inside a reliable security and naming context.
```java
public class TestServlet extends HttpServlet {

    @Resource(name = "concurrent/MyExecutorService")
    ManagedExecutorService executor;

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Submit the task to the container-managed thread pool
        executor.submit(new Runnable() {
            public void run() {
                // Task logic
            }
        });
    }
}
```
Both common and advanced concurrency patterns are supported, such as scheduled execution, custom thread factories, execution notification, and manual container context management.
Java EE Web Profile Enhancements
The Java Enterprise Edition Web Profile was introduced in Java EE 6, and is targeted at developers of dynamic Web applications. Most Web applications have significant requirements in the areas of transaction management, security, and persistence. Such requirements can be readily addressed by established Java EE technologies such as Enterprise JavaBeans (EJB) Lite, the Java Persistence API, and the Java Transaction API, but are not supported by standalone servlet containers. By incorporating many of these APIs, the Web Profile raises the bar for the development of Web applications using the Java platform with pre-installed, pre-integrated, fully tested Web infrastructure features.
The Java EE 7 Web Profile adds support for HTML5 with WebSockets, JSON, JAX-RS 2.0, and more. While the Web Profile is feature rich, it strives for simplicity by leaving out many of the enterprise connectivity APIs that are part of the full Java EE platform. If enterprise connectivity is required at a later date, developers can simply redeploy their applications to a full Java EE platform.
**GlassFish Server Open Source Edition 4.0**
GlassFish Server Open Source Edition 4.0 is a compatible, production implementation of the Java EE 7 platform specification built using an open source license. As with Java EE 5 and 6, the Reference Implementation (RI) of Java EE 7 is derived from Project GlassFish. As the RI, GlassFish Server is always up to date with the latest Java EE specifications. For developers, GlassFish Server offers a lightweight runtime that starts in seconds, and enables rapid iterative development with Active Redeploy that saves session state when an application is redeployed. For IT Operations, GlassFish Server offers a feature-rich web console for manual operations, and a feature-equivalent command line utility for automated environment. GlassFish Server also features centralized administration and high availability clustering.
GlassFish can be downloaded from [http://glassfish.org](http://glassfish.org).
**Java EE 7 SDK**
For developers that are coming up to speed on Java EE, or simply want to quickly get started on new Java EE 7 features, the Java EE 7 SDK (SDK) is an all-in-one bundle for doing just that. The SDK includes the First Cup Java EE 7 introduction, the full Java EE 7 tutorial, sample applications that can be built with Maven, Java EE 7 javadocs, GlassFish Server Open Source Edition 4.0, and (optionally) JDK 7. The Java EE 7 SDK has been tested with the NetBeans IDE, although the samples should work in any IDE that supports Maven.
The Java EE 7 SDK can be downloaded from [http://www.oracle.com/javaee](http://www.oracle.com/javaee).
**Integrated Development Environments**
Leading IDEs such as NetBeans and Eclipse can be used to develop applications and other components for Java EE 7. Such IDEs support virtually all the Java EE capabilities described in this paper. NetBeans 7.3.1 and later provide comprehensive support for the Java EE 7 platform and bundle GlassFish so you can get started quickly. NetBeans also includes wizards to rapidly create JAX-RS 2.0 services.
In addition, the Eclipse Kepler release will include support for Java EE 7, and the Oracle Enterprise Pack for Eclipse (OEPE) 12.1.2 in the Eclipse Marketplace hosts the GlassFish plugin. You can learn more about Eclipse and other IDE support from [https://glassfishplugins.java.net/](https://glassfishplugins.java.net/).
### Conclusion
The Java EE platform offers enterprise developers the opportunity to deliver today’s Web applications with the greatest efficiency, flexibility, and ease of development. After 13 years offering business-critical applications for thousands of companies, Java EE remains ahead of the pack as an enterprise application and deployment platform. As the industry standard for enterprise computing, Java EE enables developers to take advantage of the emerging usages, patterns, frameworks, and technologies of the enterprise space.
Developing enterprise applications has never been easier.
Appendix 1: References
Java EE 7 contains 14 new and updated JSRs. Java specifications are available at [http://www.jcp.org](http://www.jcp.org).
- JSR 236: Concurrency Utilities for Java EE 1.0
- JSR 338: Java Persistence API 2.1
- JSR 339: Java API for RESTful Web Services 2.0
- JSR 340: Java Servlet 3.1
- JSR 341: Expression Language 3.0
- JSR 342: Java Platform, Enterprise Edition 7
- JSR 343: Java Message Service 2.0
- JSR 344: JavaServer Faces 2.2
- JSR 345: Enterprise JavaBeans 3.2
- JSR 346: Contexts and Dependency Injection for Java EE 1.1
- JSR 349: Bean Validation 1.1
- JSR 352: Batch Applications for the Java Platform 1.0
- JSR 353: Java API for JSON Processing 1.0
- JSR 356: Java API for WebSocket 1.0
Integrating Natural Language Processing and Software Engineering
Prasanth Yalla\(^1\) and Nakul Sharma\(^2\)
\(^1\)Professor, K.L. University, Guntur, India
\(^2\)Research Scholar, K.L. University, Guntur, India,
\(^1\)prasanthyalla@gmail.com, \(^2\)nakul777@gmail.com
Abstract
This paper presents various ways in which Natural Language Processing (NLP) and Software Engineering (SE) can be seen as inter-disciplinary research areas. We survey the current literature with the aim of assessing the use of Software Engineering and Natural Language Processing tools in the research undertaken. An assessment of how the various phases of the SDLC can employ NLP techniques is presented. The paper also provides a justification for the use of text in automating or combining both of these areas. A short research direction for undertaking multidisciplinary research is also provided.
Keywords: Natural Language Processing (NLP), Natural Language Understanding (NLU), Software Engineering (SE), Computational linguistics, Software Development Life Cycle (SDLC)
1. Introduction
Software Engineering and Natural Language Processing are related to each other in that both are branches of computer science and engineering. SE is a disciplined approach to the construction of software [1]. NLP is the processing done by computers on natural languages [2]. This paper addresses how these two disciplines can be combined, thereby increasing the chances of universal programmability [11].
Software Engineering consists of tools, methods, processes, and techniques for developing software [29]. NLP has various sub-branches that can be utilized in the realm of Software Engineering. It is our conviction that by using the tools and techniques of one research area in the context of the other, better software will be developed.
A substantial amount of research work has been carried out with respect to Software Engineering and Natural Language Processing. In our work we try to answer the following research questions:
Question 1. What are the means of combining Natural Language Processing (NLP) and Software Engineering (SE)?
Question 2. How can SE and NLP be seen in the context of each other?
The paper is organized as follows. Section 1 gives a brief introduction, section 2 gives the literature review, section 3 provides an analysis of the existing literature, section 4 shows how NLP can be used in a Software Engineering context by making use of textual information, section 5 gives the use of SE in NLP software, section 6 gives the justification for the use of NL text for automation, section 7 gives the advantages of interdisciplinary research, and section 8 concludes with future scope and direction.
2. Literature Review
A domain model is generated directly from a textual specification; this is accomplished by using NLP tools such as OpenNLP and CoreNLP. The overall technique involves linguistic analysis and statistical classifiers. Natural language text is understood by humans with little effort. The importance of textual processing on natural language text is discussed by Viliam [3].
Farid discusses the use of UML’s class diagram in generation of natural language text. The paper describes various NL based systems to strengthen the view point of generating NL specification from class diagrams. The paper shows use of WordNet to clarify the structure of UML string names and generating the semantically sound sentences [5].
Reynaldo uses controlled NL text of requirements to generate class models. The paper describes some initial results arising out of parsing the text for ambiguity. The paper introduces a research plan of the author to integrate requirement validation with RAVEN project [6].
Deva Kumar et al. created an automated tool (UMGAR) to generate UML analysis and design models from natural language text. They used the Stanford parser, WordNet 2.1, and JavaRAP to accomplish this task [7].
Sascha, et al., proposed a round trip engineering process by creating SPIDER tool. The paper addressed the concerns about errors at requirement level being propagated to design and coding stages. The behavioral properties shown from the NL text are utilized to give developer a UML model [8].
Priya More et al. generate UML diagrams from NL text. They have developed a tool called RAPID for analyzing requirement specifications. The software used for completing the task comprises OpenNLP, the RAPID stemming algorithm, and WordNet [9].
Waralak et al. discuss the role of ontology in object-oriented software engineering. The authors give an introductory definition of ontology and object modeling. The paper then discusses the development tools and the various standards to which ontology can be applied [10].
Walter et al. suggest the prospect of every human being able to program, through universal programmability. The authors predict that by combining NLP, AI, and SE, it will be possible to achieve universal programming. They are currently developing nlrpBENCH as a benchmark for NLP requirements [11].
Harry M. Sneed has undertaken the task of developing test cases from natural language requirements. The NL text is parsed to extract useful information such as part-of-speech (POS) tags, from which test cases are generated [12].
Fabian Friedrich et al. generate a process model from natural language text. The natural text is scanned for various POS tags. The paper claims to generate 77% of BPMN models accurately by scanning the document for the necessary information [13].
By using textual business information, UML diagrams are generated by Imran et al., A new methodology for extracting relevant information natural language has been proposed and implemented. The analysis includes information about the amount of objects, attributes, sequence and labeling present with respect to class, activity and sequence diagrams [14].
BrainTool, a tool developed at Riga Technical University, has been utilized in developing UML diagrams from natural language text. Manually generated UML diagrams are compared with the UML diagrams generated by BrainTool and the two-hemisphere technique [15].
Automatic generation of UML class diagrams from SBVR is conducted with the input specification being put in SBVR format. The main issues in deriving UML diagrams from SBVR are presented. Evaluation of NL tools is done using precision and recall [16].
A speech language interface has been developed using a rule based framework. A natural language based automated tool has been used for extracting the information objects and their associated attributes and methods [17].
Pro-case diagrams from behavioral specifications are developed by Mencl V. The textual use cases are converted to Pro-cases based on behavioral protocols. Various case studies have been used to check the results of converting textual use cases to Pro-cases [18].
How natural language input can be processed by a robot is shown by mapping. The paper describes how language is mapped onto structures the robot can understand [19].
Automated generation of scenario and state machine diagrams is shown. Using Object Modeling Notation, tools for automating scenario and state machine diagrams are developed [20].
The role of use case diagrams outside the realm of software development is also discussed by Matthias et al. The authors suggest roles for use cases in avionics systems and systems engineering. The pitfalls of use cases and their solutions are also presented [21].
Arnis, et al., present a meta-model driven approach towards UML system modeling as well as simulation. The authors develop the system model by identifying artifacts from the problem domain and thereby generating Use Case and Activity diagrams [22].
Imran S. Bajwa, et al., discuss an approach for generating SBVR rules from natural language specifications. The paper shows the importance of automation in generating SBVR, noting that business analysts are burdened with a heavy load of documents. They have developed an algorithm for detecting the semantics of the English language [23].
Imran S. Bajwa, et al., highlight cases in which the Stanford POS tagger does not identify particular syntactic ambiguities in English specifications of software constraints. A novel approach to overcome these syntactic ambiguities is provided and better results are presented [24].
Imran S. Bajwa, et al., present a new model for extracting the necessary information from natural language text. The authors generate Use Case, Activity, Class and Sequence diagrams from the natural language text. The designed system also allows generation of the system from natural language text [25].
Imran S. Bajwa, et al., propose an SBVR-based approach to generate an unambiguous representation in the English language. The input text is mined for information relevant to SBVR. A tool named NL2SVBRviaSBVR is made to accomplish this task [26].
Imran S. Bajwa, et al., propose an interactive tool to draw Use-Case diagrams. The authors have utilized the LESSA approach for extracting useful information from natural language text [27].
Mathias, et al., have developed a Requirement Feedback System (REFS) using various NLP tools and techniques. REFS generates UML models and also checks for feedback when the requirements are changed [34].
Jochen L. Leidner discusses various issues in Software Engineering for natural language processing. A discussion of toolkit vs framework and system vs experiment is also given [35].
Drigas, et al., have developed a system called Learning Management System (LMS) for the Greek sign language. The system provides the Greek sign language video corresponding to every text [36].
Gang, et al., have resolved several issues regarding word semantic similarity on the web. The authors make use of WordNet's synonym service to improve the accuracy of their word similarity calculator [7].
Yuri, et al., have developed an Internet portal for disseminating computational linguistics knowledge and information resources. The information can be searched by subject content or via knowledge-based navigation through the portal content [37].
Köhler, et al., propose to integrate UML diagrams for production control systems. This, again, increases the chances of interdisciplinary research [38].
Eladio, et al., propose to utilize state machine diagrams in developing program code. The authors have undertaken a Systematic Literature Review to accomplish the task [39].
Rogério, et al., have developed a research road map consisting of design space, software engineering processes, from centralized to decentralized control, and practical run-time verification & validation for self-adaptive systems [40].
3. Discussion on Literature Review
There have been notable efforts in generating useful information from natural language text. The information hence generated is used for the generation of UML diagrams [5-8, 11-15, 19, 21, 22, 24-27]. In some cases, it has also been used in the generation of process models [13]. Table 1 summarizes some of the contributions made.
<table>
<thead>
<tr>
<th>Paper Title</th>
<th>SE Concept/ Tools</th>
<th>NLP Tool/ Concept</th>
<th>Concept In</th>
</tr>
</thead>
<tbody>
<tr>
<td>[3]</td>
<td>Domain Model</td>
<td>Stanford CoreNLP, Apache OpenNLP with statistical classifiers</td>
<td>SE</td>
</tr>
<tr>
<td>[34]</td>
<td>Eclipse Modelling Toolkit’s (EMF) EMFCompare</td>
<td>Autoannotator, Salmax</td>
<td>NLP and SE</td>
</tr>
<tr>
<td>[8]</td>
<td>UML Model</td>
<td>N L Text</td>
<td>NLP and SE</td>
</tr>
</tbody>
</table>
The analysis of the literature hence provides wide coverage of the specific uses of NLP and SE. SE has tools, methodologies, processes, etc., which are used in developing software [29]. NLP also has a variety of tools, techniques and sub-branches which can help in developing more efficient and robust software [2].
The literature review also indicates that the scope of combining both fields is currently at a lower level of abstraction, which can be raised. Hence, in our current work we bring out the necessary information at a higher level of abstraction.
4. NLP in SE
4.1. NLP in Software Development Life Cycle
Software Development Life Cycle (SDLC) consists of a set of phases which provide guidelines to develop software. NLP can be applied to every phase within the Software Development Life Cycle [1]. It is especially useful when the artifacts of a phase or activity are plain text, since plain text can be provided directly as input to natural language processing tasks. Essentially, wherever humans interpret a document, there is scope for textual generation [33]. In this section, we try to outline the artifacts in SDLC which fall into the category of plain text. Table 2 shows which textual documents are generated in the analysis phase [1].
Table 2. Analysis Phase Textual Artifacts
<table>
<thead>
<tr>
<th>Document/ Artifact</th>
<th>Author</th>
</tr>
</thead>
<tbody>
<tr>
<td>Requirement Document</td>
<td>System analyst</td>
</tr>
</tbody>
</table>
The requirement document is authored by the system analyst after understanding the requirements given by stakeholders. The Software Requirement Specification (SRS) is a written agreement signed between the company and the stakeholders. Use cases describe the interaction of the system to be developed with various actors [1]. Table 3 shows the textual documents which can be generated at the design level [1, 29].
Table 3. Design Phase Textual Artifacts
<table>
<thead>
<tr>
<th>Document/Artifact</th>
</tr>
</thead>
<tbody>
<tr>
<td>Software Design Specification</td>
</tr>
<tr>
<td>UML Diagrams</td>
</tr>
<tr>
<td>Design level Test cases</td>
</tr>
</tbody>
</table>
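To make the idea concrete, the following is a minimal, library-free sketch (our own illustration, not taken from any of the surveyed tools) of how a plain-text requirement artifact can be mined for candidate class names; real pipelines such as those surveyed above would use a full POS tagger like Stanford CoreNLP or OpenNLP instead of this naive capitalization heuristic:

```python
import re

def candidate_classes(requirement):
    """Heuristic: treat capitalized words that are not sentence-initial
    as candidate class names, preserving first-seen order."""
    words = re.findall(r"[A-Za-z]+", requirement)
    seen, out = set(), []
    for i, w in enumerate(words):
        if i > 0 and w[0].isupper() and w.lower() not in seen:
            seen.add(w.lower())
            out.append(w)
    return out

print(candidate_classes("The Customer places an Order with the Bank."))
# ['Customer', 'Order', 'Bank']
```

Even such a crude pass shows why plain-text artifacts are attractive: the input needs no special preparation before an NLP tool can extract design-relevant entities from it.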
4.2. NLP in Umbrella Activities
The following umbrella activities are performed across various phases of the SDLC [1]:
- Software Project Tracking and control
- Risk management
- Technical Reviews
- Measurement
- Software configuration management
- Reusability management
- Work product preparation and production
The umbrella activities can also have textual artifacts. The artifacts which are read or used by the developers or the business managers will be in textual format, while the measurement of cost requires data in numerical format [1]. The exact specification of any artifact depends upon the organization, the team and the personal choice of the person executing a particular process [1].
5. SE in NLP
Software engineering, although still evolving, has many standard processes, tools and methodologies which can be utilized in the development of NLP software. Software development in the NLP context can be considered under the following headings:
5.1. Open Source Development
This type of development constitutes giving software away for free, which facilitates collaborative development. In the context of NLP, open source development frees the developer from any legal fringes arising from proprietary licensed software. The software is often developed for research purposes and not for commercial use. The researcher's main focus is hence only to get the prototype ready, while patenting the product is left to industry [35].
5.2. Closed Source Development
Under this model of development, the developed software is given to the customer after payment of a fee. The source code may be made available under certain conditions.
5.3. Software Quality Attributes
Software quality strives to create a tangible product that provides appreciable value to the people who develop and use the software [1]. In NLP systems, quality is one of the attributes of concern.
The following quality attributes need to be assessed in NLP software [1]:
- Performance
- Feature
- Reliability
- Aesthetics
- Perception
6. Justifying the Use of Text
In their paper, Fabian Friedrich, et al., developed a process model from natural language text [13]. The textual description is quite distinct from semi-formal or completely formal descriptions. Using textual documents has the following benefits:
- It is possible to directly automate processing of the document or artifact.
- The textual information is intelligible to humans.
- A textual format is easy to produce.
By having a textual format, it is possible to automate processing using NLP tools and techniques. The textual artifact can also be converted into any other natural language by machine translation of the original text. A textual format allows a wider audience to interpret and understand the meaning behind the subject matter under consideration.
7. Advantages of the Interdisciplinary Research
There are following advantages of undertaking multidisciplinary research:-
1. The different research areas can be combined to get a more holistic view of the common research area. Here for instance, we are trying to see the interdisciplinary research across two research areas, i.e., Software Engineering and Natural Language Processing. By addressing the issues and concerns in both the areas, it is possible to develop a more holistic approach towards Computer Science and Engineering.
2. By undertaking joint research in both fields, it will be possible to achieve a greater degree of automation in the field of Computer Science and Engineering. This is because, for automation, it is necessary to have textual information or any other type of information which is intelligible to both computers and humans.
3. By jointly developing research in both disciplines, it is possible to achieve universal programmability.
Table 5. Our Work and Others Work
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Contribution</th>
<th>Others Work</th>
</tr>
</thead>
<tbody>
<tr>
<td>Generality of work</td>
<td>More</td>
<td>Less</td>
</tr>
<tr>
<td>Abstraction level</td>
<td>Higher (at subject level)</td>
<td>Lower (at topic level)</td>
</tr>
<tr>
<td>Future Scope and direction</td>
<td>More</td>
<td>Less</td>
</tr>
</tbody>
</table>
Table 5 shows the comparison between the work presented by the authors and other similar work. We have tried to mention various parameters under which the comparison can be made. Although standards exist to compare one author's work to another's, we have, somewhat subjectively, tried to differentiate our work from that of others.
8. Conclusion and Future Scope
In this paper, we tried to develop a vision of combining SE and NLP. The literature review undertaken focuses especially on generating UML diagrams from natural language text. Future work entails studying each artifact of SE process models to generate more useful information. Textual specifications can be key in providing the requisite amount of information for carrying out automation. However, care needs to be taken to ensure that varying degrees of interpretation do not affect the performance of the system. Natural language processing and software engineering, although divergent, can still be combined with a view to developing better software.
Acknowledgement
We would like to express our thanks and gratitude towards the Head of the Department of Computer Science and all the staff members of K.L. University, who have been a source of inspiration in doing this research work. The second author's correspondence with Barbara Kitchenham and Dr. Prof. Rajesh Bhatia helped in creating the vision for this research work. The second author also thanks his ME guide, Dr. Prof. Prateek Bhatia, for his encouragement and support. Thanks are also due to my mother and father, who have helped in every step of life. It is difficult to pen down the efforts they all have undertaken.
References
[31] “Shell files (sh), Linux file system’s extension”, GNU Licenses.
[34] Dr. P. Yalla uneering, (2013).
Authors
Nakul Sharma is a PhD Research Scholar in the Computer Science and Engineering Department, K.L. University. He is pursuing his PhD in the area of Software Engineering. He has completed his M.E. from Thapar University and B.Tech from Bharati Vidyapeeth College of Engineering, Pune. His research interests include Software Engineering and its applications.
Prasanth Yalla, received his B. Tech Degree from Acharya Nagarjuna University, Guntur (Dist), India in 2001. M.Tech degree in Computer Science and Engineering from Acharya Nagarjuna University in 2004, and received his Ph.D. degree in CSE titled “A Generic Framework to identify and execute functional test cases for services based on Web Service Description Language” from Acharya Nagarjuna University, Guntur (Dist), India in April 2013. He was an associate professor, with Department of Information Science and Technology in KL University, from 2004 to 2010. Later he worked as Associate professor, with the department of Freshman Engineering from 2011 in KL University. Presently he is working as Professor in the department of Computer Science & Engineering in KL University. Till now he has published 9 papers in various international journals and 4 papers in conferences. His research interests include Software Engineering, Web services and SOA. He taught several subjects like Multimedia technologies, Distributed Systems, Advanced Software Engineering, Object Oriented Analysis and design, C programming, Object-Oriented programming with C++, Operating Systems, Database management systems, UML etc. He is the Life member of CSI and received “Active Participation- Young Member” Award on 13-12-13 from CSI.
Deadlock-free Buffer Configuration for Stream Computing
Peng Li
Jonathan Beard
Jeremy Buhler
Dept. of Computer Science and Engineering
Washington University in St. Louis
St. Louis, MO 63130, USA
{pengli, jbeard, jbuhler}@wustl.edu
ABSTRACT
Stream computing is a popular paradigm for parallel and distributed computing, which features computing nodes connected by first-in first-out (FIFO) data channels. To increase the efficiency of communication links and boost application throughput, output buffers are often used. However, the connection between the configuration of output buffers and application deadlocks has not been studied. In this paper, we show that bad configuration of output buffers can lead to application deadlock. We prove a necessary and sufficient condition for deadlock-free buffer configurations. We also propose an efficient method based on all-pairs shortest path algorithms to detect unsafe buffer configurations. We also sketch a method to adjust an unsafe buffer configuration to a safe one.
Categories and Subject Descriptors
F.1.2 [COMPUTATION BY ABSTRACT DEVICES]: Modes of Computation; H.3.4 [SYSTEMS AND SOFTWARE]: [Distributed Systems]
Keywords
Stream Computing, Buffer Configuration, Deadlock Avoidance
1. INTRODUCTION
Stream computing is a paradigm of parallel and distributed computing featuring computing nodes connected by data channels. Each node runs an application module and processes data in first-in-first-out (FIFO) order. Data channels deliver data, also in FIFO order. The sequence of data items delivered by a data channel is called a data stream. Figure 1 is a stream computing system for approximating population variance, which can be calculated with the following formula [30]:
\[ \sigma^2 = \overline{x^2} - \bar{x}^2 \]
(1)
where \( \bar{x} \) is the average of the \( N \) values and \( \overline{x^2} \) is the average of their squares.
The source node \( u \) duplicates input data to \( v \) and \( w \), which compute \( \overline{x^2} \) and \( \bar{x}^2 \) respectively for each data set. These quantities are then merged at node \( x \) to compute estimated variance values.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org.
PMAM '15, February 7-8, 2015, San Francisco Bay Area, USA.
Copyright is held by the owner/author(s). Publication rights licensed to ACM.
ACM 978-1-4503-3404-4/15/02 ..$15.00.
http://dx.doi.org/10.1145/2712386.2712403.
This simple example demonstrates two benefits of stream computing. First, while benefiting from parallel execution, the application developer can think sequentially when authoring each application module, which is very helpful since most of today’s programmers still prefer sequential programming. Second, the FIFO order of data delivery and data processing makes it possible to reason about formal properties of streaming applications, such as the fix-point property [13] and deadlock freedom [4, 18].
While in theory we can think of each data channel as a single (bounded) buffer, in practice it usually consists of an output buffer visible only to the sender, an input buffer visible only to the receiver, and possibly transmission buffers in between. The use of output and input buffers is usually motivated by performance, since each send or receive operation incurs some fixed overhead. By buffering some data items locally and sending them in one operation, we can amortize the overhead per data item and thus improve throughput, which has been observed not only in streaming applications [24, 27] but also in other domains [25, 26]. This "batching" idea has been implemented in some stream computing systems [11, 31]. Note that amortizing communication overhead is just one way of improving the throughput of a streaming application, which can also be optimized in many other ways [12, 19].
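The batching idea can be sketched as follows (an illustrative model of our own; the class name and interface are hypothetical, not from any of the cited systems):

```python
class BatchingSender:
    """Buffer outgoing items and hand them to the underlying send
    operation in batches, amortizing the fixed per-send overhead."""
    def __init__(self, send, batch_size=4):
        self.send = send            # send(list_of_items): one "expensive" operation
        self.batch_size = batch_size
        self.buf = []

    def put(self, item):
        self.buf.append(item)
        if len(self.buf) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buf:
            self.send(list(self.buf))
            self.buf.clear()

delivered = []
sender = BatchingSender(delivered.extend, batch_size=4)
for i in range(10):
    sender.put(i)
# only two send operations have occurred; 2 items are still buffered locally
print(delivered, sender.buf)
# [0, 1, 2, 3, 4, 5, 6, 7] [8, 9]
```

The two trailing items illustrate the correctness hazard discussed next: until someone calls `flush()`, buffered data is invisible downstream.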
In addition to performance, output buffers can also impact application correctness: specifically, they can lead to application deadlocks. To the best of our knowledge, the deadlock implications of output buffers have not been studied before. In this paper, we show that in typical streaming applications, if output buffers are not configured appropriately, deadlock can happen during application execution. We present necessary and sufficient conditions that make a buffer configuration vulnerable to deadlock. We also provide an algorithm to check whether a buffer configuration is free of deadlocks.
2. MODEL DESCRIPTION
In this section, we present the stream computing model we will
use in this paper. The model is very straightforward and easy to understand. If the reader is familiar with the Synchronous Dataflow (SDF) [15, 16], our model is essentially an acyclic homogeneous SDF, where each port has a data rate of one.
An application is represented as a directed acyclic graph (DAG), where modules and channels are vertices and edges, respectively. We do not consider feedback channels in this paper, though we may in future work; [28] shows that feedback channels are uncommon in stream computing applications.
As in ordinary streaming applications, data delivery and data processing are in FIFO order. We add a data rate restriction that each node consumes exactly one data item, or token, from each input channel (if the node is not a source node) and produces exactly one data item on each output channel (if the node is not a sink node), regardless of data content. Data filtering and dynamic data rates are not allowed.
Each channel q has a static buffer size, denoted as |q|, which is determined when the application is constructed and stays unchanged during application execution. Each channel has an output buffer, which is part of the channel buffer; hence, the size of the output buffer, denoted as |q|_o, is smaller than the total channel buffer. Remember that data in the output buffer is invisible to the receiver. The sender can choose to flush the output buffer and make the data visible to the receiver (after some finite time) at any time (e.g. due to some control events [17, 22]); however, if the output buffer becomes full, the sender must flush it. If the rest of the channel buffer does not have enough capacity to hold all data to be flushed from the output buffer, it accepts as much data as it can, and the remaining data stays in the output buffer.
If the input data stream is bounded, then after the source node finishes reading all input data, it sends an end-of-stream (EOS) token to downstream receivers to flush any remaining data in output buffers. All nodes receiving the EOS token must propagate it, so that it eventually reaches the sink node.
If a channel is full, neither the output buffer nor the rest of the buffer can accept any data item, and the sender is blocked; if the receiver does not see input at one of its input channels, the receiver is blocked. Note that even if a channel is not empty, e.g. data are in the output buffer, the receiver could still be blocked because the data in the output buffer is not available to the receiver. Blocking on non-empty channels is a key factor in the deadlocks we study below.
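The channel semantics above can be sketched in a few lines (a simplified, illustrative model of our own; the class and method names are hypothetical):

```python
class Channel:
    """Channel with total capacity `size`; `out_size` units of it form
    the sender-side output buffer, whose contents are invisible to the
    receiver until flushed."""
    def __init__(self, size, out_size):
        self.size, self.out_size = size, out_size
        self.out = []    # output buffer (receiver cannot see this)
        self.main = []   # receiver-visible part of the channel

    def full(self):
        return len(self.out) + len(self.main) >= self.size

    def push(self, item):
        assert not self.full(), "sender would block here"
        self.out.append(item)
        if len(self.out) >= self.out_size:
            self.flush()     # a full output buffer must be flushed

    def flush(self):
        # move as much data as fits into the receiver-visible buffer
        room = (self.size - self.out_size) - len(self.main)
        self.main.extend(self.out[:room])
        del self.out[:room]

    def visible(self):
        return len(self.main)   # what the receiver can actually see

q = Channel(size=8, out_size=4)
for i in range(3):
    q.push(i)
print(q.visible())  # 0 -- the channel holds 3 items, yet the receiver is blocked
q.flush()           # e.g. triggered by a control event
print(q.visible())  # 3
```

The first `print` is exactly the "blocking on a non-empty channel" situation: unprocessed data exists, but none of it is visible to the receiver.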
3. DEADLOCK CHARACTERIZATION
Figure 2: A deadlock due to bad output buffer configuration. Both uw and wx have an output buffer size of 8, so output buffers are not flushed.
We now demonstrate how a deadlock can happen in the presence of output buffers. As an example, consider Figure 2, which has a topology similar to Figure 1. The buffer configuration is |uv| = 4, |vx| = 4, |uw| = 16, |uw|_o = 8, |wx| = 16, |wx|_o = 8; we ignore |uv|_o and |vx|_o for simplicity. After u sends 4 data items to both v and w, it flushes the output buffer of uw due to some control event, although the buffer is not full. Then u sends another 4 data items to both v and w, but this time it does not flush uw's output buffer, and wx's output buffer is not flushed either. Now uv and vx are full, blocking u and v, respectively; the output buffers of uw and wx each hold 4 data items, making w and x see no data on uw and wx, respectively, so w and x are also blocked. None of the four nodes can make any progress, yet there are unprocessed data in the system, so the application is deadlocked.
The buffer configuration for this simple topology is obviously a bit contrived, and it is easy to identify deadlock risk within it. More complex topologies, however, are not so straightforward to analyze. We therefore begin by identifying those topologies in which a deadlock can occur.
Before proceeding to the analysis of properties that lead to potential deadlocks (or freedom from deadlock), let us clarify definitions. Many of the following definitions have been presented in [18].
DEFINITION 3.1. (Blocking Relation) If a node v is waiting for data from an upstream neighbor u, or if v is waiting to send output to a downstream neighbor u because the channel buffer between them is full, we say that u blocks v, denoted u |→ v. If there exists a sequence of nodes v1 . . . vn such that vi |→ vi+1 for 1 ≤ i < n, we write v1 |→∗ vn.
DEFINITION 3.2. (Deadlock) A system is said to deadlock if no node in the system can make progress, but some channel in the system still retains unprocessed data items (so that the computation is incomplete).
THEOREM 3.3. (DEADLOCK THEOREM). A system eventually deadlocks if and only if, at some point in the computation, there exists a node u s.t. u |→∗ u.
PROOF. (⇒) Suppose that at some point in the computation, there is a node u such that u |→∗ u. Because a blocked node cannot make progress, no node on the cycle involving u can make progress. Hence, once the blocking cycle occurs, it will remain indefinitely. Moreover, not every pair of successive nodes in the cycle can be linked by an empty channel; otherwise, every blocking relation in the cycle would be a wait for input, so the cycle would correspond to a directed cycle of channels, which is impossible because the graph of computing nodes is a DAG. Hence, the blocking cycle contains at least one full channel, which means there are unprocessed data items, and so the system is deadlocked.
(⇐) Suppose that u |→∗ u does not hold for any node u at any point in the computation. We show that, as long as there is any data in the system, some node is able to make progress; hence, the computation will never halt with unprocessed data on a channel.
At any point in the computation, either every node with input data can make progress, or some such node u is blocked. Let H be the directed graph obtained by tracing all blocking relationships outward from u, such that there is an edge from v to w iff v |→ w. By assumption, H has no cycles and is therefore a DAG. Let v0 be a topologically minimal node in H, which is not blocked by any node. If v0 has data items on its input channels, it is able to consume them and so make progress. Otherwise, v0’s input channels are all empty, so that it cannot block any upstream neighbors. Moreover, since v0 itself is not blocked, either it is a source node that can advance its computation index by spontaneously producing data items, or it must have received the EOS marker and so cannot block any downstream neighbors (which contradicts v0’s presence in H). Conclude that v0 is able to make progress, as desired.
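By Theorem 3.3, deadlock detection at a given instant reduces to finding a cycle in the blocking relation. The following DFS check is our own sketch of that reduction (the paper's actual detection method, per the abstract, is based on all-pairs shortest path algorithms):

```python
def has_blocking_cycle(blocks):
    """blocks maps each node u to the set of nodes v with u |-> v.
    Returns True iff some node u satisfies u |->* u (Theorem 3.3)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in blocks}

    def dfs(u):
        color[u] = GRAY
        for v in blocks[u]:
            if color[v] == GRAY:          # back edge: blocking cycle found
                return True
            if color[v] == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in blocks)

# Blocking relations in the deadlocked state of Figure 2:
# v blocks u (uv full), u blocks w (uw looks empty),
# w blocks x (wx looks empty), x blocks v (vx full).
deadlocked = {"v": {"u"}, "u": {"w"}, "w": {"x"}, "x": {"v"}}
print(has_blocking_cycle(deadlocked))  # True
```

Note this only detects a deadlock once the blocking cycle exists; deciding whether a *configuration* can ever reach such a state is the harder question the paper addresses next.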
Definition 3.4. **(Blockwise and Counterblockwise)** Let $C$ be a cycle of blocked nodes $v_1 \ldots v_n$, such that $v_1 \rightarrow^+ v_n$ and $v_n \rightarrow^+ v_1$. The direction of increasing index on $C$ is called blockwise, while the opposite direction is counterblockwise.
A channel on $C$ between $v_i$ and $v_{i+1}$ may be oriented either blockwise from $v_i$ to $v_{i+1}$ or counterblockwise from $v_{i+1}$ to $v_i$. Because $v_i \rightarrow^+ v_{i+1}$, a blockwise channel on a blocking cycle is always empty, while a counterblockwise channel is always full. For example, in Figure 2, $uw$ and $wx$ are blockwise channels while $uv$ and $vx$ are counterblockwise channels.
We notice that not all systems can have deadlocks. For example, a system with just two nodes connected by one channel will never deadlock. However, even quite simple systems, such as one with just two nodes connected by two parallel data channels, can deadlock.
Definition 3.5. **(Potential Deadlock)** A system with finite buffer sizes on all channels has a potential deadlock if, given the node topology and channel buffer configuration, including output buffer configuration, there exist input streams and histories of data flushes at each node such that a deadlock is possible.
For example, in the graph of Figure 2, $uxuv$ is an undirected cycle that can become blocking. We now show that in a general DAG, every undirected cycle can become blocking.
Claim 3.7. Given a system $S$ abstracted as a DAG $G$, $S$ has potential deadlocks only if $G$ has an undirected cycle.
Proof. Note that the claim says an undirected cycle is only a necessary condition for deadlocks. Indeed, if there is no undirected cycle, there cannot be a blocking cycle, hence deadlocks cannot happen. □
Theorem 3.8. If every channel has an output buffer of size zero, which means every output data item is guaranteed to be visible to the receiver after finite time, then the system cannot deadlock.
Proof. The system is equivalent to a non-filtering system as described in [18]. By Theorem 3.1 in [18], the system is free of deadlocks. □
According to Theorem 3.8, if we do not use an output buffer, the system cannot deadlock. But this “write-through” style of data delivery is not good for throughput because of the delivery overhead per data item. Generally speaking, a large output buffer can improve data throughput (though it might increase data latency), so how large should we set the output buffer so that the system is still deadlock-free? In other words, given a buffer configuration, can we tell if the system is deadlock-free? If so, how can we change the buffer configuration so that the system is deadlock-free? We will answer these questions in the next couple of sections.
4. DEADLOCK AVOIDANCE
To avoid deadlocks, one solution is to use a timer for each node (or each channel). When the timer expires, all data in the corresponding buffer(s) must be flushed. This solution works because it avoids buffering data indefinitely. However, choosing appropriate timer lengths is non-trivial: timers that are too long or too short can degrade application performance.
We avoid using timers by setting safe buffer sizes. We argue that by setting appropriate total channel buffer sizes and output buffer sizes, a streaming computing system under the conditions we have specified can never deadlock.
4.1 Conditions for Deadlock-free Buffer Configuration
We will prove that the space of safe buffer configurations for a given application graph $G$ is precisely defined by a set of linear constraints on the total buffer sizes and output buffer sizes. We introduce two constraints for each undirected cycle in $G$, which together ensure that the cycle cannot remain a blocking cycle indefinitely.

To describe the necessary constraints, consider Figure 3, which illustrates the division of an undirected cycle $C$ in an application. Channels on this cycle are directed either clockwise or counter-clockwise. Given such an undirected cycle $C$, let $H_1$ be the set of clockwise channels and $H_2$ the set of counter-clockwise channels. Let $|q|$ be the total buffer size of channel $q$, let $|q|_o$ be the size of its output buffer, and set $|q|_o' = |q|_o - 1$.
We establish the following inequality constraints for cycle $C$:
$$\sum_{q \in H_1} |q|_o' \leq \sum_{q \in H_2} |q| \quad (2)$$
$$\sum_{q \in H_2} |q|_o' \leq \sum_{q \in H_1} |q| \quad (3)$$
Besides the above constraints, for each channel $q$, the following constraint is also naturally enforced:
$$0 \leq |q|_o' \quad (4)$$
$$|q|_o' < |q| \quad (5)$$
Note that if $|q|_o = 0$, meaning no output buffer is associated with $q$, we let $|q|_o' = 0$ rather than $-1$. The reason is that $|q|_o = 0$ has the same effect as $|q|_o = 1$: in either case, no data item ever stays in the output buffer.
An application graph may have more than one undirected cycle, each of which generates a pair of constraints as described. Together, all these constraints define the space of safe buffer configurations.
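The constraints above can be evaluated directly for a single undirected cycle. The following Python sketch (function name and data representation are our own, not from the paper) checks Inequalities 2-5 for one cycle, representing each channel as a pair `(total, out_adj)` where `total` is $|q|$ and `out_adj` is $|q|_o'$:

```python
def cycle_is_safe(clockwise, counterclockwise):
    """Check Inequalities 2-5 for one undirected cycle.

    Each channel is a pair (total, out_adj), where total is the total
    buffer size |q| and out_adj is |q|_o' = max(|q|_o - 1, 0).
    """
    # Inequalities 4 and 5 must hold for every channel individually.
    for total, out_adj in clockwise + counterclockwise:
        if not (0 <= out_adj < total):
            return False
    # Inequality 2: the summed |q|_o' of clockwise channels must not
    # exceed the total buffer capacity of the counterclockwise channels.
    if sum(o for _, o in clockwise) > sum(t for t, _ in counterclockwise):
        return False
    # Inequality 3: the symmetric constraint in the other direction.
    if sum(o for _, o in counterclockwise) > sum(t for t, _ in clockwise):
        return False
    return True

# Two parallel channels between the same pair of nodes form an
# undirected cycle: one channel is "clockwise", the other
# "counterclockwise".  Zero output buffers are safe; a large output
# buffer facing a small channel is not.
print(cycle_is_safe([(4, 0)], [(4, 0)]))   # True
print(cycle_is_safe([(4, 3)], [(2, 0)]))   # False: 3 > 2 violates (2)
```

This per-cycle check is only practical when the cycles are known; the next section's OB-graph test avoids enumerating them.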
Theorem 4.1. Inequalities 2, 3, 4, and 5 together are both necessary and sufficient to guarantee deadlock freedom of the given stream computing system.
Proof. “Necessary” means that if any of the constraints are violated, the system is at risk of deadlock; “sufficient” means that by following the constraints, the system is guaranteed to be free of deadlocks.
Instead of proving the theorem from scratch, we map the system in this paper, denoted as $\Gamma$, to the one described in Section III.C of [21], denoted as $\Phi$, which proves that a dummy message schedule constrained by a set of linear inequalities can avoid deadlocks caused by data filtering. We set $|q|_o'$ for each channel $q$ in $\Gamma$ as the dummy interval $[q]$ (defined in [21]) in $\Phi$. Note that Inequalities 4 and 5 cannot be violated; otherwise, the buffer configuration is impossible.
Suppose $\Gamma$ deadlocks; then there must be a blocking cycle with some full channels and other channels with unflushed output buffers. We construct a data history in $\Phi$ such that all nodes receive the same history of data, the channels corresponding to full channels in $\Gamma$ do not filter, and the other channels on the cycle filter the data items corresponding to the ones held in output buffers. In $\Phi$, we then get a blocking cycle with alternating full and empty paths, which means a deadlock.
Suppose $\Phi$ deadlocks; then there is a blocking cycle with alternating full and empty paths. WLOG, we assume that the full channels have not filtered any data. We let each node in $\Gamma$ receive the same sequence of data as the corresponding node in $\Phi$ does. No output buffer is flushed unless it is full. When the deadlock happens in $\Phi$, the filtered data items correspond to the data items in the output buffers in that both are invisible to the receiver. Since the filtering history causes a deadlock in $\Phi$, the invisibility caused by output buffers causes a deadlock in $\Gamma$.
Because the dummy intervals constrained by Inequalities 2 and 3 are sufficient for avoiding deadlocks, inequalities 2, 3, 4 and 5 are both sufficient and necessary for avoiding deadlocks caused by bad buffer configurations. □
To verify whether a buffer configuration is deadlock-free, we can enumerate all undirected cycles and check whether any of the inequalities is violated. However, the number of undirected cycles can be exponential in the graph size. For example, by turning an undirected complete graph into a DAG, we can obtain on the order of $2^N$ undirected cycles, where $N$ is the number of vertices. Verifying the inequalities by enumerating all undirected cycles can therefore be very expensive, so we next present an efficient algorithm to verify the safety of buffer configurations.
4.2 Verifying Safety of Buffer Configuration
We sketch a method to verify the safety of a given set of buffer configurations, which involves checking for non-positive cycles on a specially defined graph.
**Definition 4.2. (OB-graph and Mirror Edge)** Given a DAG $G = (V, E)$ for a streaming system and its output buffer configurations, we create a new graph $G' = (V, E')$. For each edge $e = uv \in E$, we create two edges in $G'$: $e$ itself, with weight $|uv|$, and $e' = vu$ (note the reversed direction), with weight $-|uv|_o'$. $G'$ is the OB-graph (short for output-buffer graph) for $G$, and $e$ and $e'$ are mirror edges of each other.
The careful reader will notice that the assignment of these weights reflects the inequalities defined in the previous section.
**Claim 4.3.** Given a dataflow graph $G$ for a streaming system and its OB-graph $G'$, Inequalities 2, 3, 4 and 5 hold for every simple undirected cycle in $G$ if every directed cycle in $G'$ has a positive total weight.
**Proof.** ($\leftarrow$) A directed cycle $C'$ in $G'$ is created from an undirected cycle $C$ in $G$. If the inequalities hold for $C$, $C'$ has a positive weight, since the absolute value of the sum of negative edges is less than the sum of the positive edges.
($\rightarrow$) Suppose one of the inequalities fails to hold for some undirected cycle $C$ in $G$. WLOG, suppose Inequality 2 is violated, which means the sum of $|q|_o'$ over the clockwise channels exceeds the sum of the total buffer sizes of the counterclockwise channels. Let $C'$ be the directed cycle built from $C$ using the (negative) mirror edges of the clockwise channels and the (positive) edges of the counterclockwise channels. The absolute value of the total weight of the negative edges on $C'$ is then at least the total weight of its positive edges, so $C'$ has a non-positive total weight. □
To check whether there is a non-positive cycle, we can run an all-pairs shortest path algorithm (e.g. the Floyd-Warshall algorithm [10, 29]) on $G'$, as described in Algorithm 1. A non-positive distance from a vertex to itself indicates the existence of a non-positive cycle. With the classic Floyd-Warshall algorithm, we can check for a non-positive cycle in $O(N^3)$ time, where $N$ is the total number of nodes in the stream computing system.
**Algorithm 1:** Checking for Non-positive cycle.
```
for i ← 1 to n do
    for j ← 1 to n do
        if v_i v_j ∈ E' then
            d_ij ← w(v_i v_j)
        else
            d_ij ← ∞
for k ← 1 to n do
    for i ← 1 to n do
        for j ← 1 to n do
            if d_ik + d_kj < d_ij then
                d_ij ← d_ik + d_kj
for i ← 1 to n do
    if d_ii ≤ 0 then
        return True
return False
```
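Algorithm 1 is straightforward to implement. The sketch below is our own Python rendering (function names and the channel tuple format are assumptions): it builds the OB-graph from a list of channels, giving each edge $uv$ weight $|uv|$ and its mirror edge $vu$ weight $-|uv|_o'$, then runs Floyd-Warshall to look for a non-positive cycle.

```python
INF = float("inf")

def has_nonpositive_cycle(n, edges):
    """Floyd-Warshall check for a non-positive cycle.

    edges: list of (u, v, weight) with 0-based vertex indices.
    Parallel edges are collapsed to the minimum weight, which is the
    binding case for cycle weights.
    """
    d = [[INF] * n for _ in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # d[i][i] is the weight of the lightest cycle through vertex i.
    return any(d[i][i] <= 0 for i in range(n))

def ob_graph_edges(channels):
    """Mirror every channel u -> v: weight |uv| forward, -|uv|_o' back.

    channels: list of (u, v, total, out_adj), out_adj being |uv|_o'.
    """
    edges = []
    for u, v, total, out_adj in channels:
        edges.append((u, v, total))
        edges.append((v, u, -out_adj))
    return edges

# Two parallel channels 0 -> 1 form an undirected cycle.  Small output
# buffers are safe; an output buffer that can hold back more data than
# the opposite channel can absorb is not.
safe = ob_graph_edges([(0, 1, 4, 0), (0, 1, 4, 0)])
unsafe = ob_graph_edges([(0, 1, 4, 3), (0, 1, 2, 1)])
print(has_nonpositive_cycle(2, safe))    # False
print(has_nonpositive_cycle(2, unsafe))  # True
```

Initializing $d_{ii}$ to $\infty$ rather than 0, as in Algorithm 1, is what lets the final self-distance check detect zero-weight cycles as well as negative ones.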
If a buffer configuration is found to be unsafe, we can adjust it to make it safe with some additional steps. Specifically, if a non-positive path from a vertex to itself is discovered, we pick some negative edges on it and increase their weights toward zero (e.g. from $-8$ to $-4$), which means shrinking the corresponding output buffers, until the configuration is safe.
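One possible repair loop, sketched below under our own assumptions (halving the largest output buffer on each round is merely one heuristic, not the paper's prescribed strategy), shrinks output buffers until the OB-graph has no non-positive cycle:

```python
INF = float("inf")

def min_self_distance(n, edges):
    """Smallest d[i][i] under Floyd-Warshall; <= 0 means an unsafe cycle."""
    d = [[INF] * n for _ in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return min(d[i][i] for i in range(n))

def repair(n, channels):
    """Shrink output buffers (raising mirror-edge weights toward zero)
    until no non-positive cycle remains.

    channels: mutable lists [u, v, total, out_adj], out_adj being |q|_o'.
    """
    def edges():
        out = []
        for u, v, total, out_adj in channels:
            out.append((u, v, total))
            out.append((v, u, -out_adj))
        return out

    while min_self_distance(n, edges()) <= 0:
        # Crude heuristic: halve the largest remaining output buffer.
        worst = max(channels, key=lambda c: c[3])
        if worst[3] == 0:
            raise RuntimeError("no output buffer left to shrink")
        worst[3] //= 2
    return channels

chans = [[0, 1, 4, 3], [0, 1, 2, 1]]
repair(2, chans)
print([c[3] for c in chans])   # [1, 1]
```

Each shrink strictly increases some cycle weights and never decreases any, so the loop terminates: in the worst case all output buffers reach zero, which Theorem 3.8 guarantees is safe.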
5. RELATED WORK
Some streaming computing models, such as Kahn Process Networks [13], assume infinite buffer capacity for each channel, which is impractical. With bounded buffering capacity, setting appropriate buffer sizes is important to both the correctness and the performance of streaming applications. For models with static data rates, such as synchronous dataflow (SDF) [15, 16], it is possible to compute a bounded-memory schedule (if one exists) and assign buffer sizes accordingly to guarantee deadlock freedom. But for models with fully dynamic data rates, whether a bounded-memory schedule exists is unknown [2, 3]. If the dynamic data rate is limited to data-dependent filtering, it is possible to schedule the application in bounded memory with the use of special control messages [4, 18, 20, 21]. The flushing behavior is similar to these control messages in that both make the sender’s output history visible to the receiver. The method for determining buffer configurations in this paper is in fact inspired by the scheduling of those special messages.
Deadlocks in other distributed systems have also been studied intensively. Chandy et al. classified deadlocks in distributed system as communication deadlocks and resource deadlocks and proposed detection algorithms for each type of deadlock [5, 6], but no prevention mechanism is introduced. In packet-switched networks, deadlock is an issue for routing algorithms. The network could deadlock if the “waiting-for” relations form a blocking cycle, and
many routing algorithms have been designed to guarantee deadlock freedom while trying to maximize performance [7, 8, 9]. While those deadlocks also stem from blocking queues, those models do not feature output buffers or consider how output buffers relate to deadlocks. Deadlock avoidance has also been studied in queuing networks, where small queues can lead to deadlocks [14, 23]. In contrast, in our model small channel buffers alone, without large output buffers, do not cause deadlocks. In summary, the deadlock problem we study in this paper is a new and interesting one that has not been studied before.
6. CONCLUSION AND FUTURE WORK
In this paper, we showed the influence of output buffers on the correctness of streaming applications. If output buffers are not configured appropriately, a streaming application can have potential deadlocks. We proved a sufficient and necessary condition for deadlock-free buffer configurations. We also proposed an efficient method, based on a classic all-pairs shortest path algorithm, for verifying whether a buffer configuration is deadlock-free. If a configuration is not deadlock-free, we also provided a method to change it into one that is.
In the future, there are several directions we plan to take. First, we want to add directed cycles to our model; with directed cycles, deadlocks can also be caused by all-full or all-empty cycles. Second, we plan to extend our model to general SDF graphs, where the data rates are not necessarily 1 at each port; solving the output buffer configuration problem for general SDF would be even more useful. A third direction is comparing our approach with one that uses timers. Our approach needs no timers but prohibits certain buffer configurations, while the timer approach allows arbitrary buffer configurations but requires timer lengths to be chosen carefully to avoid performance degradation. We would like to see which approach performs better in applications deployed with frameworks such as RafiLib [1].
7. ACKNOWLEDGMENTS
We sincerely thank the anonymous reviewers for their devoted time and insightful comments. This work was supported by NIH award R42 HG003225, NSF award CNS-0751212, and NSF award CNS-0905368.
8. REFERENCES
High Performance Code Generation for Stencil Computation on Heterogeneous Multi-device Architectures
Pei Li, Elisabeth Brunet, Raymond Namyst
To cite this version:
Pei Li, Elisabeth Brunet, Raymond Namyst. High Performance Code Generation for Stencil Computation on Heterogeneous Multi-device Architectures. HPCC 2013 - 15th IEEE International Conference on High Performance Computing and Communications, Nov 2013, Zhangjiajie, China. hal-00925481
HAL Id: hal-00925481
https://inria.hal.science/hal-00925481
Submitted on 8 Jan 2014
Abstract—Heterogeneous architectures have been widely used in the domain of high performance computing. On one hand, they allow a designer to use multiple types of computing units, each executing the tasks it is best suited for, to increase performance; on the other hand, they bring many programming challenges for novice users, especially on heterogeneous systems with multiple devices.
In this paper, we propose the code generator STEPOCL, which generates the OpenCL host program for heterogeneous multi-device architectures. In order to simplify the analysis, we ask the user to provide a description of the input and kernel parameters in an XML file; our generator then analyzes this description and automatically generates the host program. Thanks to its data partitioning and data exchange strategies, the generated host program can be executed on multiple devices without changing any kernel code. The experiment on an iterative stencil loop (ISL) code shows that our tool is efficient: it guarantees minimum data exchanges and achieves high performance on heterogeneous multi-device architectures.
Index Terms—GPGPUs, OpenCL, Stencil computations, Multi-device, Code generation, Heterogeneous architectures
I. INTRODUCTION
High performance computing (HPC) is closely tied to scientific computing and industry. Because of the increasing complexity and growing amount of data in practical problems, we always demand better performance to achieve a faster time to solution. Hence, there are two paths to combine: the enhancement of algorithms and better hardware architectures. Recently, HPC system architectures have been shifting from traditional homogeneous multi-core systems to heterogeneous systems such as GPGPUs. Compared to standard multi-core CPUs, GPGPUs offer a significantly higher peak floating-point performance and better power efficiency.
However, this novel architecture presents new challenges at the application development level: time and effort are needed to exploit such hardware. Moreover, many heterogeneous systems now have multiple computing devices. Development can be more difficult and error-prone, since writing programs that make the best use of the characteristics of different computing devices increases the programmer’s burden. Balancing the workload between the available computing devices can also be complicated, especially given that they have different performance characteristics. Besides, the communication and exchange of intermediate results between devices must also be considered. It is costly and difficult to design applications for heterogeneous multi-device systems. Thus, there is a huge demand for programming tools that help novices design applications for such systems.
In this paper, we propose the code generator STEPOCL, which automatically generates the parallel host OpenCL code for heterogeneous multi-device systems. It enables OpenCL programs written for a single compute device to run on systems with multiple devices without any modification. The architecture of the system is completely transparent to the user: the information about available devices is obtained at run time and the workload is distributed to each device with an optimal strategy, so the OpenCL kernels execute in parallel. The host program automatically manages the communication and data exchanges between devices, and results are retrieved from each device at the end of the execution.
The rest of paper is organized as follows. In Section II, we present our contribution in detail. Section III discusses the evaluation of our generated code. Section IV presents related works. Finally, Section V concludes the paper.
II. STEPOCL
STEPOCL aims to facilitate programming on heterogeneous multi-GPU systems through the open standard OpenCL [1], a language especially designed to address heterogeneous platforms consisting of multi-core CPUs, GPGPUs and other modern processors. Instead of writing the error-prone code for a heterogeneous multi-device system, the user only needs to provide a basic description of the kernel arguments, the space information, and the kernel function for one device. From this description, STEPOCL automatically generates an entire OpenCL source code for multi-device architectures, dealing with all the necessary technical aspects, including not only the basic ones, such as the tuning of the initialization phase (library and device declarations, etc.) or kernel launching and
retrieving of the results, but also trickier aspects in order to determine the best data and computation distribution or the exchange of intermediary results.
```
for (int t = 0; t < T; ++t) {
    for (int i = 10; i < N-10; ++i) {
        for (int j = 10; j < N-10; ++j) {
            A[i][j] = CNST + (B[i][j+1] + B[i][j-1]
                            + B[i-10][j] + B[i+10][j]);
        }
    }
    swap(A, B);
}
```
Listing 1. Stencil Loop Example
Indeed, consider an iterative stencil computation, a pattern widely used in many scientific domains, as depicted in Listing 1: the computation of each element needs access to a set of neighboring elements according to a fixed pattern. Thus, in a multi-device version, the distribution may imply that some of those neighboring elements are allocated in the memory of another device. Some data then need to be shared in some way – by exchange or by replication – and furthermore need to be updated after each iteration in order to maintain data coherency. All these operations make programming iterative stencil codes on heterogeneous architectures more difficult and error-prone.
Thus, in order to generate a complete multi-device code, we need to face the following challenges:
- Workload and data partitioning: the kernel space and data space should be characterized in order to be efficiently partitioned by rows, columns or grids. The shared data regions [2] – called here ghost zones – should be distinguished from useful data.
- Managing the data transmission between devices: in order to maximize the reutilization of data transferred in devices memory, intermediary results need to be exchanged in a way that guarantees the minimum data transfer, that is to say only the effective data necessary to pursue the computation on the other devices.
- Code generation: it should be transparent to users. Users should not care about the number or the type of available devices. The OpenCL host code will be automatically generated from the description of kernel information for single device.
The rest of this section presents more details about each of the mentioned points and how we implement them in STEPOCL.
A. Workload and Data partitioning
1) Background - OpenCL kernel space: An OpenCL application consists of two distinct parts: the host program and a collection of computation kernels addressing potentially heterogeneous devices such as CPUs or GPUs. Kernels are typically simple functions that transform input memory objects into output memory objects. Nevertheless, the host part drives the execution: it is in charge of the data transfers to and from device memory and of launching kernels on the devices. When the host submits a kernel, the OpenCL runtime system creates an integer index space based on two arguments: the number of work-items and the size of the work-groups. An instance of the kernel is executed for each point in this index space.
Each instance of an executing kernel is called a work-item. Work-items are organized into work-groups, which provide a more coarse-grained decomposition of the index space. All work-groups have the same size in corresponding dimensions, and this size evenly divides the global size in each dimension. A unique ID is assigned to each work-group, following the same dimensionality as the index space used for the work-items. Each work-item is referenced by a unique local ID within a work-group, so that a single work-item can be uniquely identified by its global ID or by a combination of its local ID and work-group ID. For example, Figure 1 shows a two-dimensional index space with a global size of \((20 \times 15)\) and a work-group size of \((5 \times 5)\); hence the number of work-groups in the whole index space is \((4 \times 3)\).
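The index-space arithmetic above can be reproduced in a few lines. The sketch below is an illustrative Python helper of our own (not part of STEPOCL or the OpenCL API), computing work-group counts and splitting a global ID into group and local IDs:

```python
def num_work_groups(global_size, local_size):
    """Work-groups per dimension; OpenCL requires the local size to
    evenly divide the global size in each dimension."""
    groups = []
    for g, l in zip(global_size, local_size):
        assert g % l == 0, "work-group size must divide the global size"
        groups.append(g // l)
    return tuple(groups)

def ids(global_id, local_size):
    """Split a work-item's global ID into (work-group ID, local ID)."""
    group = tuple(g // l for g, l in zip(global_id, local_size))
    local = tuple(g % l for g, l in zip(global_id, local_size))
    return group, local

# The example from Figure 1: a 20 x 15 index space with 5 x 5 groups.
print(num_work_groups((20, 15), (5, 5)))   # (4, 3)
print(ids((7, 11), (5, 5)))                # ((1, 2), (2, 1))
```

The second call mirrors the statement that a work-item's global ID determines, and is determined by, its work-group ID and local ID.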
During the execution, each work-item executes the same sequence of instructions defined by a single kernel. In order to execute the kernel on multiple devices, the number of work-items should be adjusted according to the number of devices, and the corresponding data space should be split and distributed to the memory of each device.
2) Determining the kernel and data space: As we said in the introduction, for this first prototype of STEPOCL, we solicit the assistance of the user to describe how the space can be split. Therefore, the kernel and data spaces are described in XML as properties of the arguments, as depicted in Listing 2. Still on the same example, the argument \(A\) is defined as a 2-dimensional memory object with a size of \(4096 \times 1026\). The number of work-items in the kernel space is \(4096 \times 256\). After analyzing these descriptions with an XML parser, we can easily determine the size of the kernel and data spaces. Nevertheless, the objective of our further work is to analyze the kernel and data spaces automatically from the actual kernel program thanks to efficient compilers such as PIPS [3] or Insieme [4].
3) **Distinguishing the useful data from ghost zones** [5] and keeping them up-to-date: In an ISL distribution, the ghost zones need to be exchanged among the different processing elements at the beginning of the execution and after each iteration, involving significant overhead in terms of communication and synchronization. Hence, larger ghost zones may be created to replicate stencil operations, reducing communication and synchronization costs at the expense of redundantly computing some values on multiple processing elements [2]. An optimal ghost zone size can improve the performance of ISL on GPUs. Nevertheless, the objective of our work is to maximize memory utilization to allow the kernels to scale. Therefore, in our model we always consider the minimal ghost zone size: even if some data may be replicated, the computation space will not be. Just as in the previous paragraph, in the current STEPOCL prototype we ask the programmer to give the relevant information, but the goal is to take advantage of the data dependency analysis of compilers to do it automatically.
We extract the domain of the ghost zone and the useful data region from the data description in the XML file given by the programmer. Then we split the useful data region into sub-regions and allocate the relative ghost sub-regions for each data sub-region. Still on the same example, a ghost zone is required and may be described in the file as depicted below in Listing 3.
4) **Partitioning with an abstract number of available devices:** The number of available devices is unknown until runtime, in order to correctly exploit the actual devices involved in the execution. Thus at compilation time, this number can only be represented by the variable `num_device`. First, we analyze the kernel space to evaluate the maximum parallel capacity. In Figure 1, the number of work-groups in the kernel space is 12. If we want to parallelize the stencil code without modifying the kernel code, we need at most 12 devices, and the number of regions in dimension X should not exceed 4 (`num_group_x`); similarly, the number of regions in dimension
Y should not exceed 3 (`num_group_y`). Afterwards, with the dependent points provided by the user, we can determine a priority list, which gives the splitting order of the dimensions. For instance, the dependent points in Listing 1 are (0,1), (0,-1), (10,0) and (-10,0). The projection of the ghost region size on X is \( dx = |10| + |-10| = 20 \) and the projection on Y is \( dy = |1| + |-1| = 2 \). By comparing the sizes of these projections, we can decide which axis should be split first. In this example, we should split the data by rows first, which means splitting dimension Y; then, if there are still available devices, we can split the data on dimension X. So the priority list is \( list = \{ y, x \} \).
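This projection-based ordering can be sketched as follows. The Python helper below is our own reading of the procedure (function names are hypothetical): it computes the ghost-zone projection on each axis from the dependent points and sorts dimensions by increasing projection, so the cheapest axis is split first.

```python
def projections(dependent_points, ndim=2):
    """Ghost-zone projection per axis: farthest forward offset plus
    farthest backward offset among the dependent points."""
    proj = []
    for d in range(ndim):
        offs = [p[d] for p in dependent_points]
        proj.append(max(offs + [0]) - min(offs + [0]))
    return proj

def priority_list(dependent_points, dims=('x', 'y')):
    """Split first along the axis with the smallest projection, so a
    cut crosses as little shared data as possible."""
    proj = projections(dependent_points, len(dims))
    return [dim for _, dim in sorted(zip(proj, dims))]

# Listing 1's dependent points: (0,1), (0,-1), (10,0), (-10,0).
deps = [(0, 1), (0, -1), (10, 0), (-10, 0)]
print(projections(deps))     # [20, 2]
print(priority_list(deps))   # ['y', 'x']
```

On Listing 1's stencil this reproduces the priority list \( \{y, x\} \) derived above.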
Following the priority list, we calculate the greatest common divisor (gcd) of \( num\_workgroup\_x \) and the number of available devices \( num\_device \) for each dimension, updating \( num\_device \) as \( num\_device / \gcd(num\_workgroup\_x, num\_device) \) after each gcd operation. The list of gcds describes the data distribution, and \( \prod_{i} \gcd(i) \) is the number of devices that will be used for computing. In our application, we may decrease the total number of devices within a certain range (default value 100) to achieve maximum parallelization. In the example of Listing 1, with priority list \( list = \{ y, x \} \), if OpenCL detects 2 available devices, the list of gcds is \( \{1,2\} \): the data in dimension X remain in one part, but the data in dimension Y are split into two parts, as shown in Figure 3. After the data partition, each segment (we call it a \( \text{local zone} \)) keeps the following information: the global zone ID, the relative ID in each dimension, and the range of indexes in each dimension. Each local zone is composed of a written data region (we call it the \( \text{write-zone} \)) and a ghost zone (corresponding to shared data). Since we know the size of the local region and all the dependence points, the index domain of the write-zone can also be determined. Thereby, local regions represent \( \text{EXACT\_READ} \) data regions and write-zones \( \text{EXACT\_WRITE} \) ones. As ghost zones are partial projections of neighbors' write-zones, they need to be updated after each iteration of the stencil loop to prepare the next iteration. Thus, we need to determine the list of neighbors' global IDs for each local zone.
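The gcd-based splitting can be sketched as below. This Python helper is our own reading of the procedure; the function name and the example grid (a \(4 \times 6\) work-group layout with 4 devices, not the paper's Listing 1 figures) are assumptions for illustration:

```python
from math import gcd

def partition_devices(num_workgroups, priority, num_devices):
    """Split num_devices across dimensions following the priority list.

    num_workgroups: dict mapping dimension name -> work-group count.
    priority: dimensions in splitting order, e.g. ['y', 'x'].
    Returns (parts per dimension, number of devices actually used).
    """
    parts = {}
    remaining = num_devices
    for dim in priority:
        # gcd guarantees the split evenly divides the work-groups, so
        # the kernel code itself never has to change.
        g = gcd(num_workgroups[dim], remaining)
        parts[dim] = g
        remaining //= g
    used = num_devices // remaining
    return parts, used

# 4 x 6 work-groups, priority [y, x], 4 available devices:
# y is split in two, then x in two, using all 4 devices.
parts, used = partition_devices({'x': 4, 'y': 6}, ['y', 'x'], 4)
print(parts, used)   # {'y': 2, 'x': 2} 4
```

When the gcds cannot absorb all devices, `used` is smaller than `num_devices`, which corresponds to the paper's fallback of computing with fewer devices than are available.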
The global ID of a zone can be computed from its relative ID \( (re\_ID\_x, re\_ID\_y) \) and the device partitioning information (the number of devices in the first dimension, \( numdev\_x \), and in the second dimension, \( numdev\_y \)).
\[
global\_ID = re\_ID\_y \times numdev\_x + re\_ID\_x \quad (2D) \quad (1)
\]
\[
global\_ID = re\_ID\_z \times numdev\_x \times numdev\_y + re\_ID\_y \times numdev\_x + re\_ID\_x \quad (3D) \quad (2)
\]
A dependent point indicates the direction of a dependent zone. For instance, in Figure 5, the dependent point of device 3 is (-1,-1). We suppose that the length and height of a zone are M and N, both larger than 1, so the offset in relative ID is still (-1,-1). The relative ID of the dependent zone is \( (1,1) + (-1,-1) = (0,0) \), so its global ID can be calculated as follows:
\[
global\_ID = 0 \times 2 + 0 = 0 \quad (3)
\]
In this way, we can find all the dependent zones with the list of dependent points. Then, the list of dependent zones (we call it \( \text{neighbor} \_\text{list} \)) will be used during the data transmission or communication process.
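The computation of global IDs and of the neighbor list can be sketched as below. This is an illustrative sketch assuming a row-major linearization of relative IDs over the device grid and ignoring boundary wrap-around; the function names are not from the paper.

```python
def global_id(rel, numdev):
    """Linearize a relative ID (x, y[, z]) over the device grid.

    rel and numdev list dimensions in the same order, x first.
    """
    gid = 0
    for r, n in zip(reversed(rel), reversed(numdev)):
        gid = gid * n + r
    return gid

def neighbors(rel, numdev, dep_points):
    """Global IDs of the zones a local zone depends on.

    dep_points are per-dimension offsets such as (-1, -1); offsets
    falling outside the device grid are skipped (no wrap-around).
    """
    ids = []
    for off in dep_points:
        dep = tuple(r + o for r, o in zip(rel, off))
        if all(0 <= d < n for d, n in zip(dep, numdev)):
            ids.append(global_id(dep, numdev))
    return ids

# Device 3 on a 2x2 grid has relative ID (1, 1); its dependent
# point (-1, -1) leads to zone (0, 0), i.e. global ID 0.
```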
### B. Managing the data transmission between devices
1) Read/write memory objects in OpenCL: OpenCL lets users create three kinds of memory objects: buffers, 2D images and 3D images. These memory objects are stored in the host memory (typically, in RAM) or in the device memory (typically, in GRAM directly on the graphics card). Several functions can be used to read and write memory objects. Table I presents five functions that read and write buffer objects [6].
<table>
<thead>
<tr>
<th>Function</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td>clEnqueueReadBuffer</td>
<td>Reads data from a buffer object to host memory</td>
</tr>
<tr>
<td>clEnqueueWriteBuffer</td>
<td>Writes data from host memory to a buffer object</td>
</tr>
<tr>
<td>clEnqueueReadBufferRect</td>
<td>Reads a rectangular portion of data from a buffer object to host memory</td>
</tr>
<tr>
<td>clEnqueueWriteBufferRect</td>
<td>Writes a rectangular portion of data from host memory to a buffer object</td>
</tr>
<tr>
<td>clEnqueueCopyBuffer</td>
<td>Enqueues a command to copy a buffer object to another buffer object</td>
</tr>
</tbody>
</table>
**TABLE I**
**READ AND WRITE BUFFER OBJECTS**
2) Data transmission between multiple devices: After each iteration, the data that needs to be transferred from device A to device B can be calculated in the following way:
\[
region\_aTo\_b = exact\_write\_a \cap exact\_read\_b \quad (4)
\]
The data region is an \( EXACT\_WRITE \) region and the zone region an \( EXACT\_READ \) region, so formula (4) can be rewritten as:
\[
region\_aTo\_b = data\_region\_a \cap zone\_region\_b \quad (5)
\]
Then all the data can be transferred with the \( \text{neighbor} \_\text{list} \) following the process presented in Listing 4.
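The intersection in formula (5) amounts to an interval intersection per dimension. The following sketch illustrates it on rectangular regions given as inclusive index ranges; the helper name and the example ranges are illustrative, not from the paper.

```python
def intersect(region_a, region_b):
    """Intersect two rectangular regions.

    Each region is a list of per-dimension (start, end) inclusive
    index ranges; returns None if the regions do not overlap.
    """
    out = []
    for (a0, a1), (b0, b1) in zip(region_a, region_b):
        lo, hi = max(a0, b0), min(a1, b1)
        if lo > hi:  # empty intersection in this dimension
            return None
        out.append((lo, hi))
    return out

# Hypothetical 50x50 grid split in two along y: the write-zone of
# device a (rows 0..24) meets the local zone of device b (rows 24..49)
# on a one-row ghost region.
region = intersect([(0, 49), (0, 24)], [(0, 49), (24, 49)])
```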
The generated host code can directly be executed with the original kernel.
In order to evaluate our implementation, we generated a 4-point Jacobi 2D stencil with 50 iterations. We executed it on a machine with 4 NVIDIA GTX-460 devices running Red Hat Enterprise Linux Server release 5.5. We varied the size of the input data to highlight the new scalability offered by our generated program.
\[
\text{Performance} = \frac{\text{Num\_iter} \times \text{Size\_data} \times \text{Flt\_Opt}}{T_n} \quad (6)
\]
\( \text{Num\_iter} \) denotes the number of iterations, \( \text{Flt\_Opt} \) the number of operations in the inner loop, and \( T_n \) the total time used for executing the stencil code, which also includes the communication time between the computing devices. On the one hand, the results show that a single GPU performs better on small data sizes, as the number of work-groups in the kernel is not large enough to keep all devices busy. In this case, the overhead of data transmission weighs heavily on the whole execution time. Figure 8 pictures the exact impact of communication on the overall execution time in percentage (PTE): PTE decreases as the size of the data grows. Although our partition strategy avoids unnecessary extra data transmission, the PTE value is still very high. Thus, even if in the short term we will investigate how to reduce the cost of communication (for example by overlapping transfers so that several GPUs do not occupy the CPU for communication at the same time), we above all want to treat this saturation threshold as a parameter of a dynamic scheduling strategy, with the purpose of determining the best compromise between data size and number of devices to use. The goal is to use only the necessary resources, rather than occupying them and preventing their exploitation by another computation. On the other hand, as expected, the performance curves grow steeply as the data size increases. With four GPU devices we achieved 61 GFLOPS, which is 3.5 times faster than using only one GPU device and almost 1.8 times faster than using two GPU devices. The generated program even processes input data sizes that are impossible to treat with a single-device program.
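Formula (6) can be evaluated as below. The numbers used here are illustrative placeholders (a 4096x4096 grid, 5 flops per point, 0.5 s), not the paper's measured results.

```python
def performance_gflops(num_iter, size_data, flt_opt, total_time_s):
    """Formula (6): iterations x data points x flops per point / time,
    reported in GFLOPS."""
    return num_iter * size_data * flt_opt / total_time_s / 1e9

# hypothetical run: 50 iterations of a stencil with ~5 flops per point
# on a 4096x4096 grid, completing in 0.5 s
p = performance_gflops(50, 4096 * 4096, 5, 0.5)
```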
IV. RELATED WORK
A number of recent studies have focused on the parallelization of OpenCL code for multi-device GPGPU systems. Amdahl Software provides a similar application, OpenCL CodeBench [7]. It enables developers to rapidly generate and optimize OpenCL code. The main difference between STEPOCL and OpenCL CodeBench is that the host code generated by OpenCL CodeBench is more general, and users need to define their own ways of communicating between several accelerators. Jungwon Kim and Honggyu Kim [8] propose an OpenCL framework that treats multiple GPUs as a single compute device. This framework analyzes the OpenCL kernel index space at run time and performs a sampling run just before the kernel is executed. The sampling run obtains the buffer access ranges of each affine array reference for the different GPUs. Using this information, the runtime distributes the kernel work-group index space efficiently. Sylvain Henry provides an OpenCL implementation called SOCL [9]. It is based on StarPU [10]. It gives unified access to every available OpenCL device: applications can share entities such as Events, Contexts or Command Queues between several OpenCL implementations. In addition, Command Queues that are created without specifying a device provide automatic scheduling of the submitted commands on the OpenCL devices contained in the context to which the command queue is attached. This implementation uses dynamic analysis and schedules the available devices at runtime; since our data partitioning also happens at runtime, combining the two approaches looks very promising. Data transfer between the CPU and GPUs can degrade performance. Overlapping [8], [11] the data transfer and the GPU computation is a solution to reduce the overhead of data transmission.
V. CONCLUSION
In this paper, we introduced the design and implementation of the new OpenCL code generator STEPOCL, which provides facilities for developing code on heterogeneous multi-device systems. Instead of writing a tedious implementation, the user only needs to provide a basic description of the kernel arguments, the space information, and the kernel function for one device. STEPOCL then automatically generates OpenCL code for multiple devices without changing the kernel functions. STEPOCL builds a model of the kernel and data space from this description, then partitions the workload with the best strategy. The communication management and the exchange of intermediary results between devices are also generated. Preliminary experiments on an iterative stencil show that the generated code achieves high performance on multi-device architectures.
Further work is planned at different levels. First, we will enhance our tool by relaxing the information required from the user, thanks to cooperation with static compilation techniques: the kernel and data space and the ghost zones will be detected automatically when possible. Next, as STEPOCL is able to generate parametric kernels, it can generate non-uniform distributions. We plan to collect information on the available heterogeneous devices at runtime, with the purpose of applying dynamic scheduling strategies and performing an accurate partitioning.
REFERENCES
Testing of Concurrent and Imperative Software using CLP
Elvira Albert
DSIC, Complutense University of Madrid
28040 Madrid, Spain
elvira@fdi.ucm.es
Puri Arenas
DSIC, Complutense University of Madrid
28040 Madrid, Spain
puri@sip.ucm.es
Miguel Gómez-Zamalloa
DSIC, Complutense University of Madrid
28040 Madrid, Spain
mzamalloa@fdi.ucm.es
ABSTRACT
Testing is a vital part of the software development process. In static testing, instead of executing the program on concrete values (e.g., numbers), the program is typically executed on symbolic variables representing arbitrary values. Constraints on the symbolic variables are used to represent the conditions under which the execution paths are taken. Testing tools can uncover issues such as memory leaks and buffer overflows, as well as concurrency errors like deadlocks or data races. Due to its inherent symbolic execution mechanism and the availability of constraint solvers, Constraint Logic Programming (CLP) has great potential in the field of testing. In this talk, we will describe a fully CLP-based framework for testing today's imperative languages. We will also discuss the extension of this framework to handle actor-based concurrency, used in languages such as Go, Actor Foundry, Erlang, and Scala, among others.
CCS Concepts
- Computing methodologies → Concurrent programming languages; Distributed programming languages; Parallel programming languages;
- Theory of computation → Operational semantics;
Keywords
Testing; Static Analysis; Dynamic Analysis; Concurrency
1. INTRODUCTION
Testing is the most widely-used methodology for software validation in industry. It typically requires at least half of the total cost of a software project. Still, it remains a mostly manual stage within the software development process. Test Case Generation (TCG) is devoted to the automation of a crucial part of the testing process, the generation of input data for interesting coverage criteria. Coverage criteria aim at measuring how well the program is exercised by a test suite. Examples of coverage criteria are: statement coverage which requires that each line of the code is executed; path coverage which requires that every possible trace through a given part of the code is executed. Among the wide variety of approaches to TCG (see e.g. [41]), our work focuses on glass-box testing, where test cases are obtained from the concrete program in contrast to black-box testing, where they are deduced from a specification of the program. We will initially focus on static testing, where we assume no knowledge about the input data, in contrast to dynamic approaches [21, 30] which execute the program to be tested for concrete input values, as will be discussed for the concurrent setting.
The standard approach to generating test cases statically is to perform a symbolic execution of the program [32, 17, 29, 34, 35, 19, 18], where the contents of variables are expressions rather than concrete values. Symbolic execution produces a system of constraints consisting of the conditions to execute the different paths. This happens, for instance, in branching instructions, like if-then-else, where we might want to generate test cases for the two alternative branches and hence accumulate the conditions for each path as constraints. The symbolic execution approach has been combined with the use of constraint solvers [35, 29, 18] in order to handle the constraint systems by solving the feasibility of paths and, afterwards, to instantiate the input variables. For instance, a symbolic JVM machine which integrates several constraint solvers has been designed in [35] for TCG of Java (bytecode) programs. In general, a symbolic machine requires non-trivial extensions w.r.t. a non-symbolic one like the JVM: (1) it needs to execute (imperative) code symbolically as explained above, (2) it must be able to non-deterministically execute multiple paths (as without knowledge about the input data non-determinism usually arises).
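The accumulation of path conditions at a branch can be illustrated with a tiny sketch. This is not the symbolic JVM of [35]; the function names, the triple encoding of constraints, and the naive enumeration "solver" are all assumptions made for illustration, mirroring the guarded structure of an `if (n < 0) throw ... else ...` method.

```python
def exec_guarded(var):
    """Symbolically execute a guarded branch: each resulting path
    carries the accumulated condition under which it is taken."""
    return [([(var, "<", 0)], "exception"),        # then-branch
            ([(var, ">=", 0)], "normal return")]   # else-branch

def witness(constraints, domain=range(-10, 11)):
    """Naive stand-in for a constraint solver: pick a concrete value
    of one integer variable satisfying every path constraint."""
    ops = {"<": lambda a, b: a < b, ">=": lambda a, b: a >= b}
    for v in domain:
        if all(ops[op](v, k) for (_, op, k) in constraints):
            return v
    return None  # path condition unsatisfiable over the domain

paths = exec_guarded("n")
# solving each path condition yields one concrete test input per path
```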
We overview our CLP-based approach to TCG of imperative programs which consists of four main ingredients: (i) The imperative program is first translated into an equivalent CLP one, named CLP-translated program in what follows. The translation can be performed by partial evaluation [25] or by traditional compilation. (ii) Symbolic execution on the CLP-translated program can be performed by relying on the standard evaluation mechanism of CLP, which provides backtracking and handling of symbolic expressions for free. (iii) The use of dynamic memory requires to define heap-related operations that, during TCG, take care of constructing complex data structures with unbounded data (e.g., recursive data structures). Such operations can be implemented in CLP [2]. (iv) We can guide the TCG process towards specific paths by adding to our CLP-translated programs trace terms that track the sequence of calls performed. We can supply fully or partially instantiated traces, thus guiding, completely or partially, the symbolic execution towards specific paths.
Finally, we will describe our work in the field of testing concurrent programs [5, 13, 6, 12]. Concurrent programs are becoming increasingly important as multicore and networked computing systems are omnipresent. Writing correct concurrent programs is more difficult than writing sequential ones, because with concurrency come additional hazards not present in sequential programs, such as race conditions, deadlocks, and livelocks. Testing techniques are therefore especially needed in the context of concurrent programming. Due to the non-deterministic interleaving of processes, traditional testing for concurrent programs is not as effective as for sequential programs. In order to ensure that all behaviors of the program are tested, the testing process, in principle, must explore all possible non-deterministic ways in which the processes can interleave. This is known as systematic testing [16, 39, 40] in the context of concurrent programs. Such a full systematic exploration of all process interleavings produces the well-known state explosion problem and is often computationally intractable (see, e.g., [40] and its references). We will discuss our recent work [5, 13, 6, 12] on defining strategies and heuristics for pruning redundant state exploration when testing concurrent systems by reducing the amount of unnecessary non-determinism.
2. CLP-BASED STATIC TESTING
This section overviews our CLP-based static testing framework. It was originally proposed for the context of TCG of a simple bytecode language in [14], and later extended to sequential OO programs [7] and to concurrent actors [4, 6]. Its implementations for the Java (bytecode) and ABS languages have led to the development of the jPET [15, 8] and aPET [7] tools. The framework takes advantage of the inherent characteristics of CLP, namely, its evaluation mechanism based on backtracking and its constraint solving facilities, for the purpose of symbolic execution. Moreover, it shows that logic programming in general is an adequate paradigm as the basis for reasoning about other programming languages (meta-programming) [10]. The main architecture of the framework is shown in Fig. 1. It consists of three independent phases: (1) First, the program-under-test is translated into an equivalent CLP program. (2) The CLP program is then symbolically executed in CLP relying on CLP’s execution mechanism using a termination/coverage criterion. (3) The obtained test-cases are presented to the user in different forms, namely, graphically or as unit tests.
2.1 Translation from OO Imperative to CLP programs
The translation of imperative object-oriented programs into equivalent CLP-translated programs has been subject of previous work (see, e.g., [3, 27]). We rely on the so-called interpretive compilation by partial evaluation [27]. It consists in compiling the imperative OO program to CLP by partially evaluating an interpreter of the OO imperative language written in CLP w.r.t. the imperative program.
```java
int exp(int a, int n) {
if (n < 0)
throw new Exception();
else {
int r = 1;
while (n > 0) {
r = r*a;
n--;
}
return r;
}
}
```
Figure 2: Imperative code for the exp method
```prolog
exp([A,N],Out,Hin,Hout,EF) :-
    if([A,N],Out,Hin,Hout,EF).
if([A,N],_Out,Hin,Hout,exc(Ref)) :-
    N #< 0,
    new_object(Hin,'Exception',Ref,Hout).
if([A,N],Out,H,H,ok) :-
    N #>= 0,
    loop([A,N,1],Out).
loop([_A,N,R],R) :-
    N #=< 0.
loop([A,N,R],Out) :-
    N #> 0,
    R1 #= R*A,
    N1 #= N-1,
    loop([A,N1,R1],Out).
```
Figure 3: CLP-translation for the exp method
Example 2.1. Fig. 2 shows the imperative code for method `exp` which takes two integer input arguments `a` and `n` and computes $a^n$ by successive multiplications. If the value of `n` is less than 0 an exception is thrown. Fig. 3 shows its corresponding (pretty-printed) CLP-translation.
The main features that can be observed from the translation are: (1) The root predicates for methods (in this case `exp/5`) include as parameters: the input arguments (as a list), the output argument, the input and output heaps and the exception flag. The rest of the predicates include the required parameters depending on the context. (2) Conditional statements and loops in the source program are transformed into guarded rules and recursion in the CLP program, resp., e.g., rules for while. Mutual exclusion between the rules of a predicate is ensured either by means of mutually exclusive guards, or by information made explicit on the heads of rules, as usual in CLP. (3) The global memory or heap is explicitly handled and carried along the execution being used and transformed by the corresponding heap built-ins as a black box. E.g., the `new_object/4` operation takes an input heap and a class name, and creates a new object of that class, returning the new heap containing it and its assigned reference. Heaps are therefore represented in the CLP program by means of logic variables (e.g. $H_{in}$ and $H_{out}$). (4) Exceptional behaviour is handled explicitly in the CLP-translated program by means of the exception flag and exception objects. When an exception is thrown the flag takes the value `exc(Ref)` being `Ref` the reference of the corresponding exception object in the heap. Otherwise the value `ok` is obtained.
2.2 CLP-based Symbolic Execution and TCG
The standard CLP execution mechanism, together with a suitable implementation of the heap built-ins, suffices to execute the CLP-translated programs. This can be done simply by running, in a Prolog system, a goal with the predicate corresponding to the method under test and fully instantiated input parameters. For instance, we can run the goal \( \exp([2, 10], \text{Out}, [\,], \text{H}_{\text{out}}, \text{EF}) \) to compute \( 2^{10} \). Note that the heap is represented as a list of Reference-Object pairs, so \([\,]\) represents an empty heap. As a result, the following bindings are obtained: \( \text{Out} = 1024, \text{H}_{\text{out}} = [\,], \text{EF} = \text{ok} \).
One of the main advantages of our CLP-translated programs is that they can be symbolically executed using the standard CLP execution mechanism. To do that we simply run a goal with the predicate corresponding to the method under test and free variables for all its arguments. The inherent constraint solving and backtracking mechanisms of CLP allow keeping track of so called path conditions (or constraint stores), failing and backtracking when unsatisfiable constraints are hit, hence discarding such execution paths; and succeeding when satisfiable constraints lead to a terminating state in the program, which in the context of symbolic execution implies that a new solution (or test case) is generated.
**Example 2.2.** Let us perform a symbolic execution of the \( \exp \) method by running the goal \( \exp([A, N], \text{Out}, \text{H}_{\text{in}}, \text{H}_{\text{out}}, \text{EF}) \). As a first solution we get
\[
N < 0,\ \text{H}_{\text{out}} = [(R, \text{object('Exception', \ldots)})\,|\,\text{H}_{\text{in}}],\ \text{EF} = \text{exc}(R)
\]
which reads as: if \( N < 0 \) the execution ends with an uncaught exception whose associated object is \( R \). If we ask for another solution we get
\[
N = 0, \text{Out} = 1, \text{H}_{\text{out}} = \text{H}_{\text{in}}, \text{EF} = \text{ok}
\]
which reads as: if \( N = 0 \) we get 1 as output, regardless of the value of \( A \), and the heap does not change. The third solution is:
\[
N = 1, \text{Out} = A, \text{H}_{\text{out}} = \text{H}_{\text{in}}, \text{EF} = \text{ok}
\]
It is well-known that the symbolic execution tree (SLD tree in this framework) is in general infinite. This is because iterative constructs such as loops and recursion, whose number of iterations depend on input arguments, usually induce an infinite number of execution paths when executed with symbolic input values. This happens for instance in the symbolic execution of the \( \exp \) method. It is therefore essential to establish a termination criterion. Such a termination criterion can be expressed in different forms. For instance, a computation time budget can be established, or an explicit bound on the depth of the symbolic execution tree can be imposed. In our framework we adopt a more code-oriented termination criterion, which consists in imposing an upper bound on the number of times each loop (or recursive call) is iterated. This can be easily implemented in CLP as a meta-interpreter which controls and limits the number of recursive calls made on each predicate [14].
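The effect of bounding loop iterations can be sketched as follows: with a limit of K iterations, the symbolic execution of `exp` yields one exception path plus one path per loop count from 0 to K. This enumeration is an illustration of the termination criterion, not the CLP meta-interpreter of [14]; the function name and string encodings of conditions are assumptions.

```python
def exp_path_conditions(loop_limit):
    """Path conditions of the exp method (Fig. 2) under an upper
    bound on the number of loop iterations."""
    paths = [("n < 0", "uncaught exception")]
    for k in range(loop_limit + 1):
        # the loop body runs exactly k times, so exp returns a**k
        paths.append((f"n = {k}", f"returns a**{k}"))
    return paths

paths = exp_path_conditions(2)
# four finite paths: exception, n=0, n=1, n=2 -- as in Example 2.3
```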
The outcome of such bounded symbolic execution is a finite set of path conditions (variable bindings and constraints over them), one for each symbolic execution path. Each path condition represents the conditions over the input variables that characterize the set of feasible concrete executions of the program that take the same path. In a next step, off-the-shelf constraint solvers can be used to solve such path conditions and generate concrete instantiations for each of them. This last step provides concrete test-cases for the program, amenable to further validation by testing frameworks such as JUnit, which execute such test inputs and check that the output is as expected.
**Example 2.3.** The following concrete test-cases are obtained by our framework for the \( \exp \) method if we set a limit of at most 2 loop iterations and the domain \(-10..10\) to numeric variables:
<table>
<thead>
<tr>
<th>#</th>
<th>Input (a, n)</th>
<th>Output</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>(-10, -10)</td>
<td>Uncaught Exception</td>
</tr>
<tr>
<td>2</td>
<td>(-10, 0)</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>(-10, 1)</td>
<td>-10</td>
</tr>
<tr>
<td>4</td>
<td>(-10, 2)</td>
<td>100</td>
</tr>
</tbody>
</table>
As an example, the following JUnit test could be automatically generated for the second test-case:
```java
public void testExp2(){
int input0 = -10, input1 = 0;
int output = exp(input0, input1);
int expected = 1;
assertEquals(expected, output);
}
```
**Handling Heap-manipulating Programs.**
One of the main challenges in symbolic execution is to correctly and efficiently handle heap-manipulating programs [?]. This kind of program often creates and uses complex and possibly aliased dynamically heap-allocated data structures. Symbolic execution must consider all possible shapes these dynamic data structures can take. In trying to do so, scalability issues arise, since many (even an exponential number of) shapes may be built due to the aliasing of references. This led us to the development of a heap solver [9] which enables a more efficient treatment of reference aliasing in symbolic execution by means of disjunctive reasoning and the use of advanced back-propagation of heap-related constraints.
2.3 Guided Testing
It is well known that symbolic execution presents scalability problems when it is applied on realistic programs. Also, in the context of static testing, this complicates human reasoning on the generated test cases. Guided testing [36, 37] aims at steering symbolic execution towards specific program paths in order to efficiently generate more relevant test cases and filter out less interesting ones with respect to a given selection criterion. The goal is thus to improve on scalability and efficiency by achieving a high degree of control over the coverage criteria. This has potential applicability for industrial software testing practices such as unit testing, where units of code (e.g. methods) must be thoroughly tested in isolation, or selective testing, in which only specific paths of a program must be tested. The intuition of guided testing is the following: (1) A heuristics-based trace-generator generates possibly partial traces, i.e., partial descriptions of paths, according to a given selection criterion. This can be done by relying on the control-flow graph of the program. (2) Bounded symbolic execution is guided by the obtained traces. The process is repeated until the selection criterion is satisfied or until no more traces are generated.
We can define a concrete methodology for guided testing in our CLP-based framework as follows:
1. We instrument our CLP-translated programs so that they generate, as an additional result, so-called trace-terms. Trace-terms are of the form \( p(K, \{T_1, \ldots, T_n\}) \), where \( p \) is the name of the predicate, \( K \) a natural number indicating the concrete predicate rule, and \( T_1, \ldots, T_n \) the trace-terms of the calls in the body of the \( K \)-th rule of \( p \). Trace-terms are tree-like representations of execution traces and enable us to keep track of the sequence (and order) of rules executed in each derivation.
2. We define a trace-generator which generates a set of, possibly partially instantiated, trace-terms according to a given selection criterion.
3. A set of symbolic executions is performed (possibly in parallel), each of them using as input a different trace-term. Trace-terms allow guiding, completely or partially, the symbolic execution towards specific paths.
Example 2.4. Let us consider selective testing for method \texttt{exp}. As a selection criterion, e.g., one could be interested in generating a test-case that raises an exception. The challenge is to generate such a test avoiding traversing as much as possible the rest of the paths. Fig. 4 shows the CLP-translated program instrumented with trace-terms (step 1 above). Let us observe the additional trace-term parameter on each program predicate. An effective trace-generator for the above criterion would be able to generate the trace \( \texttt{exp}(1, \texttt{if}(2, \ldots)) \) (step 2). A symbolic execution using such trace-term as input will generate only the path that raises the exception (step 3).
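The guiding effect of a partial trace-term can be sketched as pattern matching over rule trees. This is an illustration only: trace-terms are encoded here as `(predicate, rule, subtraces)` tuples, with `None` standing for the unconstrained parts of a partial trace; the encoding and function name are assumptions, not the notation of [36].

```python
def matches(trace, partial):
    """Check a concrete trace-term p(K, [subtraces]) against a
    possibly partial trace; None in the partial trace leaves the
    corresponding part of the execution unconstrained."""
    if partial is None:
        return True
    (name, rule, subs), (pname, prule, psubs) = trace, partial
    if name != pname or (prule is not None and rule != prule):
        return False
    if psubs is None:
        return True
    return len(subs) == len(psubs) and all(
        matches(t, p) for t, p in zip(subs, psubs))

# guide execution towards the exception path of exp, as in Example 2.4:
partial = ("exp", 1, [("if", 2, None)])   # sub-traces of 'if' left free
run = ("exp", 1, [("if", 2, [])])         # the path raising the exception
```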
In [36] we define concrete trace-generators and guided testing schemes for two different selection criteria, and demonstrate its effectiveness via an experimental evaluation. We also discuss about two central aspects of guided testing, namely completeness and effectiveness.
3. TESTING CONCURRENT (ACTOR) PROGRAMS
We consider actor systems [2, 31], a model of concurrent programming that has been gaining popularity and that is being used in many systems (such as Go, Actor Foundry, Asynchronous Agents, Charm++, E, ABS, Erlang, and Scala). The actor model is also influencing commercial practice: Twitter has used actors for scalability, and Microsoft has used the actor model in the development of its asynchronous agents library.
Actor programs consist of computing entities called actors, each with its own local state and thread of control, that communicate by exchanging messages asynchronously. An actor configuration consists of the local state of the actors and a set of pending tasks. In response to receiving a message, an actor can update its local state, send messages, or create new actors. At each step in the computation of an actor system, firstly an actor and secondly a process of its pending tasks are scheduled. We consider a language for distributed and concurrent programming which, in addition to the usual sequential instructions, has two instructions for concurrency: \( x = \text{new} \ C \) which allows the dynamic creation of an actor \( x \) of class \( C \), and \( x! m(z) \) which spawns a new task or process within the actor \( x \) to execute \( m(z) \). As actors do not share their states, in testing one can assume [39] that the evaluation of all statements of a task takes place serially (without interleaving with any other task) until it releases the processor (gets to a return instruction). In this context, transitions correspond to the execution of complete tasks. In particular, a transition or derivation step \( S_i \rightarrow S_{i+1} \) denotes that the task \( t \) of an existing actor in \( S_i \) has been selected and fully executed, resulting in a modified state \( S_{i+1} \). This corresponds to the notion of macro-step semantics in [39]. Both the selection of which actor executes and which task within the selected component is scheduled is non-deterministic. In order to ensure that all behaviors of the program are tested, the testing process, in principle, must systematically explore all possible ways in which the processes can interleave. This is known as systematic testing [16, 39, 40] in the context of concurrent programs. Such full systematic exploration of all process interleavings
produces the well-known state explosion problem and is often computationally intractable (see, e.g., [40] and its references).
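As a hedged sketch of this macro-step exploration (written in Python for illustration; all names are ours, and a single shared state dict stands in for the actors' local states), a naive systematic tester enumerates every actor- and task-selection choice:

```python
def explore(state, config, results):
    """DFS over every macro-step interleaving, with no pruning at all.

    `config` maps each actor to its queue of pending tasks; a task is a
    function (actor, state) -> list of newly spawned (actor, task) pairs,
    modelling the x!m(z) instruction.  Terminates only for programs that
    spawn finitely many tasks."""
    enabled = [(a, i) for a, q in config.items() for i in range(len(q))]
    if not enabled:                      # no pending task: complete derivation
        results.append(dict(state))
        return
    for actor, i in enabled:             # actor- and task-selection choices
        new_state = dict(state)
        spawned = config[actor][i](actor, new_state)  # run task to completion
        new_config = {a: list(q) for a, q in config.items()}
        del new_config[actor][i]         # the executed task is consumed
        for target, task in spawned:     # enqueue tasks spawned by x!m(z)
            new_config.setdefault(target, []).append(task)
        explore(new_state, new_config, results)

# Two actors with one independent task each: two interleavings, two results.
def t_set1(actor, state): state["f"] = 1; return []
def t_set2(actor, state): state["f"] = 2; return []
results = []
explore({"f": 0}, {"o": [t_set1], "p": [t_set2]}, results)
```

The exponential growth of `results` with the number of pending tasks is exactly the path explosion the rest of this section works to avoid.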
**Example 3.1.** Consider the program in Fig. 5, where we have a state with two actors o and o', each of them with a task (t₁ and t' respectively) in its queue. The complete execution tree contains three branches with the results this.f = 7, this.f = 7 and this.f = 5 respectively.
### 3.1 The Path Explosion Problem
The challenge of systematic testing of concurrent programs in general is to avoid as much as possible the exploration of redundant paths which lead to the same configuration. There are two levels of non-determinism:
1. **actor-selection**, the selection of which actor executes, and
2. **task-selection**, the selection of the task within the selected actor.
Such non-determinism might result in different configurations, and they all need to be explored as only some specific interleavings may reveal the bugs.
Partial-order reduction (POR) [23, 20, 24] is a general theory that helps mitigate the state-space explosion problem by formally identifying equivalence classes of redundant explorations. The basic observation that motivates these techniques is that, in general, the set of executions from a state S contains many redundant derivations. Basically, given a derivation where there are two consecutive transitions which are “independent”, i.e., whose execution does not interfere with each other, changing their order of execution will not modify their combined effect. More formally, two macro-step transitions t₁, t₂, possibly belonging to different actors, are independent if:
1. they do not enable each other, i.e., the execution of t₁ does not lead to introducing t₂, or vice versa, and
2. for every state S in which they are both enabled, there is a unique state S₂ such that S \(\xrightarrow{t_1} S_1 \xrightarrow{t_2} S_2\) and S \(\xrightarrow{t_2} S_3 \xrightarrow{t_1} S_2\), i.e., they can commute.
Conversely, two transitions are dependent if they are not independent. Transition dependencies can thus be categorized in:
- enabling dependencies, if one of the transitions enables the other one, and,
- interacting dependencies, if they can be both enabled and their combined effect varies with their order.
A complete derivation thus represents an equivalence class of similar derivations that can be obtained by swapping adjacent independent transitions. The so-called happens-before relation [22], written \(\prec_d\), is used to characterize these equivalence classes. Given a derivation \(S_0 \xrightarrow{t_1} \cdots \xrightarrow{t_n} S_n\), we say that transition \(t_i\) happens-before transition \(t_j\), written \(t_i \prec_d t_j\), if \(i < j\) and \(t_i\) is dependent with \(t_j\). The happens-before relation is a partial-order relation, hence the name POR. Furthermore, two derivations are redundant if they have the same happens-before relation.
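The happens-before construction can be sketched as follows (Python for illustration; the `dependent` predicate is a stand-in for the theoretical dependency relation):

```python
def happens_before(derivation, dependent):
    """hb = {(t_i, t_j) : i < j and t_i is dependent with t_j}."""
    hb = set()
    for i, ti in enumerate(derivation):
        for tj in derivation[i + 1:]:
            if dependent(ti, tj):
                hb.add((ti, tj))
    return hb

def redundant(d1, d2, dependent):
    """Two derivations are redundant iff they induce the same happens-before."""
    return (set(d1) == set(d2)
            and happens_before(d1, dependent) == happens_before(d2, dependent))

# Mirroring Example 3.2 below: t2 is dependent with both t1 and t1' (here
# "t1p"), while t1 and t1' are independent of each other.
dep = lambda a, b: {a, b} in ({"t1", "t2"}, {"t1p", "t2"})
```

With this predicate, the derivations `[t1, t1p, t2]` and `[t1p, t1, t2]` are redundant, while `[t1p, t2, t1]` induces a different partial order.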
**Example 3.2.** Consider the execution tree in Fig. 5. The happens-before relation, or partial order, of the leftmost and middle derivations is \(\{t'_1 \prec_d t_2, t_1 \prec_d t_2\}\). Thus both derivations compute the same result this.f = 7. However the rightmost derivation has as partial order \(\{t'_1 \prec_d t_2, t_2 \prec_d t_1\}\), computing this.f = 5 as result.
The goal of POR methods is to detect these redundant executions and, ideally, to generate only one representative derivation for each equivalence class with the minimum number of explored states. Early POR algorithms were based on different static analyses to detect and avoid exploring redundant derivations. The state-of-the-art POR algorithm DPOR (Dynamic POR) [22] improves over those approaches by dynamically detecting and avoiding the exploration of redundant derivations on the fly. Since the invention of DPOR, there have been several works [1, 16, 39, 40, 38] proposing improvements, variants and extensions of the original DPOR algorithm in different contexts. The most notable one is [1], which proposes an improved DPOR algorithm that further reduces redundant computations by ensuring that only one derivation per equivalence class is generated. Some of these works [40, 39, 33] have addressed the application of POR to actor systems from different perspectives. The most recent one [40] presents the TransDPOR algorithm, which extends DPOR to exploit a specific property of the dependency relations in pure actor systems, namely transitivity, to explore fewer configurations than DPOR.
Intuitively, a DPOR algorithm carries out the exploration of the execution tree using POR. Each node (i.e., state) in the execution tree is associated with a backtracking set back, which is used to store those actors that must be explored from this node. The backtracking set of the initial state is empty. DPOR algorithms look dynamically for occurring interacting dependencies (i.e., tasks that can both be enabled in a state and whose combined effect varies with their execution order) and only backtrack at those states in which it is possible to reverse them. The TransDPOR algorithm [40] uses an over-approximation \(\subseteq_d\) of \(\prec_d\) which considers as dependent those tasks which belong to the same actor. Thus, once an actor has been selected, the algorithm always tries the selection of all its tasks, since they are in principle considered to be dependent on each other. Instead, at the level of actor selection, the back set is dynamically updated only with the actors that need to be explored. In particular, an actor is added to back only if during the execution the algorithm realizes that it is needed
because a new task $t$ of a previously selected actor $a$ appears. This situation might indicate an interacting dependency, and therefore the algorithm must try to explore the reverse reordering by updating the back set of the last state $S$ in which $a$ was used. As a simple example, consider a state $S$ in which an actor $a_1$ with a unique task $t_1$ is selected. Now, assume that when the execution proceeds, a new task $t_2$ of $a_1$ is spawned by the execution of a task $t'$ of an actor $a_2$, and that $t'$ was already enabled in $S$. This means that it is also required to consider first the execution of $t_2$ and next the execution of $t_1$, since this represents a different partial order between the tasks of $a_1$. This is accomplished by adding $a_2$ to the back set of $S$, which allows exploring the execution in which $a_2$ is selected before $a_1$ at $S$, and thus considering the partial order $t_2 <_d t_1$. The formal process of updating the back sets (and its optimization with freeze-flags to avoid further redundancy) can be found in [40].
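A much-simplified sketch of this back-set update rule (the names and the trace representation are ours, not from [40]):

```python
def update_back(trace, back, spawner, target):
    """When a task of `spawner` introduces a new task for `target`, find the
    last state in `trace` where `target` was selected; if `spawner` was
    already enabled there, schedule it for backtracking at that state so the
    reversed ordering of the two tasks also gets explored.

    `trace` is a list of (state_id, selected_actor, enabled_actors)."""
    for state_id, selected, enabled in reversed(trace):
        if selected == target:
            if spawner in enabled:
                back.setdefault(state_id, set()).add(spawner)
            break
    return back
```

Replaying the a1/a2 example above: with `trace = [(0, "a1", {"a1", "a2"}), (1, "a2", {"a2"})]`, when `a2`'s task spawns a new task of `a1`, the call `update_back(trace, {}, "a2", "a1")` adds `a2` to the back set of state 0, forcing exploration of the order in which `a2` runs before `a1`.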
**Example 3.3.** Figure 5 shows the execution tree explored by TransDPOR using the over-approximation $\subseteq_d$. We distinguish two types of edges: dotted edges, which will be removed later when introducing the notion of stable actor in Sec. 3.2, and normal edges, which are introduced when an actor has been selected and thus all its tasks must be executed. The unique back set updated by TransDPOR is the one associated with the root of the tree. Concretely, $o'$ is introduced in back($S_0$) when executing $o'.t'$ in the left branch, which generates a new state. At this point, $t_2$ does not belong to the queue of $o$ in $S_0$, but $t'$ belongs to the queue of $o'$ and its execution introduces $t_2$ into the queue of $o$; hence $o'$ must be added to back($S_0$).
The techniques that we describe below enhance previous approaches with novel strategies to further prune redundant state exploration, and can be easily integrated within the aforementioned algorithms.
### 3.2 Stable Objects
In previous DPOR algorithms, actors are selected arbitrarily. As noticed in [33], the pruning that can be achieved using DPOR algorithms is highly dependent on the order in which tasks are considered for processing. Consider the execution tree in Fig. 5. We can see that the same partial order $t_1 \prec_d t_2$ occurs in both executions computing this.$f = 7$ as result. Hence, the dotted subtree can be removed by considering only the rightmost one.
The notion of temporal stability allows us to guide the selection of actors so that the search space can be pruned further and redundant computations avoided. An actor is stable if there is no other actor different from it that introduces tasks in its queue. Basically, this means that the actor is autonomous since its execution does not depend on any other actor. In general, it is quite unlikely that an actor is stable in a whole execution. However, if we consider the tasks that have been spawned in a given state, it is often the case that we can find an actor that is temporarily stable w.r.t. the actors in that state.
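A sketch of the temporal-stability check, under the assumption that a may-call analysis of task bodies is available (all names hypothetical):

```python
def temporarily_stable(actor, config, may_spawn_on):
    """An actor is temporarily stable in `config` if no pending task of any
    other actor may introduce tasks into its queue.

    `config` maps actors to their queues of pending tasks; `may_spawn_on(t)`
    returns the set of actors on which task `t` may spawn tasks (e.g., from
    a static may-call analysis of the task's method body)."""
    return all(actor not in may_spawn_on(t)
               for other, queue in config.items() if other != actor
               for t in queue)

# Mirroring Example 3.4 below: t' (pending in o') calls a method of o,
# while t1 calls nothing.
calls = {"t1": set(), "t'": {"o"}}
config = {"o": ["t1"], "o'": ["t'"]}
```

Here `o` is not temporarily stable (its queue can still grow via `t'`), whereas `o'` is, so a stability-guided selection function would pick `o'` first.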
**Example 3.4.** Let us re-consider the exploration of the example in Fig. 5 with our improved actor selection function. Observe that at the root node, actor $o$ is not temporarily stable because in the queue of $o'$ there is a call $t'$, and in the body of method $t'$ there is also a call to method $t_2$ of object $o$ (i.e., $o$ can possibly be modified by $o'$). However, actor $o'$ is temporarily stable at the root. Our algorithm will therefore select $o'$. The rightmost subtree in Fig. 5 corresponds to the state space explored by our algorithm, in which we can observe that there is no redundant state exploration (with the over-approximation of dependencies used).
### 3.3 Independent Tasks
As mentioned in Sec. 3.1, the notion of task dependency has to be over-approximated in order to be used in practice within DPOR. The precision of this over-approximation can be crucial for the effectiveness of DPOR, as the following example shows.
**Example 3.5.** Consider the program in Fig. 6, where all methods belong to the same class which contains two fields $f$ and $g$, both of them initialized to 1. Using the theoretical definition of task dependency, there are only two equivalence classes of non-redundant executions (branches labeled with (1) and (2)), whereas using the over-approximation of [40] there are six equivalence classes (the complete execution tree).
The over-approximation of [40] could be improved by looking at shared memory accesses among tasks of the same actor. This way, two tasks belonging to the same actor would have an interacting dependency only if they access a non-disjoint area of their shared memory. In our example, this improved over-approximation would detect task $r_0$ as independent of both $r_1$ and $r_2$, hence allowing DPOR to behave in an optimal way. We call this particular relation dep.
**Example 3.6.** Let us consider the program in Fig. 6. Assume that $R(t)$ and $W(t)$ denote the set of fields that are read and written, respectively, in task $t$. We have that $R(r_0) = W(r_0) = \{f\}$, $R(r_1) = W(r_1) = \{g\}$ and $R(r_2) = W(r_2) = \{g\}$. Thus, $r_1, r_2$ are dependent but $r_0, r_1$ and $r_0, r_2$ are independent.
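This read/write-set over-approximation can be sketched directly from the definition (Python for illustration):

```python
def dep(t1, t2, reads, writes):
    """Refined interacting-dependency check for two tasks of the same actor:
    they are dependent only if one task writes a field that the other task
    reads or writes."""
    return bool(writes[t1] & (reads[t2] | writes[t2])
                or writes[t2] & reads[t1])

# Field accesses from Example 3.6: R(t) = W(t) for every task here.
R = W = {"r0": {"f"}, "r1": {"g"}, "r2": {"g"}}
```

On the example, `dep` flags only the `r1`/`r2` pair, so DPOR never needs to reverse `r0` against either of them.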
In order to use the dep relation in testing, we have defined a specialized algorithm at the task selection level which takes full advantage of the over-approximation used in it. The algorithm makes use of marks in the tasks so that the elements in the queues can now be marked or unmarked, written $t$ or
be used to improve the detection of independent tasks. Also, we have recently worked on the extension of our framework for concurrent testing so that it can be driven towards paths that potentially lead to deadlock [12].
Acknowledgments
This work was funded partially by the EU project FP7-ICT-610582 ENVISAGE: Engineering Virtualized Services (http://www.envisage-project.eu), by the Spanish MINECO projects TIN2012-38137 and TIN2015-69175-C4-2-R, and by the CM project S2013/ICE-3006.
Semantic Web-driven development of service-oriented systems - exploiting linked data for service annotation and discovery
Conference or Workshop Item
© 2011 The Authors
Version: Version of Record
Semantic Web-driven Development of Services-oriented Systems – Exploiting Linked Data for Services Annotation and Discovery
Stefan Dietze\(^1\), Dong Liu\(^2\), Hong Qing Yu\(^2\), Carlos Pedrinaci\(^2\)
\(^1\)L3S Research Center, Hanover, Germany
dietze@l3s.de
\(^2\)Knowledge Media Institute, The Open University, Milton Keynes, MK76AA, UK
\{d.liu, h.q.yu, c.pedrinaci\}@open.ac.uk
Abstract. Within a service-oriented architecture (SOA), software components are accessible via well-defined interfaces. To this end, the discovery and integration of Web services and APIs is becoming an increasingly important task in present-day software engineering. Despite considerable research dedicated to Semantic Web Services (SWS), structured semantics are still not used significantly to facilitate service and API discovery. This is due to the complexity of comprehensive SWS models and has led to the emergence of a new approach dubbed Linked Services, which adopts Linked Data principles to produce simplified, RDF-based service descriptions that are easier to create and interpret. However, current Linked Services tools assume the existence of service documentation (HTML, WSDL) and do not sufficiently support non-functional properties (NfP). Therefore, we introduce SmartLink, a Web-based editor and search environment which allows both humans and machines to produce light-weight service descriptions from scratch, addressing both functional and non-functional service properties.
Keywords: Software Engineering, Semantic Web, Linked Data, SmartLink.
1 Introduction
An essential part of Software Engineering nowadays is concerned with the discovery of reusable software components which satisfy one or more requirements of the overall system to be implemented. The past decade has seen the emergence and large-scale success of another fundamental paradigm: service-orientation. Within a service-oriented architecture (SOA), components are accessible via well-defined interfaces and usually exchange messages via remote-procedure calls (RPC), HTTP or SOAP. Particularly the emergence of REST-ful services has led to the widespread availability of public and reusable Web APIs, such as the wide range of APIs offered by Google\(^1\). To this end, the discovery and integration of Web services and APIs is becoming an increasingly important task in present-day software engineering.
\(^1\)https://code.google.com/
Research efforts in the area of Semantic Web Services (SWS) were mainly aimed at the automation of Web service-related tasks such as discovery, orchestration or mediation. Several conceptual models, such as OWL-S [6] and WSMO [3], and standards like SAWSDL [7] have been proposed, usually covering aspects such as service capabilities and interfaces. However, SWS research has for the most part targeted WSDL or SOAP-based Web services, which are not prevalent on the Web. Also, due to the inherent complexity required to fully capture computational functionality, creating SWS descriptions has represented an important knowledge acquisition bottleneck and required the use of rich knowledge representation languages and complex reasoners. Hence, so far there has been little take-up of SWS technology within non-academic environments. That is particularly concerning since Web services – nowadays including a range of often more light-weight technologies beyond the WSDL/SOAP approach, such as RESTful services or XML-feeds – are in widespread use throughout the Web. This has led to the emergence of more simplified SWS approaches such as WSMO-Lite [9], SA-REST [7] and Micro-WSMO/hRESTs [4], which benefit from simpler models expressed in RDF(S).
While the Semantic Web has successfully redefined itself as a Web of Linked (Open) Data (LOD) [1], the emerging Linked Services approach [7] exploits the established LOD principles for service description and publication. By supporting annotation of a variety of services, such as WSDL services as well as REST APIs, the Linked Services registry and discovery engine iServe\(^2\) enables publishing of service annotations as linked data expressed in terms of a simple conceptual model: Minimal Service Model (MSM), a simple RDF(S) ontology able to capture (part of) the semantics of both Web services and Web APIs.
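To illustrate, the sketch below emits a minimal MSM-style description as Turtle; the msm: namespace and property names follow the Minimal Service Model as published by iServe, but the helper function and example URIs are ours and purely illustrative:

```python
def msm_service_turtle(service_uri, label, operations):
    """Emit a minimal MSM-style service description as Turtle.

    The msm: prefix and property names (msm:Service, msm:hasOperation)
    follow the Minimal Service Model; the operation URIs are derived from
    the service URI purely for illustration."""
    lines = [
        "@prefix msm: <http://iserve.kmi.open.ac.uk/ns/msm#> .",
        "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
        "",
        f"<{service_uri}> a msm:Service ;",
        f'    rdfs:label "{label}" ;',
    ]
    for op in operations:
        lines.append(f"    msm:hasOperation <{service_uri}#{op}> ;")
    lines[-1] = lines[-1][:-1] + "."   # close the description
    return "\n".join(lines)

ttl = msm_service_turtle("http://example.org/geo", "Geocoding API", ["lookup"])
```

The point of such light-weight descriptions is that they can be published as ordinary Linked Data and queried with the same tooling as any other RDF.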
However, while Linked Services appears to be a promising stream of research, we observe two major issues which hinder a large-scale take-up of the Linked Services approach:
(i1) Lack of consideration of non-functional service properties and less formal metadata
(i2) Lack of appropriate editors and annotation environments
With respect to (i1), previous efforts have largely focused on formalizing the actual functionalities of a service (capabilities, interfaces). However, in order to allow an assessment of the suitability of individual services or APIs for a particular service consumer, non-functional properties (NfP) are of crucial importance. These include, for instance, basic metadata about the development status or the licensing model as well as information about the quality of service (QoS). In addition, less formal service annotations have turned out to be very useful, since one of the most established modes of using Linked Services to date is semi-automated service discovery, where developers browse or navigate through Linked Services libraries based on filtering mechanisms, as opposed to fully automated service discovery and orchestration. While the latter is fundamentally dependent on complex and formal specifications of service capabilities and interfaces (i.e. functional properties), the former can be supported based on rather light-weight and often non-functional service
\(\text{http://iserve.kmi.open.ac.uk}\)
metadata, such as classifications, tags or development status information. However these are not sufficiently supported within current schemas such as MSM and WSMO-Lite.
With regard to (i2), editors have been developed which support developers in creating semantic annotations for services: SWEET [5] (Semantic Web sErvices Editing Tool) and SOWER (SWEET is nOt a Wsdl EditoR). However, SWEET and SOWER build on the assumption that either an HTML documentation of the services/APIs (SWEET) or WSDL files (SOWER) are available as a starting point for annotation. While that holds for a certain set of services, a growing number of services on the Web provide neither a WSDL nor an HTML documentation and hence, current Linked Services editors cannot be deployed in a range of cases. In this regard, we would particularly like to promote an approach where service documentation relies exclusively on structured RDF(S), while additional human-readable documentation is not provided manually but generated automatically to avoid redundancies.
Therefore, we introduce SmartLink\(^3\) ("SeMantic Annotation enViRonmenT for Linked services"), which addresses (i1) and (i2) by contributing:
(a) an RDF schema and data store for service NfP
(b) an integrated editing and browsing environment for Linked Services on the Web (taking into account both functional and non-functional data)
In the following section we provide some background information on non-functional properties for Linked Services and introduce the SmartLink NfP schema, while Section 3 describes the SmartLink editor and its overall architecture. We finally discuss our results in Section 4.
2 Non-functional properties for Linked Services
Previous work dealing with the exploitation of SWS and Linked Services technologies in NoTube\(^4\) and mEducator\(^5\), as described in [2][10], has shown that one of the most established and accepted use cases for Linked Services annotations is browsing and searching services in a meaningful way, as opposed to automated service discovery and execution. To this end, Linked Services seem of particular use when aiding developers in finding APIs for a given software engineering task.
In this regard, formal specifications turned out to be less important, while light-weight service annotation with tags/keywords and classifications played a vital role. Particularly when supporting the collaborative annotation of entities – services, like any documents, content or data – by a multiplicity of service consumers and developers, formal correctness of the generated data can hardly be enforced, and means are required to provide descriptions in a more loose and flexible way. For instance, in many cases, Linked Data resources can be roughly associated with a service – for instance, by tagging it with a service category or keyword – which might not provide formal enough semantics to facilitate automation of discovery and execution, but might still be useful to help users find appropriate services. For instance, an
---
\(^3\) http://smartlink.open.ac.uk & http://kmi.open.ac.uk/technologies/name/smartlink
\(^4\) http://notube.tv
\(^5\) http://www.meducator.net
API exposing metadata of resources could be associated with a keyword “metadata” or a reference to http://dbpedia.org/resource/Metadata. However, the current scope of SWS and Linked Services does not provide appropriate facilities to represent such rather loose relationships, but focuses on formal representations of service elements, such as message parts or operations. In that respect, a need for less formal service annotations was observed, to enable developers and service consumers to collaboratively annotate services based on Linked Data principles without constraining them by insisting on complete coherence of the provided annotations. Instead of enforcing non-contradictory data, collaborative annotation schemas need to embrace diversity even if that reduces the opportunities for reasoning-based automation.
On a similar note, current service description schemas (e.g., MSM, OWL-S, WSMO-Lite) seem to be fundamentally focused on functional properties while not providing sufficient support for NfPs, which would, for instance, allow users to specify licensing schemes, quality of service information or development status descriptions. While some schemas already allow the association of additional service information with particular service instances, the use of dedicated Linked Data vocabularies to further specify NfPs is still underdeveloped.
**SmartLink NfP schema**
To this end, we have developed a dedicated schema that addresses the aforementioned issues by (a) focusing in particular on NfPs and (b) facilitating collaborative, naturally diverse and less formally coherent annotation. To ensure the widespread applicability and reusability of the NfP schema, we reuse existing ontologies and vocabularies rather than constructing new ontologies from scratch. As shown in Fig. 1, the schema captures four main aspects of the non-functional properties of Web services, i.e. social, technical, licensing and QoS. Social attributes include human factors such as developer, contact person, organisation, project. The FOAF\(^6\) vocabulary is adopted to describe those personal and social factors. Furthermore, tags attached to Web services are also regarded as an important social attribute, which helps in service classification and organization. Thus, the CommonTag\(^7\) vocabulary is adopted to support the tagging by ensuring interoperability of provided service tags. The technical NfPs refer to information about how to interact with the services and cover, for instance, the communication protocol (e.g. HTTP and SOAP), data (exchange) format (e.g. XML, RDF and JSON), status (e.g. testing, final, work-in-progress), authentication model (e.g. HTTP Basic, API Key, OAuth). It is worth noting that technical NfPs do not describe the behaviours of services, but clarify the prerequisites for consumers to invoke those Web services.
The licensing properties indicate the terms and conditions with respect to the usage of individual Web services. As shown in Fig. 1, we currently define four concepts for the licensing properties, i.e. service license, data license, usage limits and fees. A service license authorizes and constrains invocation of the service, whereas a data license is for the reuse or repurpose of data generated or provided by the service. Usage limits cover the amount of times of service invocation within a certain time period, or the minimum interval between two times of invocation. Obviously, fees are
---
\(^6\) http://www.foaf-project.org/
\(^7\) http://commontag.org/
applicable to non-free services only and refer to the price a consumer needs to pay for consuming a service.
Fig. 1. A partial view of the SmartLink NfP Schema.
With respect to the quality of Web services, we adopt the model from [12], where the QoS parameters are divided into two classes: objective parameters and subjective parameters. The former are quantitative measures like availability, reliability, throughput and response time, whereas the latter are qualitative measures like user ratings. Here, we only focus on the objective QoS parameters, because these have been published on the Web\(^8\).
Schema mapping and alignment
We reuse existing vocabularies to represent the NfPs of Web services. This allows interoperability between individual service description repositories and facilitates the import of publicly available service NfP metadata into SmartLink. Here, we take ProgrammableWeb\(^{10}\) as an example to demonstrate schema mapping and alignment.
Parts of the mappings between our schema and the one of ProgrammableWeb are shown in Table 1. In addition, API Status\(^8\) provides statistics on the availability and response time of public APIs. Similarly, Mashery\(^9\) monitors the availability and response time of a set of services. The metadata these repositories expose can be completely mapped to the SmartLink schema. Moreover, the data can also be imported into SmartLink.
---
8 http://api-status.com/
9 http://developer.mashery.com/status
10 http://www.programmableweb.com/
Table 1. NfP schema mapping between SmartLink and ProgrammableWeb.

| SmartLink NfP Schema | ProgrammableWeb’s Schema |
| --- | --- |
| ServiceLicense | Commercial Licensing |
| ServiceLicense | Non-Commercial Licensing |
| Fee | Usage Fees |
| Usage Limit | Usage Limits |
| Authentication Model | Authentication Model |
| foaf:Organization | Provider |
| foaf:Company | Company |
| foaf:weblog | API Blog |
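When importing ProgrammableWeb metadata, the mapping of Table 1 can be applied mechanically. The sketch below (the field names mirror the table, but the flat record format is hypothetical) keeps unmapped fields aside for manual review:

```python
# Field names on both sides mirror Table 1; the record format is illustrative.
PW_TO_SMARTLINK = {
    "Commercial Licensing": "ServiceLicense",
    "Non-Commercial Licensing": "ServiceLicense",
    "Usage Fees": "Fee",
    "Usage Limits": "Usage Limit",
    "Authentication Model": "Authentication Model",
    "Provider": "foaf:Organization",
    "Company": "foaf:Company",
    "API Blog": "foaf:weblog",
}

def import_record(pw_record):
    """Translate one ProgrammableWeb-style record into SmartLink NfP fields.

    Several source fields may map onto one target (e.g. both licensing
    fields land in ServiceLicense), so values are collected in lists;
    unmapped fields are kept under an 'unmapped' key for manual review."""
    out, unmapped = {}, {}
    for field, value in pw_record.items():
        target = PW_TO_SMARTLINK.get(field)
        if target is None:
            unmapped[field] = value
        else:
            out.setdefault(target, []).append(value)
    if unmapped:
        out["unmapped"] = unmapped
    return out
```

The many-to-one entries in the table are the reason the target values are lists rather than scalars.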
3 SmartLink: a Linked Services editor and browser
In order to provide a Linked Services editor which allows (a) the annotation of RESTful services without any pre-existing documentation and (b) the annotation of services according to multiple schemas, in particular SmartLink NfP, we have developed the SmartLink editor. SmartLink provides editing and browsing facilities to interact with multiple RDF stores and data sets. It allows the annotation of services from scratch, that is, without any pre-existing service documentation such as WSDL or HTML files, as assumed by existing annotation tools (Section 1).
As shown in Fig. 2, SmartLink operates on top of Linked Data stores that exploit the MSM and the SmartLink NfP schemas and are interlinked with other Linked Data sets. MSM-schema properties are directly stored in iServe, while additional properties are captured in our SmartLink NfP repository\(^\text{11}\). The repository provides a SPARQL endpoint\(^\text{12}\). Following rdfs:isDefinedBy links from SmartLink to iServe, more information about the functionalities and behaviours of the services can be retrieved. As an LOD-compliant environment, one of SmartLink's core features is the capability to associate service descriptions with so-called model references, which refer to RDF descriptions in external vocabularies defining the semantics of the
---
\(^\text{11}\) [http://ckan.net/package/smartlink](http://ckan.net/package/smartlink)
\(^\text{12}\) [http://smartlink.open.ac.uk/smartlink/sparql](http://smartlink.open.ac.uk/smartlink/sparql)
service or its parts. However, while this feature is useful and even necessary in order to provide meaningful service models, finding appropriate model references across the entire Web of data is a challenging task. Therefore, SmartLink uses established Linked Data APIs – currently the WATSON\textsuperscript{13} API - to identify and recommend suitable model references to the user.

Fig. 3. SmartLink – Service editor.
After loading RDF triples from both iServe and SmartLink, the editor visualizes the description of a service as shown in Fig. 3. The left-hand side of the editor is the tree-based overview of the service, which represents a hierarchy composed of a service, its operations and input/output messages. The right hand side displays more details about the selected element in a form, which essentially include the semantics, categories, and literal descriptions. To persistently store changes made to a service description, SmartLink publishes the descriptions as Linked Data by invoking the RESTful APIs provided by iServe and the SmartLink NfP repository. SmartLink currently provides mechanisms that enable the export of particular service instances as RDF or human-readable HTML. In order to facilitate service model transformation between MSM and other SWS formalisms, current research deals with the establishment of an export mechanism of MSM/SmartLink NfP services. In addition, SmartLink also offers a simple UI for filtering services by NfPs. That way, developers can easily construct queries without having to formulate SPARQL queries to create specific views on the services data.
4 Discussion and conclusion
In this paper, we have proposed SmartLink, which provides (a) an RDF schema for describing non-functional properties of Web APIs and services (SmartLink NfP) and (b) a public environment which enables developers to annotate services and store descriptions in a public Linked Data-compliant store, to interlink them with other service descriptions such as the ones offered by iServe, and to search for available services and APIs by exploiting the structured semantics of the SmartLink NfP repository. To this end, SmartLink facilitates software engineering processes, particularly in the context of the prevailing SOA paradigm, by supporting developers in annotation and discovery of software components, i.e., services and APIs, across the Web.
\textsuperscript{13} http://watson.kmi.open.ac.uk/WatsonWUI/
Currently ongoing work deals with the exploitation of SmartLink in the context of two European projects, NoTube and mEducator (see [10]). While NoTube exploits the SmartLink approach merely as a means to aid software developers in documenting and searching software/services, in mEducator SmartLink also supports the execution and alignment of heterogeneous services. However, while the currently implemented execution approach is tailored to a specific kind of services – educational metadata harvesting services – no general-purpose execution approach has been developed yet.
From our initial use cases, a few observations have been made which will shape our future efforts. Current research and development deals with the extension of the MSM/SmartLink NfP schemas by taking into account execution and composition oriented aspects. These extensions will be supported by the development of additional APIs, which allow the discovery, execution and semi-automated composition of Linked Services in a general-purpose fashion.
Acknowledgements. This work is partly carried out within the research projects NoTube and mEducator, kindly funded by the European Commission. The authors would like to thank the European Commission for their support.
5 References
Announcements
- Accounts have been set up
- last 30 minutes of lecture
- if someone has registered for the course but did not fill in the account information form, please talk to me after class
- Homework 1 due today
- Homework 2 will be handed out today
- due on October 8, 1998
- 3 weeks to project proposal due date (October 15)
- project groups (3-4 students per group)
Outline
- Last lecture
- parallel programs
- key steps: decomposition, assignment, orchestration, mapping
- case studies: Ocean, Raytrace
- parallel constructs in different programming models
- data-parallel, shared memory
- This lecture
- parallel constructs in different programming models (contd.)
- message passing, summary and comparison
- analytical models of parallel computation: PRAM, LogP
- performance issues: naming, synchronization, latency, bandwidth
- tutorial (last 30 minutes)
- how to write and run programs on the HP/Convex Exemplar
{ Culler/Singh/Gupta: Chapter 1 (1.3), 3, Almasi/Gottlieb: Chapter 4, LogP paper }
Grid Solver Example
Expression for updating each interior point:
\[ A[i,j] = 0.2 \times (A[i,j] + A[i-1,j] + A[i+1,j] + A[i,j-1] + A[i,j+1]) \]
- Gauss-Seidel (near-neighbor) sweeps to convergence
- interior \( n \times n \) points of \( (n+2) \times (n+2) \) grid updated in each sweep
- updates done \textit{in-place}
- keep track of difference from previous value
- accumulate partial differences into global difference at end of every sweep
- do another sweep if error has not converged
int n;                /*size of matrix: (n + 2)-by-(n + 2) elements*/
float **A, diff = 0;
main()
begin
  read(n);            /*read input parameter: matrix size*/
  A = malloc(a 2-d array of size (n + 2) by (n + 2) doubles);
  initialize(A);      /*initialize the matrix A somehow*/
  Solve(A);           /*call the routine to solve equation*/
end main
procedure Solve(A)    /*solve the equation system*/
begin
  float **A;          /*A is an (n + 2)-by-(n + 2) array*/
  int i, j, done = 0;
  float temp;
  while (!done) do
    diff = 0;         /*reset accumulated difference each sweep*/
    for i ← 1 to n do
      for j ← 1 to n do
        temp = A[i,j];   /*save old value of element*/
        A[i,j] = 0.2 * (A[i,j] + A[i,j-1] + A[i-1,j] + A[i,j+1] + A[i+1,j]);
        diff += abs(A[i,j] - temp);   /*accumulate difference*/
      end for
    end for
    if (diff/(n*n) < TOL) then done = 1;
  end while
end procedure
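The solver above can be exercised as a small runnable sketch (Python for brevity; the grid size, boundary values, and tolerance are illustrative choices, not from the slides):

```python
def solve(A, n, tol=1e-4):
    """In-place Gauss-Seidel sweeps over the interior n x n points of an
    (n+2) x (n+2) grid A (list of lists); returns the number of sweeps
    needed until the average absolute change falls below tol."""
    sweeps = 0
    while True:
        diff = 0.0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                temp = A[i][j]                          # save old value
                A[i][j] = 0.2 * (A[i][j] + A[i-1][j] + A[i+1][j]
                                 + A[i][j-1] + A[i][j+1])  # 5-point average
                diff += abs(A[i][j] - temp)             # accumulate change
        sweeps += 1
        if diff / (n * n) < tol:                        # converged?
            return sweeps

n = 8
A = [[0.0] * (n + 2) for _ in range(n + 2)]
A[0] = [1.0] * (n + 2)        # "hot" top boundary row, zero elsewhere
print("sweeps to convergence:", solve(A, n))
```

Because updates are done in-place, later points in a sweep already see this sweep's values for their left and upper neighbors, exactly as in the pseudocode.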
Grid Solver Example: Red-Black Ordering
- Left-to-right, top-to-bottom ordering not fundamental to Gauss-Seidel
- Red-black ordering
- decompose grid into two sets of points (as in a chess-board)
- different ordering of updates: may converge quicker or slower
- red sweep and black sweep are each fully parallel
- global synchronization between them (conservative but convenient)
- Exploit additional asynchrony not present in the sequential algorithm
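The red-black scheme above can be sketched as two parity-split half-sweeps (same update rule as the solver; a hedged illustration, not the slides' code):

```python
def red_black_sweep(A, n):
    """One red half-sweep then one black half-sweep over the interior
    points of an (n+2) x (n+2) grid A. Each half-sweep touches only
    points of one (i+j) parity, so all of its updates read only values
    of the opposite color and are mutually independent (fully parallel).
    Returns the accumulated absolute change."""
    diff = 0.0
    for parity in (0, 1):                      # 0 = red, 1 = black
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if (i + j) % 2 != parity:
                    continue
                temp = A[i][j]
                A[i][j] = 0.2 * (A[i][j] + A[i-1][j] + A[i+1][j]
                                 + A[i][j-1] + A[i][j+1])
                diff += abs(A[i][j] - temp)
    return diff

n = 4
A = [[0.0] * (n + 2) for _ in range(n + 2)]
A[0] = [1.0] * (n + 2)
print("first-sweep change:", red_black_sweep(A, n))
```

A global synchronization point between the red and black half-sweeps corresponds to the two `for parity` iterations finishing in order.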
Message Passing Model: Orchestration Support
- Process creation and termination
- CREATE
- WAIT_FOR_END
- Communication: data-transfer + synchronization
- SEND(src_addr, size, dest, tag)
- send `size` bytes from `src_addr` to `dest` process, with `tag` identifier
- RECEIVE(buffer_addr, size, src, tag)
- receive a message of `size` from `src` process with `tag` identifier, and store it in `buffer_addr`
- SEND_ASYNC, SEND_PROBE, RECEIVE_ASYNC, RECEIVE_PROBE
- Global synchronization
- BARRIER
Message Passing Model: Grid Solver Example
- Structurally similar to shared memory program (still SPMD), but differs significantly in orchestration
- data structures and data access/naming
- cannot declare grid to be a shared array any more
- need to compose it logically from per-process private arrays
- usually allocated in accordance with the assignment of work
- process assigned a set of rows allocates them locally
- communication
- transfers of entire rows between traversals
- synchronization
```c
int pid, n, nprocs;
/*process id, matrix dimension and number of processors to be used*/
float **myA;
main()
begin
  read(n); read(nprocs);
  CREATE(nprocs-1, Solve);
  Solve();
  WAIT_FOR_END(nprocs-1);
end main
procedure Solve()
begin
  int i, j, n' = n/nprocs, done = 0;
  float temp, tempdiff, mydiff = 0;   /*private variables*/
  myA ← malloc(a 2-d array of size [n/nprocs + 2] by n+2);
                        /*my assigned rows of A, plus two ghost rows*/
  initialize(myA);      /*initialize my rows of A, in an unspecified way*/
  while (!done) do
    mydiff = 0;         /*set local diff to 0*/
    /*exchange border rows with neighbor processes*/
    if (pid != 0) then SEND(&myA[1,0], n*sizeof(float), pid-1, ROW);
    if (pid != nprocs-1) then SEND(&myA[n',0], n*sizeof(float), pid+1, ROW);
    if (pid != 0) then RECEIVE(&myA[0,0], n*sizeof(float), pid-1, ROW);
    if (pid != nprocs-1) then RECEIVE(&myA[n'+1,0], n*sizeof(float), pid+1, ROW);
    /*border rows of neighbors have now been copied
      into myA[0,*] and myA[n'+1,*]*/
    for i ← 1 to n' do          /*for each of my (nonghost) rows*/
      for j ← 1 to n do         /*for all nonborder elements in that row*/
        temp = myA[i,j];
        myA[i,j] = 0.2 * (myA[i,j] + myA[i,j-1] + myA[i-1,j] +
                          myA[i,j+1] + myA[i+1,j]);
        mydiff += abs(myA[i,j] - temp);
      endfor
    endfor
    /*communicate local diff values and determine if done;
      can be replaced by reduction and broadcast*/
    if (pid != 0) then
      SEND(mydiff, sizeof(float), 0, DIFF);
      RECEIVE(done, sizeof(int), 0, DONE);
    else                        /*process 0 accumulates and tests*/
      for i ← 1 to nprocs-1 do
        RECEIVE(tempdiff, sizeof(float), *, DIFF);
        mydiff += tempdiff;     /*accumulate into total*/
      endfor
      if (mydiff/(n*n) < TOL) then done = 1;
      for i ← 1 to nprocs-1 do
        SEND(done, sizeof(int), i, DONE);
      endfor
    endif
  endwhile
end procedure
```
Message Passing Model: Grid Solver (contd.)
- Private portions of grid array
- use of ghost rows: to store neighbor values
- Core: similar, but indices/bounds in local rather than global space
- Communication
- receive does not transfer data, send does
- at beginning of iteration (no asynchrony), whole rows at a time
- Synchronization
- using sends and receives
- update of global diff and event synchronization for done condition
- could implement locks and barriers with messages
- can use REDUCE and BROADCAST library calls to simplify code
/*communicate local diff values and determine if done, using reduction and broadcast*/
### Send and Receive Alternatives
- Can extend functionality
- stride, scatter-gather, groups
- Semantic flavors: based on when control is returned after call
- Synchronous
- Asynchronous
- affect when data structures or buffers can be reused at either end
- affect event synchronization
- synchronous messages provide synchronization through match
- separate event synchronization needed with asynchronous messages
- affect ease of programming and performance
- with synchronous messages, our code is deadlocked! Fix?
### Orchestration: Summary
- Data parallel
- decomposition of data structures (implicit assignment of tasks)
- Shared address space
- shared and private data explicitly separate
- no correctness need for data distribution
- communication implicit in access patterns
- synchronization via atomic operations on shared data
- synchronization explicit and distinct from data communication
- Message passing
- data distribution among local address spaces needed
- no explicit shared structures
- communication is explicit
- synchronization implicit in communication
- with synchronous SEND/RECEIVE primitives
- mutual exclusion for free: only one process updating each address space
### Grid Solver Program
- Decomposition and assignment (partitioning) similar in all three programming models
- Orchestration is different
- data structures, data access/naming, communication, synchronization
<table>
<thead>
<tr>
<th></th>
<th>Data Parallel</th>
<th>Shared Memory</th>
<th>Message Passing</th>
</tr>
</thead>
<tbody>
<tr>
<td>Explicit global data structure?</td>
<td>Yes</td>
<td>Yes</td>
<td>No</td>
</tr>
<tr>
<td>Assignment independent of data layout?</td>
<td>No</td>
<td>Yes</td>
<td>No</td>
</tr>
<tr>
<td>Communication</td>
<td>Implicit</td>
<td>Implicit</td>
<td>Explicit</td>
</tr>
<tr>
<td>Synchronization</td>
<td>Implicit</td>
<td>Explicit</td>
<td>Implicit</td>
</tr>
<tr>
<td>Explicit replication of border rows?</td>
<td>No</td>
<td>No</td>
<td>Yes</td>
</tr>
</tbody>
</table>
High-Performance Parallel Programs
• Tradeoffs between several interacting issues
– can be addressed/helped by software or hardware
• Models of parallel computation
– ideal: PRAM
– realistic: LogP
• Program tuning as successive refinement
– architecture-independent partitioning
• view machine as a collection of communicating processors
• focus: balancing workload, reducing inherent communication & extra work
– architecture-dependent orchestration
• view machine as extended memory hierarchy
• focus: reduce artifactual communication, orchestration overheads
• What are the common issues?
PRAM
• Idealized model of parallel computation
– collection of P processors and a single memory
– in one computation step, each processor can perform one operation, read from a memory cell, and write into a memory cell
– distinctions based on whether or not simultaneous access (particularly stores) permitted to a single memory cell
• EREW: a cell cannot be simultaneously accessed by two processors
• CREW: reads are okay, writes are serialized
• CRCW: most flexible; combining of written results
• Does not model behavior of real parallel machines
– assumes zero cost of communication
• infinite bandwidth, zero latency, zero overhead
– does not model contention
• simultaneous access permitted to a single memory cell
• processors are assumed to operate synchronously
• eliminates need for synchronization primitives
LogP: A More Realistic Model
• A model of a distributed-memory multiprocessor with four parameters
– L: latency, an upper bound on the network delay for a small message
– o: overhead, processor time spent sending or receiving a message
– g: gap, the minimum interval between consecutive sends (or receives) at one processor
• the reciprocal of g is the per-processor communication bandwidth
– P: number of processor/memory modules
• Finite network capacity
– at most ⌈L/g⌉ messages in transit from or to any processor at any time
• Processors work asynchronously
– algorithm cost is analyzed by charging o, L, and g for each message
Implications of LogP
• Eliminates loopholes provided by PRAM-like models
– communication costs
• motivates larger-grained applications
• motivates locality optimizations in algorithms
– contention for resources is modeled
• finite-capacity network models network contention
• g models end-point contention
• o models occupancy
– overlap of communication and computation
• separation of o and L parameters
• LogP is a compromise model which does not take into account
– caching/replication
– network topology
– synchronization overheads
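Simple LogP cost accounting can be sketched as follows (the standard L, o, g parameters from the LogP paper listed in the readings; the numeric values in the usage are purely illustrative):

```python
def logp_message_time(L, o):
    """End-to-end time for one small message under LogP:
    send overhead + network latency + receive overhead."""
    return o + L + o

def logp_burst_time(k, L, o, g):
    """Time until the k-th message of a back-to-back burst arrives:
    successive sends are spaced by max(g, o) (the sender cannot inject
    faster than the gap allows), and the last message still pays
    latency plus receive overhead."""
    return (k - 1) * max(g, o) + o + L + o

# Illustrative parameters: L=6, o=2, g=4 (arbitrary units)
print(logp_message_time(6, 2))      # one message: 2 + 6 + 2
print(logp_burst_time(3, 6, 2, 4))  # burst of 3: 2*max(4,2) + 10
```

This makes the slide's point concrete: once `g > o`, bandwidth (the gap), not overhead, limits how fast a burst can be injected.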
Programming as Successive Refinement
- Not all issues dealt with up front
- Partitioning often independent of architecture, and done first
- view machine as a collection of communicating processors
- PRAM + communication costs
- balancing the workload
- reducing required amount of inherent communication
- Then, interactions with architecture (orchestration)
- view machine as extended memory hierarchy
- extra communication due to architectural interactions
- cost of communication depends on how it is structured
- may inspire changes in partitioning
- Our objective is to understand the tradeoffs
- details in Lectures 4-10
Partitioning for Performance
- 3 major focus areas
- Balancing the workload + reducing wait time at synchronization points
- Reducing inherent communication
- Reducing extra work
- Trade off even among these algorithmic issues
- minimize communication ⇒ run on 1 processor ⇒ extreme load imbalance
- maximize load balance ⇒ random assignment of tiny tasks ⇒ no control over communication
- good partition may imply extra work to compute or manage it
- Goal is to compromise
- fortunately, often not difficult in practice
Focus 1: Load Balance and Synchronization Time
- Limits on speedup
\[
\text{speedup}_{\text{problem}}(p) \leq \frac{\text{sequential work}}{\max(\text{work on any processor})}
\]
- work includes data access and other costs
- not just equal work, but must be busy at the same time
- Four parts to the problem
- identify enough concurrency
- decide how to manage it
- determine the granularity at which to exploit it
- reduce serialization and cost of synchronization
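The speedup limit above is easy to evaluate for a candidate assignment (toy numbers, purely illustrative):

```python
def speedup_bound(work_per_proc):
    """Upper bound on speedup: total (sequential) work divided by the
    work on the most heavily loaded processor. Wait time and
    communication are ignored here, so this is optimistic."""
    return sum(work_per_proc) / max(work_per_proc)

print(speedup_bound([25, 25, 25, 25]))  # balanced: bound equals p
print(speedup_bound([40, 20, 20, 20]))  # one overloaded processor drags it down
```

Note that equal totals are not enough in practice: processors must also be busy at the same time, which this static bound does not capture.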
Identifying Concurrency
- Techniques seen for the Equation Solver kernel
- loop structure
- fundamental dependencies (independent of loop structure)
- new algorithms
In general: Two orthogonal levels of parallelism
- **Function (Task) parallelism**
- entire large tasks (procedures) can be done in parallel
- degree usually modest, and does not grow with input size
- difficult to load balance
- **Data parallelism**
- more scalable: proportional to input size
- function parallelism can reduce synchronization between data parallel phases
Managing Concurrency
Static versus Dynamic techniques
• Static techniques
– algorithmic assignment based on input: does not change
– low run-time overhead, but requires predictable computation
– preferable when applicable
caveat: multiprogrammed/heterogeneous environments
• Dynamic techniques
– adapt at run time to balance load
– but, can increase communication and task management overheads
Determining Task Granularity
• Task granularity: amount of work associated with a task
– scaled with respect to parallelism overheads in the system
• communication, synchronization, etc.
• General rule:
– coarse-grained ⇒ often poor load balance
– fine-grained ⇒ more overhead, often more communication,
requires more synchronization (contention)
• Overheads influenced by both task size, and assignment
– dynamic tasking requires a threshold task size
Reducing Serialization
• Influenced by assignment and orchestration (includes how tasks are
scheduled on physical resources)
• Event synchronization
– conservative (global) versus point-to-point synchronization
• e.g., barriers versus locks
– however, fine-grained synchronization more difficult to program and can
produce more synchronization operations
• Mutual exclusion
– main goal is to reduce contention: separate locks for separate data
– smaller critical sections
– stagger critical sections in time
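The "separate locks for separate data" guideline can be sketched with a hypothetical counter bank (all names here are illustrative, not from the slides):

```python
import threading

class StripedCounters:
    """One lock per counter: threads updating different counters never
    contend, unlike a single global lock over the whole array. Each
    critical section is also kept as small as possible."""
    def __init__(self, n):
        self.values = [0] * n
        self.locks = [threading.Lock() for _ in range(n)]

    def add(self, i, delta):
        with self.locks[i]:          # per-datum lock, tiny critical section
            self.values[i] += delta

bank = StripedCounters(4)
threads = [threading.Thread(
               target=lambda k=k: [bank.add(k % 4, 1) for _ in range(1000)])
           for k in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(bank.values)   # two threads per counter, 1000 increments each
```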
Implications of Load Balance
- Extends speedup limit expression to
\[ \text{speedup}_{\text{problem}}(p) \leq \frac{\text{sequential work}}{\max \left( \text{work on any processor} + \text{synchronization wait time} \right)} \]
- Generally, the responsibility of the programmer
- algorithmic decisions, based on fairly simple machine model
- PRAM + communication has non-zero cost
- How can architecture help?
- fine-grained communication (low overhead, latency)
- allows smaller tasks, better load balance (low-overhead access to queues)
- naming logically shared data in the presence of task stealing
- need to access data of stolen tasks
- hardware shared address space advantageous
Focus 2: Reducing Inherent Communication
- Simple machine view: communication is expensive!
\[ \text{speedup}_{\text{problem}}(p) \leq \frac{\text{sequential work}}{\max \left( \text{work on any processor} + \text{synchronization wait time} + \text{communication costs} \right)} \]
- metric: communication to computation ratio
- provides guidance on which communication aspect is important
- if computation is execution time, ratio gives average BW need
- if computation is operation count, gives extremes in impact of latency and BW
- latency: assume no latency hiding
- bandwidth: assume all latency is hidden
- real-life is somewhere in between
- Solution: assign tasks that access same data to same process
- solving communication and load balance is NP-hard (in general)
- however, simple heuristic solutions work well
- exploit application structure: e.g., domain decomposition
Focus 3: Reducing Extra Work
- Extends speedup limit expression
\[ \text{speedup}_{\text{problem}}(p) \leq \frac{\text{sequential work}}{\max \left( \text{work on any processor} + \text{synchronization wait time} + \text{communication costs} + \text{extra work} \right)} \]
- Common sources of extra work
- computing a good partition (e.g., in a sparse matrix computation)
- using redundant computation to avoid communication
- task, data, and process management overhead
- applications, languages, run-time systems, OS
- imposing structure on communication
- coalescing messages, allowing effective naming
- How can architecture help?
- efficient support of communication and synchronization (orchestration)
Architecture-independent Partitioning: Summary
- Useful for early development
- focus on partitioning and mapping
- understanding algorithm structure
- simple machine model: ideal (PRAM) + non-zero communication cost
- However, unrealistic for real performance
- simple view of machine does not model communication accurately
- wrongly models direct costs as well as imbalances
- partially addressed by more realistic models such as LogP
- Moreover, communication costs determined not only by amount
- depends on structuring of communication (naming, synchronization)
- cost of communication in system (latency, bandwidth)
- common set of issues helped/addressed by both programming model and parallel architecture
Memory-oriented View of a Multiprocessor
- Multiprocessor as an extended memory hierarchy
- levels: registers, caches, local memory, remote memory (topology)
- glued together by communication architecture
- levels communicate at a certain granularity of data transfer
- differences in access costs and bandwidth
- need to exploit spatial and temporal locality in hierarchy
- similar to uniprocessors, except that extra communication ⇒ high communication costs
- trade off against partitioning goals
Artifactual Communication Costs
Accesses not satisfied in local hierarchy levels cause communication
- Inherent
- determined by program
- assumes unlimited capacity, small transfers, perfect knowledge
- Artifactual
- determined by program implementation and architecture interactions
- some reasons:
- poor allocation of data across distributed memories
- redundant communication of data
- unnecessary data in a transfer or unnecessary transfers (system granularity)
- finite replication capacity
- four kinds of cache misses: compulsory, capacity, conflict, coherence
- finite capacity affects capacity and conflict misses
- tradeoff between reducing artifactual communication cost and improving spatial locality
Orchestration for Performance
Two areas of focus
- Reducing amount of communication
- inherent: change logical data sharing patterns in algorithm
- artifactual: exploit spatial, temporal locality in extended hierarchy
- techniques often similar to those on uniprocessors
- shared address space machines support this in hardware, distributed memory machines support the same techniques in software
- Structuring communication to reduce cost
Reducing Amount of Communication
- Exploiting temporal locality
- structure communication so working sets map well to hierarchy
- More useful when O(n^(k+1)) computation with O(n^k) data (factorization)
- Exploiting spatial locality
- system granularity
- tradeoffs with reducing inherent communication
- block vs. row decomposition
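The block-vs-row tradeoff can be quantified under the usual simplifying assumption that one grid-point value crosses each partition boundary per sweep (a sketch, not the slides' derivation):

```python
import math

def comm_to_comp_row(n, p):
    """Strip (row) decomposition of an n x n grid over p processors:
    each partition exchanges 2 full boundary rows of n points and
    computes n*n/p points."""
    return (2 * n) / (n * n / p)

def comm_to_comp_block(n, p):
    """Square-block decomposition (p assumed a perfect square):
    4 edges of n/sqrt(p) points each, over the same n*n/p points."""
    side = n / math.sqrt(p)
    return (4 * side) / (n * n / p)

# Row ratio grows as 2p/n, block ratio only as 4*sqrt(p)/n,
# so blocks win once p > 4 (perimeter-to-area effect).
print(comm_to_comp_row(1024, 16), comm_to_comp_block(1024, 16))
```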
Structuring Communication to Reduce Cost
\[
\text{communication cost} = f \times \left( o + l + \frac{n_c/m}{B} + t_c - \text{overlap} \right)
\]
- \(f\): frequency of messages
- \(o\): message overhead
- \(l\): network delay per message
- \(t_c\): cost induced by contention
- in the network
- end-point contention
- \(n_c\): total data sent
- \(m\): number of messages
- \(B\): bandwidth along path
- portion of latency that can be overlapped
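The cost expression above can be evaluated directly; parameter names follow the bullet list (a hedged sketch with illustrative numbers):

```python
def communication_cost(f, o, l, n_c, m, B, t_c, overlap):
    """Total communication cost = message frequency times the
    unhidden per-message cost: overhead o, network delay l, transfer
    time of the average message (n_c/m)/B, contention t_c, minus the
    portion of latency hidden by overlap with computation."""
    return f * (o + l + (n_c / m) / B + t_c - overlap)

# Illustrative parameters: 10 messages carrying 1000 words total.
print(communication_cost(f=10, o=1, l=5, n_c=1000, m=10,
                         B=100, t_c=2, overlap=3))
```

Fewer, bigger messages lower the `f * o` term at the price of larger per-message transfer time, which is exactly the big-vs-small transfer tension in the summary slide.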
Summary of Performance Tradeoffs
- Load balance [synchronization wait time]
- fine-grain tasks
- random or dynamic assignment
- Inherent communication volume [data access costs]
- coarse-grained tasks
- tension between locality and load balance
- Extra work [processor overheads + data access costs]
- coarse-grained tasks
- simple assignment
- Artifactual communication costs [data access costs]
- big transfers: amortize overhead and latency
- small transfers: reduce contention and occupancy
Efficient naming, synchronization, and communication reduce incentive for creating ill-behaved programs
Lecture Summary
- High performance parallel programs
- models of parallel computation
- ideal: PRAM
- realistic: LogP
- programming as successive refinement
- architecture-independent partitioning
- machine is viewed as a collection of communicating processors
- balance workload, reduce inherent communication, reduce extra work
- tug-of-war even among these issues
- architecture-dependent orchestration
- machine is viewed as an extended memory hierarchy
- artifactual communication costs
- common issues: naming, synchronization, latency, bandwidth
- issues can be addressed/helped by hardware or software
Next Lecture
- Small-scale shared memory machines
- bus-based architectures
- snoopy cache-coherence protocols
- case study: Convex Exemplar
- Tutorial
- programming with threads
Readings
- Culler/Singh/Gupta: Chapter 4
- Almasi/Gottlieb: Chapter 10 (Sections 10.3.1, 10.3.2)
Debugging .NET and Native Applications in the Field
Gad J. Meir
IDAG Ltd.
Bug Exterminator & Process Plumber
EBlog: weblogs.asp.net/gadim
HBlog: blogs.microsoft.co.il/blogs/gadim
Email: gadin@idag.co.il, Site: www.idag.co.il
About Gad J. Meir
• Experience: Since 1975
• Work: www.idag.co.il
• Function: www.productiondebugging.com
• Blog: http://weblogs.asp.net/gadim/default.aspx
• MSF Certified Trainer & Practitioner
• BSc. Computer engineering Technion
• Microsoft Certified MC...
About IDAG Ltd.
• Founded 1983
• Established the first Microsoft certified training center in Israel in 1992.
• Areas of operation
– Troubleshooting systems and procedures
– Production time debugging to root cause of failure
– Projects monitoring and guidance
– Knowledge gaps detection and filling
– Technologies and methodologies deployment
© 2011 IDAG Ltd.
From Bug Extermination to Process Plumbing
The root cause of failure is always Architecture, Process (rarely Technology)
I Have a Question 1/4
• Are you a
– Developer?
– Test/QA?
– IT?
– Management?
– Other?
I Have a Question 2/4
• Main Target Operating System
– XP?
– Vista?
– Windows 7?
– Server 2003?
– Server 2008?
– 2008 R2?
– Other?
I Have a Question 3/4
• Bit
– 32?
– 64?
– Other?
I Have a Question 4/4
• Run Time Environment
– Managed (.NET)?
– Native?
– Other?
Talk Targets
• Explain some of the specific constraints of production environment / Field
• Introduce ways to get debug data from production environment with minimum disruption to the System / Users
• Several scenario demos for Native and Managed code
• Tips
Prerequisites
• Experience in debugging
Agenda
• Theoretical background (Quantum physics)
• What is a production environment
• Dumping bodies (AdPlus)
• Mapping the bodies (Symbols)
• Autopsying and analyzing bodies (WinDbg)
• The problem with the .NET way of handling bodies
• Tools for extracting information from .NET bodies (SOS)
• Things you can’t get from a dead body
• Working with live bodies (Live Debugging)
• IIS (Debug Diag)
• Q & A
Please!
• If you don’t understand what I am talking about, stop me and ASK!!! Don’t wait.
Gad's Guidelines
- Nothing in life is certain
- If you measure it, it will be wrong
- Any action has at least one unexpected reaction
- Debugging application with Visual Studio, on a live production system, with 10,000 online users, might affect your job security
Theoretical Basis
Newton's Laws of Motion: Isaac Newton (1643-1727)
Observer Effect
Murphy's Law
What is a Production Environment
- Must be up and running all the time !!!
- Managed by administrators and help desk
- Under change control
- Managed remotely by management tools
- Different Hardware / Software
- Different OS constraints (Policy, Security, ...)
Development & Production
(Diagram: software moves from Setup through Development, Test, and Staging (with QA approval) into Production, which requires formal IT approval. The development stages live at the SW manufacturer; production runs at the customer site, where you can't install Visual Studio and can't live debug.)
About a Dump
- A snapshot of the process memory at the time you take the dump
- Easy to get in a production environment with minimum intervention in the running system
- In most of the cases includes all the information needed to analyze the problem
Demo 010
• Analyzing a dump from a crashed program
Pathology Basics
• A dead body is as good as a live one
– The only thing you can’t do with a dump is single-step it
– You can duplicate and distribute dead bodies
• Conclusion and strategy # 1
– Take the money and run
6 Easy Steps for beginners
- Get the tools
- Get the Symbols
- Set the environment
- Take a Dump
- Drop the dump into the tool
- !analyze
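The last two steps can be sketched as a minimal WinDbg session (the dump path is a placeholder):

```
$$ open the dump: File > Open Crash Dump, or from the command line:
$$   windbg -z C:\Dumps\MyApp.dmp
$$ then run the automated analysis; -v prints the exception record
$$ and the faulting call stack
!analyze -v
```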
How to Get the Tools
• The Debugging Tools for Windows MSIs are in the SDK
• Download from http://msdn.microsoft.com/windows/hardware and go to Downloads
• Install once (for every hardware architecture)
• Zip and copy to your tools repository
• No need to install for using (important for production)
How to Get the Symbols
• The Symbols MSIs are in the SDK
• Download from http://msdn.microsoft.com/windows/hardware and go to Downloads, then Other hardware and development tools, then Download Windows Symbol Packages
• Install once (for every hardware architecture and OS)
• Put in a public location
• Remember the path
Set the environment
• Open WinDbg
• Set the symbol path
– .sympath to app PDBs
– .sympath+ to the Windows (correct version) PDBs
– .symfix+ to the Microsoft Symbol server
• Save the WinDbg environment as a workspace for later use
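As a sketch, the three symbol-path commands above look like this in the WinDbg command window (all paths are placeholders):

```
$$ your application's PDBs
.sympath C:\MyApp\bin
$$ PDBs matching the exact Windows build of the target machine
.sympath+ C:\Symbols\Windows
$$ append the Microsoft public symbol server, caching downloads locally
.symfix+ C:\SymCache
$$ re-resolve symbols against the new path
.reload
```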
Tools to Take a Dump
- Adplus
- WinDbg .dump
- Process Explorer
- Task Manager (Vista & Above)
- DebugDiag
- UserDump
- ProcDump
- WER
- ...
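For example, AdPlus covers the two common cases; a hedged sketch (the PID, process name, and output directory are placeholders):

```
rem hang mode: attach non-invasively, write a full dump, detach
adplus -hang -p 1234 -o C:\Dumps

rem crash mode: monitor the process and write a dump when it faults
adplus -crash -pn MyApp.exe -o C:\Dumps
```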
Demo 020
- Taking a dump of a hung program using Task Manager
About the Different Types of Dumps
- Application mini dump
  - More or less just the call stack
- Application full dump
  - Everything
- (Kernel dumps: mini, kernel, and full, for BSODs)
Demo 030
• Taking a dump of a hung program using WinDbg
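When WinDbg is already attached, taking the dump is one command; a sketch with a placeholder path:

```
$$ /ma = full user-mode dump: memory, handles, unloaded module info
.dump /ma C:\Dumps\MyApp_hang.dmp
$$ detach without killing the target
.detach
```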
About .NET (CLR)
• CLR is a win32 program!
– A COM component
• CLR is the execution engine for IL code
• With Win32 tools, only the CLR engine is visible
– Running IL code is ignored!
• SOS debugger extension is required
– ‘Translates’ from Managed to Native
Minimum .NET Internals
• Stack Machine (Reverse Polish Notation)
• Basic data unit is an Object
• The IL code is JITed into Native Code
– On a function by function basis
– On the first encounter
Preparing the .NET Executable
(Diagram: source code in any language goes through a compiler into a PE-format executable containing IL code and a manifest, with symbols emitted to a PDB.)
Running the Code in the CLR
- Class loading
  - Methods' V-table points to the JITer
  - Instantiating the class: object allocated in the GC heap
- Calling a method
  - First time only: the method is JITed (2-phase optimizer)
  - Native code execution
Problems with .NET
• No PDBs for JITed code
• JITed code is ‘nowhere’
• CLR handles all exceptions
• Hara-kiri effect when CLR can’t handle an exception
– By default, the CLR kills everyone involved, cleans all the evidence from the crime scene and commits suicide, without leaving a comprehensible note
Demo 040
• .NET Hara-kiri effect
– Native Crash
– Managed Crash
### SOS !Help
<table>
<thead>
<tr>
<th><strong>Object Inspection</strong></th>
<th><strong>Examining code and stacks</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>DumpObj (do)</td>
<td>Threads</td>
</tr>
<tr>
<td>DumpArray (da)</td>
<td>CLRStack</td>
</tr>
<tr>
<td>DumpStackObjects (dso)</td>
<td>IP2MD</td>
</tr>
<tr>
<td>DumpHeap</td>
<td>U</td>
</tr>
<tr>
<td>DumpVC</td>
<td>DumpStack</td>
</tr>
<tr>
<td>GCRoot</td>
<td>EEStack</td>
</tr>
<tr>
<td>ObjSize</td>
<td>GCInfo</td>
</tr>
<tr>
<td>FinalizeQueue</td>
<td>EHInfo</td>
</tr>
<tr>
<td>PrintException (pe)</td>
<td>COMState</td>
</tr>
<tr>
<td>TraverseHeap</td>
<td>BPMD</td>
</tr>
</tbody>
</table>
### Examining CLR data structures
- DumpDomain
- EEHeap
- Name2EE
- SyncBlk
- DumpMT
- DumpClass
- DumpMD
- Token2EE
- EEVersion
- DumpModule
- ThreadPool
- DumpAssembly
- DumpMethodSig
- DumpRuntimeTypes
- DumpSig
- RCWCleanupList
- DumpIL
### Diagnostic Utilities
- VerifyHeap
- DumpLog
- FindAppDomain
- SaveModule
- GCHandle
- GCHandleLeaks
- VMMAP
- VMStat
- ProcInfo
- StopOnException (soe)
- MinidumpMode
### Other
- FAQ
Demo 050
• WinDbg Native and Managed view of .NET program
– Without SOS
– With SOS
Demo of a .NET Crash 060
• Call Stack
– !clrstack
• Objects and Values
– !do
• Object Stack
– !dso
Demo of a Deadlock Scenario 070
- !syncblk
Demo of Finalization Starvation 080
• !finalizequeue
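A typical SOS session for the three demos above might look like this (the object address is a placeholder; `.loadby sos clr` assumes .NET 4, use `.loadby sos mscorwks` for 2.0):

```
$$ load SOS from the same CLR that produced the dump
.loadby sos clr
$$ managed call stack of the current thread
!clrstack
$$ objects referenced from this thread's stack
!dso
$$ dump one object found by !dso (address is a placeholder)
!do 0x02a41b34
$$ deadlocks: sync blocks and their owning threads
!syncblk
$$ finalization starvation: objects queued for the finalizer
!finalizequeue
```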
Summary
• In the field you can’t use the same techniques you use in development.
• Extracting dumps is one of the ways to gather information in the field without disturbing production.
• Instrumentation is key to help you gather information in the field.
If you want to learn more
- IDAG Ltd. has a 3-day practical workshop on the subject of “Production Time Debugging”.
- The workshop contains practical labs based on real-life scenarios.
- The workshop includes all the methodology and practical considerations to properly debug applications in the field.
Resources
- [http://msdn.microsoft.com/windows/hardware](http://msdn.microsoft.com/windows/hardware)
- [winqual.microsoft.com](http://winqual.microsoft.com)
- “Debugging tools for Windows” help file
- “Debugging tools for Windows” SDK
- [Debugging MS .NET 2.0 Applications](http://msdn.microsoft.com/library/aa719997), Ch. 6
- [MSDN patterns & practices Debugging](http://msdn.microsoft.com/library/aa719997) (archived)
- !SOS.help & Q&A
- [http://support.microsoft.com/kb/q286350/](http://support.microsoft.com/kb/q286350/)
- **Advanced Windows Debugging**
- ISBN 0-321-37446-0, Addison-Wesley, Mario Hewardt & Daniel Pravat
Some Philosophy
• IT managers appreciate professionalism
– Be prepared, know your tools and their footprints
– Learn enough about IT to show them you are not the enemy
– Listen, Listen, Listen
• Listen to the customer!
– You developed it, but they use it every day
– Write everything they complain about and put it straight into the product wish list
Questions?
Thank You!
Copyright © 2011 by IDAG Ltd. and Gad J. Meir. All rights reserved. (Some parts quote Microsoft public materials). This presentation, its workshops, labs and related materials may not be distributed or used in any form or manner without prior written permission by the author(s).
Preparing Application for Production Environment
Gad J. Meir, IDAG Ltd.
The root cause of failure is always Architecture, Process (rarely Technology)
Talk Targets
• Explain some of the specific constraints of production environment / Field
• Introduce ways to reduce the operating costs of an application in a production environment with minimum overhead to the development team
• Several Demos
• Tips
Prerequisites
• None
Agenda
• The real life cycle of an application and the TCO of a software system
• Your customer(s)
• Production environment manageability and downtime costs
• Ways to make the application production environment friendly
– Event logs
– Performance counters, baselining and trends
– Event Tracing for Windows (ETW)
– Windows Management Instrumentation (WMI)
– Windows Error Reporting (WER) and being ‘crash friendly’
– Production debugging in the field: usage, features and specifications
– Configuring the operating system for failure
– PowerShell
– ...
Please!
• If you don’t understand what I am talking about, stop me and ASK!!! Don’t wait.
Software Project Life Time
- Envision
- Design
- Develop
- Stabilize
- Deploy
- Sign off
You develop it for 2 years and your customers suffer from it for another 7; what you have done here, in development, has to help your customers there, in operation.
The Full Cost of an Application
The Customers
• The customer is the one that pays
• IT
• Help Desk
• Field Engineer and Field Support
• All levels of customer support
• QA & Testing
• Users
• Development Team
• Business decision makers
• Sales representative
What is a Production Environment
- Must be up and running all the time !!!
- Managed by administrators and help desk
- Under change control
- Managed remotely by management tools
- Different Hardware / Software
- Different OS constraints (Policy, Security, ...)
How many screens are there in a 100-server computer center?
- What is the size of a 100-server computer center?
- About KVM
- Why MsgBox is not a very useful tool to notify the operator about application problems
- Does a service have a Desktop?
- Who’s gonna click on the OK button?
System management tools
• Microsoft Operations Manager (MOM) & Microsoft SCOM, Microsoft Opalis
• HP Openview Operations and BAC SiteScope
• Computer Associates CA Unicenter
• IBM Tivoli
• BMC ProactiveNet Performance Management
What is a Production Environment
• **Must be up and running all the time !!!**
• Managed by administrators and help desk
• Under change control
• Managed remotely by management tools
• Different Hardware / Software
• Different OS constrains (Policy, Security, ...)
Your application is going to crash!!!
• At the beginning of the envisioning phase of an application, you already know it’s going to crash in production or at a customer’s site.
• It’s not a question of IF but of WHEN.
Cost of Fixing a Solution
(Chart: the relative cost of a fix rises steeply across project phases, from envisioning through planning, developing, stabilizing and deploying to operating.)
How much does a crash cost?
- **Direct costs**
- $\alpha$ Clients can't use the system for $\beta$ hours
- $\gamma$ IT personnel work for $\delta$ hours to fix the problem ($\delta \gg \beta$)
- **Indirect costs**
- Degradation in clients and IT satisfaction (reputation, attitude, trust)
- SLA Penalties
- Other expenses
Finding and fixing a bug faster
• With proper instrumentation, IT can find program abnormal behavior faster and reduce down time (responsibility of the development team)
• With proper production time data collection before and at time of abnormal behavior developers can find the bug quicker (responsibility of IT & operations)
• Reduces TCO
Make the application production environment friendly
- Event Logging
- ETW - Event Tracing for Windows
- Performance Counters
- WMI - Windows management instrumentation
- WER - Windows error reporting
- MMC - Microsoft Management Console
- PowerShell
- System Management friendly
- Crash and Production Time Debugging friendly
Event Logging Demo
Event Logging Demo Debrief
• The infrastructure is built in the operating system
• Fully integrated with most of the automatic management tools.
• Simple API interface
• System event log for administrators, and a private event log if the need arises.
• The design of “what to log where” is the most time-consuming task
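For the native side, the Win32 API really is simple; a minimal sketch (the source name is a placeholder, and a registered message file is needed for clean rendering in the Event Viewer):

```cpp
#include <windows.h>

// Write one informational entry to the Application event log.
void LogStartupEvent()
{
    // Source name is an example; normally registered under
    // HKLM\...\EventLog\Application at install time.
    HANDLE hLog = RegisterEventSourceW(NULL, L"MyService");
    if (hLog == NULL)
        return;

    LPCWSTR strings[] = { L"Service started" };
    ReportEventW(hLog,
                 EVENTLOG_INFORMATION_TYPE,
                 0,        // category
                 0,        // event ID (matches the message file)
                 NULL,     // no user SID
                 1,        // one insertion string
                 0,        // no raw data
                 strings,
                 NULL);

    DeregisterEventSource(hLog);
}
```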
Trace Framework Requirements
• Works only when required
• Start & stop manually and/or conditionally
• Dynamic configuration of what to trace
• Versatile output logging options
• Time stamps and management data
• Suitable for production environments
• Low footprint
• Minimum performance degradation
ETW Demo
(Diagram: the ETW framework sits between controllers, providers and loggers. Controllers (the Performance Logs & Alerts MMC snap-in, WMI scripts, the logman command, or a custom controller) start and stop sessions; events flow from providers (the kernel provider, third-party providers, your own provider) into disk-file, in-memory, or custom loggers.)
ETW Demo Debrief
- The infrastructure is built into the operating system (since Windows 2000!).
- Just 3 API calls
- Zero development effort, huge benefits
- The design of the “printf’s” is the most time-consuming task
- Can be used for error tracing and performance measurements
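The "just 3 API calls" can be sketched with the user-mode ETW provider API (the GUID is a placeholder; a controller such as logman must enable a session for the events to land anywhere):

```cpp
#include <windows.h>
#include <evntprov.h>

// Placeholder provider GUID; generate your own with uuidgen.
static const GUID MyProviderGuid =
    { 0x11223344, 0x5566, 0x7788,
      { 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff, 0x00 } };

void TraceSomething()
{
    REGHANDLE h = 0;
    // Call 1: register the provider with ETW.
    if (EventRegister(&MyProviderGuid, NULL, NULL, &h) != ERROR_SUCCESS)
        return;

    // Call 2: emit a simple string event (level 4 = informational).
    EventWriteString(h, 4, 0, L"entering TraceSomething");

    // Call 3: unregister on shutdown.
    EventUnregister(h);
}
```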
Performance Counters Demo
Performance Counters Demo Debrief
- The infrastructure is built into the operating system (since Windows NT!).
- Simple API interface
- Zero development effort, huge benefits
- Capacity planning
- The design of the “heartbeat and test points” is the most time-consuming task
WMI Demo
(Diagram: a management application talks through COM interfaces to the CIM Object Manager (CIMOM, WinMgmt), which consults the CIM repository and dispatches to providers hosted in DLLs or EXEs.)
WMI Demo Debrief
• The infrastructure is built in the operating system
• Full integration with all the automatic management tools
• Scripting interface as an added value
• Requires an understanding of DMTF, WBEM, CIM and MOF.
WER Demo
WIN32Err01.exe has encountered a problem and needs to close. We are sorry for the inconvenience.
If you were in the middle of something, the information you were working on might be lost.
Please tell Microsoft about this problem.
We have created an error report that you can send to us. We will treat this report as confidential and anonymous.
To see what data this error report contains, click here.
Send Error Report Don't Send
WER Demo Debrief
• The infrastructure is built in the operating system (since Windows NT 3.11!).
• Gold mine for developers, call stack at the moment of crash
• Just IT configuration and sending the collected data
• Can be used locally and without user intervention
MMC Demo
MMC Demo Debrief
• The infrastructure is built into the operating system (since Windows 2000).
• The standard IT tool
• Set the management interface between your application and the IT
PowerShell Demo
PowerShell Demo Debrief
• Every product from Microsoft comes with PowerShell applets
• Easy to incorporate
Management and crash friendly
• Scripts / MMC / PowerShell applets / Troubleshooters
• Application specific monitoring and alerting utilities for management and control systems
• Application managed startup / shutdown
- Application current-state data collection
• Application crash data setup and collection
• Log interpreting and analyzing utilities
Summary
• Proper instrumentation saves a lot of time and money.
• Requires cooperation between IT and development.
• Minimum overhead to developers and IT, huge benefits to the whole system
Do You Have an IT Expert on Your Development Team?
- Program Management: Delivering the solution within project constraints
- Building to specification
- Satisfied customers
- Enhanced user effectiveness
- Smooth deployment and ongoing operations
- Development
- Test
- User Experience
From the MSF team model
If you want to learn more
• IDAG Ltd. has a 3-day practical workshop on the subject of “preparing an application for production”.
• The workshop contains practical labs with all the building-block code elements.
• The workshop includes all the methodology and practical considerations to make an application production environment friendly.
Resources
• [www.productiondebugging.com](http://www.productiondebugging.com)
• [technet.microsoft.com](http://technet.microsoft.com)
Questions?
Gad J. Meir
Framework for Network Co-Simulation (FNCS) Tutorial
at the 3rd Workshop on Next-Generation Analytics for the Future Power Grid
JASON FULLER, JEFF DAILY LAURENTIU MARINOVICI, ANDREW FISHER, KHUSHBU AGARWAL
Pacific Northwest National Laboratory
July 16, 2014
A good scientist is a person with original ideas.
A good engineer is a person who makes a design that works with as few original ideas as possible.
– Freeman Dyson
What is the need?
- Smart grid brings information and communication technologies together with power systems:
- Sensors and equipment gather information
- Information is processed locally or centrally
- Decisions are made based on this information
- But before deploying new technologies, it is important to understand:
- What is the performance of a given technology?
- How will new technologies interact with existing technologies?
- Will assets at the distribution level negatively impact controls at the transmission level?
- What are my communication system requirements to support an application?
- Can applications share network bandwidth?
Traditionally, power grid and communication network domains have not resided within a single simulator with relatively equal consideration to the complexity of each.
A number of very powerful, domain-specific tools exist:
- Transmission (PSLF, Powerworld, DSATools, PST, etc.)
- Distribution (WindMil, SynerGEE, CYMDIST, OpenDSS, GridLAB-D, etc.)
- Telecommunications (OPNET, NetSim, ns-2, ns-3, OMNet++, etc.)
We do not need to recreate these tools
- Re-use existing simulators
- Libraries of models already exist
- Most are well validated
- Integrate and enjoy!!
Scalability and Co-Simulation
- Co-simulation allows for expansion of capabilities with minimal investment
- Allows for re-use of existing software AND models
- Enables multi-scale modeling and simulation required for understanding TC2
- FNCS is a framework for integrating simulators across multiple domains
- Framework for Network Co-Simulation (FNCS – pronounced “Fee-nix”)
- Developed for HPC applications across multiple platforms
Intended uses?
- **Distribution and Communications**
- Sensor data and control (VVO, inverters, reconfiguration, etc.)
- Demand response and retail markets
- **Transmission and Communications**
- Wide Area Control (and Protection)
- Phasor Measurement Unit data collection and control
- Communication pathways and redundancy
- **Transmission, Distribution and Communications**
- Trade-offs of distributed versus centralized controls
- Hierarchical controls / reconfiguration during communication loss
- **Transmission, Distribution, Markets and Communications**
- Transactive energy/ancillary markets (with distributed resources)
- Integration of wholesale and retail markets
- **Visualization**
- With connection to GridOPTICS
- Generate simulated data sets for experimentation
FNCS Programming Guide
JEFF DAILY, JASON FULLER
LAURENTIU MARINOVICI, ANDREW FISHER, KHUSHBU AGARWAL
Pacific Northwest National Laboratory
July 16, 2014
FNCS Programming Guide Overview
- FNCS design goals.
- FNCS architecture overview.
- Overview on how to integrate simulators.
- FNCS assumptions.
- Programming with FNCS
- Time management.
- Object communication interface.
- Synchronization algorithms in detail.
Challenges in power grid and communication network co-simulation
- Time synchronization.
- Differences in time scales.
- Messages between simulators should be delivered without incurring delays.
- Re-use of models.
- Integrating both transmission and distribution level simulators.
This is our goal – and we are nearly there ➔
FNCS Design Goals
- **Re-use** existing simulators as much as possible.
- Provide the environment for **rapid co-simulation development**.
- Support co-simulations for **multiple platforms**: single node, multiple nodes, clusters, cloud…
Programmers need to use the components for:
- Time management
- The communication interface
All other components are hidden to ease the programming.
FNCS is programmed in C++, and interfaces for C, Java, and Fortran are provided with the FNCS distribution.
Simulator core (the component that decides the next time step of the simulation) needs to be modified to use the time management component.
- **FNCS requires** control over the next time step of the simulator.
- **For simulators with large time steps (e.g., 5 mins)** or for **discrete event simulators**, FNCS can modify the next time step of the simulator.
Components that will communicate with other simulators need to be modified to use the communication interface.
- Components need to be assigned a unique name.
- Users need to handle the de-/serialization, or our serialization code generator can be used.
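A minimal sketch of the manual de-/serialization path for a message payload (the struct, field names, and helper functions are illustrative assumptions, not part of the FNCS API; FNCS also ships a serialization code generator for this):

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Hypothetical payload a component might exchange (names are ours).
struct LoadUpdate {
    double kw;
    double kvar;
};

// Pack the trivially-copyable struct into a byte buffer for sending.
std::string serialize(const LoadUpdate& u) {
    std::string buf(sizeof(LoadUpdate), '\0');
    std::memcpy(&buf[0], &u, sizeof(LoadUpdate));
    return buf;
}

// Reconstruct the struct from a received byte buffer.
LoadUpdate deserialize(const std::string& buf) {
    LoadUpdate u;
    std::memcpy(&u, buf.data(), sizeof(LoadUpdate));
    return u;
}
```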
The public interfaces of FNCS:
- **Integrator** – a class that provides time management functions, framework initialization, and component registration. All methods are static, so users do not have to deal with object creation and deletion.
- **ObjectCommInterface** – provides methods for sending and receiving messages. Instances created and managed by Integrator.
Programming with FNCS - Object Hierarchy (Abstract)
Implementation differs according to the type of simulator and the algorithm
Implementation differs according to underlying comm. lib (zmq, mpi,...)
Implementation differs according to the type of simulator
Programming with FNCS - Object Hierarchy for Time Stepped Simulator
Simulator Core
Integrator
OptimisticTickSyncAlgorithm
GracePeriodCommManager
ZmqNetworkInterface
Simulator Component
ObjectCommInterface
We can switch to a different algorithm with just one function call.
Programming with FNCS - Object Hierarchy for Network Simulator
Simulator Core
Integrator
OptimisticCommSyncAlgorithm
Simulator Component
ObjectCommInterface
CommunicationCommManager
ZmqNetworkInterface
We can switch to a different algorithm with just one function call
Before the simulator starts a timestep, FNCS needs to be initialized.
Factory methods are called to initialize the object hierarchy according to the type of simulator and user requirements.
Properties about the simulator and co-simulation can be specified in a JSON file or in a function call.
```cpp
Integrator::InitIntegrator(char *jsonfile, TIME initialTime)
Integrator::InitIntegrator<syncAlgo>(timemetric simmetric, TIME initialTime, TIME packetlostperiod, TIME onetimestep,...)
Integrator::registerTimeCallback(Callback<TIME> *given)
```
FNCS requires the following from the simulator:
- Time scale,
- Initial time,
- One time step,
- A callback function that returns the current time of the simulator.
FNCS' internal time is in nanoseconds.
```c
initIntegratorGracePeriod(
MILLISECONDS, 2300000000,
currentTime, 10);
setregistercallback(getCurrentTime);
TIME getCurrentTime()
{
return currentTime;
}
```
Initialize the framework for a simulator with 10 millisecond time steps.
Programming with FNCS: Initialization
- From users FNCS requires:
- Sync algorithm to use during co-simulation.
- Parameters of the sync algorithm.
- Init function with the conservative synchronization algorithm:
```
initIntegratorGracePeriod(
MILLISECONDS, 2300000000, currentTime, 10);
```
- Init function with speculative algorithm with increasing speculation strategy and 5min initial speculation:
```
initIntegratorOptimisticIncreasing(
MILLISECONDS, 2300000000, currentTime, 300000);
```
Programming with FNCS: Initialization
- **Init ZMQ network interface for network simulator**
```cpp
zmqNetworkInterface *interface=new zmqNetworkInterface(true);
increasingSpeculationTimeStrategy *st=new increasingSpeculationTimeStrategy(NANOSECONDS,300000000000);
Integrator::initIntegratorOptimisticCommunicate(interface,NANOSECONDS,5100000000,0,300000000000,st);
sim_comm::CallBack<
uint64_t,sim_comm::empty,sim_comm::empty,sim_comm::empty>
*timerCallback=sim_comm::CreateCallback(…);
Integrator::setTimeCallBack(timerCallback);
```
- **Init FNCS for a network simulator with NANOSECOND timescale and speculative sync algo.**
- Specify a custom strategy to use for the network simulator.
- **Use increasing speculation strategy.**
Users can specify the network interface they want to use, such as ZMQ, MPI (experimental), or extend FNCS with another network interface.
- **Register callback to the method/function that returns the current time of the simulator.**
FNCS provides two methods for time management.
- `timeStepStart()` – Called at the beginning of a time step.
- `getNextTime()` – Called at the end of a time step.
The implementations of these methods are provided in concrete synchronization algorithm classes.
```
Integrator::timeStepStart(currentTime) -> wait for other simulators iff synchronization is necessary at time step currentTime.
Integrator::getNextTime(currentTime, nextTime) -> get the next granted time for the simulator.
```
Main loop of a time-stepped simulator:

```cpp
currentTime = initialTime;
do {
    Integrator::timeStepStart(currentTime);
    processTimeStep(currentTime);
    nextTime = getNextTimeStep();
    nextTime = Integrator::getNextTime(currentTime, nextTime);
    currentTime = nextTime;
} while (currentTime < endTime);
```
Main loop of a discrete-event simulator:

```cpp
while (!empty(eventQ)) {
    toProcess = getNextEvent(eventQ);
    currentTime = toProcess.time;
    processEvent(toProcess);
    Integrator::timeStepStart(currentTime);
    if (!empty(eventQ))
        nextTime = getNextEvent(eventQ).time;
    else
        nextTime = Infinity;
    nextTime = Integrator::getNextTime(currentTime, nextTime);
}
```
Message exchange is designed to deliver messages without incurring delays (due to synchronization).
Message delivery is realized during synchronization.
- Messages are buffered until the synchronization is completed.
- The call order of the time management functions ensures messages are delivered to the network simulator on time!
In the `getNextTime()` method, FNCS calculates a *Lower Bound on Time Step (LBTS)*.
LBTS is a time step up to which we are sure none of the simulators will exchange messages.
- Calculation of LBTS is necessary for consistent delivery of inter-simulator messages.
Co-simulations can consist of simulators with different time scales.
- LBTS can be lower than the next time step of the simulator.
- Simulators with coarser time scales must wait for simulators with finer time scales.
- FNCS provides 4 algorithms for time management.
Time Management in FNCS: Conservative Algorithm
- LBTS is always set to the smallest next time step of simulators.
1. If there are in-transit messages, nextTime_i = Δt, where Δt is the time-step of the ith simulator.
2. LBTS = Reduce_min(nextTime_i)
3. If LBTS < nextTime_i, busy wait.
- The network simulator does not participate in the calculation of LBTS.
- The network simulator is always synchronized with the smallest next time step of the other simulators, ensuring on-time delivery.
Cons:
- Performance: in the worst case, it needs to synchronize at every time step of Sim2.
- Too many message exchanges.
- Suitable for short-time co-simulations on one computer.
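The three-step rule above can be sketched as follows (an illustrative sketch, not the FNCS implementation; function and parameter names are ours):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

using TIME = uint64_t;

// Step 1: pending in-transit messages force the smallest advance, Δt.
// Step 2: otherwise, LBTS is the Reduce_min over all simulators' next
// times, so no simulator runs past a point where another simulator
// could still send it a message.
TIME conservativeLBTS(const std::vector<TIME>& nextTimes,
                      bool inTransitMessages, TIME deltaT) {
    if (inTransitMessages) {
        return deltaT;
    }
    return *std::min_element(nextTimes.begin(), nextTimes.end());
}
// Step 3 (not shown): a simulator whose nextTime_i exceeds the LBTS
// busy-waits until the LBTS catches up to it.
```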
Time Management in FNCS: Sleeping Conservative Algorithm
- LBTS calculation is same as the conservative algorithm.
- When LBTS < nextTime_i, the simulator sleeps instead of busy waiting.
- Simulator is woken up when LBTS == nextTime_i or when it receives a message while sleeping.
- Reduces messages required for synchronization, which in turn increases performance.
Cons:
- Performance: in the worst case, it needs to synchronize at every time step of Sim2.
- Suitable for short-time co-simulations on multiple computers.
Time Management in FNCS: Speculative Algorithms
- A simulator can potentially send a message at \( \text{nextTime} \).
- Conservative algorithms: Safe time synchronization choice, synchronize at \( \text{nextTime} \). (Synchronization is costly!)
- Observation: Simulators do not need to send a message at every time step! How can we avoid synchronization at every time step without delaying message delivery?
- Speculative time synchronization algorithm:
- Speculate that the simulators will not send messages until \( \text{specTime} >> \text{LBTS} \)
- Fork, child processes run independently until \( \text{specTime} \), parents run the conservative algorithm.
- Kill the children if they try to send a message before \( \text{specTime} \).
- Kill the parents if children do not send a message until \( \text{specTime} \).
- Fork is not costly -> uses copy on write!
- Utilize CPU and available memory to increase the performance.
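The fork-based mechanism can be sketched as below (a minimal POSIX illustration; the function name and exit-code protocol are ours, not the FNCS API):

```cpp
#include <cassert>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Minimal sketch of fork-based speculation: the child runs ahead to
// specTime while the parent stays conservative. If the speculation
// holds (no message had to be sent early), the parent adopts the
// child's progress; otherwise the child is discarded and the parent
// re-runs the steps conservatively.
bool speculate(bool speculationHolds) {
    pid_t child = fork();
    if (child == 0) {
        // Child: runs ahead; exit code reports whether it had to send
        // a message before specTime (1) or not (0).
        _exit(speculationHolds ? 0 : 1);
    }
    int status = 0;
    waitpid(child, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```

fork() is cheap here because parent and child share pages copy-on-write until either of them writes.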
Time Management in FNCS: Speculative Algorithm
The algorithm for calculating the LBTS:
1. Calculate the number of in-transit messages (each simulator sends the number of messages sent and received).
   1. If there are in-transit messages, LBTS is $\Delta t$, where $\Delta t$ is the time-step of the simulator with the highest time scale.
   2. Else, LBTS is $\text{currentTime} + \Delta t_{\text{next}}$, where $\Delta t_{\text{next}}$ is the minimum next time.
2. If $\text{currentTime} + \text{LBTS} < \text{specTime}$: Fork(), children use $\text{specTime}$ as LBTS.
3. If $\text{currentTime} + \text{LBTS} < \text{myNextTime}$, enter a busy-wait loop.
Cons:
- Might not work if the simulators send messages frequently.
- Threaded simulators need to be prepared for fork.
- Suitable for long co-simulations where simulators do not exchange messages frequently.
Time Management in FNCS: Speculative Re-compute Algorithm
When a child process is launched, it needs to register with the FNCS broker.
- This is a costly operation!
- Reduces performance when simulators exchange messages frequently.
Speculative re-compute strategy is designed to eliminate registrations.
- Child processes are used to discover the time steps of message exchanges.
Speculation:
- \( \text{specTime} \) is always set to infinity.
- Child processes execute until one of them sends a message.
- The time of the message is sent to the parent, which in turn synchronizes at this time step.
Pros:
- Does not require additional models for time synchronization.
- Improves the performance for co-simulations with simulators exchanging messages frequently.
Cons:
- Might not work if the simulators send messages frequently.
- Performance improvement not as good as speculative algorithm.
- Suitable for co-simulations where simulators exchange a lot of messages.
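The discovery step can be sketched as below (assuming a POSIX pipe between parent and child; the function name and reporting protocol are ours, not the FNCS implementation):

```cpp
#include <cassert>
#include <cstdint>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

using TIME = uint64_t;

// Hypothetical sketch: the child runs ahead speculatively and, when it
// first needs to send a message, reports that time step to the parent
// through a pipe; the parent then synchronizes at exactly that step
// instead of at every step.
TIME discoverSendTime(TIME childSendsAt) {
    int fds[2];
    assert(pipe(fds) == 0);
    pid_t child = fork();
    if (child == 0) {
        // Child: executes until it would send a message, then reports
        // that time step and exits.
        ssize_t w = write(fds[1], &childSendsAt, sizeof(TIME));
        _exit(w == sizeof(TIME) ? 0 : 1);
    }
    TIME reported = 0;
    ssize_t r = read(fds[0], &reported, sizeof(TIME));
    (void)r;
    waitpid(child, nullptr, 0);
    close(fds[0]);
    close(fds[1]);
    return reported; // parent synchronizes at this time step
}
```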
Key (figure legend): current time in the simulator; next time granted by the conservative algorithm; time step of message exchange.
Time Management in FNCS: Sending/Receiving Messages.
- **Register a component with FNCS**: The components of a simulator that need to communicate with other simulators must register with FNCS.
- **Send a new message**: Messages are addressed by component name and sent through the communication interface.
- **Get buffered messages**: A component can choose to be notified via a registered callback when it receives a message, or messages are stored in its inbox until it reads them.
```cpp
ObjectCommInterface *interface = Integrator::getCommInterface(<name_of_the_component>);
...
Message *mesg = new Message(<from>,<to>,<timestep>,<data>,<direct_or_network>);
interface->send(mesg);
...
while (interface->hasMoreMessage()) {
    Message *rm = interface->getNextMessage();
    // process rm.
}
```
Programming with FNCS
- **Extension points of FNCS:**
- **BufferStrategy:** defines how messages are buffered.
- **SynchronizationAlgorithm:** FNCS provides 4 synchronization algorithms, but users can extend the framework with an algorithm suitable for their needs.
- **SpeculationStrategy:** FNCS provides synchronization based-on speculative execution to speed up co-simulations. Users can extend FNCS with strategies describing when to speculate.
- **NetworkInterface:** FNCS provides a well-defined interface for co-simulation inter-process communication. Users can extend this interface to utilize another inter-process communication library. Currently, ZMQ is supported, with experimental support for MPI.
Demand Response/Real-Time Pricing Example
JASON FULLER, JEFF DAILY
LAURENTIU MARINOVICI, ANDREW FISHER, KHUSHBU AGARWAL
Pacific Northwest National Laboratory
July 16, 2014
Demand Response/Real-Time Pricing Example
- RTP, double-auction, retail market
- Market accepts demand and supply bids
- Clears on five minute intervals
- Designed to also manage capacity constraints at substation
- Residential energy management system
- Acts as a distributed agent to offer bids & respond to clearing prices
- Consumer sets a preference for “savings” versus “comfort”
- Currently being tested as part of the AEP gridSMART® ARRA Demonstration in Columbus, OH
Basic Real-Time Price / Double Auction Market – *Typical Unconstrained Conditions*
- Market clears every 5-mins to ~match AC load cycle
- Cleared load varies with demand curve
- Clearing price is constant at base retail price
**Unresponsive Loads**
- Base retail price based on PJM 5-min real-time market
- Varies every 5-min
**Responsive Loads**
- Demand Curve: sorted (P, Q) bids from RTP-DA customers
- Feeder Supply Curve
[Figure: demand curve and feeder supply curve, with feeder capacity, $Q_{clear}$, and $P_{clear}$ marked.]
Basic Real-Time Price / Double Auction Market – **Typical Constrained Conditions**
- **Unresponsive Loads**
- Base retail price based on PJM 5-min real-time market
- Feeder Supply Curve
- **Responsive Loads**
- Demand Curve: sorted \((P, Q)\) bids from RTP-DA customers
- Clearing price varies to keep load at capacity
- Market clears every 5-mins
- Cleared load is constant at feeder capacity
[Figure: constrained clearing, with feeder capacity, $P_{\text{clear}}$, $P_{\text{base}}$, and $Q_{\text{clear}}$ marked; axes $P$, price ($/MWh) vs. $Q$, load (MW).]
- Decreased wholesale energy costs
- Peak demand limited to feeder capacity
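The clearing rule described above can be sketched as follows (an illustrative simplification; function and parameter names are ours, not GridLAB-D's market implementation). Demand bids are (P, Q) pairs sorted by descending price; if total load stays under the feeder capacity, the market clears at the base retail price, otherwise the marginal bid at capacity sets the clearing price:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Illustrative double-auction clearing: bids sorted descending by price.
double clearMarket(const std::vector<std::pair<double, double>>& bids, // (P, Q)
                   double unresponsiveLoad,
                   double feederCapacity,
                   double basePrice) {
    double load = unresponsiveLoad;
    for (const auto& bid : bids) {
        if (bid.first < basePrice)
            break;                      // bid below base price: not served
        if (load + bid.second > feederCapacity)
            return bid.first;           // constrained: marginal bid sets price
        load += bid.second;
    }
    return basePrice;                   // unconstrained: clears at base price
}
```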
But what happens when including communication latency?
- IEEE-13 node model with 900 residential loads and controllers modeled in GridLAB-D
- Model was modified to work within FNCS framework
- An ns-3 communication network model was created (radial WiFi)
- EXTREME communication delays (for WiFi) were considered
But what happens when including communication latency?
- Excessive communication delays during critical period caused an “accounting error” in auction (this was considered in Demo deployment)
As simulated in GridLAB-D and ns-3
A few comments
- These communication concerns were dealt with during the design of the demonstration system
- However, it was mostly engineering judgment and the timescale of control is such that latency is not a major factor
- A co-simulation environment can help determine the most economic means of deploying smart grid technologies, specifically in terms of communication requirements for successful system operations
- How much communication infrastructure do I need?
- What effect will latency have on my monitoring / control scheme?
- This will become more important as
- Sampling / control action periods are decreased (real-time control)
- Multiple applications are layered over the same communication systems
Now let’s add a transmission element…
- Want to be able to integrate >2 simulators
- ns-3™
- GridLAB-D™
- transmission solver
- Example: Wide Area Monitoring, Protection, and Control (WAMPAC)
- Want to limit the power flowing through branch 3-4
- Use a price “signal” broadcasted to a distribution circuit to limit demand
What data is being exchanged?
- GridLAB-D is posting current load to a transmission substation
- The transmission solver is performing power flow calculations with updated load information
- The control object is calculating the change in price needed
- A new price is being broadcasted to distributed devices in GridLAB-D via ns-3
Relatively simple control design
Simple PI control design
- Only used to show how the software works (does not deal with revenue, “price as a signal”, regulatory issues, etc.)
Demand response consumers are using the same mechanism as previous use case
“Price” is now derived as a function of the system constraints
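The PI price adjustment can be sketched as below (gains and names are illustrative assumptions, not the tutorial's actual controller): the controller nudges the broadcast price up when flow on the monitored branch exceeds its limit, and back down otherwise.

```cpp
#include <cassert>

// Minimal sketch of a PI price controller (names and gains are ours).
struct PiPriceController {
    double kp, ki;
    double integral = 0.0;

    // flow and limit in MW, dt in market intervals; returns the price
    // adjustment in $/MWh to broadcast.
    double update(double flow, double limit, double dt) {
        double error = flow - limit;       // positive when over the limit
        integral += error * dt;
        return kp * error + ki * integral; // proportional + integral terms
    }
};
```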
Some results
- PI controller takes some time to learn the necessary price adjustment (not well tuned)
- In actual application, we would take some time to tune the parameters
- But, we can see the response within GridLAB-D
- Reduces the demand from hour 40 through 46
- Price signal is being produced in the transmission solver (this could be replaced with Matpower and LMPs)
- Price is broadcasted via ns-3 (we could look at effects of communication delays)
Closing Remarks
JEFF DAILY, JASON FULLER, LAURENTIU MARINOVICI, ANDREW FISHER, KHUSHBU AGARWAL
Pacific Northwest National Laboratory
July 16, 2014
Closing remarks
- Simple example(s) to demonstrate the simulation environment
- Any tool could be replaced with another of “better value”
- Complexity of design is up to user
- We will continue developing interconnections for further experimentation and additional use cases
- Exploring interface with GridPACK solvers
- Expanding MATPOWER connection
- Expanding GridLAB-D connection
- Finishing the interface for EnergyPlus
- Adding an interface to GridOPTICS (for data management and visualization)
- Suggestions?
Sources soon to be on GridOPTICS github site.
- [https://github.com/GridOPTICS/FNCS](https://github.com/GridOPTICS/FNCS) (empty placeholder)
- Can use issue tracker right now
- Code rollout is underway
Email us developers directly
- [jeff.daily@pnnl.gov](mailto:jeff.daily@pnnl.gov), PI
- [jason.fuller@pnnl.gov](mailto:jason.fuller@pnnl.gov), co-PI