The Impact of Event Processing Flow on Asynchronous Server Efficiency
Shungeng Zhang, Student Member, IEEE, Qingyang Wang, Member, IEEE, Yasuhiko Kanemasa, Member, IEEE, Huasong Shan, Student Member, IEEE, and Liting Hu, Member, IEEE
Abstract—Asynchronous event-driven server architecture has been considered a superior alternative to the thread-based counterpart due to reduced multithreading overhead. In this paper, we conduct empirical research on the efficiency of asynchronous Internet servers, showing that an asynchronous server may perform significantly worse than a thread-based one due to two design deficiencies. The first is the widely adopted one-event-one-handler event processing model in current asynchronous Internet servers, which can generate frequent unnecessary context switches between event handlers, leading to significant CPU overhead in the server. The second is a write-spin problem (i.e., repeatedly making unnecessary I/O system calls) in asynchronous servers under specific runtime workload and network conditions (e.g., large response size and non-trivial network latency). To address these two design deficiencies, we present a hybrid solution that exploits the merits of different asynchronous architectures so that the server can adapt to dynamic runtime workload and network conditions in the cloud. Concretely, our hybrid solution applies lightweight runtime request checking and seeks the most efficient path to process each request from clients. Our results show that the hybrid solution can achieve 10 to 90 percent higher throughput than all the other types of servers under various realistic workload and network conditions in the cloud.
Index Terms—Asynchronous, event-driven, thread-based, Internet servers, efficiency
1 INTRODUCTION
Modern Internet servers are expected to handle high concurrency workload at high resource efficiency in the cloud [1],[2]. To achieve this goal, many previous research efforts [3], [4] have shown that the asynchronous event-driven architecture could be a superior alternative to the traditional thread-based design. An important reason is that an asynchronous event-driven server can avoid the well-known multithreading overhead, which usually occurs in the thread-based counterpart when facing high concurrency workload. Though conceptually simple, building high-performance asynchronous event-driven servers is challenging because of the obscured non-sequential control flow rooted in the event-driven programming model [4].
In this paper, we study some non-trivial design deficiencies of asynchronous event-driven servers that make them less efficient than their thread-based counterparts when facing high concurrency workload. Through extensive experiments, we show that building high-performance, high-efficiency asynchronous event-driven servers requires careful design of the event processing flow and the capability to adapt to dynamic runtime workload and network conditions. For example, the conventional design practice of one-event-one-handler event processing flow may cause significant performance loss in an asynchronous server by generating frequent unnecessary intermediate events and context switches, which occur at the transition of control flow between different event handlers. Our further analysis also shows that some runtime workload and network conditions can result in frequent redundant I/O system calls due to the non-blocking nature of asynchronous function calls, causing significant CPU overhead in an asynchronous server but not in a thread-based one.
The first contribution of the paper is an empirical study illustrating the negative impact of inefficient event processing flow on asynchronous server performance. Our study is motivated by running a standard 3-tier application benchmark, RUBBoS [5] (see Fig. 3), where we observed a significant system throughput drop (28 percent) after we merely upgraded the Tomcat application server in the system from a thread-based version (Version 7) to its asynchronous event-driven version (Version 8). Our analysis reveals that such an unexpected performance degradation stems from the poor design of the event processing flow in the asynchronous Tomcat server, which causes significantly high CPU overhead due to unnecessary context switches. We further investigate many other representative asynchronous servers/middleware (see Table 1) and
find that such a poor design of event processing flow widely exists among Jetty [6], GlassFish [7], and MongoDB Java asynchronous driver [8].
The second contribution is a sensitivity analysis of how different runtime workload and network conditions impact the efficiency of the event processing flow of asynchronous servers. Concretely, we vary the server response size and network latency based on realistic conditions in a typical cloud environment and observe their impact on the performance of servers with different architectures. Our experimental results show that an asynchronous server can encounter a severe write-spin problem, in which the server makes a large number of unnecessary I/O system calls when sending a relatively large server response (e.g., 100KB), wasting up to 24 percent of the critical CPU resource. Such a problem is caused by the lack of coordination between the non-blocking nature of asynchronous system calls in the application layer and the TCP wait-ACK mechanism in the OS kernel. Our experiments show that some network conditions (e.g., network latency) can exacerbate the CPU overhead caused by the write-spin problem, leading to a more severe performance drop of the server.
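The write-spin pattern can be illustrated with a toy sketch (our illustration only; `NonBlockingSink` is a hypothetical stand-in for `SocketChannel.write`, not code from any of the servers studied here): a non-blocking write returns 0 bytes when the TCP send buffer is full, so a naive send loop keeps re-issuing the same system call until ACKs from the client free buffer space.

```java
// Toy model of the write-spin problem: each write() call that returns 0
// is a wasted system call that burns CPU without making progress.
public class WriteSpinSketch {
    // Hypothetical stand-in for a non-blocking SocketChannel.write():
    // returns the number of bytes accepted, 0 when the send buffer is full.
    interface NonBlockingSink {
        int write(int wanted);
    }

    // Naive send loop: returns {bytesSent, wastedCalls} for sending `total` bytes.
    static int[] naiveSend(NonBlockingSink sink, int total) {
        int sent = 0, wasted = 0;
        while (sent < total) {
            int n = sink.write(total - sent);
            if (n == 0) {
                wasted++;          // write-spin: a system call that did no useful work
            } else {
                sent += n;
            }
        }
        return new int[]{sent, wasted};
    }
}
```

A real fix would deregister the connection and wait for a writability event instead of spinning; the count of wasted calls is what shows up as CPU overhead in the measurements above.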
The third contribution is a hybrid solution that exploits the merits of different asynchronous event-driven architectures in order to adapt to dynamic runtime workload and network conditions. We first examined a widely-used asynchronous event-driven network application framework named Netty [12], which adopts a write operation optimization technique to alleviate the aforementioned write-spin problem. However, we found that such an optimization introduces non-trivial CPU overhead when the write-spin problem is absent. Our hybrid solution extends native Netty by applying a lightweight profiling technique to check whether the write-spin problem exists during server runtime. Based on the runtime checking results, our solution chooses the most efficient event processing flow for each client request, avoiding both the write-spin problem and the non-trivial optimization overhead.
Overall, our study of asynchronous Internet server efficiency has a potentially significant impact on achieving good performance and high resource efficiency in today’s cloud data centers. Plenty of previous research efforts have shown the challenges of achieving high performance at high system utilization, especially for latency-sensitive interactive web applications [13], [14]. Our work shows that, given the right design of event processing flow, asynchronous Internet servers can continuously achieve stable and high performance under various runtime workload and network conditions (even at high resource utilization). Our work also opens future research opportunities, as many system components (e.g., ZooKeeper [15]) have been shifting from the thread-based architecture to the asynchronous one, so similar problems may also occur.
We outline the rest of the paper as follows. Section 2 presents a motivating experiment in which merely upgrading an application server from the thread-based version to its asynchronous counterpart causes a large system performance loss. Section 3 studies the poor design of event processing flow that leads to unnecessary context switch overhead. Section 4 shows the write-spin problem of an asynchronous server sending large responses. Section 5 introduces our hybrid solution. Section 6 summarizes the related work and Section 7 concludes the paper.
2 BACKGROUND AND MOTIVATION
2.1 RPC versus Asynchronous Network I/O
Modern Internet servers generally use either synchronous or asynchronous connectors for inter-tier (or client-server) communications. These connectors mainly focus on the following activities: 1) managing network connections from both the upstream and the downstream tiers, 2) reading (and writing) data through established connections, and 3) parsing and routing new requests to the application layer (business logic) and vice versa. Although asynchronous and synchronous connectors are similar in functionality, they differ greatly in the underlying mechanism used to interact with the application layer logic.
Synchronous connectors are mostly adopted by RPC thread-based servers. There are two types of threads in this type of connector: the main thread accepts new connections and dispatches each connection to a dedicated worker thread, and each worker thread handles all activities of the corresponding connection until it is closed. Accordingly, a large number of worker threads are needed to handle high concurrency workload. Due to the user-perceived sequential processing logic, it is relatively easy for developers to build synchronous thread-based servers, but the overhead associated with multithreading (e.g., locks and context switches) can lead to performance degradation [3].
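The thread-per-connection flow described above can be sketched as follows (a minimal illustration in Java, not Tomcat's actual connector code; all class and method names are our own): the main thread accepts connections and hands each one to a dedicated worker thread that serves it sequentially.

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

// Thread-per-connection sketch of a synchronous connector: the main
// thread accepts new connections and dispatches each one to a dedicated
// worker thread, which handles all I/O for that connection until it closes.
public class SyncConnectorSketch {
    private final ExecutorService workers = Executors.newCachedThreadPool();
    private final ServerSocket server = new ServerSocket(0); // ephemeral port

    public SyncConnectorSketch() throws IOException { }

    public int port() { return server.getLocalPort(); }

    // Main thread: accept and dispatch.
    public void acceptLoop() {
        try {
            while (true) {
                Socket conn = server.accept();
                workers.submit(() -> handle(conn));  // one worker per connection
            }
        } catch (IOException e) { /* server closed: exit loop */ }
    }

    // Worker thread: sequential read -> process -> write for one connection.
    private void handle(Socket conn) {
        try (Socket c = conn;
             BufferedReader in = new BufferedReader(new InputStreamReader(c.getInputStream()));
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            String request = in.readLine();
            out.println("echo:" + request);          // prepare and send response
        } catch (IOException ignored) { }
    }

    public void stop() throws IOException { server.close(); workers.shutdownNow(); }
}
```

The worker's code reads top-to-bottom, which is the "user-perceived sequential processing logic" that makes this style easy to write; the cost is one thread (and its context switches) per concurrent connection.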
TABLE 1
Summary of Inefficient Event Processing Flow in Mainstream Asynchronous Servers/Middleware
<table>
<thead>
<tr>
<th>Category</th>
<th>Software Name</th>
<th>Type</th>
<th>Note</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="4">Inefficient Event Processing Flow (Sections 3 and 4)</td>
<td>Tomcat NIO Connector (referred to as TomcatAsync)</td>
<td>application server</td>
<td rowspan="4">The reactor thread monitors events while the worker threads process them; context switches happen between the reactor and worker threads, similar to the TomcatAsync design.</td>
</tr>
<tr>
<td>Eclipse Jetty</td>
<td>application server</td>
</tr>
<tr>
<td>Grizzly/GlassFish</td>
<td>network framework</td>
</tr>
<tr>
<td>MongoDB Async Driver</td>
<td>database driver</td>
</tr>
<tr>
<td rowspan="2">Improved Event Processing Flow, but with potential optimization overhead (Section 5.1)</td>
<td>Netty</td>
<td>network framework</td>
<td rowspan="2">The worker thread is responsible for both event monitoring and processing; no intermediate context switches, similar to the Netty design.</td>
</tr>
<tr>
<td>Lighttpd [10]</td>
<td>web server</td>
</tr>
</tbody>
</table>
Asynchronous connectors are able to use only one or a few threads to handle high concurrency workload using an event-driven mechanism. Fig. 1 depicts the interactions of an asynchronous connector with the application layer and the underlying operating system. To process a pool of established connections, the asynchronous connector alternates between two phases (the event monitoring phase and the event handling phase) to handle requests from these connections. The event monitoring phase determines which connections have pending network I/O events, such as a readable or writable state of a particular connection; the underlying operating system provides the event notification mechanism (e.g., select, poll, or epoll). The event handling phase performs the actual business logic by dispatching each event to the corresponding event handler [3], [4], [16].
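The two-phase loop can be sketched with `java.nio` (a minimal single-threaded illustration under our own class and method names, not the code of any connector studied here): one thread alternates between `select()` (event monitoring) and dispatching each ready key to its handler (event handling).

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

// Single-threaded asynchronous connector sketch: one loop alternates
// between the event-monitoring phase (select) and the event-handling
// phase (dispatching each ready event to its handler).
public class AsyncConnectorSketch implements Runnable {
    private final Selector selector;
    private final ServerSocketChannel server;

    public AsyncConnectorSketch() throws IOException {
        selector = Selector.open();
        server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() { return server.socket().getLocalPort(); }

    @Override
    public void run() {
        try {
            while (selector.isOpen()) {
                selector.select();                           // event-monitoring phase
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {                     // event-handling phase
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isAcceptable()) accept();
                    else if (key.isReadable()) readAndReply(key);
                }
            }
        } catch (IOException e) { /* selector closed: exit loop */ }
    }

    private void accept() throws IOException {
        SocketChannel conn = server.accept();
        conn.configureBlocking(false);
        conn.register(selector, SelectionKey.OP_READ);       // watch for request data
    }

    private void readAndReply(SelectionKey key) throws IOException {
        SocketChannel conn = (SocketChannel) key.channel();
        ByteBuffer buf = ByteBuffer.allocate(1024);
        if (conn.read(buf) == -1) { conn.close(); return; }  // client hung up
        buf.flip();
        String req = StandardCharsets.UTF_8.decode(buf).toString().trim();
        conn.write(StandardCharsets.UTF_8.encode("echo:" + req + "\n"));
    }
}
```

Because one thread multiplexes all connections, no per-connection threads (and none of their context switches) are needed; the trade-off is that every handler must be non-blocking.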
In practice, there are two typical server designs using asynchronous connectors. The first is a single-threaded server that uses only one thread to handle both the event monitoring and event handling phases (e.g., Lighttpd [10] and Node.js [11]). Previous work [17] shows that such a design is able to minimize multithreading overhead when dealing with in-memory workloads. In the second design, a small worker thread pool is used to concurrently process events in the event handling phase (e.g., the asynchronous Tomcat). Such a design is intended to efficiently exploit a multi-core CPU [16], or to deal with complex workload involving transient disk I/O activities. Variants of the second design have been studied, such as the Staged Event-Driven Architecture (SEDA) adopted by Haboob [3].
In general, previous research demonstrates that an asynchronous event-driven server can outperform its thread-based counterpart in throughput due to reduced multithreading overhead, especially when facing high concurrency workload. However, our study in the next section shows contradictory results.
2.2 Experimental Setup
We conduct our experiments using RUBBoS [5], a representative n-tier benchmark modeled after bulletin board applications such as Slashdot [18]. In our experiments, we configure the benchmark as a typical 3-tier topology as shown in Fig. 2, with one Apache web server, one Tomcat application server, and one MySQL database server. There are 24 servlets providing different interactions, which can be further categorized into browse-only and read/write mix workloads; we use the former in this experiment. The response size of each servlet varies from tens to hundreds of kilobytes in a Zipf-like distribution. The default workload generator simulates a number of concurrent users to mimic real user behaviors. Each user browses different pages following a Markov chain model, and the think time between two consecutive requests averages 7 seconds. Such a workload generator design is widely adopted by other typical n-tier benchmarks like RUBiS [19], TPC-W [20], and Cloudstone [21]. We ran the experiments in our private cluster. Fig. 2 shows detailed software configurations, hardware specifications, and a sample 3-tier topology.
2.3 Performance Degradation from Tomcat Upgrade
Software upgrades in web-facing n-tier systems are common for system admins due to rapid application evolution. In this section, we present a case study in which significant system performance loss occurs in a 3-tier RUBBoS benchmark after we upgrade a thread-based application server to its asynchronous counterpart. Concretely, we first adopt Tomcat 7 (noted as TomcatSync) as the application server, which uses a thread-based synchronous connector to communicate with other servers. We then upgrade the Tomcat server to a newer version (version 8, noted as TomcatAsync), whose default connector has changed to the asynchronous one, with the expectation of system performance improvement after the Tomcat upgrade.



1. Only one core is enabled in BIOS unless explicitly mentioned.
However, Fig. 3 shows an unexpected system performance drop after the thread-based Tomcat upgrades to its asynchronous counterpart. We use the notations SYS_TomcatSync and SYS_TomcatAsync to represent the system with TomcatSync and TomcatAsync, respectively. The figure shows that the throughput of SYS_TomcatAsync stops increasing at workload 9000, much earlier than that of SYS_TomcatSync. At workload 11000, SYS_TomcatSync achieves 28 percent higher throughput than SYS_TomcatAsync, and the corresponding average response time of SYS_TomcatAsync increases by a factor of ten (300ms versus 3s). Considering that we merely upgraded a thread-based Tomcat server to a newer asynchronous one, such a result is counter-intuitive. We note the bottleneck resource in the system is the CPU of the Tomcat server in both cases, while the utilization of the hardware resources (memory, disk I/O, etc.) of all other components is moderate (less than 60 percent).
We also observed another interesting phenomenon: the asynchronous TomcatAsync experiences a significantly higher frequency of context switches than the thread-based TomcatSync when facing the same workload. We monitor system-level metrics using Collectl [22]. For example, TomcatAsync encountered more than twice as many context switches per second as TomcatSync (12950/sec versus 5930/sec) at workload 11000. Since a high frequency of context switches causes high CPU overhead, it makes sense to suggest that the throughput gap between SYS_TomcatSync and SYS_TomcatAsync (see Fig. 3) is caused by the different levels of context switches in Tomcat; we note the Tomcat CPU is the bottleneck in the system. However, significant previous work shows that an asynchronous server is supposed to have far fewer context switches than a thread-based server, so why do we observe the contradictory results here? We answer this question in the next section.
3 INEFFICIENT EVENT PROCESSING FLOW
In this section, we introduce the inefficient event processing flow problem, which causes the system performance degradation of the 3-tier RUBBoS benchmark after the thread-based Tomcat server upgrades to its asynchronous counterpart. In the following experimental evaluation, we isolate Tomcat to better quantify our performance analysis on different versions of Tomcat.
3.1 Unnecessary Context Switches between Event Handlers
In this set of experiments, we use JMeter [23] as a workload generator sending HTTP requests directly to Tomcat (both the thread-based and the asynchronous versions); no Apache or MySQL is involved. We divide these HTTP requests into three categories (small, medium, and large) based on the response size of each request. Concretely, the Tomcat server responds with three response sizes (i.e., 0.1KB, 10KB, and 100KB) according to the type of request from JMeter. To simulate realistic business logic, the Tomcat server generates the corresponding response (e.g., a 0.1KB/10KB/100KB random string) on the fly at runtime, so the generation (or computation) cost of each request is proportional to the response size. We choose these three response sizes because they are representative of the RUBBoS benchmark. We note that JMeter uses threads to emulate real users sending requests. To precisely control the workload concurrency on the target Tomcat server, we set the think time between every two consecutive requests from each client thread to zero.
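The on-the-fly response generation can be sketched as follows (our assumption of how such a test servlet might work, not the actual benchmark code; the class and method names are hypothetical): building a random string of the requested size makes the per-request computation cost proportional to the response size.

```java
import java.util.Random;

// Sketch of generating an N-byte response at runtime: the loop runs once
// per output byte, so CPU cost grows linearly with the response size.
public class ResponseGenerator {
    static String generate(int sizeBytes) {
        Random rnd = new Random();
        StringBuilder sb = new StringBuilder(sizeBytes);
        for (int i = 0; i < sizeBytes; i++) {
            sb.append((char) ('a' + rnd.nextInt(26))); // random payload character
        }
        return sb.toString();
    }
}
```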
We first compare the throughput between the thread-based TomcatSync and the asynchronous TomcatAsync under different workload concurrencies and response sizes, shown in Fig. 4. An interesting phenomenon is that TomcatSync outperforms TomcatAsync in throughput when the workload concurrency is less than a certain point (referred to as the crossover point), and the throughput superiority of the two servers is reversed as the workload concurrency continues to increase. For example, the crossover point is at workload concurrency 64 in the 10KB response size case (Fig. 4b), and 1600 in the 100KB response size case (Fig. 4c). We note that the response size for the RUBBoS benchmark in Section 2 varies from tens to hundreds of kilobytes in a Zipf-like distribution with an average of about 20KB, and the request processing concurrency in Tomcat averages 35 when the system approaches saturation. According to our experimental results here, it is not surprising that TomcatSync outperforms TomcatAsync in the 3-tier RUBBoS benchmark experiments, leading to higher throughput of SYS_TomcatSync than that of SYS_TomcatAsync since the Tomcat server is the bottleneck. The question is why the thread-based TomcatSync outperforms the asynchronous TomcatAsync before a certain workload concurrency.
Our further analysis shows that it is the poor design of the event processing flow in TomcatAsync that creates a large number of context switches, leading to non-trivial CPU overhead. Table 2 shows that the frequency of context switches in the asynchronous TomcatAsync is significantly higher than that in the synchronous TomcatSync when facing the same concurrency workload (e.g., from 8 to 3200). Such results are consistent with our observations in the previous RUBBoS experiments. We note that TomcatAsync uses the second asynchronous server design, in which the server monitors events with a reactor thread (event monitoring) and handles events with a small worker thread pool (event handling) (see Section 2.1). Fig. 5 illustrates the event processing flow in TomcatAsync for a new incoming request, which includes the following four steps:
1) the reactor thread dispatches a read event to a worker thread (reactor thread → worker thread A);
2) the worker thread reads and parses the request, prepares the response, and then generates a write event; the reactor thread is notified of the write event (worker thread A → reactor thread);
3) the reactor thread dispatches the write event to a worker thread to send the response out (reactor thread → worker thread B);
4) the worker thread finishes sending the response, and the control returns to the reactor thread (worker thread B → reactor thread).
Accordingly, TomcatAsync needs four context switches between the reactor thread and the worker threads to process one client request. Such an inefficient design of the event processing flow is widely adopted by many representative asynchronous software systems (see Table 1), indicating a general problem in designing asynchronous software. In contrast, each client request in the thread-based TomcatSync is dispatched to a dedicated worker thread, which handles all the activities associated with that request, including reading the request, preparing the response, and sending the response out. Therefore, a context switch only happens when the processing worker thread is interrupted or swapped out by the operating system after running out of CPU quota.
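The four steps above can be mimicked with a small simulation (our simplification for illustration only; the class and method names are hypothetical, and `Future`-based thread handoffs stand in for the event dispatches): each handoff between the reactor and a worker thread corresponds to one context switch, four per request.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Simulation of the one-event-one-handler flow in Fig. 5: the reactor
// hands the READ event to worker A, worker A hands a WRITE event back to
// the reactor, and the reactor hands the WRITE event to worker B.
public class OneEventOneHandlerSketch {
    static final AtomicInteger handoffs = new AtomicInteger();

    public static String process(String request) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(2);
        try {
            // Step 1: reactor dispatches the read event to worker A,
            // which reads/parses the request and prepares the response.
            handoffs.incrementAndGet();
            Future<String> prepared = workers.submit(() -> "response(" + request + ")");
            // Step 2: worker A hands control back, notifying the reactor
            // that a write event is pending.
            String response = prepared.get();
            handoffs.incrementAndGet();
            // Step 3: reactor dispatches the write event to worker B,
            // which sends the response out.
            handoffs.incrementAndGet();
            Future<String> sent = workers.submit(() -> response + " [sent]");
            // Step 4: worker B finishes and control returns to the reactor.
            String result = sent.get();
            handoffs.incrementAndGet();
            return result;
        } finally {
            workers.shutdown();
        }
    }
}
```

In a real server these handoffs also require lock-protected queues between the reactor and the pool, which is where the futex overhead measured later comes from.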
To better quantify the performance impact of context switches on servers with different architectures, we simplify the design of TomcatAsync and TomcatSync by removing unnecessary modules (e.g., cache management and logging) and keeping only the parts related to the business logic. We refer to the simplified TomcatAsync and TomcatSync as sTomcat-Async and sTomcat-Sync, respectively. We also implement two alternative asynchronous servers with reduced context switches as references. The first alternative is sTomcat-Async-Fix, which uses the same worker thread to process both the read and write events of the same request. In this case, the same worker thread, after preparing the response, continues to send the response out (steps 2 and 3 in Fig. 5 are merged with step 4), so only two context switches are required to process one client request: 1) the reactor thread dispatches a read event to an available worker thread in the thread pool, and 2) the same worker thread returns control back to the reactor thread after sending the response out. The second alternative is SingleT-Async, which adopts a single thread to process events in both the event monitoring and event handling phases. Such a design is supposed to avoid context switch overhead entirely. We summarize the four types of servers with their associated context switches when processing one client request in Table 3. Interested readers can refer to the source code of our server implementations in our repository [24].
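For contrast, the sTomcat-Async-Fix flow can be sketched the same way (again our simplification with hypothetical names, not the actual implementation): merging response preparation and sending into one worker leaves only two handoffs per request.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

// Simulation of the fixed flow: the same worker handles both the read
// and the write event, so only two reactor/worker handoffs remain.
public class MergedHandlerSketch {
    static final AtomicInteger handoffs = new AtomicInteger();

    public static String process(String request) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(1);
        try {
            // Handoff 1: reactor dispatches the read event; the worker
            // prepares AND sends the response without an intermediate
            // trip back to the reactor.
            handoffs.incrementAndGet();
            Future<String> done = workers.submit(() -> "response(" + request + ") [sent]");
            // Handoff 2: the worker returns control to the reactor.
            String result = done.get();
            handoffs.incrementAndGet();
            return result;
        } finally {
            workers.shutdown();
        }
    }
}
```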
TABLE 2
<table>
<thead>
<tr>
<th rowspan="2">Workload concurrency</th>
<th rowspan="2">Response size [KB]</th>
<th colspan="2">Context switches [×1000/sec]</th>
</tr>
<tr>
<th>TomcatAsync</th>
<th>TomcatSync</th>
</tr>
</thead>
<tbody>
<tr><td rowspan="3">8</td><td>0.1</td><td>40</td><td>16</td></tr>
<tr><td>10</td><td>25</td><td>7</td></tr>
<tr><td>100</td><td>28</td><td>2</td></tr>
<tr><td rowspan="3">100</td><td>0.1</td><td>38</td><td>15</td></tr>
<tr><td>10</td><td>26</td><td>5</td></tr>
<tr><td>100</td><td>25</td><td>2</td></tr>
<tr><td rowspan="3">3200</td><td>0.1</td><td>37</td><td>15</td></tr>
<tr><td>10</td><td>22</td><td>6</td></tr>
<tr><td>100</td><td>28</td><td>3</td></tr>
</tbody>
</table>
The workload concurrency varies from 8 to 3200.
TABLE 3
<table>
<thead>
<tr>
<th>Server type</th>
<th>Context Switches</th>
<th>Note</th>
</tr>
</thead>
<tbody>
<tr>
<td>sTomcat-Async</td>
<td>4</td>
<td>Different worker threads handle the read and write events of a request, respectively.</td>
</tr>
<tr>
<td>sTomcat-Async-Fix</td>
<td>2</td>
<td>The same worker thread handles both the read and write events of a request.</td>
</tr>
<tr>
<td>sTomcat-Sync</td>
<td>0</td>
<td>A dedicated worker thread serves each request; context switches only occur when the thread is interrupted or swapped out after exhausting its CPU quota.</td>
</tr>
<tr>
<td>SingleT-Async</td>
<td>0</td>
<td>One thread handles both event monitoring and processing; no dispatch-related context switches occur.</td>
</tr>
</tbody>
</table>
Other implicit context switches such as those caused by interrupts or swapping are not counted.
---
Fig. 5. *Inefficient event processing flow in TomcatAsync for one client request processing*. There are four context switches between the reactor thread and the worker threads.
We show the performance comparison among the four architecturally different servers under different workload concurrencies and response sizes in Fig. 6. From Figs. 6a and 6d, we observe a negative correlation between the server throughput and the corresponding context switch frequency of each server type. For example, sTomcat-Async-Fix achieves 22 percent higher throughput than sTomcat-Async while its context switch frequency is 34 percent lower when the server response size is 0.1KB and the workload concurrency is 16 (Figs. 6a and 6d, respectively). In this set of experiments, the computation for each request is proportional to the server response size, which means that in the small response size scenario (e.g., the 0.1KB case), more CPU cycles are wasted on context switches relative to those consumed in actual request processing. For example, in the 0.1KB response size case, the gap in context switches between sTomcat-Async-Fix and sTomcat-Async in Fig. 6d reflects the throughput difference in Fig. 6a. We further validate this hypothesis with the other two types of servers, SingleT-Async and sTomcat-Sync, which achieve 91 percent (42K req/sec versus 22K req/sec) and 57 percent (35K req/sec versus 22K req/sec) higher throughput than sTomcat-Async at workload concurrency 100, respectively (Fig. 6a). The context switch comparison in Fig. 6d helps explain the throughput difference. For example, SingleT-Async only encounters a few hundred context switches per second (due to daemon processes such as the monitoring tools Collectl [22] and perf [25]), which is three orders of magnitude fewer than sTomcat-Async.
Recall from Table 3 that context switches for the thread-based sTomcat-Sync only occur when the processing worker thread is interrupted or swapped out after running out of CPU quota; on the other hand, context switches for the asynchronous sTomcat-Async happen when events are dispatched between the reactor thread and the processing worker threads (e.g., steps 1–4 in Fig. 5). In this case, lock operations are required to synchronize the threads (the reactor thread and the worker threads), which introduces lock contention overhead. We use perf [25] (a performance analysis tool) to validate our hypothesis in Table 4, with a server response size of 0.1KB and a workload concurrency of 100. Our results show that the lock contention between threads coordinating information (e.g., connection context) plays a significant role in the performance overhead of asynchronous servers with an inefficient event processing flow design. For example, the futex overhead and cache miss rate of sTomcat-Async are 13.86 and 0.07 percent respectively, the highest of the four servers. Such high overhead further reduces CPU efficiency (i.e., the lowest instructions per cycle), leading to significant throughput loss.
We note that the portion of the CPU overhead associated with context switches becomes smaller as the server response size becomes larger. This is because more CPU cycles are consumed for processing requests and sending responses given the same number of context switches. Figs. 6b and 6c show the throughput comparison of the four types of servers with the 10KB and the 100KB response sizes, respectively. The throughput gap among these four types of servers becomes narrower, suggesting that context switches have less impact on server throughput.
Fig. 6. Performance comparison among four architecturally different servers when the size of the server response increases from 0.1KB to 100KB. Sub-figures (a) and (d) show the negative correlation between server throughput and the corresponding context switch frequency of each server type. However, when the response size is large (100KB), sub-figure (c) shows that sTomcat-Sync performs the best among all server types before workload concurrency 400, suggesting other factors create additional overhead in asynchronous servers.
In fact, we also observe another interesting phenomenon: the asynchronous SingleT-Async achieves lower throughput than the thread-based sTomcat-Sync when the workload concurrency is less than 400, as shown in Fig. 6c, even though the context switches in SingleT-Async are far fewer than those in sTomcat-Sync (see Fig. 6f). These results suggest that other factors introduce significant overhead in the asynchronous SingleT-Async as the server response size increases (e.g., 100KB). We will explain these factors in Section 4.
3.2 Evaluation in a Multi-Core Environment
Multi-core CPUs have been rapidly adopted in cloud data centers, so one requirement for modern Internet servers is the ability to scale out on multi-core hardware. Our investigation shows that the context switch overhead caused by inefficient event processing flow also has a significant impact on the performance of asynchronous servers in a multi-core environment (see Fig. 7). Previous studies show that the N-copy model is widely adopted as a successful approach to enabling an asynchronous server to leverage multiple CPUs in a multi-core environment. For example, since the asynchronous SingleT-Async uses only one thread, we adopt the N-copy model for SingleT-Async with each copy consuming one CPU core; N equals the number of cores enabled on the host. To avoid the CPU crosstalk penalty, we use CPU affinity to launch multiple copies of servers in a multi-core environment. For a fair comparison, we also apply the N-copy model to the other three types of servers. Interested readers can refer to Veal's work, which discusses the challenges of scaling up web servers on multi-core hardware. Nevertheless, the N-copy model is a common practice in scaling modern Internet servers, especially in the emerging microservices architecture, where each microservice can scale out multiple replicas to handle workload increases.
We set the workload concurrency to 100 (high enough to saturate the quad-core CPU) and the server response size to 0.1KB in all cases. Fig. 7a shows that the throughput of each server scales almost linearly as the number of cores increases; Fig. 7b shows the frequency of context switches of the different servers. These two figures show results consistent with the single-core case (Figs. 6a and 6d): the inefficient event processing flow causes frequent context switches in asynchronous servers and degrades server performance in a multi-core environment.
On the other hand, we note that asynchronous servers such as sTomcat-Async and sTomcat-Async-Fix delegate event processing to a small worker thread pool (see Section 2.1). Such a design is intended to exploit multiple CPUs efficiently in a multi-core environment, since most of the computation (business logic) relies on the worker thread pool. We conduct the same experiments on the asynchronous sTomcat-Async-Fix with only a single instance running in a multi-core environment (Fig. 8), referred to as sTomcat-Async-Fix w/ 1-copy. An interesting observation in Fig. 8a is that sTomcat-Async-Fix w/ 1-copy outperforms sTomcat-Async-Fix w/ N-copy by 13 percent in throughput in a dual-core environment. This performance improvement arises because sTomcat-Async-Fix encounters fewer context switches in the 1-copy case, as shown in Fig. 8b. Recall from our previous study in Table 3 that context switches in the asynchronous sTomcat-Async-Fix happen when events are dispatched between the reactor thread and the processing worker thread (i.e., steps 1 and 4 in Fig. 5). In a dual-core environment, the reactor thread and the processing worker thread can run on separate CPUs due to operating system scheduling, so such event dispatches between these two threads only involve thread coordination among CPUs instead of context switches. In this case, the context switch overhead is significantly reduced.
---
Fig. 7. The context switch problem caused by inefficient event processing flow also occurs in a multi-core environment. The workload concurrency is kept at 100. The server response size is 0.1KB so that the computation for each request is light; thus the throughput difference in (a) is mainly caused by the context switch difference in (b).
Fig. 8. The 1-copy model mitigates the context switch problem in a multi-core environment for sTomcat-Async-Fix, but it cannot solve the problem completely. The workload concurrency is kept at 100 and the server response size is 0.1KB.
However, SingleT-Async w/ N-copy still performs the best in all multi-core cases in Fig. 8a, showing that the 1-copy model only mitigates the context switch problem for sTomcat-Async-Fix; it cannot solve the problem completely.
Summary. Through our extensive experiments on four architecturally different servers (SingleT-Async, sTomcat-Async, sTomcat-Async-Fix, and sTomcat-Sync), we observed that SingleT-Async achieves the best performance under various workload concurrencies when the response size is small. The main reason is that SingleT-Async avoids the multithreading overhead (e.g., context switches and locks) caused by the inefficient event processing flow, which generates frequent unnecessary intermediate events in the other three servers (i.e., sTomcat-Async-Fix, sTomcat-Async, and sTomcat-Sync). In the next section, we discuss other factors that make SingleT-Async less efficient, for example, when the server response size is large (e.g., 100KB).
4 Write-Spin in Asynchronous Invocation
In this section, we analyze the performance degradation of an asynchronous server in the large response size case. We measure the CPU usage in both user and kernel space and profile critical system calls of servers with different architectures using monitoring tools such as Collectl [22] and JProfiler [32]. Our measurements show that an asynchronous server can encounter a severe write-spin problem, in which the server invokes a large number of unnecessary I/O system calls when sending a large response, thus degrading server efficiency. We then explore realistic factors in cloud data centers that can exacerbate the negative effect of the write-spin problem, further degrading asynchronous server performance.
4.1 Overhead Caused by Write-Spin
Recall the experimental results in Fig. 6a, which show that the asynchronous SingleT-Async outperforms the thread-based sTomcat-Sync by 20 percent in throughput at workload concurrency 8 in the small response size scenario (i.e., 0.1KB). However, this throughput superiority is reversed at the same workload concurrency once the server response size increases to 100KB (Fig. 6c). Such a result suggests that sending a large server response brings significant overhead for the asynchronous SingleT-Async but not for the thread-based sTomcat-Sync.
To study the throughput drop of SingleT-Async in the large response size scenario, we collect performance metrics (e.g., CPU) of the server with different response sizes using Collectl. Table 5 compares the CPU utilization of SingleT-Async and sTomcat-Sync as the response size increases from 0.1KB to 100KB. The workload concurrency is 100 and the CPU of both servers is 100 percent utilized. The table shows that as the response size grows from 0.1KB to 100KB, the user-space CPU consumption of the asynchronous SingleT-Async increases by 34 percentage points (from 58 to 92 percent), much more than the 25-point increase (from 55 to 80 percent) of the thread-based sTomcat-Sync. Such a result indicates that SingleT-Async is more sensitive than sTomcat-Sync in user-space CPU utilization as the response size increases.
We then profile SingleT-Async with different server response sizes using JProfiler at runtime to examine differences in application-level activity. We observed that the frequency of the system call socket.write() is exceptionally high when the response size is 100KB (Table 6). In fact, socket.write() is called whenever a server tries to send a response out. For example, the synchronous thread-based sTomcat-Sync calls socket.write() only once per client request regardless of the size of the server response. The same pattern holds for the asynchronous SingleT-Async in the 0.1KB and 10KB response size cases. However, the table shows that SingleT-Async requires on average 102 calls of socket.write() per request in the 100KB response case. It is well known that system calls are expensive because of the associated kernel-user switching overhead [33], which explains the high user-space CPU overhead of SingleT-Async when sending a large response (Table 5).
We further investigate the exceptionally high number of socket writes in SingleT-Async; they are caused by a combination of a small TCP send buffer (16KB by default) and the TCP wait-ACK mechanism. We refer to this as the write-spin problem (Fig. 9). Concretely, the processing thread in SingleT-Async invokes the Java library method java.nio.channels.SocketChannel.write() [34], which wraps the system call socket.write(). In our case, the method tries to transfer 100KB of data to the TCP send buffer, but it can transfer at most 16KB at first because of the limited size of the TCP send buffer, which is structured as a byte buffer ring. The TCP sliding window determines the actual amount of data sent to the client, and the occupied send buffer space is freed only when ACKs are received for the previously sent-out packets. Due to the non-blocking nature of asynchronous servers, this library method in SingleT-Async returns immediately with the number of bytes copied to the TCP send buffer; in the worst case, it returns zero when the TCP send buffer is already full.
### Table 5
More User-Space CPU Resource is Consumed in SingleT-Async than that in sTomcat-Sync
<table>
<thead>
<tr>
<th>Server Type</th>
<th>sTomcat-Sync</th>
<th>SingleT-Async</th>
</tr>
</thead>
<tbody>
<tr>
<td>Response Size</td>
<td>0.1KB</td>
<td>100KB</td>
</tr>
<tr>
<td>TP [req/sec]</td>
<td>35000</td>
<td>590</td>
</tr>
<tr>
<td>User total %</td>
<td>55%</td>
<td>80%</td>
</tr>
<tr>
<td>System total %</td>
<td>45%</td>
<td>20%</td>
</tr>
</tbody>
</table>
We set the workload concurrency to 100 in all cases.
### Table 6
Severe Write-Spin Problem Happens in 100KB Response Size Case
<table>
<thead>
<tr>
<th>Response Size</th>
<th># of req.</th>
<th># of socket.write() per req.</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.1KB</td>
<td>238530</td>
<td>1</td>
</tr>
<tr>
<td>10KB</td>
<td>9400</td>
<td>1</td>
</tr>
<tr>
<td>100KB</td>
<td>2971</td>
<td>102</td>
</tr>
</tbody>
</table>
The table shows the number of requests and the average number of socket.write() calls per request in SingleT-Async under different response sizes during a one-minute experiment.
In that case, the underlying system call `socket.write()` returns EWOULDBLOCK, indicating that the TCP send buffer is full [35]; the processing thread keeps retrying the write, resulting in a severe write-spin problem. In contrast, the data-transfer method in the thread-based `sTomcat-Sync` is blocking: the actual write loop that pushes data through the limited TCP send buffer occurs in the kernel, which is much more efficient than performing the same loop in user space, as the non-blocking `socket.write()` in the asynchronous `SingleT-Async` does (the write-spin problem). As a result, `sTomcat-Sync` calls such a method only once per request, avoiding the unnecessary spinning of the processing worker thread seen in `SingleT-Async`.
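The mechanics above can be made concrete with a toy model (our own illustration, not code from the measured servers): each `socket.write()` copies at most the free space of a 16KB send buffer, and once the buffer is full the non-blocking call keeps returning zero until ACKs drain it. The `zeroReturnsPerAck` parameter is an assumed knob for how many times the loop retries before an ACK round completes; with a modest value it already lands in the neighborhood of the ~102 calls per request reported in Table 6.

```java
// Toy model of the write-spin: count socket.write() invocations per response.
public class WriteSpinModel {
    static final int SEND_BUF = 16 * 1024; // default TCP send buffer in the experiments

    // zeroReturnsPerAck: assumed number of zero-byte write() retries per ACK round.
    static int writeCallsPerResponse(int responseBytes, int zeroReturnsPerAck) {
        int sent = 0, calls = 0;
        while (sent < responseBytes) {
            calls++;                                  // one successful socket.write()
            sent += Math.min(SEND_BUF, responseBytes - sent);
            if (sent < responseBytes) {
                calls += zeroReturnsPerAck;           // write() returns 0 while waiting for ACKs
            }
        }
        return calls;
    }

    public static void main(String[] args) {
        // Responses that fit in the send buffer need exactly one write().
        System.out.println(writeCallsPerResponse(100, 15));        // 1
        System.out.println(writeCallsPerResponse(10 * 1024, 15));  // 1
        // A 100KB response needs 7 buffer copies plus many zero-byte retries.
        System.out.println(writeCallsPerResponse(100 * 1024, 15)); // 97 under these assumptions
    }
}
```

Varying `zeroReturnsPerAck` shifts the total, which is why the measured per-request count depends on runtime network conditions such as latency.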
A straightforward solution is to manually set the TCP send buffer to the same size as (or even larger than) the server response. In practice, however, this is a non-trivial task for three reasons. First, predicting the response size of Internet services is difficult since web application workloads are dynamic by nature. For example, responses from a Tomcat server can vary from tens of bytes to megabytes because requests may require dynamic content from downstream databases. Second, HTTP/2 introduces Server Push, which allows a server to push several responses in answer to one client request [36]. For example, with HTTP/2 Server Push, a typical news website like CNN.com can reply to one request with multiple responses that easily accumulate to tens of megabytes (e.g., static and dynamic content such as images and database query results). Third, setting an oversized TCP send buffer merely for the peak size of server responses could lead to TCP over-buffering, which not only risks running out of server memory under high-concurrency workloads but also causes the sluggish-interactive-response problem [37]. Thus, it is a big challenge to choose an appropriate TCP send buffer size in advance.
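For concreteness, this is roughly what the per-connection fix would look like in Java NIO; a minimal sketch assuming a Linux host, where the kernel may round the granted buffer up (Linux doubles the requested value) or clamp it to `net.core.wmem_max`. The 100KB figure is only an illustration; as argued above, choosing it correctly in advance is the hard part.

```java
import java.io.IOException;
import java.net.StandardSocketOptions;
import java.nio.channels.SocketChannel;

// Sketch: request a TCP send buffer large enough for an expected response size,
// then read back what the kernel actually granted.
public class SendBufferTuning {
    public static int grantedFor(int requestedBytes) {
        try (SocketChannel ch = SocketChannel.open()) {
            ch.setOption(StandardSocketOptions.SO_SNDBUF, requestedBytes);
            // The kernel may round up or clamp the value, so read it back.
            return ch.getOption(StandardSocketOptions.SO_SNDBUF);
        } catch (IOException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println("SO_SNDBUF granted: " + grantedFor(100 * 1024) + " bytes");
    }
}
```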
In fact, a TCP auto-tuning function has been present in Linux kernels since 2.4; it is supposed to automatically adjust the size of the TCP send buffer to maximize bandwidth utilization [38] according to runtime network conditions. However, TCP auto-tuning mainly focuses on maximizing the utilization of the available bandwidth of the link between the client and the server using the Bandwidth-Delay Product (BDP) rule [39], without any knowledge of application information such as response sizes. Besides, our experiments show that the default TCP auto-tuning algorithm is conservative in choosing the send buffer size in order to avoid frequent packet loss and the additional application delay caused by subsequent TCP retransmissions. As a result, the send buffer size after auto-tuning may still be insufficient for applications, leading to the write-spin problem in asynchronous servers. Fig. 10 shows that `SingleT-Async` with auto-tuning performs worse than the same server with a fixed large TCP send buffer (100KB), indicating the occurrence of the write-spin problem. In this set of experiments, we also vary the network latency between the server node and the client node from 0ms to 20ms. The results show that the performance gap between the two servers is notably enlarged when non-negligible network latency exists (Fig. 10). We discuss this further in the next section.
### 4.2 Write-Spin Exacerbated by Network Latency
Network latency is inevitable in modern cloud data centers. In general, it ranges from a few to tens of milliseconds depending on the location of the component servers, which may run on different physical nodes located in different racks or even different data centers. Our experiments reveal that non-negligible network latency can exacerbate the overhead caused by the write-spin problem, leading to significant performance loss.
We show the impact of network latency on the performance of the thread-based and asynchronous servers in Fig. 11. The workload concurrency is 100 and the server response size is 100KB. The TCP send buffer size is the default 16KB, with which the write-spin problem occurs in the asynchronous servers. To quantitatively control the network latency, we use the traffic control tool “tc” on the client node. Fig. 11a shows that under non-negligible network latency, both the asynchronous `SingleT-Async` and `sTomcat-Async-Fix` suffer a significant throughput drop. For example, the maximum achievable throughput of `SingleT-Async` degrades by a surprising 95 percent with a small 5-millisecond increase in network latency.
Our further analysis shows that such a significant throughput degradation is caused by the amplification effect on response time when the write-spin problem occurs. In a write-spin scenario, an asynchronous server needs multiple rounds of data transfer to send a large response (e.g., 100KB) because of the small TCP send buffer. The server can only continue transferring data after it receives the ACKs from the client for the previously sent-out packets (see Fig. 9). Therefore, a small increase in network latency can lead to a long delay in server response time (Fig. 11b). For instance, the response time of SingleT-Async is amplified from 0.18s to 3.60s after adding 5 milliseconds of network latency. By Little's Law, server throughput is negatively correlated with response time when the workload concurrency (the number of concurrent requests in the server) stays the same. Thus, the 20-fold increase in server response time leads to the 95 percent throughput degradation of SingleT-Async shown in Fig. 11a.
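The arithmetic can be checked directly. With a closed system of N concurrent requests, Little's Law gives throughput X = N / R, where R is the mean response time; plugging in the measured response times reproduces the 95 percent drop:

```java
// Little's Law check: throughput X = N / R for a fixed concurrency N.
public class LittlesLaw {
    static double throughput(double concurrency, double responseTimeSec) {
        return concurrency / responseTimeSec;
    }

    public static void main(String[] args) {
        double before = throughput(100, 0.18); // ~556 req/s without added latency
        double after  = throughput(100, 3.60); // ~28 req/s with +5ms network latency
        double drop = 1.0 - after / before;    // = 1 - 0.18/3.60 = 0.95
        System.out.printf("throughput drop: %.0f%%%n", drop * 100);
    }
}
```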
### 4.3 Impact of Client Receive Buffer Size
Other network factors may also trigger the write-spin problem in an asynchronous server, causing significant performance degradation. For example, a client's TCP receive buffer size decides how much data the sender (the server) can send at one time. A small TCP receive buffer means the server needs multiple rounds of transfer to deliver a large response, which exacerbates the write-spin problem and degrades server throughput [39]. The receive buffer size is negotiated by the TCP flow control mechanism between a client and a server (similar to the send buffer size on the server side); however, the diversity of clients in recent years (e.g., cell phones and tablets) means a client's receive buffer size may be limited by its physical resources (e.g., memory). For example, the receive buffer size in the popular mobile OS Android [40] is only 4098 bytes (≈4KB) [41], which is far from sufficient for most modern web applications.
In this set of experiments, we vary the client receive buffer size from 4KB to 100KB to study its impact on the performance of an asynchronous server when sending a relatively large response (100KB), as shown in Fig. 12. The thread-based sTomcat-Sync acts as the baseline. The network latency in both cases is kept at zero. The figure shows that SingleT-Async achieves 170 percent higher throughput when the client receive buffer increases from 4KB to 100KB. The poor performance in the 4KB case results from the severe write-spin problem caused by the small client receive buffer size and the TCP flow control mechanism mentioned above. On the other hand, sTomcat-Sync shows more stable performance under different client receive buffer settings because multithreading mitigates the write-spin problem through parallel data transfer, as shown in Fig. 12.
### 5 Solution
In the previous sections, we have studied two design deficiencies of asynchronous servers: the context switch problem and the write-spin problem. The former is caused by the poor design of unnecessary event dispatching between the reactor thread and the worker threads (see Table 3), while the latter results from the unpredictability of server response sizes and the limited TCP send buffer size. Although our work is motivated by a 3-tier system throughput drop caused by an inefficient asynchronous Tomcat server (see Section 2.3), we found that many open-source asynchronous software packages suffer from the same problems as the asynchronous Tomcat (see Table 1).
To design a high-performance, high-efficiency asynchronous server, we must address the two aforementioned deficiencies under different runtime workload and network conditions. In this section, we first study Netty [12], a widely used asynchronous event-driven network I/O framework. Netty employs an improved design of the event processing flow and provides an application-level write optimization, aiming to mitigate the overhead caused by the two deficiencies mentioned above, but at a non-trivial optimization cost. We then present our hybrid solution, which aims to solve the two deficiencies while avoiding Netty's optimization overhead by exploiting the merits of different asynchronous architectures.
#### 5.1 Netty for Reducing Context Switches and Write-Spin
Netty is a widely used asynchronous event-driven network I/O framework for the rapid development of high-performance Internet servers. Netty belongs to the second asynchronous design category (see Section 2.1), which uses a reactor thread to accept new connections and a small worker thread pool to handle established connections with pending events.
Although it uses worker threads, Netty makes two significant changes to minimize context switches compared to sTomcat-Async and sTomcat-Async-Fix. First, the reactor thread and the worker threads in Netty take different
roles compared to those in sTomcat-Async and sTomcat-Async-Fix. We note that in sTomcat-Async and sTomcat-Async-Fix, the reactor thread is responsible for accepting new connections and monitoring events, while the worker threads only handle events; the event dispatches between the reactor thread and the worker threads involve context switches (steps 1~4 in Fig. 5). In Netty, by contrast, such frequent event dispatching no longer exists: the reactor thread only accepts new connections and assigns each established connection to a worker thread, and each worker thread both monitors and handles events for its assigned connections. As a result, this role change of the reactor thread and the worker threads in Netty reduces context switches significantly. Second, Netty adopts a pipeline design of event handlers for the business logic, in which the output of a predecessor handler is passed to the next handler in line through a function call (all handlers are processed by the same worker thread), thus avoiding unnecessary intermediate events and the associated context switches between the reactor thread and the worker threads.
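The pipeline idea can be illustrated with a toy chain of handlers (our own sketch, not Netty's actual ChannelPipeline API): each handler's output feeds the next through a plain method call on the same worker thread, so no intermediate events, and hence no context switches, occur between handlers.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Toy pipeline: handlers are chained by function calls on one thread.
public class HandlerPipeline {
    private final List<UnaryOperator<String>> handlers = new ArrayList<>();

    HandlerPipeline addLast(UnaryOperator<String> handler) {
        handlers.add(handler);
        return this;
    }

    String fire(String msg) {
        for (UnaryOperator<String> h : handlers) {
            msg = h.apply(msg); // predecessor's output feeds the next handler
        }
        return msg;
    }

    public static void main(String[] args) {
        HandlerPipeline p = new HandlerPipeline()
                .addLast(req -> req.trim())               // e.g., a decode step
                .addLast(req -> "HTTP/1.1 200 " + req);   // e.g., business logic
        System.out.println(p.fire(" ping "));             // prints "HTTP/1.1 200 ping"
    }
}
```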
To alleviate the write-spin problem, Netty performs a runtime write-spin check when a processing worker thread tries to send a large amount of data (i.e., a server response) to the kernel using socket.write(). Concretely, each Netty worker thread records the number of times socket.write() has been called to copy a single response to the TCP send buffer, noted as a counter writeSpin, shown in Fig. 13. For each socket.write(), the worker thread also keeps track of the number of bytes actually copied to the kernel, referred to as return_size. The processing worker thread jumps out of the write loop if either of the following two conditions is met:
- The return_size equals zero, suggesting that the TCP send buffer is full;
- The writeSpin counter exceeds a user-defined threshold (16 by default in Netty v4), indicating a severe write-spin problem.
When jumping out, the processing worker thread suspends the current data transfer, saves the connection context, and resumes the transfer after it loops over the other available connections with pending events. Netty is thus able to prevent the processing worker thread from spinning on a single connection while copying a large response to the TCP send buffer in the kernel. However, this optimization brings inevitable CPU overhead when no write-spin problem exists, as in the small response size case.
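The bounded write loop described above can be sketched as follows (a simplification of ours; the real Netty implementation differs in details such as adaptive buffers and event re-registration). The fake in-memory channel stands in for a 16KB TCP send buffer so the sketch is runnable without a network.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Netty-style bounded write: give up after WRITE_SPIN_COUNT attempts or a zero
// return, so one slow connection cannot monopolize the worker thread.
public class BoundedWrite {
    static final int WRITE_SPIN_COUNT = 16; // Netty v4's default threshold

    /** Returns true if the whole buffer was flushed, false if the caller must retry later. */
    static boolean writeBounded(WritableByteChannel ch, ByteBuffer buf) throws IOException {
        for (int writeSpin = 0; writeSpin < WRITE_SPIN_COUNT && buf.hasRemaining(); writeSpin++) {
            int returnSize = ch.write(buf); // bytes actually copied to the send buffer
            if (returnSize == 0) {
                break; // send buffer full: stop spinning, resume when revisited
            }
        }
        return !buf.hasRemaining(); // false => save the connection context and resume later
    }

    /** Fake channel: accepts up to `capacity` bytes, then "fills up" and returns 0. */
    static WritableByteChannel fakeSendBuffer(int capacity) {
        return new WritableByteChannel() {
            int used = 0;
            public int write(ByteBuffer src) {
                int n = Math.min(capacity - used, src.remaining());
                src.position(src.position() + n);
                used += n;
                return n;
            }
            public boolean isOpen() { return true; }
            public void close() {}
        };
    }

    static boolean demo(int responseBytes) {
        try {
            return writeBounded(fakeSendBuffer(16 * 1024), ByteBuffer.allocate(responseBytes));
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo(100));        // true: a small response fits in one write
        System.out.println(demo(100 * 1024)); // false: transfer must be suspended and resumed
    }
}
```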
We demonstrate the effectiveness of a Netty-based server in mitigating the write-spin problem, along with its associated optimization overhead, in Fig. 14. We build a simple Netty-based application server, noted as NettyServer. The figure shows the throughput comparison among three types of servers (SingleT-Async, NettyServer, and sTomcat-Sync) under different workload concurrencies and response sizes. To evaluate the performance impact both with and without the write-spin problem, the TCP send buffer size is set to the default 16KB; thus no write-spin problem occurs in the 0.1KB and 10KB response size cases, but a serious write-spin problem occurs in the 100KB case. Fig. 14c shows that NettyServer outperforms the other two types of servers when the response size is 100KB. For example, NettyServer achieves 27 percent higher throughput than SingleT-Async at workload concurrency 100, indicating the effectiveness of NettyServer's write optimization in mitigating the write-spin problem; NettyServer also achieves 10 percent higher throughput than sTomcat-Sync at the same workload concurrency, suggesting that NettyServer minimizes the heavy multithreading overhead. However, this performance superiority is reversed as the response size decreases to 0.1KB and 10KB in Figs. 14a and 14b. For example, NettyServer performs 17 percent worse in throughput
Fig. 13. Netty adopts a runtime check to mitigate the overhead caused by the write-spin problem.
Fig. 14. Netty effectively mitigates the write-spin problem in the large response size case but introduces non-trivial write optimization overhead in the small response size case. We set the TCP send buffer to the default 16KB. (a) and (b) show that NettyServer has lower throughput than SingleT-Async, suggesting non-trivial write optimization overhead, while (c) shows that NettyServer achieves the best performance of the three server types, indicating its effectiveness in alleviating the write-spin problem.
compared to SingleT-Async at workload concurrency 100 when the response size is 0.1KB (Fig. 14a), suggesting non-trivial optimization overhead in the absence of the write-spin problem. Thus there is no one-size-fits-all solution that outperforms all other types of servers under various workload conditions. We also found that such non-trivial optimization overhead widely exists in many mainstream asynchronous servers (see Table 1), for example, Nginx [9] and Lighttpd [10].
5.2 A Hybrid Solution
So far we have shown that an appropriately chosen asynchronous solution (see Fig. 14) can always provide better performance than the thread-based counterpart under various runtime workload conditions. However, no single asynchronous solution always achieves the best performance: SingleT-Async encounters the write-spin problem when the response size is large (see Section 4.1), and NettyServer suffers non-trivial optimization overhead when the response size is small (see Section 5.1). To address these two design deficiencies, we propose a hybrid solution that exploits the merits of both SingleT-Async and NettyServer under varying runtime workload and network conditions. Our hybrid solution rests on two assumptions:
- The response size is unpredictable.
- The workload is an in-memory workload.
The first assumption rules out launching the server with a large fixed TCP send buffer for each connection to avoid the write-spin problem; it is valid due to the difficulty of predicting server response sizes and the over-buffering problem discussed in Section 4.1. The second assumption excludes frequent disk I/O blocking the processing worker thread. This is also valid since in-memory stores like Memcached [42] and Redis [43] are widely used by modern Internet services due to strict low-latency requirements [44]. A solution for workloads involving frequent disk I/O is beyond the scope of this paper and requires additional research.
Our hybrid solution integrates the merits of different asynchronous architectures to efficiently handle client requests under various runtime workload and network conditions, as shown in Fig. 15. We refer to our hybrid solution as HybridNetty. Concretely, HybridNetty extends native Netty by applying a lightweight profiling technique that checks for the occurrence of the write-spin problem for each request at the beginning of server runtime (i.e., the initial warm-up phase). HybridNetty thereby categorizes all incoming requests into two classes: writeSpinReq requests, which cause the write-spin problem, and nonWriteSpinReq requests, which do not. During the runtime phase, HybridNetty maintains a map object that records the category of each request. When a new request comes in, HybridNetty looks up its category in the map object and then determines the most efficient event processing flow for it (see "check req type" in Fig. 15): HybridNetty chooses the NettyServer execution path for each writeSpinReq request to avoid the write-spin problem, and the SingleT-Async execution path for each nonWriteSpinReq request to avoid the overhead of the write operation optimization. We note that the server response size, even for the same request, can change over time due to changes in system state (e.g., the dataset has changed). In this case, the map object is updated once HybridNetty detects that a request has been put in the wrong class, so HybridNetty always keeps the latest category of each request for efficient future processing. Since our hybrid solution passively profiles each incoming request, it cannot completely eliminate the write-spin problem: the write-spin problem occurs first and is then fixed by dynamically choosing the most efficient execution path according to the request category. Thus the frequency of the write-spin problem is dramatically reduced.
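The dispatch logic can be sketched as follows (names and signatures are ours, inferred from the description above, not the actual HybridNetty code): a concurrent map remembers whether a request type has been observed to trigger the write-spin problem, unknown requests default to the cheap SingleT-Async-style path, and the map is updated whenever the profiler observes a misclassification.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of HybridNetty-style per-request-type path selection.
public class HybridDispatch {
    enum Path { SINGLET_ASYNC, NETTY }

    private final Map<String, Boolean> writeSpinMap = new ConcurrentHashMap<>();

    Path choosePath(String requestType) {
        // Unknown request types default to the cheap path (no write optimization).
        return writeSpinMap.getOrDefault(requestType, false) ? Path.NETTY : Path.SINGLET_ASYNC;
    }

    // Called after serving a request, e.g., when profiling observed a write-spin
    // (or when a previously large response has become small again).
    void recordOutcome(String requestType, boolean writeSpinObserved) {
        writeSpinMap.put(requestType, writeSpinObserved);
    }
}
```

Because the check is a single map lookup per request, the runtime cost of the dispatch itself stays negligible compared to the savings.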
There are two potential extensions of HybridNetty to further improve its performance. The first is an alternative non-blocking I/O pattern in which a worker thread that encounters a write-spin blocks on the connection using select, poll, or epoll, while the reactor thread polls for completed I/O operations; such a pattern could further improve HybridNetty's performance by avoiding the write-spin problem entirely. The second is to remove the reactor thread in HybridNetty and let the limited number of worker threads compete for a shared spin-lock to call the system call socket.accept() and accept new connections. This extension removes the context switch overhead between the main reactor thread and the worker threads, especially under workloads where the main reactor thread must frequently hand over newly established connections to worker threads. However, both extensions need more investigation, and we leave them as future work.
5.3 Experimental Validation
Validation in a Single-Core Environment. We validate the efficiency of our hybrid solution against SingleT-Async and NettyServer under various runtime workload conditions and network latencies in Fig. 16. The workload is composed of two categories of requests, heavy and light, which differ in the size of the server response: heavy requests, due to their large response size (100KB), are able to trigger the write-spin
problem, while light requests (0.1KB) cannot. To simulate different realistic workload scenarios, we vary the ratio of heavy requests from 0 to 100 percent. To clearly show the effectiveness of our hybrid solution, we report normalized throughput, using HybridNetty as the baseline.
Validation in a Multi-Core Environment. We also validate the effectiveness of our hybrid solution in a multi-core environment in Fig. 17. The workload concurrency is 100 and the TCP send buffer size is the default 16KB. In this set of experiments, the workload is composed of 2 percent heavy requests and 98 percent light requests, following Facebook's Memcached workload report [46]. For a fair comparison, we adopt the N-copy model [28] to let all three servers take advantage of multiple cores in Fig. 17. The figure shows that the maximum achievable throughput of each type of server scales as the number of cores increases, and HybridNetty outperforms the other two in all scenarios. For example, Fig. 17b shows that in a quad-core environment with 5ms network latency, HybridNetty achieves almost 10X higher throughput than SingleT-Async and 35 percent higher throughput than NettyServer, demonstrating the effectiveness of our solution in resolving the context switch problem and the write-spin problem without significant write optimization overhead.
6 RELATED WORK
Synchronous Thread-Based Server Designs for High Concurrency Support. Many previous research efforts in this category [4], [47], [48], [49] share a similar goal: achieving the same or even better performance than the corresponding asynchronous event-driven counterparts. For instance, von Behren et al. [47] present a scalable user-space thread library, Capriccio [48], and demonstrate that threads can provide all the benefits of events with a simpler and more natural programming model. They further show that a Capriccio-based synchronous server, Knot, is able to outperform SEDA's event-driven server Haboob [3] under high concurrency workloads (up to tens of thousands of concurrent clients). However, Krohn et al. [4] show that the Capriccio thread library uses sophisticated stack management to mimic event handling in the underlying operating system. In addition, the authors of Capriccio themselves note that the thread interface still lacks flexibility compared to events [47]. These research efforts imply that the asynchronous event-driven architecture still plays an important role in constructing high-performance, high-efficiency Internet servers.
Plenty of research works show that the asynchronous event-driven architecture is considered a superior alternative to the thread-based design for high-performance systems [50], [51], [52]. For example, Cheng et al. adopt an asynchronous design in an I/O-efficient graph system to address poor I/O locality, efficient selective scheduling, and expensive synchronization cost. Optimizations for asynchronous event-driven servers can be further divided into two broad categories.
OS-Level Optimization for Asynchronous Event-Driven Servers. Research work in this category is mainly motivated by mitigating unnecessary system calls (e.g., event notification mechanisms such as select, poll, and epoll) and the associated CPU overhead [53], [54], [55] when facing high concurrency workloads. For example, Lever et al. [53] present a high-performance in-kernel web server, TUX, which eliminates user/kernel crossing overhead by delegating both event monitoring and event handling to a kernel thread. Han et al. [54] present a scalable and
efficient network I/O named MegaPipe, to lighten application layer socket-related system calls for message-oriented workloads.
Configuration Tuning for Asynchronous Event-Driven Servers. Previous work concludes that an asynchronous web server needs to be well tuned for the best performance [16], [50], [56], [57], [58], [59], [60], [61]. For example, Brecht et al. [56] study the impact of connection-accepting strategies (either aggressive or passive) on the performance of the asynchronous event-driven μServer. Pariag et al. [16] analyze the impact of the maximum number of simultaneous connections and of different socket I/O modes (either blocking or non-blocking) on the performance of different asynchronous architectures such as the single-threaded μServer and the staged design WatPipe. Google’s research group [57] reports that increasing TCP’s initial congestion window can significantly improve average HTTP response latency in high-latency, low-bandwidth networks. Our work in this paper is closely related to these research efforts. They focus on either OS-level optimization or configuration tuning for asynchronous event-driven servers, whereas our approach focuses on optimizing the server architecture itself; the two lines of work are therefore complementary, and the lessons learned from their work may also apply to our proposed solution.
7 Conclusions
In this paper, we show the impact of the event processing flow on the efficiency of asynchronous servers. Through extensive experiments using both realistic macro- and micro-benchmarks, we observe that an inefficient design of the event processing flow in an asynchronous server may cause high CPU overhead and result in significant performance loss compared with the thread-based counterpart. Concretely, an inefficient event processing flow may either cause high CPU context switch overhead between event handlers (see Section 3) or trigger the write-spin problem when dealing with large server responses (see Section 4). Network-related factors (e.g., network latency and the client receive buffer size) can further exacerbate the performance degradation of asynchronous servers. We present a hybrid solution that exploits the merits of different asynchronous event-driven architectures to adapt to dynamic runtime workload and network conditions (see Section 5). In general, our research results provide a solid building block for developing modern Internet servers that achieve both high performance and high resource efficiency in the cloud.
Acknowledgments
This research has been partially funded by the US National Science Foundation under CISE CNS grant 1566443, the Louisiana Board of Regents under grant LEQSF(2015-18)-RD-A-11, and gifts or grants from Fujitsu.
Shungeng Zhang received the BS degree from Huazhong University of Science & Technology, China, in 2014. He is working towards the PhD degree in the Department of EECS, Louisiana State University-Baton Rouge, where he is currently a research assistant in the computing lab. His research interests include performance and scalability analysis of Internet server architectures, aiming to achieve responsive web applications running in the cloud. He is a student member of the IEEE.
Qingyang Wang received the BSc and MSc degrees from Chinese Academy of Sciences and Wuhan University, in 2004 and 2007 and the PhD degree in computer science from Georgia Tech, in 2014. He is an assistant professor with the Department of EECS, Louisiana State University-Baton Rouge. His research is in distributed systems and cloud computing with a current focus on performance and scalability analysis of large-scale web applications (e.g., Amazon.com). He has published research projects with LSU on cloud performance measurements, scalable web application design, and automated system management in clouds. He is a member of the IEEE.
Yasuhioko Kanemasa received the BEng degree in computer engineering from Tokyo Institute of Technology, Tokyo, Japan, in 1996, and the MS degree in computer science from Japan Advanced Institute of Science and Technology, Nomi, Japan, in 1998. He has been working with Fujitsu Laboratories Ltd., Kawasaki, Japan since 1998 and is in the position of research manager currently. His research interests include data processing systems, application performance management, and cloud computing. He is a member of the IEEE, IPSJ, and DBJ.
Huasong Shan received the PhD degree in computer engineering from Louisiana State University-Baton Rouge, in 2017, and the BS and MS degrees in computer science and technology from Huazhong University of Science and Technology, China, in 2003 and 2006, respectively. He has been working with JD.com American Technologies Corporation, Mountain View, California, as a staff scientist. His research interests include distributed systems, applications of AI to systems and security, cloud computing, and storage systems. He is a student member of the IEEE.
Liting Hu received the BS degree in computer science from Huazhong University of Science and Technology, China, in 2003 and the PhD degree in computer science from Georgia Institute of Technology. Her research is in the general area of distributed systems and its intersection with big data analytics, resource management, power management, and system virtualization. She interned with IBM T.J. Watson Research Center, Intel Science and Technology Center for Cloud Computing, Microsoft Research Asia, VMware, and has been working closely with them. She is a member of the IEEE.
Interprocedural Path Profiling
David Melski and Thomas Reps
Computer Sciences Department, University of Wisconsin,
1210 West Dayton Street, Madison, WI, 53706, USA,
{melski, reps}@cs.wisc.edu
Abstract. In path profiling, a program is instrumented with code that counts the number of times particular path fragments of the program are executed. This paper extends the intraprocedural path-profiling technique of Ball and Larus to collect information about interprocedural paths (i.e., paths that may cross procedure boundaries).
1 Introduction
In path profiling, a program is instrumented with code that counts the number of times particular finite-length path fragments of the program’s control-flow graph — or observable paths — are executed. A path profile for a given run of a program consists of a count of how often each observable path was executed. This paper extends the intraprocedural path-profiling technique of Ball and Larus [3] to collect information about interprocedural paths (i.e., paths that may cross procedure boundaries).
Interprocedural path profiling is complicated by the need to account for a procedure’s calling context. There are really two issues:
- What is meant by a procedure’s “calling context”? Previous work by Ammons et al. [1] investigated a hybrid intra-/interprocedural scheme that collects separate intraprocedural profiles for a procedure’s different calling contexts. In their work, the “calling context” of procedure $P$ consists of the sequence of call sites pending on entry to $P$. In general, the sequence of pending call sites is an abstraction of any of the paths ending at the call on $P$.
The path-profiling technique presented in this paper profiles true interprocedural paths, which may include call and return edges between procedures, paths through pending procedures, and paths through procedures that were called in the past and completed execution. This means that, in general, our technique maintains finer distinctions than those maintained by the profiling technique of Ammons et al.
- How does the calling-context problem impact the profiling machinery? In the method presented in this paper, the “naming” of paths is carried out via an edge-labeling scheme that is in much the same spirit as the path-naming scheme of the Ball-Larus technique, where each edge is labeled with a number, and the “name” of a path is the sum of the numbers on the path’s edges. However, to handle the calling-context problem, in our method
edges are labeled with functions instead of values. In effect, the use of edge-functions allows edges to be numbered differently depending on the calling context.
At runtime, as each edge $e$ is traversed, the profiling machinery uses the edge function associated with $e$ to compute a value that is added to the quantity pathNum. At the appropriate program points, the profile is updated with the value of pathNum.
Because edge functions are always of a particularly simple form (i.e., linear functions), they do not complicate the runtime-instrumentation code greatly:
- The Ball-Larus instrumentation code performs 0 or 1 additions in each basic block; a hash-table lookup and 1 addition for each control-flow-graph backedge; 1 assignment for each procedure call; and a hash-table lookup and 1 addition for each return from a procedure.
- The technique presented in this paper performs 0 or 2 additions in each basic block; a hash-table lookup, 1 multiplication, and 4 additions for each control-flow-graph backedge; 2 multiplications and 2 additions for each procedure call; and 1 multiplication and 1 addition for each return from a procedure.
(The frequency with which our technique and the Ball-Larus technique can avoid performing any additions in a basic block should be about the same.) Thus, while interprocedural path profiling will involve more overhead than intraprocedural path profiling via the Ball-Larus technique, the overheads should not be prohibitive.
The specific technical contributions of this paper include:
- In the Ball-Larus scheme, a cycle-elimination transformation of the (in general, cyclic) control-flow graph is introduced for the purpose of numbering paths. We present the interprocedural analog of this transformation.
- In the case of intraprocedural path profiling, the Ball-Larus scheme produces a dense numbering of the observable paths within a given procedure: That is, in the transformed (i.e., acyclic) version of the control-flow graph for a procedure $P$, the sum of the edge labels along each path from $P$’s entry vertex to $P$’s exit vertex falls in the range $[0..\text{number of paths in } P - 1]$; and each number in that range corresponds to exactly one such path.
The techniques presented in this paper produce a dense numbering of interprocedural observable paths. The significance of the dense-numbering property is that it ensures that the numbers manipulated by the instrumentation code have the minimal number of bits possible.
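As background, the Ball-Larus dense numbering on an acyclic graph can be sketched as follows. This is a hypothetical Python rendering; the dict-of-successors graph encoding and all function names are our illustrative assumptions, not code from [3]:

```python
# Illustrative sketch of Ball-Larus dense path numbering on an acyclic
# flowgraph given as a dict mapping each vertex to its successor list.

def topo_order(succ):
    """Return vertices in topological order (entry first)."""
    seen, post = set(), []
    def dfs(v):
        if v in seen:
            return
        seen.add(v)
        for w in succ.get(v, ()):
            dfs(w)
        post.append(v)
    for v in succ:
        dfs(v)
    return list(reversed(post))

def ball_larus(succ, exit_v):
    """Compute numPaths[v] and an integer increment per edge so that the
    sum of increments along each entry-to-exit path is a distinct number
    in [0 .. numPaths[entry] - 1] (the dense-numbering property)."""
    num_paths, inc = {exit_v: 1}, {}
    for v in reversed(topo_order(succ)):      # reverse topological order
        if v == exit_v:
            continue
        total = 0
        for w in succ[v]:
            inc[(v, w)] = total               # this edge's increment
            total += num_paths[w]
        num_paths[v] = total
    return num_paths, inc
```

For a diamond-shaped graph A → {B, C}, B → D, C → D with exit D, numPaths[A] is 2 and the two entry-to-exit paths receive the distinct numbers 0 and 1.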
Our work encompasses two main algorithms for interprocedural path profiling, which we call context path profiling and piecewise path profiling, as well as several hybrid algorithms that blend aspects of the two main algorithms. Context path profiling is best suited for software-maintenance applications, whereas piecewise path profiling is better suited for providing information about interprocedural hot paths, and hence is more appropriate for optimization applications [4].
This paper focuses on context path profiling, and, except where noted, the term “interprocedural path profiling” means “context path profiling”. We chose to discuss the context-path-profiling algorithm because the method is simpler to present than the algorithm for piecewise path profiling. However, the same basic machinery is at the heart of both algorithms (see [4]).
The remainder of the paper is organized into four sections: Section 2 presents background material and defines terminology needed to describe our results. Section 3 gives an overview of interprocedural context path profiling. Section 4 describes the technical details of this approach. Section 5 discusses future work.
2 Background
2.1 Supergraph
As in many interprocedural program-analysis problems, we work with an interprocedural control-flow graph called a supergraph. Specifically, a program’s supergraph $G^*$ consists of a unique entry vertex $Entry_{\text{global}}$, a unique exit vertex $Exit_{\text{global}}$, and a collection of control-flow graphs (one for each procedure), one of which represents the program’s main procedure. For each procedure $P$, the flowgraph for $P$ has a unique entry vertex, $Entry_P$, and a unique exit vertex, $Exit_P$. The other vertices of the flowgraph represent statements and predicates of the program in the usual way, except that each procedure call in the program is represented in $G^*$ by two vertices, a call vertex and a return-site vertex. In addition to the ordinary intraprocedural edges that connect the vertices of the individual control-flow graphs, for each procedure call (represented, say, by call vertex $c$ and return-site vertex $r$) to procedure $P$, $G^*$ contains a call-edge, $c \rightarrow Entry_P$, and a return-edge, $Exit_P \rightarrow r$. The supergraph also contains the edges $Entry_{\text{global}} \rightarrow Entry_{\text{main}}$ and $Exit_{\text{main}} \rightarrow Exit_{\text{global}}$. An example of a supergraph is shown in Fig. 1(a).
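A minimal data-structure sketch of such a supergraph may help fix the terminology; the class and field names below are our own assumptions for illustration, not notation from the paper:

```python
# Illustrative-only sketch of a supergraph G*: intraprocedural edges plus
# the call- and return-edges that wire each call site (c, r) to the
# callee's Entry/Exit vertices. Vertex names are arbitrary strings here.
from dataclasses import dataclass, field

@dataclass
class Supergraph:
    intra_edges: set = field(default_factory=set)    # edges within one procedure
    call_edges: set = field(default_factory=set)     # (call vertex, Entry_P)
    return_edges: set = field(default_factory=set)   # (Exit_P, return-site vertex)

    def add_call(self, c, r, entry_p, exit_p):
        """Connect call vertex c and return-site vertex r to procedure P."""
        self.call_edges.add((c, entry_p))
        self.return_edges.add((exit_p, r))

# Example: main calls pow from two call sites, as in Fig. 1(a).
g = Supergraph()
g.add_call('c1', 'r1', 'Entry_pow', 'Exit_pow')
g.add_call('c2', 'r2', 'Entry_pow', 'Exit_pow')
```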
For purposes of profiling, we assume that all branches are logically independent, i.e., the result of one branch does not affect the ability to take any other branch. However, we do not wish to consider paths in $G^*$ that violate the nature of procedure calls (as the path in Fig. 1(b) does). We now develop a language for describing the set of paths in $G^*$ that we wish to consider valid. To do this, let each call site be assigned a unique index between 1 and $\text{NumCallSites}$, where $\text{NumCallSites}$ is the total number of call sites in the program. Then, for each call site with index $i$, let the call-edge from the call site be labeled with the symbol “$(_i$”, and let the return-edge to the call site be labeled with the symbol “$)_i$”. Let each edge of the form $Entry_{\text{global}} \rightarrow Entry_P$ be labeled with the symbol “$(_P$” and each edge of the form $Exit_P \rightarrow Exit_{\text{global}}$ be labeled with the symbol “$)_P$”. Let all other edges be labeled with $\epsilon$. Then a path $p$ in $G^*$ is a same-level valid path if and only if the string formed by concatenating the labels
\footnote{The vertices of a flowgraph can represent individual statements and predicates; alternatively, they can represent basic blocks.}
Fig. 1. (a) Schematic of the supergraph of a program in which main has two call sites on the procedure pow. (b) Example of an invalid path in a supergraph. (c) Example of a cycle that may occur in a valid path.
of p’s edges is derived from the non-terminal SLVP in the following context-free grammar:
\[
\begin{aligned}
&\text{SLVP} := \epsilon \mid \text{SLVP} \; \text{SLVP} \\
&\text{SLVP} := (_i \; \text{SLVP} \; )_i \quad \text{for } 1 \leq i \leq \text{NumCallSites} \\
&\text{SLVP} := (_P \; \text{SLVP} \; )_P \quad \text{for each procedure } P
\end{aligned}
\]
Here, \( \epsilon \) denotes the empty string. A same-level valid path \( p \) represents an execution sequence in which every call-edge is properly matched with a corresponding return-edge and vice versa.
We also need to describe paths that correspond to incomplete execution sequences in which not all of the procedure calls have been completed. (For example, a path that begins in a procedure \( P \), crosses a call-edge to a procedure \( Q \), and ends in \( Q \).) Such a path \( p \) is called an unbalanced-left path. The string formed by concatenating the labels on \( p \)'s edges must be derived from the non-terminal UnbalLeft in the following context-free grammar:
\[
\begin{aligned}
&\text{UnbalLeft} := \text{UnbalLeft} \; (_i \; \text{UnbalLeft} \quad \text{for } 1 \leq i \leq \text{NumCallSites} \\
&\text{UnbalLeft} := \text{UnbalLeft} \; (_P \; \text{UnbalLeft} \quad \text{for each procedure } P \\
&\text{UnbalLeft} := \text{SLVP}
\end{aligned}
\]
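The two grammars can also be checked operationally with a stack, which may make the distinction concrete. This is a hedged sketch; the tuple encoding of labels below is our own assumption, not the paper's representation:

```python
# Sketch: classify an edge-label string as same-level valid,
# unbalanced-left, or invalid. Labels are modeled as ('open', i) for
# "(_i", ('close', i) for ")_i", and None for epsilon-labeled edges.

def classify(labels):
    pending = []                       # stack of unmatched left parentheses
    for lab in labels:
        if lab is None:                # epsilon labels contribute nothing
            continue
        kind, i = lab
        if kind == 'open':
            pending.append(i)
        elif not pending or pending.pop() != i:
            return 'invalid'           # a close with no matching open
    # Unmatched opens are allowed: they represent pending calls.
    return 'same-level valid' if not pending else 'unbalanced-left'
```

For example, the label string "(_1 )_1" is same-level valid, "(_1 (_2" is unbalanced-left (two pending calls), and ")_1" alone is invalid.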
2.2 Modifying \( G^* \) to Eliminate Backedges and Handle Recursion
For purposes of numbering paths, the Ball-Larus technique modifies a procedure’s control-flow graph to remove cycles. This section describes the analogous step for interprocedural context profiling. Specifically, this section describes modifications to \(G^*\) that remove cycles from each procedure and from the call graph associated with \(G^*\). The resulting graph is called \(G^*_{\text{fin}}\). Each unbalanced-left path through \(G^*_{\text{fin}}\) defines an “observable path” that can be logged in an interprocedural profile. The number of unbalanced-left paths through \(G^*_{\text{fin}}\) is finite [4], which is the reason for the subscript “fin”.
In total, there are three transformations that are performed to create \(G^*_{\text{fin}}\). Fig. 3 shows the transformed graph \(G^*_{\text{fin}}\) that is constructed for the example program in Fig. 2 (the labels on the vertices and edges of this graph are explained in Section 3.1).
**Transformation 1:** For each procedure \(P\), add a special vertex \(G\text{Exit}_P\). In addition, add an edge \(G\text{Exit}_P \rightarrow \text{Exit}_{\text{global}}\).
The second transformation removes cycles in each procedure’s flow graph. As in the Ball-Larus technique, the procedure’s control-flow graph does not need to be reducible; backedges can be determined by a depth-first search of the control-flow graph.
**Transformation 2:** For each procedure \(P\), perform the following steps:
1. For each backedge target \(v\) in \(P\), add a surrogate edge \(\text{Entry}_P \rightarrow v\).
2. For each backedge source \(w\) in \(P\), add a surrogate edge \(w \rightarrow G\text{Exit}_P\).
3. Remove all of \(P\)'s backedges.
The third transformation “short-circuits” paths around recursive call sites, effectively removing cycles in the call graph. First, each call site is classified as recursive or nonrecursive. This can be done by identifying backedges in the call graph using depth-first search; the call graph need not be reducible.
**Transformation 3:** The following modifications are made:
1. For each procedure \(R\) called from a recursive call site, add the edges \(\text{Entry}_{\text{global}} \rightarrow \text{Entry}_R\) and \(\text{Exit}_R \rightarrow \text{Exit}_{\text{global}}\).
2. For each pair of vertices \(c\) and \(r\) representing a recursive call site that calls procedure \(R\), remove the edges \(c \rightarrow \text{Entry}_R\) and \(\text{Exit}_R \rightarrow r\), and add the summary edge \(c \rightarrow r\). (Note that \(c \rightarrow r\) is called a “summary” edge, but not a “surrogate” edge.)
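Transformation 2 in particular can be sketched compactly. The edge-set representation and names below are our illustrative assumptions; finding the backedges themselves (by depth-first search) is not shown:

```python
# Sketch of Transformation 2 for one procedure P: remove each backedge
# w -> v and add the surrogate edges Entry_P -> v and w -> GExit_P.

def transform2(edges, backedges, entry_p, gexit_p):
    """edges and backedges are sets of (src, dst) pairs."""
    result = set(edges) - set(backedges)
    for (w, v) in backedges:
        result.add((entry_p, v))      # surrogate edge to the backedge target
        result.add((w, gexit_p))      # surrogate edge from the backedge source
    return result
```

For a loop Entry → a → b → a, the backedge b → a is replaced by the surrogates Entry → a (already present in this tiny example) and b → GExit, leaving the flowgraph acyclic.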
As was mentioned above, the reason we are interested in these transformations is that each observable path—an item we log in an interprocedural path profile—corresponds to an unbalanced-left path through \(G^*_{\text{fin}}\). Note that the observable paths should not correspond to just the same-level valid paths through \(G^*_{\text{fin}}\): as a result of Transformation 2, an observable path \(p\) may end with \(... \rightarrow G\text{Exit}_P \rightarrow \text{Exit}_{\text{global}}\), leaving unclosed left parentheses. Furthermore, a path in \(G^*_{\text{fin}}\) that is not unbalanced-left cannot represent any feasible execution path in the original graph \(G^*\).
**Indirect Procedure Calls** The easiest way to handle indirect procedure calls is to treat them as recursive procedure calls, and not allow interprocedural paths that cross through an indirect procedure call. Another possibility does allow interprocedural paths to cross through an indirect procedure call: For purposes of numbering the paths in $G^*_{\text{fin}}$, each indirect procedure call through a procedure variable $fp$ is turned into an if-then-else chain that has a separate (direct) procedure call for each possible value of $fp$. Well-known techniques (e.g., flow-insensitive points-to analysis [2,6]) can be used to obtain a reasonable (but still conservative) estimate of the values that $fp$ may take on.
3 Overview
In this section, we illustrate, by means of the example shown in Fig. 2, some of the difficulties that arise in collecting an interprocedural path profile. Fig. 1(a) shows a schematic of the supergraph $G^*$ for this program. One difficulty that arises in interprocedural path profiling comes from interprocedural cycles. Even after the transformations described in Section 2.2 are performed (which break intra-procedural cycles and cycles due to recursion), $G^*$ will still contain cyclic paths, namely, those paths that enter a procedure from distinct call sites (see Fig. 1(c)). This complicates any interprocedural extension to the Ball-Larus technique, because the Ball-Larus numbering scheme works on acyclic graphs. There are several possible approaches to overcoming this difficulty:
- One possible approach is to create a unique copy of each procedure for each non-recursive call site and remove all recursive call and return edges. In our example program, we would create the copies $pow1$ and $pow2$ of the $pow$ function, as shown in Fig. 4. $pow1$ can be instrumented as if it had been inlined in $main$, and likewise for $pow2$. In many cases, this approach is impractical because of the resulting code explosion.
- A second approach—which is the one developed in this paper—is to parameterize the instrumentation in each procedure to behave differently for different calling contexts. In our example, $\text{pow}$ is changed to take an extra parameter. When $\text{pow}$ is called from the first call site in $\text{main}$, the value of the new parameter causes the instrumentation of $\text{pow}$ to mimic the behavior of the instrumentation of $\text{pow1}$ in the first approach above; when $\text{pow}$ is called from the second call site in $\text{main}$, the value of the new parameter causes $\text{pow}$'s instrumentation to mimic the behavior of the instrumentation of $\text{pow2}$. Thus, by means of an appropriate parameterization, we gain the advantages of the first approach without duplicating code.

Fig. 3. $G^*_{\text{fin}}$ for the code in Fig. 2. Dashed edges represent surrogate edges; the supergraph for the program in Fig. 2 includes the backedges $v_{13} \rightarrow v_4$ and $u_5 \rightarrow u_3$, which have been removed here by Transformation 2. Here the ordered pair $(a, b)$ represents the linear function $\lambda x.\,a \cdot x + b$. Each vertex $v$ is assigned the linear function $\psi_v$, which is shown in a rounded box. Each intraprocedural edge $e$ is assigned the linear function $\rho_e$, which is shown in a doubled, rounded box. Unlabeled intraprocedural edges do not have $\rho$ functions.
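The parameterization can be pictured with the linear edge functions of Fig. 3: each instrumented edge carries a pair $(a, b)$ encoding $\lambda x.\,a \cdot x + b$, and the extra parameter $x$ selects the calling context. The concrete numbers in this sketch are made up for illustration, not taken from Fig. 3:

```python
# Hedged sketch of parameterized instrumentation: an edge function
# rho_e is the pair (a, b) for the linear map x -> a*x + b, and the
# instrumented procedure receives a context value x at runtime.

def apply_edge(path_num, rho, x):
    a, b = rho
    return path_num + a * x + b       # this edge's context-dependent increment

rho = (4, 1)                          # one (illustrative) instrumented edge
# The same edge contributes differently in two calling contexts:
first_site = apply_edge(0, rho, 0)    # x = 0 passed by the first call site
second_site = apply_edge(0, rho, 1)   # x = 1 passed by the second call site
```

Here the first context adds 1 and the second adds 5, mimicking the distinct Ball-Larus increments that the two inlined copies $\text{pow1}$ and $\text{pow2}$ would have received.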
Section 3.1 gives a high-level description of our path-numbering technique and Section 4 gives a detailed description of the profiling algorithm.
Fig. 4. Modified version of $G^*_{\text{fin}}$ from Fig. 3 with two copies of the procedure $\text{pow}$, one per call site in $\text{main}$; vertices are labeled with $\text{numPaths}$ values and edges with their Ball-Larus increments.
3.1 Numbering Unbalanced-Left Paths
Extending the Ball-Larus technique to number unbalanced-left paths in $G_{\text{fin}}^*$ is complicated by the following facts:
1. While the number of unbalanced-left paths is finite, an unbalanced-left path may contain cycles (such as those in Fig. 1(c)).
2. The number of paths that may be taken from a vertex $v$ is dependent on the path taken to reach $v$: for a given path $p$ to vertex $v$, not every path $q$ from $v$ forms an unbalanced-left path when concatenated with $p$.
These facts mean that it is not possible to assign a single integer value to each vertex and edge of $G_{\text{fin}}^*$ as the Ball-Larus technique does. Instead, each occurrence of an edge $e$ in a path $p$ will contribute to the path number of $p$, but the value that an occurrence of $e$ contributes will be dependent on the part of $p$ that precedes that occurrence of $e$. In particular, $e$'s contribution is determined by the sequence of unmatched left parentheses that precede the occurrence of $e$ in $p$. (The sequence of unmatched left parentheses represents a calling context of the procedure containing $e$.)
Consider the example shown in Figs. 2 and 3. Notice that $G_{\text{fin}}^*$ in Fig. 3 contains cyclic, unbalanced-left paths. For example, the following path is a cycle from $u_1$ to $u_1$ that may appear as a subpath of an unbalanced-left path:
$$u_1 \rightarrow u_3 \rightarrow u_7 \rightarrow u_6 \rightarrow v_2 \rightarrow v_3 \rightarrow v_4 \rightarrow v_5 \rightarrow v_6 \rightarrow v_7 \rightarrow u_1.$$
Fig. 4 shows a modified version of $G_{\text{fin}}^*$ with two copies of the procedure $\text{pow}$, one for each call site to $\text{pow}$ in $\text{main}$. This modified graph is acyclic and therefore amenable to the Ball-Larus numbering scheme: Each vertex $v$ in Fig. 4 is labeled with $\text{numPaths}[v]$, the number of paths from $v$ to $\text{Exit}_{\text{global}}$; each edge $e$ is labeled with its Ball-Larus increment [3]. Note that there is a one-to-one and onto mapping between the paths through the graph in Fig. 4 and the unbalanced-left paths through the graph in Fig. 3. This correspondence can be used to number the unbalanced-left paths in Fig. 3: each unbalanced-left path $p$ in Fig. 3 is assigned the path number of the corresponding path $q$ in Fig. 4.
The following two observations capture the essence of our technique:
- Because the labeling passes of the Ball-Larus scheme work in reverse topological order, the values assigned to the vertices and edges of a procedure are dependent upon the values assigned to the exit vertices of the procedure. For instance, in Fig. 4, the values assigned to the vertices and edges of $\text{pow1}$ are determined by the values assigned to $\text{Exit}_{\text{pow1}}$ and $\text{GExit}_{\text{pow1}}$ (i.e., the values 5 and 1, respectively), while the values assigned to the vertices and edges of $\text{pow2}$ are determined by the values assigned to $\text{Exit}_{\text{pow2}}$ and $\text{GExit}_{\text{pow2}}$ (i.e., the values 1 and 1, respectively). Note that $\text{numPaths}[\text{GExit}_P] = 1$ for any procedure $P$ (since the only path from $\text{GExit}_P$ to $\text{Exit}_{\text{global}}$ is the path consisting of the edge $\text{GExit}_P \rightarrow \text{Exit}_{\text{global}}$). Thus, the values on the edges and the vertices of $\text{pow1}$ differ from some of the values on the corresponding edges and vertices of $\text{pow2}$ because $\text{numPaths}[\text{Exit}_{\text{pow1}}] \neq \text{numPaths}[\text{Exit}_{\text{pow2}}]$.
- Given that a program transformation based on duplicating procedures is undesirable, a mechanism is needed that assigns vertices and edges different numbers depending on the calling context. To accomplish this, each vertex \( u \) of each procedure \( P \) is assigned a linear function \( \psi_u \) that, when given a value for \( \text{numPaths}[\text{Exit}_P] \), returns the value of \( \text{numPaths}[u] \). Similarly, each edge \( e \) of each procedure \( P \) is assigned a linear function \( \rho_e \) that, when given a value for \( \text{numPaths}[\text{Exit}_P] \), returns the Ball-Larus value for \( e \).
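The Ball-Larus labeling pass referred to in the first observation can be sketched in a few lines. This is our own illustrative helper, not the paper's code (the graph type, function name, and the explicit reverse-topological-order argument are assumptions): numPaths[v] is 1 at a vertex with no successors and otherwise the sum over v's successors.

```cpp
#include <map>
#include <string>
#include <vector>

using Graph = std::map<std::string, std::vector<std::string>>;

// numPaths[v] = number of paths from v to the exit, computed in reverse
// topological order; vertices with no successors (the exit) get 1.
std::map<std::string, long> ballLarusNumPaths(
        const Graph& g, const std::vector<std::string>& revTopo) {
    std::map<std::string, long> np;
    for (const std::string& v : revTopo) {
        auto it = g.find(v);
        if (it == g.end() || it->second.empty()) { np[v] = 1; continue; }
        long sum = 0;
        for (const std::string& w : it->second) sum += np[w];  // sum over successors
        np[v] = sum;
    }
    return np;
}
```

On a diamond-shaped graph A → {B, C} → D, this yields numPaths[A] = 2, matching the two paths from A to D.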
Fig. 3 shows \( G^*_{\text{fin}} \) labeled with the appropriate \( \psi \) and \( \rho \) functions. Note that we have the desired correspondence between the linear functions in Fig. 3 and the integer values in Fig. 4. For example, in Fig. 3 vertex \( u_1 \) has the function \( \psi_{u_1} = \lambda x.2 \cdot x + 2 \). This function, when supplied with the value \( \text{numPaths}[\text{Exit}_{\text{pow1}}] = 5 \) from Fig. 4, evaluates to 12, which is equal to \( \text{numPaths}[u'_1] \) in Fig. 4. However, when \( \lambda x.2 \cdot x + 2 \) is given the value \( \text{numPaths}[\text{Exit}_{\text{pow2}}] = 1 \), it evaluates to 4, which is equal to \( \text{numPaths}[u''_1] \) in Fig. 4.
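Because each label here is a linear function \( \lambda x.a \cdot x + b \), it can be stored as a coefficient pair. A minimal sketch (the `LinearFn` struct and `psi_u1` helper are our own names, not the paper's) that checks the two evaluations of \( \psi_{u_1} \) above:

```cpp
// A linear function  lambda x. a*x + b,  stored as its coefficient pair.
struct LinearFn {
    long a, b;
    long operator()(long x) const { return a * x + b; }
};

// psi for vertex u1 of Fig. 3:  lambda x. 2*x + 2.
inline LinearFn psi_u1() { return LinearFn{2, 2}; }
```

Evaluating at \( \text{numPaths}[\text{Exit}_{\text{pow1}}] = 5 \) gives 12, and at \( \text{numPaths}[\text{Exit}_{\text{pow2}}] = 1 \) gives 4, as in Fig. 4.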
To collect the number associated with an unbalanced-left path \( p \) in \( G^*_{\text{fin}} \), as \( p \) is traversed, each edge \( e \) contributes a value to \( p \)'s path number. As illustrated below, the value that \( e \) contributes is dependent on the path taken to \( e \):

**Example 1.** Consider the edge \( u_1 \rightarrow u_3 \) in \( G^*_{\text{fin}} \), and an unbalanced-left path \( s \) that begins with the following path prefix:
\[
\text{Entry}_{\text{global}} \rightarrow v_1 \rightarrow v_4 \rightarrow v_5 \rightarrow u_1 \rightarrow u_3 \tag{1}
\]
In this case, the edge \( u_1 \rightarrow u_3 \) contributes a value of 6 to \( s \)'s path number. To see that this is the correct value, consider the path prefix in Fig. 4 that corresponds to (1):
\[
\text{Entry}_{\text{global}} \rightarrow v_1 \rightarrow v_4 \rightarrow v_5 \rightarrow u'_1 \rightarrow u'_3
\]
In Fig. 4, the value on the edge \( u'_1 \rightarrow u'_3 \) is 6.
In contrast, in an unbalanced-left path \( t \) that begins with the path prefix
\[
\text{Entry}_{\text{global}} \rightarrow v_1 \rightarrow v_4 \rightarrow v_5 \rightarrow v_9 \rightarrow v_{10} \rightarrow u_1 \rightarrow u_3 \tag{2}
\]
the edge \( u_1 \rightarrow u_3 \) will contribute a value of 2 to \( t \)'s path number. (To see that this is the correct value, consider the path prefix in Fig. 4 that corresponds to (2).)
It can even be the case that an edge \( e \) occurs more than once in a path \( p \), with each occurrence contributing a different value to \( p \)'s path number. For example, there are some unbalanced-left paths in \( G^*_{\text{fin}} \) in which the edge \( u_1 \rightarrow u_3 \) appears twice, contributing a value of 6 for the first occurrence and a value of 2 for the second occurrence.
To determine the value that an occurrence of the edge \( e \) should contribute to a path number, the profiling instrumentation will use the function \( \rho_e \) and the appropriate value for \( \text{numPaths}[\text{Exit}_P] \), where \( P \) is the procedure containing \( e \). Thus, as noted above, an occurrence of the edge \( u_1 \rightarrow u_3 \) may contribute the value \( (\lambda x.x + 1)(1) = 2 \) or the value \( (\lambda x.x + 1)(5) = 6 \) to a path number, depending on the path prior to the occurrence of \( u_1 \rightarrow u_3 \).
Figs. 5 and 6 show the program from Fig. 2 with additional instrumentation code — based on the linear functions in Fig. 3 — that collects an interprocedural path profile. The output from the instrumented program is as follows:
| Path | Count | Path | Count | Path | Count | Path | Count |
|-----:|------:|-----:|------:|-----:|------:|-----:|------:|
| 0 | 0 | 1 | 0 | 2 | 0 | 3 | 0 |
| 9 | 0 | 10 | 0 | 11 | 0 | 12 | 0 |
| 18 | 9 | 19 | 0 | 20 | 0 | 21 | 0 |
| 27 | 3 | 28 | 3 | 29 | 6 | 30 | 3 |
Section 4 presents an algorithm that assigns linear functions to the vertices and edges of $G^*_{fin}$ directly, without referring to a modified version of $G^*_{fin}$, like the one shown in Fig. 4, in which procedures are duplicated.
### 3.2 What Do You Learn From a Profile of Unbalanced-Left Paths?
Before examining the details of interprocedural path profiling, it is useful to understand the information that is gathered in this approach:
- Each unbalanced-left path $p$ through $G^*_{fin}$ from $Entry_{global}$ to $Exit_{global}$ can be thought of as consisting of a context-prefix and an active-suffix. The active-suffix $q''$ of $p$ is a maximal-size, surrogate-free subpath at the tail of $p$ (though the active-suffix may contain summary edges of the form $c \rightarrow r$, where $c$ and $r$ represent a recursive call site). The context-prefix $q'$ of $p$ is the prefix of $p$ that ends at the last surrogate edge before $p$'s active suffix. (The context-prefix $q'$ can be the empty path from $Entry_{global}$ to $Entry_{global}$.)
- The counter associated with the unbalanced-left path $p$ counts the number of times during a program's execution that the active-suffix of $p$ occurs in the context summarized by $p$'s context-prefix.
```cpp
/* profile[] and the instrumented pow() are declared in Fig. 5. */
int main() {
    unsigned int pathNum = 0;
    unsigned int pathNumOnEntry = 0;
    unsigned int numValidCompsFromExit = 1;
    double t, result = 0.0;
    int i = 1;
    while (i < 18) {
        if (i % 2 == 0) {
            t = pow(i, 2, pathNum, 0 * numValidCompsFromExit * 5);
            /* On entry to pow: pathNum is 0 or 18; fourth arg. always 5 */
            /* On exit from pow: pathNum is 1, 7, 19, or 25 */
            result += t;
        } else
            pathNum += 0 * numValidCompsFromExit * 12;
        if (i % 3 == 0) {
            t = pow(i, 2, pathNum, 0 * numValidCompsFromExit * 1);
            /* On entry to pow: pathNum in 2, 3, 8, 9, 13, 14, 20, 21,
               26, 27, 31, or 32; fourth arg. always 1 */
            /* On exit from pow: pathNum in 1, 7, 12, 19, 25, or 30 */
            profile[pathNum]++;  /* From edge v9 -> v13 */
            /* From surrogate edge v1 -> v4: */
            pathNum = 1 * numValidCompsFromExit + 17 * pathNumOnEntry;
        }
        i++;
    }
    pathNum += 0 * numValidCompsFromExit * 17;  /* From edge v4 -> v15 */
    profile[pathNum]++;
    for (i = 0; i < 36; i++) {
        cout.width(3); cout << i << " ";
        cout.width(2); cout << profile[i] << " ";
        if ((i + 1) % 9 == 0) cout << endl;
    }
    return 0;
}
```
Fig. 6. Part of the instrumented version of the program from Fig. 2. Instrumentation code is shown in italics. (See also Fig. 5.)
**Example 2.** Consider the path in Fig. 3 with path number 24:

\[
24: \text{Entry}_{\text{global}} \rightarrow v_1 \rightarrow v_4 \rightarrow v_5 \rightarrow v_6 \rightarrow u_1 \rightarrow u_3 \rightarrow u_4 \rightarrow u_6 \rightarrow \text{Exit}_{\text{global}}
\]

This path consists of the context-prefix \( \text{Entry}_{\text{global}} \rightarrow v_1 \rightarrow v_4 \rightarrow v_5 \rightarrow v_6 \rightarrow u_1 \) and the active-suffix \( u_3 \rightarrow u_4 \rightarrow u_6 \). The output obtained from running the program shown in Figs. 5 and 6 indicates that the active-suffix was executed 9 times in the context summarized by the context-prefix. Note that the context-prefix not only summarizes the call site in main from which \( \text{pow} \) was called, but also the path within main that led to that call site. In general, a context-prefix (in an interprocedural technique) summarizes not only a sequence of procedure calls (i.e., the calling context), but also the intraprocedural paths taken within each procedure in the sequence.
4 Interprocedural Path Profiling
In this section, we discuss the $\psi$ and $\rho$ functions that serve as replacements for the vertex and edge values of the Ball-Larus technique.
4.1 Assigning $\psi$ and $\rho$ Functions
**Solving for $\psi$ Functions** For a vertex $v$ in procedure $P$, the function $\psi_v$ takes the number of valid completions from $\text{Exit}_P$ (for an unbalanced-left path $p$ to $\text{Entry}_P$, concatenated with any same-level valid path to $\text{Exit}_P$) and returns the number of valid completions from $v$ (for the path $p$ concatenated with any same-level valid path to $v$).
We can find the $\psi$ functions by setting up and solving a collection of equations. For an exit vertex $\text{Exit}_P$, $\psi_{\text{Exit}_P}$ is the identity function: $\psi_{\text{Exit}_P} = \text{id}$. For a vertex of the form $G\text{Exit}_P$, we have the equation $\psi_{G\text{Exit}_P} = \lambda x.1$. This equation reflects the fact that the number of valid completions from $G\text{Exit}_P$ is always 1, regardless of the number of valid completions from $\text{Exit}_P$. For a call vertex $c$ to a procedure $Q$ associated with the return-site vertex $r$, where $c$ and $r$ represent a non-recursive call site, we have the equation $\psi_c = \psi_{\text{Entry}_Q} \circ \psi_r$. For all other cases, for a vertex $m$, we have the equation $\psi_m = \sum_{n \in \text{succ}(m)} \psi_n$, where $\text{succ}(m)$ denotes the successors of $m$, and the addition $f + g$ of function values $f$ and $g$ is defined to be the function $\lambda x.f(x) + g(x)$.
Because $\text{id}(= \lambda x.x)$ and $\lambda x.1$ are both linear functions of one variable, and the space of linear functions of one variable is closed under function composition and function addition, each $\psi$ function is a linear function of one variable. Furthermore, each $\psi$ function $\lambda x.a \cdot x + b$ can be represented as an ordered pair $(a, b)$.
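The closure property can be checked directly on the coefficient-pair representation: composing or pointwise-adding functions of the form \( \lambda x.a \cdot x + b \) again yields such a function. A sketch with our own helper names (not the paper's code):

```cpp
// A linear function  lambda x. a*x + b.
struct LinearFn {
    long a, b;
    long operator()(long x) const { return a * x + b; }
};

// (f o g)(x) = f(g(x)) = (a_f * a_g) * x + (a_f * b_g + b_f)
inline LinearFn compose(LinearFn f, LinearFn g) {
    return LinearFn{f.a * g.a, f.a * g.b + f.b};
}

// (f + g)(x) = f(x) + g(x) = (a_f + a_g) * x + (b_f + b_g)
inline LinearFn add(LinearFn f, LinearFn g) {
    return LinearFn{f.a + g.a, f.b + g.b};
}

const LinearFn ID{1, 0};   // psi_{Exit_P} = id = lambda x. x
const LinearFn ONE{0, 1};  // psi_{GExit_P} = lambda x. 1
```

Each equation of the previous paragraph can thus be solved purely by arithmetic on the \( (a, b) \) pairs.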
To find $\psi$ functions that satisfy the above equations, each procedure $P$ is visited in reverse topological order of the call graph, and each vertex $v$ in $P$ is visited in reverse topological order of $P$'s control-flow graph. (For purposes of ordering the vertices of a procedure $P$, a return-site vertex $r$ is considered to be a successor of its associated call vertex $c$.) As each vertex $v$ is visited, the appropriate equation given above is used to determine the function $\psi_v$.
The order of traversal guarantees that when vertex $v$ is visited, all of the functions that are needed to determine $\psi_v$ will be available. This follows from the fact that the call graph associated with $G^*_P$ is acyclic and the fact that the flow graph of each procedure in $G^*_P$ is acyclic. (The fact that the call graph and flow graphs are acyclic also explains why each vertex needs to be visited only once.)
**Solving for $\rho$ functions** Each intraprocedural edge \( e \) in procedure \( P \) is assigned a linear function \( \rho_e \). The function \( \rho_e \), when supplied with the number of valid completions from \( \text{Exit}_P \) (for an unbalanced-left path \( p \) to \( \text{Entry}_P \), concatenated with any same-level valid path from \( \text{Entry}_P \) to \( \text{Exit}_P \)), returns the value that \( e \) contributes (to the path number of the path \( p \) concatenated with any same-level valid path to \( e \)).

---

² The equations for the $\psi$ functions closely resemble the $\phi$ functions of Sharir and Pnueli's functional approach to interprocedural data-flow analysis [4, 5].
Let \( v \) be an intraprocedural vertex that is the source of one or more intraprocedural edges. (Note that \( v \) cannot be a call vertex for a nonrecursive call site, nor have the form \( \text{Exit}_P \), nor have the form \( G\text{Exit}_P \).) Let \( w_1 \ldots w_k \) be the successors of \( v \). Then we make the following definition:
\[
\rho_{v \to w_i} = \begin{cases}
\lambda x.0 & \text{if } i = 1 \\
\sum_{j < i} \psi_{w_j} & \text{otherwise}
\end{cases}
\tag{3}
\]
Clearly, each \( \rho \) function is a linear function of one variable. Furthermore, (3) can be used to find each \( \rho \) function when the \( \psi \) functions are known.
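Definition (3) is just a running (prefix) sum of the \( \psi \) functions of the earlier successors. A sketch, reusing the coefficient-pair representation (the function names are ours, not the paper's):

```cpp
#include <vector>

struct LinearFn {
    long a, b;
    long operator()(long x) const { return a * x + b; }
};

// Given the psi functions of v's successors w_1..w_k (in order), return
// rho_{v->w_i} for each i, per definition (3):
//   rho_{v->w_1} = lambda x. 0,   rho_{v->w_i} = sum_{j<i} psi_{w_j}.
std::vector<LinearFn> rhoForSuccessors(const std::vector<LinearFn>& psi) {
    std::vector<LinearFn> rho;
    LinearFn running{0, 0};  // lambda x. 0
    for (const LinearFn& p : psi) {
        rho.push_back(running);  // sum of psi over earlier successors
        running = LinearFn{running.a + p.a, running.b + p.b};
    }
    return rho;
}
```

If, say, the first successor has \( \psi = \lambda x.x + 1 \), the edge to the second successor gets \( \rho = \lambda x.x + 1 \), which reproduces the contributions 6 (at \( x = 5 \)) and 2 (at \( x = 1 \)) of the edge \( u_1 \rightarrow u_3 \) discussed above.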
4.2 Computing Values for Interprocedural Edges
Unlike intraprocedural edges, an interprocedural edge \( e \) always contributes the same value, independent of the path taken to \( e \) [4]. For interprocedural edges that are not of the form \( \text{Entry}_{\text{global}} \rightarrow \text{Entry}_P \), this value is always 0.
For each edge \( \text{Entry}_{\text{global}} \rightarrow \text{Entry}_P \) and each unbalanced-left path \( p \) that starts with this edge, we define the integer value \( \text{edgeValue}[\text{Entry}_{\text{global}} \rightarrow \text{Entry}_P] \) to be the value that \( \text{Entry}_{\text{global}} \rightarrow \text{Entry}_P \) contributes to \( p \)'s path number. To find the \( \text{edgeValue} \) values, it is necessary to use a fixed (but arbitrary) ordering of the edges of the form \( \text{Entry}_{\text{global}} \rightarrow \text{Entry}_P \). For convenience, we number each edge \( \text{Entry}_{\text{global}} \rightarrow \text{Entry}_P \) according to this ordering, and use the notation \( Q_i \) to refer to the procedure that is the target of the \( i \)th edge. We have the following:
\[
\text{edgeValue}[\text{Entry}_{\text{global}} \rightarrow \text{Entry}_{Q_i}] = \begin{cases}
0 & \text{if } i = 0 \\
\sum_{j < i} \psi_{\text{Entry}_{Q_j}}(1) & \text{otherwise}
\end{cases}
\tag{4}
\]
4.3 Calculating the Path Number of an Unbalanced-Left Path
In this section, we show how to calculate the path number of an unbalanced-left path \( p \) through \( G_{\text{fin}}^* \), from \( \text{Entry}_{\text{global}} \) to \( \text{Exit}_{\text{global}} \). This is done during a single traversal of \( p \) that sums the values contributed by each edge \( e \) for each path prefix \( p' \) such that \( [p' \parallel e] \) is a prefix of \( p \).
For an interprocedural edge \( e \), the value \( \text{edgeValue}[e] \) contributed by \( e \) is calculated as described in Section 4.2. For an intraprocedural edge \( e \) in procedure \( P \), the value contributed by \( e \) (for the path \( p' \) leading to \( e \)) is calculated by applying the function \( \rho_e \) to the number of valid completions from \( \text{Exit}_P \). (The number of valid completions from \( \text{Exit}_P \) is determined by the path taken to \( \text{Entry}_P \)—in this case a prefix of \( p' \).)
We now come to the crux of the matter: how to determine the contribution of an edge \( e \) when the edge is traversed, without incurring a cost for inspecting the path \( p' \) taken to \( e \). The trick is that, as \( p \) is traversed, we maintain a variable, numValidCompsFromExit, that holds the number of valid completions from the exit vertex \( \text{Exit}_Q \) of the procedure \( Q \) that is currently being visited. The number of valid completions from \( \text{Exit}_Q \) is uniquely determined by \( p' \): specifically, by the sequence of unmatched left parentheses in \( p' \). The value numValidCompsFromExit is maintained by the use of a stack, NVCStack, and the \( \psi \) functions for return-site vertices. The following steps describe the algorithm to compute the path number for a path \( p \) (this number is accumulated in the variable pathNum):
- When the traversal of \(p\) is begun, numValidCompsFromExit is set to 1. This indicates that there is only one valid completion from \( \text{Exit}_R \), where \(R\) is the first procedure that \(p\) enters; if \(p\) reaches the exit of the first procedure it enters, then it must follow the edge \( \text{Exit}_R \rightarrow \text{Exit}_{\text{global}} \). The value of pathNum is initialized to the value edgeValue[\(e\)] on the first edge \(e\) of \(p\) (see Section 4.2).
- As the traversal of \(p\) crosses a call-edge \(c \rightarrow \text{Entry}_T\) from a procedure \(S\) to a procedure \(T\), the value of numValidCompsFromExit is pushed on the stack, and is updated to \(\psi_r(\text{numValidCompsFromExit})\), where \(r\) is the return-vertex in \(S\) that corresponds to the call-vertex \(c\). This reflects the fact that the number of valid completions from Exit\(_T\) is equal to the number of valid completions from \(r\).
- As the traversal of \(p\) crosses a return-edge Exit\(_T\) \(\rightarrow r\) from a procedure \(T\) to a procedure \(S\), the value of numValidCompsFromExit is popped from the top of the stack. This reflects the fact that the number of valid completions from the exit of the calling procedure \(S\) is unaffected by the same-level valid path through the called procedure \(T\).
- As the traversal of \(p\) crosses an intraprocedural edge \(e\), the value of pathNum is incremented by \(\rho_e(\text{numValidCompsFromExit})\).
- At the end of the traversal of \(p\), pathNum holds the path number of \(p\).
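The steps above can be simulated directly on an edge trace. A toy sketch (our own, not the paper's instrumentation; the \( \psi \) and \( \rho \) values used below are illustrative) that maintains pathNum and numValidCompsFromExit with an explicit NVCStack:

```cpp
#include <stack>

struct LinearFn {
    long a, b;
    long operator()(long x) const { return a * x + b; }
};

// State carried along the traversal of a path p.
struct Traversal {
    long pathNum;
    long nvc;                   // numValidCompsFromExit
    std::stack<long> nvcStack;  // NVCStack

    // Start with one valid completion and the first edge's edgeValue.
    explicit Traversal(long firstEdgeValue)
        : pathNum(firstEdgeValue), nvc(1) {}

    // Crossing a call-edge c -> Entry_T; psi_r is the function of the
    // return-site vertex r corresponding to c.
    void call(LinearFn psi_r) { nvcStack.push(nvc); nvc = psi_r(nvc); }

    // Crossing a return-edge Exit_T -> r: restore the caller's value.
    void ret() { nvc = nvcStack.top(); nvcStack.pop(); }

    // Crossing an intraprocedural edge e with function rho_e.
    void intra(LinearFn rho_e) { pathNum += rho_e(nvc); }
};
```

Each call-edge pushes and transforms numValidCompsFromExit, each return-edge pops it, and each intraprocedural edge adds \( \rho_e(\text{numValidCompsFromExit}) \) to pathNum, exactly as in the steps above.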
### 4.4 Runtime Environment for Collecting a Profile
We are now ready to describe the instrumentation code that is introduced to collect an interprocedural path profile. In essence, the instrumentation code threads the algorithm described in Section 4.3 into the code of the instrumented program. Thus, the variables pathNum and numValidCompsFromExit become program variables. There is no explicit stack variable corresponding to NVCStack; instead, numValidCompsFromExit is passed as a value-parameter to each procedure and the program's execution stack is used in place of NVCStack. The instrumentation also makes use of two local variables in each procedure:
- pathNumOnEntry stores the value of pathNum on entry to a procedure. When an intraprocedural backedge is traversed in a procedure \(P\), the instrumentation code increments the count associated with the current observable path and begins recording a new observable path that has the context-prefix indicated by the value of pathNumOnEntry.
- pathNumBeforeCall stores the value of pathNum before a recursive procedure call is made. When the recursive procedure call is made, the instrumentation begins recording a new observable path. When the recursive call returns, the instrumentation uses the value in pathNumBeforeCall to resume recording the observable path that was executing before the call was made.
Figs. 5 and 6 show an instrumented version of the code in Fig. 2. Reference [4] gives a detailed description of the instrumentation used to collect an interprocedural path profile and describes how the instrumentation can be made more efficient than the code shown in Figs. 5 and 6.
5 Future Work
We are currently in the process of implementing the algorithm described in the paper, and thus do not yet have performance figures to report. The main reasons for believing that the technique described (or a variation on it) will prove to be practical are:
- The Ball-Larus technique for intraprocedural profiling has very low overhead (31% on the SPEC benchmarks [3]). As discussed in the Introduction, although interprocedural path profiling involves more overhead than the Ball-Larus technique, the additional overhead should not be prohibitive.
- In the worst case, the number of paths through a program is exponential in the number of branch statements \( b \), and thus the number of bits required to represent paths is linear in \( b \). However, as in the Ball-Larus approach, it is possible to control the explosion in the number of paths by altering \( G_{fin}^* \) to remove paths from it (and adjusting the instrumentation code accordingly).
There are a variety of techniques that can be applied without having to fall back on pure intraprocedural profiling [4].
Acknowledgements
This work was supported in part by the NSF under grants CCR-9625667 and CCR-9619219, by an IBM Partnership Award, by a Vilas Associate Award from the Univ. of Wisconsin, and by the "Cisco Systems Wisconsin Distinguished Graduate Fellowship".
References
A fast and anti-matchability matching algorithm for content-based publish/subscribe systems
Shiyu Qian a,b, Jian Cao a,b, Weichao Mao a,b, Yanmin Zhu b, Jiadi Yu b, Minglu Li b, Jie Wang c
a Shanghai Institute for Advanced Communication and Data Science, Shanghai Jiao Tong University, China
b Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
c Department of Civil and Environmental Engineering, Stanford University, USA
ARTICLE INFO
Article history:
Received 3 May 2018
Revised 23 October 2018
Accepted 3 December 2018
Available online 4 December 2018
Keywords:
Matching algorithm
Matchability
Performance
Publish/Subscribe system.
ABSTRACT
The content-based publish/subscribe system is a flexible many-to-many communication middleware that meets the demands of many large-scale distributed applications. It is well known that event matching is a fundamental component of the content-based publish/subscribe system. When designing matching algorithms, matching speed is a major objective being pursued. Moreover, through theoretical analysis and experimental verification, we discover that the matching speed of most existing matching algorithms is affected by the subscriptions’ matchability which is defined as the matching probability of subscriptions with events. Nevertheless, this problem has not been considered in existing matching algorithms. To address this problem, we propose REIN (REctangle INtersection), a fast and anti-matchability matching algorithm for content-based publish/subscribe systems. REIN is a fast matching algorithm, following the conventional design objective of pursuing a high matching speed. Furthermore, due to the utilization of a negative searching strategy that aims to filter out unmatched subscriptions in the matching process, the matching speed of REIN is not affected by the subscriptions’ matchability, but rather is improved. To evaluate the performance of REIN, comprehensive experiments are conducted. The experiment results show that REIN not only has an excellent matching performance, but also possesses a beneficial anti-matchability feature.
© 2018 Elsevier B.V. All rights reserved.
1. Introduction
The content-based publish/subscribe system is a flexible many-to-many communication middleware that meets the demands of many large-scale distributed applications, such as information filtering, selective content dissemination, location-based services, and workload monitoring and management. This communication middleware is attractive in that it achieves full decoupling of the communication parties in time, space and synchronization [1]. In order to distribute workload and be scalable, content-based publish/subscribe systems often use a network of brokers to route subscriptions and forward events. Each broker needs to check the high volume of events against a large number of subscriptions, namely performing event matching, to properly forward events to interested subscribers. It is well known that event matching is a fundamental component of any content-based publish/subscribe system.
Since event matching has an effect on the overall performance of large-scale content-based publish/subscribe systems, different techniques have been proposed to improve the matching speed over the last two decades. For example, efficient data structures have been designed to index subscriptions for the purpose of speeding up event matching [2–7]. The representative data structures include matching tree [8], matching table [3,6], binary decision diagram [9,10], BE-Tree [11], OpIndex [12] and bloom filter [13]. In addition, it is clear that matching speed can be promoted by reducing the number of subscriptions through the covering, subsumption, merging and summarization of subscriptions [10,14–17]. Therefore, it is natural that matching speed is one of the major objectives being pursued when designing matching algorithms. Usually, matching time is used to measure matching speed, which is defined as the running time spent on matching an event against a set of subscriptions.
Generally, there are two searching strategies that can be adopted by matching algorithms to find matching subscriptions from candidates: positive searching and negative searching. Positive searching directly locates matching subscriptions, ignoring unmatched ones in the process of event matching. On the contrary,
negative searching first identifies unmatching subscriptions. With
the availability of all subscriptions and unmatching ones, it is easy
to obtain matching subscriptions indirectly.
Existing matching algorithms usually utilize a positive searching
strategy to pursue a high matching speed, such as SIENA [3], TAM
A [6] and OplIndex [12]. However, one side effect of positive searching
is that the performance of the matching algorithms is affected by
the subscriptions’ matchability which is defined as the matching
probability of subscriptions with events. Through theoretical anal-
ysis and experimental verification, we discover that the matcha-
ibility of subscriptions has a great effect on the matching speed
for matching algorithms that follow a positive searching strategy.
Specifically, given a set of subscriptions, the matching time in-
creases linearly or logarithmically with the number of matching
subscriptions, giving rise to performance variation. To the best of
our knowledge, this problem has not yet been addressed in the lit-
erature.
The performance variation of matching algorithms has two
drawbacks. First, it is difficult to estimate the throughput of match-
ing algorithms with unsteady performance. To deal with workload
spikes during rush hour, optimal resource reservation and schedul-
ing is usually based on precise estimates. Second, performance
variation may cause violent fluctuation in event transfer latency for
subscribers. Theoretically, it would be ideal to find all the matching
subscriptions at the same moment, thus ensuring that there is no
obvious difference in event transfer latency for subscribers. How-
ever, even though subscriptions are partitioned into groups and
parallel matching algorithms are employed, a time gap remains be-
tween the identifying time of matching subscriptions due to the
large number of subscriptions. For matching algorithms that follow
a positive searching strategy, when there are more matching sub-
scriptions, a longer matching time is needed. Obviously, the longer
the matching time, the larger the time gap between the identifying
time of matching subscriptions, as well as the higher the fluctua-
tion of event transfer latency. Thus, while simultaneously pursuing
a high matching speed, one should consider the effect of the sub-
scriptions’ matchability when designing matching algorithms.
In relation to the searching strategy being applied to matching
algorithms, there are two options: positive searching and negative
searching. Negative searching and positive searching differ in the
way they retrieve matching subscriptions. The basic idea of nega-
tive searching is that, given a set of subscriptions, if all unmatching subscriptions are known, it is easy to determine the matching
ones. Negative searching is more efficient than positive searching
in terms of predicate evaluations. For example, when matching an
event against a subscription that contains five predicates, all five
predicates should be evaluated to establish the matching relation.
On the contrary, the subscription can be determined as unmatching as soon as one of the five predicates is found to be false. The number of predicate evaluations is thus at least one and at most five.
Furthermore, another advantage of negative searching is that the
matching time decreases with the number of matching subscrip-
tions. In other words, the matchability of subscriptions improves
the matching speed, instead of degrading it.
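The five-predicate example above can be made concrete with a short sketch (illustrative code, not from the paper): the positive strategy pays once for every satisfied predicate, while the negative strategy stops at the first unsatisfied one.

```python
# Sketch: counting predicate evaluations for one subscription under
# the two searching strategies. Attribute names and intervals are
# illustrative.

def positive_evaluations(predicates, event):
    """Counting-based (positive) search: every satisfied predicate is
    counted, so the work equals the number of true predicates."""
    return sum(1 for p in predicates if p(event))

def negative_evaluations(predicates, event):
    """Negative search: stop at the first unsatisfied predicate."""
    count = 0
    for p in predicates:
        count += 1
        if not p(event):
            break
    return count

# A subscription with five interval predicates on attributes a1..a5.
subscription = [lambda e, lo=lo, hi=hi, a=a: lo <= e[a] <= hi
                for a, (lo, hi) in {"a1": (0, 10), "a2": (5, 15),
                                    "a3": (0, 3), "a4": (0, 20),
                                    "a5": (7, 9)}.items()]

event = {"a1": 6, "a2": 10, "a3": 8, "a4": 12, "a5": 8}  # a3 fails
print(negative_evaluations(subscription, event))  # 3: stops at a3
print(positive_evaluations(subscription, event))  # 4 satisfied predicates
```

Here negative searching touches only three predicates, while the counting approach pays for all four satisfied ones even though the subscription does not match.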
In this paper, we present REIN (RECTangle INtersection), a fast
and anti-matchability matching algorithm for content-based pub-
lish/subscribe systems. The key idea behind REIN is to employ a
negative searching strategy. For the data model of subscriptions, it
is assumed that each subscription is composed of multiple interval
predicates. An interval predicate is a condition specified on an at-
tribute with a low value and a high value. The attributes appearing
in events form a high-dimensional space. In this space, a subscrip-
tion is a high-dimensional rectangle (rectangle for short), and an
event is a point. Therefore, the event matching problem is equivalent to the point enclosure problem. As proved in [18], the rectan-
gle intersection problem is equivalent to the point enclosure prob-
lem. We utilize the rapid detection method of disjoint rectangles
to quickly find unmatching subscriptions. In addition, an efficient
index structure is designed to address the rectangle intersection
problem by using bit operations.
Compared with matching algorithms that follow a positive
searching strategy, REIN has five attractive features. First, with
the help of a specialized data structure, REIN is highly efficient in
searching unmatching subscriptions, having a high matching speed.
Second, the matching time of REIN decreases with the number of
matching subscriptions, exhibiting a nice anti-matchability feature.
Third, the matching performance of REIN is fairly stable, showing
low standard deviation of matching times. Fourth, REIN is not af-
fected by the distributions of predicate values, maintaining robust-
ness. Fifth, REIN is relatively efficient in updating (insert or delete)
subscriptions in its underlying data structure, making it applicable
to very dynamic environments.
Extensive experiments are conducted to evaluate the perfor-
mance of REIN. First, the parameters that impact the performance
of matching algorithms are identified, including the matchability
of subscriptions, the number of interval predicates, the number
of subscriptions, and the distribution of predicate values. In the
experiments, the number of subscriptions is up to 2 million and the
number of predicates contained in the subscriptions is up to 23.
The experiment results show that REIN strongly outperforms five
reference matching algorithms in terms of matching speed and
performance stability. In summary, our main contributions are:
• We discover the effect of the subscriptions’ matchability on the
performance of matching algorithms and run a series of ex-
periments to verify this.
• We propose a negative searching strategy to alleviate the effect
of the subscriptions’ matchability and design an efficient index
structure to implement this strategy.
• We conduct extensive experiments to evaluate the performance
of REIN and thoroughly analyze the experimental results.
The remainder of this paper is organized as follows. Section 2
provides background knowledge on publish/subscribe systems. Sec-
tion 3 reviews the related work. Section 4 describes the effect of the
subscriptions’ matchability. Section 5 details the design of REIN. Sec-
tion 6 presents the experiment results. Section 7 discusses two issues relating to REIN. Finally, Section 8 concludes the paper. This paper is an extended version of [19].
2. Background
Some preliminary knowledge on publish/subscribe systems is
given in this section. We first present the data model by defin-
ing the terms, and then describe the architecture and rationale of
publish/subscribe systems.
2.1. Terms
Definition 1. Event
An event is an observable occurrence, also called a message,
publish or notification. An event usually contains multiple at-
tributes, expressed as a conjunction of attribute-value pairs. As a
convention, each attribute appears only once in an event expres-
sion.
For example, \(\{(\text{tem} = 35), (\text{hum} = 15)\}\) is an event describing
weather conditions. The set of attributes appearing in the event
expression is defined as \(A = \{a_1, a_2, \ldots, a_m\}\) and the number of at-
tributes in \(A\) is denoted by \(m\). The sources of events are called pub-
lishers.
Definition 2. Predicate
A predicate is a condition specified on an attribute selected from \( A \). We consider interval predicates in inclusive form in this paper, represented as a 3-tuple \( (a, v_1, v_2) \), where \( a \) is an attribute in \( A \), \( v_1 \) and \( v_2 \) are bounded by the value domain of \( a \), and \( v_1 \) is not larger than \( v_2 \). \( v_1 \) and \( v_2 \) are termed predicate values in this paper.
Given the value domain of attributes, other forms of predicates can be transformed into interval predicates. For example, if the data type of \( tem \) is an integer with a value domain \([0, 100]\), then the simple predicate \( \{\text{tem} \geq 20\} \) can be transformed into the interval predicate \( \{\text{tem}, 20, 100\} \).
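The transformation described above can be sketched as a small helper (illustrative code; the operator set shown is an assumption, covering only the common simple forms):

```python
# Sketch: normalizing common predicate forms into the inclusive
# interval 3-tuple (attribute, v1, v2), given the attribute's value
# domain [lo, hi].

def to_interval(attr, op, value, lo, hi):
    if op == ">=":
        return (attr, value, hi)   # {a >= v} -> {a, v, hi}
    if op == "<=":
        return (attr, lo, value)   # {a <= v} -> {a, lo, v}
    if op == "=":
        return (attr, value, value)  # equality is a degenerate interval
    raise ValueError("unsupported operator: " + op)

print(to_interval("tem", ">=", 20, 0, 100))  # ('tem', 20, 100)
```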
Definition 3. Subscription
A subscription expresses users’ interest in events and is specified as a conjunction of multiple interval predicates.
Users who issue subscriptions are called subscribers. Subscriptions are used to forward events from publishers to subscribers. Each subscription is identified by a unique \( subID \). The number of interval predicates contained in a subscription is not larger than \( m \), where \( m \) is the number of attributes appearing in events. A subscription matches an event if all the interval predicates contained in the subscription are satisfied when they are assigned the corresponding attribute values of the event.
Definition 4. Matchability
The matchability of a subscription is the average probability that the subscription matches events. The matchability of a predicate is the probability that the predicate is satisfied by assigning the corresponding attribute value of events. The matchability of a subscription is determined by the matchabilities of predicates contained in the subscription.
Definition 5. Event Matching
Given a set of \( n \) subscriptions \( S = \{s_1, s_2, \ldots, s_n\} \) and an event \( e \), the problem of event matching is to retrieve all subscriptions that match \( e \) from \( S \). The set of the matching subscriptions \( S_e \) is a subset of \( S \), \( S_e \subseteq S \).
\[
S_e = \{s_i \mid s_i \in S \wedge s_i \text{ matches } e\}
\]
(1)
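Definition 5 can be illustrated with a brute-force matcher (a sketch under the interval data model above; attribute names and the dict-based representation are illustrative, not the paper's data structures):

```python
# Sketch: brute-force event matching per Definition 5. Each
# subscription is a dict mapping attributes to (v1, v2) interval
# predicates; an event is a dict of attribute-value pairs.

def matches(subscription, event):
    # A subscription matches iff all of its interval predicates hold.
    return all(v1 <= event[a] <= v2 for a, (v1, v2) in subscription.items())

def match_event(subscriptions, event):
    """Return the set S_e of IDs of subscriptions matching event e."""
    return {sid for sid, s in subscriptions.items() if matches(s, event)}

S = {
    "s1": {"tem": (20, 40), "hum": (10, 30)},
    "s2": {"tem": (0, 30)},
    "s3": {"hum": (20, 50)},
}
print(match_event(S, {"tem": 35, "hum": 15}))  # {'s1'}
```

This linear scan evaluates every predicate of every subscription; the point of the matching algorithms discussed in this paper is to avoid exactly this cost.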
2.2. Publish/Subscribe system
A typical publish/subscribe system consists of subscribers, publishers, and a network of brokers. Publishers inject events into the system while subscribers issue subscriptions for interested events. The core of the publish/subscribe system is to disseminate events from publishers to subscribers as quickly as possible. Usually, subscriptions are broadcast to all brokers [20,21] or directed to rendezvous brokers [22,23]. In the former case, event matching is performed by each broker on the reverse path that is constructed during the broadcasting process of subscriptions, and events are delivered to subscribers step by step. In the latter case, event matching is carried out only at the rendezvous brokers, and events are directly sent to interested subscribers by those rendezvous brokers. An example of a publish/subscribe system is shown in Fig. 1. Here, the system has 8 subscribers, 2 publishers, and a network of 7 brokers.
Whenever a broker receives an event from one of its neighbors (either another broker or a publisher), event matching is carried out to decide whether the event should be forwarded to the next-hop brokers or subscribers. For large-scale distributed publish/subscribe systems, there may be millions of subscriptions maintained by brokers. Brokers with low matching speed are apt to be potential performance bottlenecks. For example, when the event arrival rate is larger than the matching rate of a broker, the broker becomes a performance bottleneck. Therefore, improving matching speed is critical for large-scale content-based publish/subscribe systems.
3. Related work
Designing highly efficient matching algorithms has drawn a great amount of attention in the last two decades, and many different techniques have been proposed. These techniques can be classified roughly into three categories: (i) proposing new index structures, (ii) reducing the number of subscriptions, and (iii) utilizing the parallel computing capability of hardware.
3.1. Proposing new index structures
New index structures that have been proposed to improve matching efficiency include the work presented in [2,3,5,6,12,13]. Most of these are counting-based matching algorithms that take a positive searching strategy. The matching procedure of these algorithms generally consists of three steps. In step 1, all satisfied predicates are quickly evaluated through the index structures. In step 2, counting algorithms are used to sum up the number of satisfied predicates for each subscription. In step 3, the number of satisfied predicates is compared with the number of predicates contained in the subscription to judge whether the subscription is matching. SIENA [3] and TAMA [6] are two representative counting-based matching algorithms. The index structure of SIENA is a two-level forwarding table, which is applicable to situations where subscriptions change infrequently, as every modification of subscriptions leads to rebuilding the whole table. The index structure of TAMA is a two-layer matching table used for approximate event matching. H-TREE is not a counting-based algorithm; it puts similar subscriptions together and skips subtrees to filter out unmatching subscriptions in the process of event matching, which improves the matching speed [7]. H-TREE is highly efficient in cases where the width of interval predicates is smaller than the width of the cells divided on the attributes' value domain.
One problem with counting-based matching algorithms is low efficiency. Although an unmatching subscription can ultimately be identified in the matching process, it is counted as many times as it has satisfied predicates. Therefore, the time complexity of counting-based matching algorithms is linear or logarithmic in the number of satisfied predicates.
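The three-step counting procedure can be sketched as follows (a toy illustration: real counting-based algorithms locate the satisfied predicates through an index structure rather than the full scan used here, which is precisely what makes them fast):

```python
# Sketch of the three-step counting procedure: evaluate satisfied
# predicates, count them per subscription, then compare the count
# against each subscription's predicate total.
from collections import Counter

def counting_match(subscriptions, event):
    # subscriptions: sid -> {attr: (v1, v2)}
    totals = {sid: len(preds) for sid, preds in subscriptions.items()}
    counts = Counter()
    # Steps 1 and 2: count satisfied predicates per subscription.
    for sid, preds in subscriptions.items():
        for attr, (v1, v2) in preds.items():
            if attr in event and v1 <= event[attr] <= v2:
                counts[sid] += 1
    # Step 3: a subscription matches iff all its predicates are satisfied.
    return {sid for sid in subscriptions if counts[sid] == totals[sid]}

S = {"s1": {"tem": (20, 40), "hum": (10, 30)}, "s2": {"tem": (0, 30)}}
print(counting_match(S, {"tem": 35, "hum": 15}))  # {'s1'}
```

Note how a partially unmatching subscription still accumulates a counter update for each of its satisfied predicates, which is the overhead criticized above.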
3.2. Reducing the number of subscriptions
In addition to proposing new index structures, reducing the number of subscriptions is another way to improve matching speed. The covering and subsumption relations among subscriptions are utilized to reduce the number of subscriptions [4,10,14–17,24–29]. Subscription $s_1$ covers subscription $s_2$ if and only if every event $e$ that matches $s_2$ also matches $s_1$. Given a set of existing subscriptions $S = \{s_1, s_2, \ldots, s_n\}$ and a new subscription $s$, the result of subsumption checking for $s$ is true if and only if $s \subseteq \bigcup_{i=1}^{n} s_i$.
Of all the relations among subscriptions, subsumption is the most effective at reducing the number of subscriptions [4,15,17,26–28]. For example, space-filling curves, such as Hilbert curves, are used to represent the content space to efficiently check subscription subsumption [4,28]. Subscription covering is less effective than subsumption but comes at a lower cost [24,25]. Based on the covering and subsumption relations of subscriptions, merging and summarization can be used to reduce the routing table size (the number of subscriptions) [10,14,30]. These techniques are complementary to our proposed matching algorithm, and it would be beneficial to incorporate them into REIN. However, checking the subsumption relationships among subscriptions is not trivial, which makes it unsuitable for dynamic environments.
3.3. Utilizing parallel computing capability
Traditionally, most matching algorithms are sequential. Although these algorithms are effective for improving matching speed and increasing processing throughput, they fail to efficiently utilize the hardware's parallel computing capability. In recent years, some parallel matching algorithms have been proposed to take advantage of the development of hardware, such as CPUs with multiple cores as well as GPUs. For example, the parallel computing capability of today's multi-processor chips is exploited to improve matching efficiency in [31,32]. New matching algorithms have been designed to run efficiently on GPUs in [33,34]. In addition, matching algorithms running on FPGAs have been proposed to achieve line-rate processing by exploring various degrees of parallelism in [35]. Overall, parallel matching algorithms have been developed on the foundation of sequential ones. Yet, the cost of communication and synchronization needs to be carefully considered when designing parallel algorithms.
In summary, REIN differs from existing works in two aspects. First, existing algorithms mainly adhere to the design objective of pursuing a high matching speed, without considering the effect of the subscriptions' matchability on matching performance. Second, the negative searching strategy is seldom employed in existing matching algorithms. Exploring a different direction, REIN utilizes a negative searching strategy to alleviate the impact of the subscriptions' matchability.
4. Effect of subscriptions' matchability
In this section, we theoretically analyze the impact of the subscriptions’ matchability on the efficiency of matching algorithms. Since the matchability of a subscription is determined by the matchabilities of the subscription’s predicates, we theoretically analyze the impact of predicates’ matchability on the efficiency of matching algorithms that apply a positive searching strategy.
According to the relations between subscriptions and events, subscriptions can be placed into two categories: matching and unmatching. Unmatching subscriptions can be further divided into two subcategories, namely partially unmatching and completely unmatching. For a given subscription, if all of its predicates are satisfied, it is matching; if none of its predicates are satisfied, it is completely unmatching; otherwise, the subscription is partially unmatching.
For most matching algorithms applying a positive searching strategy, such as SIENA [3], TAMA [6] and OpIndex [12], predicates are the basic units indexed in their underlying data structures. In this way, it is efficient to retrieve satisfied predicates when matching events. When the predicates contained in a subscription are independently indexed, the relationship among the predicates that constitute the subscription needs to be maintained. A common method is to let predicates contained in the same subscription point to a counter. As discussed in Section 3.1, for counting-based matching algorithms, the number of satisfied predicates is counted for both matching and partially unmatching subscriptions. However, it is useless to process partially unmatching subscriptions, and doing so degrades the matching efficiency. Therefore, it is time-consuming to obtain the set of matching subscriptions through counting algorithms.
One advantage of negative searching over positive searching is that the relationship of predicates contained in a subscription does not need to be maintained in the data structures. For a subscription, as soon as a predicate contained in the subscription is evaluated as false, the subscription is determined as unmatching. In terms of predicate evaluations, negative searching is more efficient than positive searching when the matchability of predicates is relatively high or the number of predicates contained in the subscription is large.
Given a subscription $s$ containing $k$ independent predicates, each with matchability $p$, it is assumed that the data structures for both positive searching and negative searching incur the same cost to evaluate a predicate. For simplicity, we assume that all predicates share the same matchability $p$; in real use cases, predicates can have different matchabilities.
Lemma 1. For matching algorithms applying a positive searching strategy, the expectation of predicate evaluations is $pk$.
Proof. The proof of this lemma is very straightforward. For positive searching, the number of satisfied predicates should be counted for each subscription. The count of predicate evaluations equals the number of predicates that are evaluated as true. Given the number of predicates $k$ and the matchability of predicates $p$, $pk$ predicates are evaluated to be true on average. □
Lemma 2. For matching algorithms applying a negative searching strategy, the expectation of predicate evaluations is $\frac{1-p^k}{1-p}$.
Proof. For negative searching, since we can assert that a subscription is unmatched as long as we find one unsatisfied predicate, the expected number of predicates that needs to be checked in a subscription is
$$\sum_{i=1}^{k} i\, p^{i-1} (1-p) + k\, p^{k} = \frac{1-p^k}{1-p}. \qquad (2)$$
□
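The closed forms of Lemmas 1 and 2 can be checked by simulation (a sketch; trial count and seed are arbitrary choices):

```python
# Sketch: checking the expected predicate-evaluation counts of
# Lemmas 1 and 2 by Monte Carlo simulation, with k independent
# predicates each satisfied with probability p.
import random

def simulate(k, p, trials=200_000, seed=7):
    rng = random.Random(seed)
    pos = neg = 0
    for _ in range(trials):
        outcomes = [rng.random() < p for _ in range(k)]
        pos += sum(outcomes)           # positive: count satisfied predicates
        n = 0                          # negative: stop at first failure
        for ok in outcomes:
            n += 1
            if not ok:
                break
        neg += n
    return pos / trials, neg / trials

k, p = 5, 0.8
pos, neg = simulate(k, p)
print(pos, neg)                       # close to the closed forms below
print(p * k, (1 - p**k) / (1 - p))    # 4.0 and roughly 3.36
```

For these parameters the negative strategy is already cheaper in expectation, illustrating Lemma 3's claim that a turning point exists.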
Lemma 3. Given the number of predicates $k (k > 3)$ contained in subscriptions, there is a turning point of $p$ that makes negative searching more efficient than positive searching in terms of predicate evaluations.
Proof. When $\frac{1-p^k}{1-p} < pk$, negative searching is more efficient than positive searching in terms of predicate evaluations. When $k \leq 3$, there are no real solutions to the inequality. The Abel–Ruffini theorem states that there is no algebraic solution when $k > 5$. However, these solutions can be computed to any desired degree of accuracy using numerical methods such as the Newton–Raphson method or the Laguerre method. For example, when $k = 5$, for all $p$
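The numerical computation mentioned in the proof can be sketched with simple bisection (illustrative code; in this sketch's arithmetic, the turning point for $k = 5$ comes out at roughly $p \approx 0.276$, above which negative searching needs fewer evaluations):

```python
# Sketch: numerically locating the turning point of Lemma 3, i.e. the
# p in (0, 1) where (1 - p^k)/(1 - p) = p*k, by bisection.

def turning_point(k, lo=1e-6, hi=1 - 1e-6, iters=100):
    # f(p) > 0 means negative searching still costs more evaluations
    # than positive searching; f changes sign once in (0, 1) for k > 3.
    f = lambda p: (1 - p**k) / (1 - p) - p * k
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p_star = turning_point(5)
print(round(p_star, 3))  # about 0.276
```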
5. Design of REIN
In this section, we detail the design of REIN and analyze its time complexity. An example is also provided to illustrate the data structure and matching procedure of REIN.
5.1. Overview
The design objective of REIN is twofold. First, since matching performance is critical to both matching algorithms and publish/subscribe systems, naturally, the clear aim of REIN is to pursue a high matching speed. Second, the performance of REIN should be stable, exhibiting an anti-matchability feature. As analyzed and verified in Section 4, matching algorithms following a positive searching strategy do not possess an anti-matchability feature. To pursue a high matching speed and to alleviate the effect of the subscriptions’ matchability, REIN uses a negative searching strategy. The basic idea of negative searching is that, given a set of subscriptions, if the unmatching subscriptions can be identified quickly, it will be easy to obtain the matching ones.
After establishing the design objective of REIN, the next step is to investigate how to achieve it. As defined in Definition 5, given a set of subscriptions $S$ and an event $e$, the problem of event matching is to find all subscriptions from $S$ that match $e$. The set of attributes $A = \{a_1, a_2, \ldots, a_m\}$ in events forms an $m$-dimensional space. In this space, events are points and subscriptions are rectangles; points can be seen as special rectangles. Therefore, the event matching problem is equivalent to the point enclosure problem [36], that is, finding all rectangles that contain a given point. As proved in [18], the rectangle intersection problem is equivalent to the point enclosure problem. We utilize the rapid detection method of disjoint rectangles to quickly search for unmatching subscriptions.
5.2. Index structure
In order to pursue a high matching speed without sacrificing performance stability, REIN uses a negative searching strategy that searches unmatching subscriptions during the process of event matching. Therefore, when matching events against subscriptions, the challenge of designing REIN is to realize a fast searching method that is able to find all unmatching subscriptions. To tackle this challenge, it is necessary to design an index structure.
The index structure of REIN consists of a collection of bucket lists. The number of bucket lists is $2m$, where $m$ is the number of attributes appearing in events. For each attribute, two bucket lists are constructed. One bucket list is for the low values of the interval predicates specified on the attribute and the other is for the high values of the interval predicates. A bucket list is constructed by dividing the value domain of an attribute into cells and realizing the mapping from the cells to the buckets. All values belonging to a cell map to the corresponding bucket. When a predicate is indexed in a bucket, the predicate value and the corresponding subID are inserted into the bucket. An example of the index structure is shown in Fig. 3, where four bucket lists are created for two attributes $a_1$ and $a_2$. The value domain of $a_1$ and $a_2$ is $[0, 20]$, which is divided into four cells. Each cell maps to a bucket in which the predicate values and subIDs are stored.
The number of cells divided on a value domain is determined by multiple factors. One is the stability of subscriptions. For a publish/subscribe system, if the subscriptions are relatively static, fewer cells can be created, and the items in each bucket can be sorted on the predicate values to obtain better matching performance. Otherwise, more cells are needed to reduce the cost of
subscription modifications. Another factor is the number of subscriptions. In order to improve matching efficiency, the size of the buckets, represented by the number of items in the buckets, is critical. Given the number of subscriptions, there is a turning point for the number of buckets. After reaching the turning point, the performance of REIN degrades with the addition of more buckets.
5.3. Event matching
The event matching procedure of REIN is straightforward, as described by Algorithm 1 in Fig. 4. When matching an event, a bitset is initialized in which the number of bits equals the number of subscriptions (line 3). All unmatching subscriptions are marked in the bitset. Given an event that is specified by \( \{ a_1 = v_1, a_2 = v_2, \ldots, a_m = v_m \} \), for each attribute, the values (low value and high value) of the predicates are compared with the value of the event, finding and marking all subscriptions where (i) the high value of the interval predicate is less than the value of the event (lines 4–11), and (ii) the low value of the interval predicate is larger than the value of the event (lines 12–18). The unmarked bits in the bitset represent the matching subscriptions (lines 20–24).
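The bucket-list index and the marking procedure above can be sketched as follows (a simplified, single-process illustration; a Boolean list stands in for the bitset, and the domain, bucket count and attribute names are illustrative assumptions, not the paper's implementation):

```python
# Sketch of REIN's bucket-list index and bitset-based negative matching.

class ReinIndex:
    def __init__(self, attributes, domain=(0, 20), buckets=4):
        self.lo, self.hi = domain
        self.b = buckets
        self.width = (self.hi - self.lo) / buckets
        # Two bucket lists per attribute: one for low values, one for high.
        self.low = {a: [[] for _ in range(buckets)] for a in attributes}
        self.high = {a: [[] for _ in range(buckets)] for a in attributes}
        self.n = 0  # number of subscriptions

    def _cell(self, v):
        return min(int((v - self.lo) / self.width), self.b - 1)

    def insert(self, sub_id, predicates):
        # predicates: {attr: (v1, v2)} interval predicates
        for a, (v1, v2) in predicates.items():
            self.low[a][self._cell(v1)].append((v1, sub_id))
            self.high[a][self._cell(v2)].append((v2, sub_id))
        self.n = max(self.n, sub_id + 1)

    def match(self, event):
        unmatched = [False] * self.n  # the "bitset"
        for a, v in event.items():
            c = self._cell(v)
            # (i) high value of a predicate below the event value:
            for bucket in self.high[a][:c]:      # whole buckets left of c
                for _, sid in bucket:
                    unmatched[sid] = True
            for hv, sid in self.high[a][c]:      # boundary bucket: compare
                if hv < v:
                    unmatched[sid] = True
            # (ii) low value of a predicate above the event value:
            for bucket in self.low[a][c + 1:]:   # whole buckets right of c
                for _, sid in bucket:
                    unmatched[sid] = True
            for lv, sid in self.low[a][c]:
                if lv > v:
                    unmatched[sid] = True
        return {sid for sid in range(self.n) if not unmatched[sid]}

idx = ReinIndex(["a1", "a2"])
idx.insert(0, {"a1": (4, 12), "a2": (6, 12)})
idx.insert(1, {"a1": (14, 18)})
print(idx.match({"a1": 6, "a2": 10}))  # {0}
```

Value comparisons happen only in the two boundary buckets per attribute; entries in the buckets strictly left (for high values) or right (for low values) of the event's cell can be marked without any comparison, which is where the speed comes from.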
The matching efficiency of REIN is manifested in three aspects. First, the divide-and-conquer strategy is applied to the index structure, dividing the value domain of attributes into multiple cells that map to buckets. Thus, the size of the buckets is much smaller compared with the number of subscriptions. Second, comparison operations are only executed in two buckets for each attribute when matching an event, rapidly traversing the remainder of the buckets. Third, the index structure of REIN is concise, embodying the principle of simplicity.
5.4. Complexity analysis
We now analyze the time complexity of REIN. To facilitate the analysis, it is assumed that the distribution of predicate values and event values are uniform and the predicate attributes are uniformly selected from the event attributes. For the following lemmas, the number of subscriptions is \( n \), the number of predicates in the subscriptions is \( k \), the matchability of predicates is \( p \), the number of attributes in the events is \( m \), and the number of buckets is \( b \).
**Lemma 4.** The number of predicates in a bucket is \( e = \frac{nk}{mb} \).
**Proof.** For each of the \( m \) event attributes, the value domain is divided into \( b \) cells and each cell is mapped to one bucket, giving \( mb \) buckets per bucket-list type (low or high). Each of the \( nk \) predicates contributes one entry to a low-value bucket and one to a high-value bucket, so \( 2nk \) entries are evenly distributed over \( 2mb \) buckets. Hence, the number of predicates in a bucket is \( e = \frac{nk}{mb} \). \( \square \)
**Lemma 5.** The time complexity of event matching is \( O(mbe) \).
**Proof.** According to the matching procedure of REIN, the main operations performed by REIN are comparison, traversing and switching. For each attribute, the comparison cost is \( O(e) \) since the comparison operation is performed in 2 buckets. On average, half of the buckets are traversed, so the predicate-traversing cost is \( O(be) \) and the bucket-switching cost is \( O(b) \). Since there are \( m \) attributes in events, the time complexity of event matching is \( O(m(e + be + b)) = O(mbe) \). \( \square \)
**Lemma 6.** The time complexity of inserting a subscription is \( O(k) \).
**Proof.** For a subscription containing \( k \) predicates, the subID of the subscription should be inserted into \( 2k \) buckets. For each of the \( k \) predicates, computing the mapped bucket can be done in \( O(1) \). Since sorting is not implemented in the buckets, appending an item in a bucket is \( O(1) \). Therefore, the time complexity of inserting a subscription is \( O(k) \). \( \square \)
**Lemma 7.** The time complexity of deleting a subscription is \( O(ke) \).
**Proof.** When deleting a subscription containing \( k \) predicates, the subID of the subscription should be removed from \( 2k \) buckets. Since each bucket has \( e \) predicates and sorting is not implemented in buckets, locating the subID of the subscription costs \( O(e) \). Therefore, the time complexity of deleting a subscription is \( O(ke) \). \( \square \)
**Theorem 2.** Given the number of subscriptions \( n \) and the number of predicates \( k \) contained in the subscriptions, the optimal number of buckets is \( b^* = \sqrt{\frac{2\alpha nk}{\gamma m}} \), where \( \alpha \) and \( \gamma \) are the unit costs defined in the proof.
**Proof.** Given the number of buckets \( b \) and the number of attributes \( m \) in events, let \( \alpha \) be the unit time to compare a predicate in the bucket, \( \beta \) be the unit time to traverse a predicate in the bucket, and \( \gamma \) be the unit time to switch buckets when traversing, then the total matching time can be denoted as:
\[
T(b) = 2\alpha e + b(\beta e + \gamma) = \frac{2\alpha nk}{mb} + \frac{\beta nk}{m} + \gamma b.
\]
Taking the derivative of the total cost \( T(b) \) with respect to \( b \), setting it to 0 and solving the equation, we get \( b^* = \sqrt{\frac{2\alpha nk}{\gamma m}} \). \( \square \)
5.5. Example
To illustrate the index structure and the matching procedure of REIN, we present an example in Fig. 3. There are two attributes in the index structure. For each attribute, two bucket lists are created, one for the low values of the interval predicates and the other for the high values. The value domain of each attribute is \([0, 20]\). 10 subscriptions listed in Table 1 are stored in the index structure, which is represented in Fig. 3(a). When indexing a subscription, the interval predicates contained in the subscription are processed one by one. For each interval predicate, the low value and the high value are used to determine the corresponding bucket to store the subID of the subscription. Please note that if there are \( k \) interval predicates in a subscription, the subID of the subscription is stored \( 2k \) times. For example, when indexing \( s_1 \), the low value on \( a_1 \) is 4 which is located in the cell mapping to bucket \( b_0 \) as shown in Fig. 3(b). The high value specified on \( a_1 \) is 12, which is in the cell mapping to bucket \( b_2 \) as shown in Fig. 3(c). The same is true for processing the interval predicate specified on \( a_2 \), indexing \( s_1 \) in \( b_1 \) and \( b_2 \) as shown in Fig. 3(d) and Fig. 3(e), respectively.
The rectangles that represent the 10 subscriptions are shown in Fig. 5(a). Given an event \( e \) \( \{a_1 = 6, a_2 = 10\} \) (denoted as a red point), the matching subscriptions of the event \( e \) are those rectangles that intersect with the point. When the rectangles that are disjoint from the point are determined, it is easy to obtain those that intersect with it. The rectangles that have their right sides on the left side of the point are marked, namely \( s_0 \). The resulting rectangles for the right side of the point are \( s_4, s_5, s_6, s_7, s_8 \) and \( s_9 \). For the top side of the point, \( s_0, s_2 \) and \( s_3 \) are the resulting rectangles. Additionally, \( s_4, s_8 \) and \( s_9 \) are the rectangles for the bottom side. Obviously, some rectangles are marked multiple times. By checking the bitset, we see \( s_1 \) and \( s_2 \) are the matching subscriptions for the event \( e \), as shown in Fig. 5(b).
system of the server is Ubuntu 11.10 with Linux kernel 3.0.0–12. Parallelism is not used in the experiments. In each experiment, 400 events are matched.
6.1.4. Metric
To comprehensively evaluate the performance of the six tested algorithms, three time metrics are measured: matching time, construction time and deletion time. Matching time is the most important metric used to evaluate the matching speed of matching algorithms. Construction time and deletion time represent the maintenance cost of matching algorithms. In addition, the memory consumption of all tested matching algorithms is compared.
6.2. Effect of subscriptions’ matchability
We first conduct an experiment to confirm that the effect of the subscriptions' matchability on the performance of matching algorithms does exist, verifying the analysis in Section 4. In the experiment, the parameters are set as follows: $n = 2{,}000{,}000$, $m = 20$, $k = 10$ and $w = 0.5$. The average matchability of subscriptions is almost 0.001 ($\approx 0.5^{10}$). For each of the 400 events, the number of matching subscriptions and the corresponding matching time are recorded. The correlation between the matching time and the number of matching subscriptions for the six algorithms is depicted in Fig. 6. By regression analysis, the goodness of these fits is above 0.7 in terms of $R^2$ with 95% confidence for all six algorithms.
Overall, we find that the matching time of the five compared matching algorithms increases either logarithmically or linearly with the number of matching subscriptions. Specifically, the matching time of SIENA, TAMA and OpIndex increases logarithmically with the number of matching subscriptions, as shown in Fig. 6(a), (b) and (c), respectively. H-TREE and BE-TREE exhibit a linear correlation, as depicted in Fig. 6(d) and (e), respectively. On the contrary, the matching time of REIN decreases logarithmically with the number of matching subscriptions, which is verified in Fig. 6(f).
SIENA, TAMA and OpIndex are counting-based algorithms that positively search for matching subscriptions. Counting-based matching algorithms have an obvious performance drawback: an unmatching subscription may be counted multiple times because some of its predicates are satisfied. When the number of predicates in subscriptions is large, much time is wasted checking the vast number of partially matching subscriptions that contain one or more satisfied predicates. Therefore, the performance of counting-based matching algorithms degrades dramatically when there are millions of subscriptions and each subscription contains tens or even hundreds of predicates, as shown in Fig. 6(a), (b) and (c).
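The wasted work can be made concrete with a toy counting matcher. The sketch below is a greatly simplified stand-in for the positive (counting) strategy, not any of the cited algorithms' actual index structures; the subscriptions and event are hypothetical:

```python
# Sketch of a counting-based (positive) matcher: each satisfied predicate
# bumps a per-subscription counter, and a subscription matches only when
# ALL of its predicates are satisfied. A subscription with k-1 satisfied
# predicates is still touched k-1 times and then discarded -- pure waste.

def counting_match(subs, event):
    counters = {sid: 0 for sid in subs}
    touched = 0                          # per-predicate increments performed
    for sid, preds in subs.items():
        for attr, (low, high) in preds.items():
            if low <= event[attr] <= high:
                counters[sid] += 1
                touched += 1
    matches = [sid for sid, c in counters.items() if c == len(subs[sid])]
    return matches, touched

subs = {
    0: {"a1": (0, 9), "a2": (0, 9), "a3": (50, 60)},  # 2 of 3 satisfied: wasted
    1: {"a1": (0, 9), "a2": (0, 9), "a3": (0, 9)},    # all satisfied: matches
}
m, work = counting_match(subs, {"a1": 5, "a2": 5, "a3": 5})
print(m, work)  # -> [1] 5 : two increments were spent on the unmatching sub 0
```

With millions of subscriptions and tens of predicates each, these partial counts dominate the running time, which is exactly the degradation visible in Fig. 6(a)–(c).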
H-TREE and BE-TREE are tree-based matching algorithms. The basic idea of tree-based matching algorithms is that matching speed can be improved when the search space is substantially reduced by pruning most of the unmatching subscriptions. To this end, subtrees are skipped according to the coverage relationship between subscriptions, and only the subtrees that contain subscriptions with a high matching probability are checked. After identifying these subtrees, H-TREE and BE-TREE use a naive matching method to determine the matching subscriptions, which causes the matching time to increase linearly with the matchability of subscriptions, as depicted in Fig. 6(d) and (e).
REIN exhibits an excellent anti-matchability feature: its matching time decreases logarithmically with the number of matching subscriptions, as shown in Fig. 6(f). This can be explained by the fact that when more subscriptions match an event, fewer bits in the bitset are marked in the course of matching, thus reducing the matching time.
Table 2. Parameters used in the experiments.
<table>
<thead>
<tr>
<th>Name</th>
<th>Meaning</th>
</tr>
</thead>
<tbody>
<tr>
<td>$n$</td>
<td>the number of subscriptions</td>
</tr>
<tr>
<td>$m$</td>
<td>the number of attributes contained in events</td>
</tr>
<tr>
<td>$k$</td>
<td>the number of predicates contained in subscriptions</td>
</tr>
<tr>
<td>$w$</td>
<td>the matchability of interval predicates</td>
</tr>
<tr>
<td>$d$</td>
<td>the discretization level in TAMA</td>
</tr>
<tr>
<td>$c$</td>
<td>the number of cells in H-TREE</td>
</tr>
<tr>
<td>$l$</td>
<td>the number of indexed attributes in H-TREE</td>
</tr>
<tr>
<td>$s$</td>
<td>the number of segments in OpIndex</td>
</tr>
<tr>
<td>$g$</td>
<td>the bits of signature in OpIndex</td>
</tr>
<tr>
<td>$b$</td>
<td>the number of buckets in REIN</td>
</tr>
</tbody>
</table>
6.3. Matching time
The matching time of REIN is affected by multiple parameters, including the number of subscriptions, the number of predicates contained in subscriptions, the matchability of predicates, the distribution of predicate values, and the number of buckets. We thoroughly evaluate the effects of these parameters in this section.
6.3.1. Number of subscriptions
In this experiment, we evaluate the effect of the number of subscriptions. The parameters set in the experiment are as follows: \( m = 20 \), \( k = 10 \) and \( w = 0.5 \). The experiment results shown in Fig. 7 reveal two more attractive features of REIN, namely rapidity and stability.
Overall, the matching time increases with the number of subscriptions for all the tested algorithms, including REIN. In the experiment, the matchability of subscriptions is deterministic. With more subscriptions, the number of matching subscriptions for the events increases accordingly, resulting in a longer matching time. Compared with the other five matching algorithms, the performance of REIN is least affected by the number of subscriptions. When the number of subscriptions is 2,000,000, REIN is almost 2.3, 2.5, 2.9, 8.8 and 4.5 times faster than SIENA, TAMA, H-TREE, BE-TREE and OpIndex, respectively.
Except for OpIndex, REIN performs more stably than the other four matching algorithms. The minimum (Min), maximum (Max) and standard deviation (Std) of the matching time are listed in Table 3, where the number of subscriptions is 2,000,000. Judged by the standard deviation, OpIndex is the most stable, a little better than REIN. However, the standard deviation of REIN is almost 1.1, 1.9, 17.0 and 85.1 times smaller than the standard deviations of SIENA, TAMA, H-TREE and BE-TREE, respectively. In addition, the stability of REIN is also manifested in the difference between the maximum and the minimum matching time. The difference for REIN is 26.74 ms, on the same scale as OpIndex, which has the smallest difference of 24.31 ms. In contrast, the differences for SIENA, TAMA, H-TREE and BE-TREE are 27.77, 53.65, 614.45 and 1660.48 ms, respectively.
6.3.2. Number of predicates
An experiment is conducted to evaluate the effect of the number of predicates. In the experiment, the number of attributes \( m \) is varied, and the number of predicates \( k \) is half of \( m \). The results of the experiment are shown in Fig. 8, where \( n = 1,000,000 \), \( w = 0.5 \), and the y-axis represents the matching time in log scale. As previously mentioned, the matching time of H-TREE is linear in the number of matching subscriptions. When the number of predicates increases, the matchability of subscriptions decreases. Therefore, the matching time of H-TREE first drops quickly with the number of predicates; afterwards, like the other five algorithms, it is mainly affected by the number of predicates. In general, the matching time of the six algorithms increases with the number of predicates. When the number of predicates is 23, REIN is 2.7, 1.9, 2.2, 10.2 and 5.2 times faster than SIENA, TAMA, H-TREE, BE-TREE and OpIndex, respectively.
Table 3. The maximum, minimum, and standard deviation of matching time (ms).
<table>
<thead>
<tr>
<th></th>
<th>SIENA</th>
<th>TAMA</th>
<th>H-TREE</th>
<th>BE-TREE</th>
<th>OpIndex</th>
</tr>
</thead>
<tbody>
<tr>
<td>Min</td>
<td>87.930</td>
<td>87.109</td>
<td>17.348</td>
<td>70.286</td>
<td>190.137</td>
</tr>
<tr>
<td>Max</td>
<td>115.697</td>
<td>140.756</td>
<td>631.795</td>
<td>1730.762</td>
<td>214.444</td>
</tr>
<tr>
<td>Std</td>
<td>5.539</td>
<td>10.122</td>
<td>88.352</td>
<td>442.972</td>
<td>4.736</td>
</tr>
</tbody>
</table>
Fig. 6. Relationship between matching time and the number of matching subscriptions.
Fig. 7. Effect of number of subscriptions.
6.3.3. Matchability of predicates
Given the distribution of event values, the width of the interval predicates determines the matchability of subscriptions. In general, the wider the interval, the larger the matchability. An experiment is conducted to evaluate the effect of the interval width (the predicates’ matchability) on the matching time. The results are shown in Fig. 9, where \( n = 1,000,000 \), \( m = 20 \), \( k = 10 \), and the y-axis represents the matching time in log scale. The matching time of SIENA increases with the matchability of interval predicates. With the increase of \( w \), TAMA behaves similarly to SIENA. Of the six tested algorithms, H-TREE behaves best when \( w \leq 0.3 \). When \( w > \frac{1}{c} \), a subscription is split into \( \lceil wc \rceil^{l} \) subscriptions with narrow interval predicates, where \( c \) is the number of cells and \( l \) is the number of indexed attributes in H-TREE. When \( w \geq 0.7 \), the 32 GB of memory is used up due to this exponential growth; therefore, no results are shown in the figure for H-TREE when \( w = 0.7 \) and \( w = 0.8 \). The matching time of REIN decreases with \( w \), exhibiting its anti-matchability feature. When \( w = 0.6 \), REIN is 3.0, 2.3, 26.3, 50.9 and 5.4 times faster than SIENA, TAMA, H-TREE, BE-TREE and OpIndex, respectively.
6.3.4. Distribution of predicate values
Ideally, the subIDs of subscriptions should be stored evenly in the buckets of REIN, but this is impractical in reality. Nevertheless, REIN is nearly unaffected by the distribution of the predicate values, which is the fourth feature of REIN. When the number of buckets is large enough, the impact of the distribution of the predicate values can be eliminated by decreasing the size of the buckets and hence the corresponding comparison operations executed in the buckets. To evaluate the effect of the distributions on the matching time, we use our own data generator to generate predicate values according to three different distributions: uniform, normal and Pareto. For the normal distribution, the mean and the variance are set to 0.5 and 0.02, respectively. For the Pareto distribution, the mean and the scale are set to 0.5 and 2, respectively. Event values and predicate attributes are generated randomly. Compared with the uniform distribution, the other two distributions yield nearly the same results, as shown in Fig. 10.
6.3.5. Number of buckets
For REIN, the comparison operations are executed in two buckets for each attribute, so the size of the buckets obviously affects the performance of REIN. When the number of subscriptions is large, more buckets are needed to improve matching efficiency. However, when the number of buckets passes a turning point, the performance of REIN degrades. This can be explained as follows: although the size of each bucket is reduced with more buckets, the cost of switching between buckets increases, which offsets the benefit obtained from the reduction in comparison operations. As shown in Fig. 11, when there are 2,000,000 subscriptions, the turning point is at about 1,000 buckets. The matching time first decreases with the number of buckets before reaching the turning point; after the turning point, the performance of REIN degrades with more buckets. Theorem 2 gives an equation to compute the optimal number of buckets.
6.4. Construction time
Each of the six tested matching algorithms has its own specialized index structure. We conduct experiments to measure the time spent on constructing the index structures for these algorithms.
For SIENA, because an interval predicate is converted into two simple predicates, the subID of a subscription is stored 2k times in the index structure, where \( k \) is the number of interval predicates contained in subscriptions. As for TAMA, the width of the interval predicates determines the number of times the subID of a subscription is stored; when \( w = 0.5 \), the subID is stored at least 10k times. For H-TREE, the subID is stored \( \lceil wc \rceil^{l} \) times. For BE-TREE, the underlying tree structure needs to split and merge nodes as subscriptions are inserted, so its construction operation is costly. For OpIndex, an interval predicate is also converted into two simple predicates, just like SIENA. In REIN, the subID is stored 2k times. These results are shown in Fig. 12, where \( m = 20 \), \( k = 10 \), \( w = 0.5 \), and the y-axis represents the construction time in log scale. As shown in Fig. 12, SIENA spends the least amount of time constructing its index structure. REIN and OpIndex have construction times on the same scale. When \( n = 2,000,000 \), the construction time of TAMA, H-TREE and BE-TREE is, respectively, 14.8, 3.6 and 11.5 times larger than that of SIENA. Note that the construction time of SIENA, BE-TREE, OpIndex and REIN is not affected by the width of the interval predicates, whereas the construction time of TAMA and H-TREE increases with the width of the interval predicates.
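The 2k stores per subscription in REIN can be sketched directly: one store in the bucket of each interval's low endpoint and one in the bucket of its high endpoint. The bucket layout below is a simplified illustration (equal-width buckets over [0, 1)), not the authors' exact data structure:

```python
# Sketch of REIN-style insertion (illustrative): for each of a subscription's
# k interval predicates, the subID is stored twice -- once in the bucket of
# the low endpoint and once in the bucket of the high endpoint -- so a
# subscription is stored exactly 2k times, independent of the interval width.

def bucket_of(value, b):
    """Map a value in [0, 1) to one of b equal-width buckets."""
    return min(int(value * b), b - 1)

def insert(index, sid, preds, b):
    stores = 0
    for attr, (low, high) in preds.items():
        index.setdefault((attr, "low"), [[] for _ in range(b)])
        index.setdefault((attr, "high"), [[] for _ in range(b)])
        index[(attr, "low")][bucket_of(low, b)].append(sid)
        index[(attr, "high")][bucket_of(high, b)].append(sid)
        stores += 2
    return stores

index = {}
preds = {f"a{i}": (0.2, 0.7) for i in range(10)}   # k = 10 interval predicates
print(insert(index, 0, preds, b=100))              # -> 20, i.e. 2k stores
```

Because the store count does not depend on the interval width, this is consistent with the observation that REIN's construction time, like SIENA's and OpIndex's, is unaffected by \( w \).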
6.5. Deletion time
For TAMA, since the subID of a subscription is stored in multiple buckets, deletion is very time-consuming. The index structure of SIENA is a two-level matching table, with the first level indexed on attributes and the second on operators. There are only two operators for each attribute, namely \( \leq \) and \( \geq \). Each operator maps to a bucket, so the number of buckets is \( 2m \), where \( m \) is the number of attributes appearing in events, and the size of each bucket is \( n \), the number of subscriptions. The index structure of OpIndex is similar to that of SIENA. When the number of subscriptions is large, deleting a subscription is therefore also costly for SIENA and OpIndex. Here, we delete 1000 subscriptions from different numbers of subscriptions and compute the average deletion time of one subscription. The results are shown in Fig. 13, where $m = 20$, $k = 10$, $w = 0.5$, and the y-axis represents the deletion time in log scale. Considering the construction time and the deletion time together, the fifth feature of REIN is that it is applicable to dynamic environments where subscriptions update frequently. Since the binary executable of BE-TREE does not provide an interface to delete subscriptions, BE-TREE is not included in Fig. 13.
6.6. Memory consumption
We also measure the memory consumption of the six matching algorithms with different numbers of subscriptions. The results are shown in Fig. 14, where $m = 20$, $k = 10$, $w = 0.5$, and the y-axis is in log scale. As shown in this figure, BE-TREE occupies the most memory because each clustering directory contains a large number of levels. This construction method is also adopted by TAMA, but with a limited level of discretization. In contrast, the index structures of the other four matching algorithms are relatively concise. H-TREE consumes the least memory. The memory consumption of REIN is moderate, similar to that of SIENA.
7. Discussion
7.1. Dependency of predicates
In this paper, the predicates in a subscription are assumed to be independent. However, this may not hold true in reality. In practice, machine learning methods such as principal component analysis (PCA) [38] can be used to transform subscriptions and events that may include dependent predicates into ones whose predicates are independent. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables, i.e., PCA maps a data vector from an original space of \( m \) variables to a new space of \( n \) variables that are uncorrelated over the dataset. Therefore, before constructing the index structure of REIN, subscriptions and events can be transformed by PCA-like methods to guarantee the independence of predicates.
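For two attributes, this decorrelation step can be shown without any library: rotating the data by the principal-axis angle diagonalizes the 2×2 covariance matrix, so the transformed attributes are uncorrelated. This is a minimal hedged sketch of the PCA idea (real deployments would use a library implementation and handle more than two variables); the data points are made up:

```python
# Hedged 2-D sketch of PCA-style decorrelation for two dependent predicate
# attributes: rotate centered data by the principal-axis angle so that the
# two transformed attributes have zero covariance, allowing independent
# interval predicates to be indexed on them.

import math

def decorrelate_2d(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cxx = sum((x - mx) ** 2 for x in xs) / n
    cyy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)   # principal-axis angle
    c, s = math.cos(theta), math.sin(theta)
    us = [ c * (x - mx) + s * (y - my) for x, y in zip(xs, ys)]
    vs = [-s * (x - mx) + c * (y - my) for x, y in zip(xs, ys)]
    return us, vs

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.1, 1.9, 3.2, 3.8, 5.1]          # strongly correlated with xs
us, vs = decorrelate_2d(xs, ys)
cov_uv = sum(u * v for u, v in zip(us, vs)) / len(us)
print(abs(cov_uv) < 1e-9)               # transformed attributes uncorrelated
```

The same rotation would be applied to event points and to the endpoints of interval predicates before building REIN's index.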
7.2. Drawback of REIN
The main advantages of REIN over the compared matching algorithms are its improved matching speed and its beneficial anti-matchability feature. One disadvantage of REIN is that the time at which matching subscriptions are determined is delayed. The matching procedure of REIN can be partitioned into two stages: marking and outputting. In the marking stage, all unmatching subscriptions are marked in the bitset. In the outputting stage, all unmarked bits in the bitset are output as matching subscriptions. Consequently, matching subscriptions are determined later in REIN than in counting-based matching algorithms such as OpIndex [12].
8. Conclusion
In this paper, we present REIN, a fast and anti-matchability matching algorithm for large-scale content-based publish/subscribe systems. Conventionally, pursuing a high matching speed is one of the major objectives when designing matching algorithms, usually without taking into account the effect of the subscriptions’ matchability. One problem caused by ignoring the subscriptions’ matchability is the resulting performance variation of matching algorithms. To tackle this problem, the design objective of REIN is to pursue a high matching speed while keeping performance stable. To overcome the impact of the subscriptions’ matchability, REIN utilizes a negative searching strategy, rather than the positive searching strategy widely used by existing matching algorithms. As a result, REIN has five attractive features, namely rapidity, anti-matchability, stability, robustness and dynamism. To evaluate the performance of REIN, comprehensive experiments are conducted. The experiment results show that REIN strongly outperforms its counterparts in terms of matching speed and performance stability.
Acknowledgement
This work was supported by National Key R&D Program of China (2018YFB1003800), the Joint Key Project of the National Natural Science Foundation of China (U1736207), the National Science Foundation of China (61772334, 61702151, 61702320, 61572324), and Shanghai Talent Development Fund, Shanghai Jiao Tong Arts and science inter-project (15JCMY08).
References
Shiyou Qian received the PhD degree from Shanghai Jiao Tong University in 2015. He is currently a research assistant at the Department of Computer Science and Engineering in Shanghai Jiao Tong University. His research interests include event matching for content-based publish/subscribe systems, resource scheduling for Hybrid-Cloud, and driving recommendation with vehicular networks.
Jian Cao received the PhD degree from the Nanjing University of Science and Technology in 2000. He is currently a professor in the Department of Computer Science and Engineering, Shanghai Jiao Tong University. His main research topics include service computing, network computing, and intelligent data analytics. He is a member of the IEEE.
Weichao Mao is currently pursuing the B.S. degree with the Department of Computer Science and Engineering, Shanghai Jiao Tong University, China. His research interests lie in the interdisciplinary topics between computer science and economics.
Yanmin Zhu is a professor with the Department of Computer Science and Engineering at Shanghai Jiao Tong University. He obtained his PhD in computer science from Hong Kong University of Science and Technology in 2007, and BEng from Xian Jiao Tong University in 2002. Prior to joining Shanghai Jiao Tong University, he was a Research Associate in the Department of Computing in Imperial College London. His research interests include ad hoc sensor networks, vehicular networks, and mobile computing and systems. He is a member of the IEEE, the IEEE Communication Society and the IEEE Computer Society.
Jiadi Yu received the PhD degree in computer science from Shanghai Jiao Tong University, Shanghai, China, in 2007. He is currently an associate professor in the Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China. Prior to joining Shanghai Jiao Tong University, he was a postdoctoral fellow in the Data Analysis and Information Security (DAISY) Laboratory, Stevens Institute of Technology from 2009 to 2011. His research interests include cyber security and privacy, mobile and pervasive computing, cloud computing, and wireless sensor networks. He is a member of the IEEE and the IEEE Communication Society.
Minglu Li received the PhD degree from Shanghai Jiao Tong University in 1996. He is currently a professor at the Department of Computer and Engineering in Shanghai Jiao Tong University. He is the director of the IBM-SJTU Grid Research Center at Shanghai Jiao Tong University. His main research topics include cloud computing, image processing, and e-commerce.
Jie Wang received the BS degree from Shanghai Jiao Tong University, two MS degrees from Stanford University and the University of Miami, and the PhD degree from Stanford University. He is a consulting professor with Stanford University focusing on interdisciplinary research in adaptive computational learning and reasoning for complex physical and social systems. He has conducted research at Stanford, worked at several startups in Silicon Valley, and consulted for multinational companies and government agencies. Currently, he is the executive director of the Stanford Center for Sustainable Development and Global Competitiveness.
Chapter II
A Semantic Service-Oriented Architecture for Business Process Fusion
Athanasios Bouras, National Technical University of Athens, Greece
Panagiotis Gouvas, National Technical University of Athens, Greece
Gregoris Mentzas, National Technical University of Athens, Greece
Abstract
Most enterprises contain several heterogeneous systems, creating a fuzzy network of interconnected applications, services, and data sources. In this emerging business context, a clear need appears to link these former incompatible systems by using enterprise application integration (EAI) solutions. We propose a semantically enriched service-oriented business applications (SE-SOBA) framework that will provide a dynamically reconfigurable architecture enabling enterprises to respond quickly and flexibly to market changes. We also propose the development of a pure semantic-based implementation of the universal description, discovery, and integration (UDDI) specification, called pure semantic registry (PSR), which provides
a flexible, extendable core architectural component allowing the deployment and business exploitation of Semantic Web services. The implementation of PSR involves the development of a semantic-based repository and an embedded resource definition framework (RDF)-based reasoning engine, providing strong query and inference capabilities to support effective service discovery and composition. We claim that when SE-SOBAs are combined with PSR and rule-based formalizations of business scenarios and processes, they constitute a holistic business-driven semantic integration framework, called FUSION, applied to intra- and inter-organizational EAI scenarios.
Introduction
In today’s fiercely competitive global economy, companies are realizing that new initiatives such as e-business, customer relationship management, and business intelligence go hand-in-hand with the proven organization-wide EAI strategy. The goal of EAI is to integrate and streamline heterogeneous business processes across different applications and business units while allowing employees, decision makers, and business partners to readily access corporate and customer data no matter where it resides. More and more, EAI involves integrating information and processes not only across the enterprise but also beyond organizational walls to encompass business-to-business (B2B) integration supporting large scale value-added supply chains across the enlarged worldwide economy.
Business process fusion is the transformation of business activities that is achieved by integrating the interfaces of previously autonomous business processes by pipelining different middleware technologies and enabling the effective (semi-)automated exchange of information between various systems within a company or between enterprises. The development of SOBAs (which constitutes a set of independently running services communicating with each other in a loosely coupled message-based manner) and the publishing of Web services may implement the vision of business process fusion, by providing an abstraction layer for the involved interfaces through the Web service description language (WSDL). While SOBA and Web services have already made headway within large organizations, the technology will start filtering down to small- and medium-sized enterprises (SMEs) and will expand into supply chains. This architecture will also play a significant role in streamlining mergers and acquisitions, by linking previously incompatible systems.
Despite the aforementioned trends, users and professionals have high expectations of software applications and enterprise application integration. They want to access the content they need, and this content must be accurate and free of redundancy. Enterprise applications must therefore be intuitive and easy to use; reusable and extendable; implemented quickly and inexpensively; and fit within the current information technology (IT) legacy environment. Enterprise applications and information systems also need to support a more general notion that involves relating the content and representation of information resources to entities and concepts in the real world.
This need imposes the use and interpretation of semantics in EAI. Semantic interoperability will support high-level, context-sensitive, information requests over heterogeneous information resources, heterogeneous enterprise applications, hiding systems, syntax, and structural heterogeneity. This semantically enriched approach eliminates the problem of knowing the contents and structure of information resources and the structure and architecture of heterogeneous enterprise applications.
Semantics and ontologies are important to application integration solutions because they provide a shared and common understanding of data, services, and processes that exist within an application integration problem domain, and how to facilitate communication between people and information systems. By leveraging this concept we can organize and share enterprise information, as well as manage content and knowledge, which allows better interoperability and integration of inter- and intra-enterprise information systems.
We claim that recent innovations in the development of semantically enriched SOBAs (SE-SOBAs), which enlarge the notion of service-oriented architecture (SOA) by applying Semantic Web service technology and by using ontologies and Semantic Web markup languages to describe the data structures and messages passed through Web service interfaces, combined with the rule-based formalization of business scenarios and processes, will provide a dynamically reconfigurable architecture that will enable enterprises to respond quickly and flexibly to market changes, thereby supporting innovation and business growth, increasing the potential for an improved return on IT investments, and strengthening the bottom line.
The structure of this chapter is as follows. In the following section, we define the concept of EAI and present traditional and current EAI trends from a technology perspective. In the section called The Road to Enterprise Application Integration, we present how the emerging Semantic Web technologies apply to EAI scenarios and analyze the state-of-the-art technologies and techniques. The conceptual framework we propose, called FUSION, referring to an innovative business-driven, semantically enriched, service-oriented architecture, as well as the proposed business-oriented ontologies that extend the OWL-S Service Profile, are defined in the section called FUSION Conceptual Framework, while the technical implementation of our approach is presented in FUSION Technical Implementation. Moreover, the section FUSION Adoption: Integration Scenario and Applying Methodology specifies a light FUSION adoption methodology and a typical application scenario of the proposed solution. Finally, we present further work, future trends and technologies, and concluding remarks.
The Road to Enterprise Application Integration
Traditional Enterprise Application Integration
Most enterprises contain a systemic infrastructure of several heterogeneous systems, creating a complex, fuzzy network of interconnected applications, services, and data sources that is not well documented and is expensive to maintain (Samtani & Sadhwani, 2001). Moreover, the introduction of separate, differently oriented legacy systems for enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM), e-business portals, and B2B transactions increases the complexity of systems integration, making support for interoperability among these systems a challenging task.
In this emerging business context, a clear need appears to link these formerly incompatible systems to improve productivity and efficiency. The solution to this need is what is called EAI, which can be defined as the use of software and architectural principles to bring together (integrate) a set of enterprise computer applications (see Figure 1). The goal of EAI is to integrate and streamline heterogeneous business processes across different applications and business units. We distinguish between intra- and inter-organizational enterprise application integration. Intra-organizational EAI, commonly referred to as application-to-application (A2A) integration (Bussler, 2003a), specifies the automated and event-driven exchange of information between heterogeneous enterprise applications and systems operating within an organization or enterprise. On the other hand, inter-organizational EAI, or B2B integration (Bussler, 2003a), specifies the automated and event-driven information exchange between various systems of several collaborating organizations and enterprises.
Figure 1. The enterprise system environment: With and without an EAI system
Moreover, Apshankar et al. (2002) identify different types of EAI levels/layers, explaining the various dimensions of the integration task, namely:
- data-oriented integration, occurring at the database and data source level, either real time or non-real time, constituting the most widespread form of EAI today;
- function or method integration, involving the direct and rigid application-to-application integration of cross-platform applications over a network; it can be achieved using custom code, application programming interfaces (APIs), remote procedure calls (RPCs), or distributed middleware and distributed objects (CORBA, RMI, DCOM);
- user interface integration, consisting of using a standardized user interface to access a group of legacy systems and applications. The new presentation layer is integrated with the existing business logic of the legacy systems or packaged applications; and
- business process integration, occurring at the business process level.
In recent years, most enterprises and organizations have made extensive investments in several EAI systems and solutions that promise to solve the major integration problem among their existing systems and resources. The business driver behind all these traditional EAI projects is to integrate processes across third-party applications as well as legacy systems, decreasing the number of adapters one has to develop when connecting two systems (Laroia & Sayavedra, 2003). Therefore, traditional EAI focuses (Haller, Gomez, & Bussler, 2005) on message-based communication between software application interfaces, pipelining different middleware technologies and developing various adapters, connectors, and plug-ins to provide efficient messaging support among heterogeneous systems and allow their effective interconnection. However, traditional EAI efforts lack an upper abstraction layer, as well as standardized architectures and implementations, making customers and end users captive to EAI vendor-specific solutions and giving rise to a new, high-level integration problem of interconnecting various EAI systems with one another. The growth of the EAI market and the involvement of new EAI vendors have intensified the integration problems identified, making the standardization of integration frameworks and architectures a necessity. The development and introduction of Web service-enabled, service-oriented architecture solutions, completely based on widely known and accepted standards, overcomes the aforementioned EAI obstacles.
Web Services-Enabled Service-Oriented Architecture
The SOA is an architectural style for building software applications that use services available in a network such as the Web (Mahmoud, 2005). It promotes loose coupling between software components so that they can be reused. Applications in SOA are built from services, which constitute implementations of well-defined business functionalities that can be consumed by clients in different applications or business processes, enabling enterprises to leverage existing investments by reusing existing applications and promising interoperability between heterogeneous applications and technologies. SOA-based applications are distributed multi-tier applications that have presentation, business logic, and persistence layers. Services are the building blocks of SOA applications. While any functionality can be made into a service, the challenge is to define a service interface that is at the right level of abstraction; services should provide coarse-grained functionality. SOA is emerging as the premier integration and architecture framework in today's complex and heterogeneous computing environment. Previous attempts did not enable open, interoperable solutions but relied on proprietary APIs and required a high degree of coordination between groups. SOA can help organizations streamline processes so that they can do business more efficiently and adapt to changing needs and competition, enabling the software-as-a-service concept.
Web services, the preferred standards-based way to realize SOA, are designed to support interoperable machine-to-machine interaction over a network. This interoperability is gained through a set of Extensible Markup Language (XML)-based open standards. Specifically, the Web services architecture (WSA) and the Web Services Interoperability (WS-I) model comprise three key technologies: the Web Services Description Language (WSDL), the Simple Object Access Protocol (SOAP), and UDDI. These standards provide a common approach for defining, publishing, and using Web services. The Web service interface is described in a machine-processable format (specifically, WSDL). Other systems and Web services interact with the Web service in a manner prescribed by its description, using SOAP messages typically conveyed over the Hypertext Transfer Protocol (HTTP) with an XML serialization, in conjunction with other Web-related standards.
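As a concrete illustration of the SOAP message exchange just described, the following Python sketch builds and parses a minimal SOAP 1.1 envelope using only the standard library. The `GetQuote` operation, its `symbol` parameter, and the target namespace are hypothetical and not taken from any real WSDL.

```python
# Minimal sketch: serialize and parse a SOAP 1.1 envelope with the stdlib.
# The operation name, parameter, and service namespace are invented.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stockquote"  # hypothetical target namespace

def build_request(symbol):
    """Serialize a SOAP envelope carrying a GetQuote call."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
    ET.SubElement(call, f"{{{SVC_NS}}}symbol").text = symbol
    return ET.tostring(envelope, xml_declaration=True, encoding="utf-8")

def parse_request(payload):
    """Provider side: extract the requested symbol from the envelope."""
    root = ET.fromstring(payload)
    return root.find(
        f"{{{SOAP_NS}}}Body/{{{SVC_NS}}}GetQuote/{{{SVC_NS}}}symbol").text

msg = build_request("ACME")
print(parse_request(msg))  # ACME
```

In a real deployment the envelope would be POSTed over HTTP to the endpoint advertised in the WSDL; the point here is only that both sides agree on the message structure, not its meaning.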
In the literature, the Web services are defined as:
1. “loosely coupled, reusable software components that semantically encapsulate discrete functionality and are distributed and programmatically accessible over standard Internet protocols,”
2. “a new breed of application, which are self-contained, self-describing, modular applications that can be published, located, and invoked across the Web. Web Services perform functions, which can be anything from simple request to complicated business processes.”
The typical business scenario (Kreger, 2001) invoking and benefiting from Web services-oriented solutions identifies, as the core element of the Web service architecture implementation, the UDDI services registry, which acts as an intermediary between Web service providers and requesters, storing and categorizing services in taxonomies (directory services) (see Figure 2). The service provider deploys Web services, defines their service descriptions representing its available services, applications, and system features, and publishes them in the service registry. The service requester takes advantage of the search capabilities of the registry's directory service, searches the registry to find the required service, and uses it, binding with the service provider. The main entities identified in a Web services-based business scenario, the service registry, the supplier (service provider), and the client (service requester), interact in three ways: (1) the service provider publishes (publish activity) the WSDL service description in the service registry in order to allow the requester to find it; (2) the service requester retrieves (discover activity) a service description directly or queries the service registry for the type of service required; and (3) the service requester invokes or initiates an interaction (invoke activity) with the service at run time, using the binding details in the service description to locate, contact, and invoke the service.
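The publish, discover, and invoke interactions above can be sketched as a toy registry in a few lines of Python. The registry is a plain dict standing in for the UDDI directory service, and the currency-conversion service and its category are invented for illustration.

```python
# Toy sketch of the publish / discover / invoke triangle.
registry = {}  # category -> list of service descriptions (stands in for UDDI)

def publish(category, description):
    """(1) Provider publishes a service description in the registry."""
    registry.setdefault(category, []).append(description)

def discover(category):
    """(2) Requester queries the registry's directory service."""
    return registry.get(category, [])

def invoke(description, *args):
    """(3) Requester binds to the provider using the description's details."""
    return description["endpoint"](*args)

# A hypothetical currency-conversion service:
publish("currency", {"name": "ConvertService",
                     "endpoint": lambda amount, rate: round(amount * rate, 2)})
match = discover("currency")[0]
print(invoke(match, 100, 0.85))  # 85.0
```

The "endpoint" here is a local function; in the real architecture it would be a network address plus binding details taken from the WSDL description.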
Web services, in their current form of loosely bound collections of services, are more of an ad hoc solution that can be developed quickly and easily, published, discovered, and bound dynamically (Samtani & Sadhwani, 2001). Web service-enabled SOA encourages and supports the reuse of existing enterprise assets, for example, already developed services and applications and allows the creation and deployment of new services from the existing infrastructure of systems. In other words, the Web service-enabled SOA facilitates businesses to leverage existing investments by allowing them to reuse existing applications and promises interoperability between heterogeneous applications and technologies. SOA provides a level of flexibility that was not possible before (Mahmoud, 2005) in the sense that:
Figure 2. Web services architecture, models and standards
• The Web services are software components with well-defined interfaces that are implementation independent, separating completely the service interface from its implementation. The deployed Web services are used and consumed by clients (services requesters) that are not concerned with how these services will execute their requests.
• The Web services are self-contained (perform predetermined tasks) and loosely coupled (for independence).
• The Web services can be dynamically discovered.
• Composed services can be built from aggregates of preexisting Web services.
A few essential differences between traditional EAI solutions and Web services (Samtani & Sadhwani, 2001) are presented in Table 1.
Although Web services applied to specific EAI scenarios provide an abstraction and flexibility layer that supports SOA and simplifies application integration, they are based on exclusively syntax-oriented technologies, which do not formally define the semantics of service interfaces or of the data structures of the messages that Web services exchange. The main reason for the failure of the majority of EAI implementations (some articles even report that 70% of EAI projects fail) is that the semantics of the different systems have to be formally defined and integrated at some point. The lack of formal semantics regarding the applications and services to be integrated makes it difficult for software engineers and developers to manually interconnect heterogeneous applications, impeding automation of application integration, data exchange, and complex service composition. Engineers integrating enterprise application systems have to know the meaning of the low-level data structures in order to implement a semantically correct integration. No formal definition of the interface data exists (Bussler, 2003b), which implies that the knowledge of every developer of applications involved in the integration project is assumed to be consistent.
Therefore, the problem that remains, and which traditional Web services technologies cannot solve, is the formalization and documentation of the semantics of the interfaces and data structures of the deployed Web services. By applying Semantic Web technologies to SOAs and deploying Semantic Web services to integrate various systems, the notion of a Semantic Web services-enabled SOA is emerging, paving the way to semi-automated, semantics-based enterprise application integration.
Table 1. Traditional EAI and Web services: Identified differences
<table>
<thead>
<tr>
<th>Aspect</th>
<th>Traditional EAI vs. Web Service Enabled EAI</th>
</tr>
</thead>
<tbody>
<tr>
<td>Simplicity</td>
<td>Web Services are much simpler to design, develop, deploy, maintain, and use as compared to a typical, traditional EAI solution which may involve distributed technology such as DCOM and CORBA.</td>
</tr>
<tr>
<td>Reusability</td>
<td>Once the framework of deploying and using Web Services is ready, it is relatively easy to compose new, aggregated services, reuse the existing IT systems infrastructure and automate new business processes spanning across multiple applications.</td>
</tr>
<tr>
<td>Open Standards</td>
<td>Unlike proprietary, traditional EAI solutions, Web Services are based on open XML-based standards such as WSDL, UDDI, SOAP and this is probably the single most important factor that leads to the wide adoption of Web Services technologies. Web Services are built on existing and ubiquitous protocols eliminating the need for companies to invest in supporting new network protocols.</td>
</tr>
<tr>
<td>Flexibility</td>
<td>Traditional EAI solutions require endpoint-to-endpoint integration. Changes made at one end have to be propagated to the other end, making them very rigid and time consuming in nature. Web Services based integration is quite flexible, as it is built on loose coupling between the application publishing the services and the application using those services.</td>
</tr>
<tr>
<td>Cheap</td>
<td>Traditional EAI solutions, such as message brokers, are very expensive to implement. Web Services, in the future, may accomplish many of the same goals - cheaper and faster.</td>
</tr>
<tr>
<td>Scope</td>
<td>Traditional EAI solutions consider and treat applications as single entities, whereas Web Services allow companies to break down complex services into small independent logical units and build wrappers around them.</td>
</tr>
<tr>
<td>Efficiency</td>
<td>Web Services allow applications and services to be broken down into smaller logical components, which make the integration of applications easier as it is done on a granular basis.</td>
</tr>
<tr>
<td>Dynamic</td>
<td>Web Services provide a dynamic approach to integration by offering dynamic interfaces, whereas traditional EAI solutions are pretty much static in nature.</td>
</tr>
</tbody>
</table>
Semantic Web Services in EAI Scenarios
The Emerging Semantic Web Services
The long-term goal of the Web services effort is seamless interoperation among networked programs and devices. Once this is achieved, Web services can be seen as providing the infrastructure for universal plug-and-play and ubiquitous computing (Weiser, 1993). However, the main obstacle of achieving interoperability among
deployed Web services is that the technical and functional descriptions (profiles) of the services are based on semi-formal, natural-language descriptions, which are not formally defined and do not allow computers to understand and interpret the data to be exchanged among Web services. The Semantic Web initiative's purpose is similar to that of Web services (Preece & Decker, 2002): to make the Web machine processable rather than merely "human processable." Thus, Web services are considered an essential ingredient of the Semantic Web and benefit from Semantic Web technologies. Key components of the Semantic Web technology are:
- a unified data model such as RDF,
- languages with well-defined, formal semantics built on RDF, such as the Web Ontology Language (OWL) and the DARPA Agent Markup Language + Ontology Inference Layer (DAML+OIL), and
- ontologies of standardized terminology for marking up Web resources, used by semantically enriched service level descriptions, such as OWL-S (former DAML-S, DAML-based Web service ontology).
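The unified data model listed first reduces everything to subject-predicate-object triples. A minimal Python sketch, with invented URIs, shows how such a graph can be represented and pattern-matched; real systems would use an RDF library and a query language rather than tuples.

```python
# RDF's data model as plain subject-predicate-object tuples; the resource
# names below are illustrative only, not from any published ontology.
graph = {
    ("ex:QuoteService", "rdf:type", "owl-s:Service"),
    ("ex:QuoteService", "owl-s:presents", "ex:QuoteProfile"),
    ("ex:QuoteProfile", "rdfs:label", "Stock quote lookup"),
}

def match(s=None, p=None, o=None):
    """Return all triples matching the (s, p, o) pattern; None = wildcard."""
    return {t for t in graph
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}

# Which resources are OWL-S services?
services = match(p="rdf:type", o="owl-s:Service")
print(sorted(t[0] for t in services))  # ['ex:QuoteService']
```

Triple-pattern matching of exactly this shape is what RDF query languages such as RDQL (and later SPARQL) generalize with joins over multiple patterns.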
Enriching Web service descriptions with formally defined semantics by introducing the notion of semantic markup, leading towards Semantic Web services (see Figure 3), enables machine-interpretable profiles of services and applications, realizing the vision of dynamic and seamless integration. As this semantic markup is machine-processable and machine-interpretable, the developed semantic profiles of Web services can be exploited to automate the tasks of discovering Web services, executing them, composing them, and interoperating with them (McIlraith, Son, & Zeng, 2001), moving a step forward towards the implementation of intelligent Semantic Web services.
The combination of Web services and Semantic Web technologies, resulting in the deployment of machine-processable Semantic Web services that are therefore usable for automation, supports a set of essential automated tasks regarding the use of deployed Web services (McIlraith et al., 2001a; McIlraith et al., 2001b):
- automatic Web service discovery, involving the automatic location of Web services that provide a particular functionality and adhere to requested properties expressed as a user goal,
- automatic Web service composition, involving dynamic combination and aggregation of several Web services to provide a given functionality,
- automatic Web service invocation, involving automatic execution of an identified Web service by an agent, and
- automatic Web service interoperation within and across organizational boundaries.
These semantically enriched, Web services-oriented features can constitute the ideal solution to integration problems, as they enable dynamic, scalable, and reusable cooperation between different systems and organizations. Table 2 summarizes the main improvements that semantic markup brings to Web services:
**Semantic Web Services Registries**
As presented in the first section, the Web services architecture involves three core entities: (1) the service provider (supplier), (2) the service requester (client), and (3) the business services registry serving as a business mediator. Semantic Web services deploy a similar architectural schema, with the crucial difference that the technical and functional service descriptions are semantically enriched with concepts defined in reference ontologies. However, the specifications and implementations of current widely known and used service registries (i.e., UDDI and ebXML registries) do not support the effective handling of semantic profiles of Web services, and a number of research activities have recently tried to semantically enrich the standardized service registries. Their common goal has been the capability of registries to store and publish semantic data, so as to facilitate the semantics-based description of Web services, the ontology-based categorization and discovery of Web services, and, therefore, the semantic integration of business services and applications.
Specifically, Moreau, Miles, Papay, Decker, and Payne (2003) present an approach and implementation for service registration and discovery that uses an RDF triple store to express semantic service descriptions and other task/user-specific metadata, with a mechanism for attaching structured and unstructured metadata. The result
is an extremely flexible service registry that can be the basis of a sophisticated, semantically enhanced service discovery engine. This solution extends service descriptions using RDF and changes the UDDI APIs to support semantic search. Moreover, Pokraev, Koolwaaij, and Wibbels (2003) present the design and implementation of an enhanced UDDI server capable of storing, matching, and retrieving semantically rich service profiles that contain contextual information, mapping DAML-S to the UDDI publish message and introducing additional elements such as a matchmaker, an ontology repository, and a proxy API to invoke the UDDI APIs. The approach of Pokraev et al. (2003) does not change the publish and inquiry interfaces of UDDI. In addition, Paolucci, Kawamura, Payne, and Sycara (2002) show how DAML-S service profiles, which describe service capabilities within DAML-S, can be mapped into UDDI records, and how the encoded information can be used within the UDDI registry to perform semantic matching. This work proposes semantic search based on an externally created and operated matchmaker, as the semantic data are stored outside the UDDI registry, while the mapping is implemented with links from the UDDI tModel to the semantic profile of the Web service. Finally, Srinivasan, Paolucci, and Sycara (2005) base the discovery mechanism on OWL-S, which makes it possible to describe Web services semantically in terms of the capabilities offered and to perform logical inference to match the capabilities requested with the capabilities offered. Srinivasan et al. (2005) propose an OWL-S/UDDI matchmaker that combines the best of both technologies.
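The capability-matching idea underlying these matchmakers is commonly graded into degrees such as exact, plug-in, subsumes, and fail, depending on where the advertised concept sits relative to the requested one in the ontology hierarchy. The sketch below uses one common convention over an invented concept hierarchy; the exact grading rules differ between the cited systems.

```python
# Degree-of-match sketch over a tiny, invented subclass hierarchy.
subclass_of = {          # child -> parent
    "SportsCar": "Car",
    "Car": "Vehicle",
}

def ancestors(concept):
    """All concepts strictly above the given one in the hierarchy."""
    seen = []
    while concept in subclass_of:
        concept = subclass_of[concept]
        seen.append(concept)
    return seen

def degree_of_match(advertised, requested):
    if advertised == requested:
        return "exact"
    if advertised in ancestors(requested):
        return "plug-in"   # advertisement is more general than the request
    if requested in ancestors(advertised):
        return "subsumes"  # request is more general than the advertisement
    return "fail"

print(degree_of_match("Vehicle", "SportsCar"))  # plug-in
```

A matchmaker ranks advertisements by these degrees, preferring exact over plug-in over subsumes, so that a logically compatible service is still found when no literal match exists.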
As shown previously, current technologies and research efforts towards the realization of semantically enriched service registries use current UDDI implementations and try to extend their functionality with semantics-based capabilities, introducing external matchmakers and mapping techniques. We claim that a pure semantics-based implementation of the UDDI specification, called the pure semantic registry (PSR), provides a flexible, extendable core architectural component to allow the deployment and
Table 2. Existing Web services versus Semantic Web services

<table>
<thead>
<tr>
<th>Dimension</th>
<th>Existing Web Services</th>
<th>Semantic Web Services</th>
</tr>
</thead>
<tbody>
<tr>
<td>Services</td>
<td>Simple</td>
<td>Composable</td>
</tr>
<tr>
<td>Requestor</td>
<td>Human (developer)</td>
<td>Agent</td>
</tr>
<tr>
<td>Provider</td>
<td>Registration</td>
<td>No registration</td>
</tr>
<tr>
<td>Mediator</td>
<td>Key Player</td>
<td>Facilitator</td>
</tr>
<tr>
<td>Description</td>
<td>Taxonomy</td>
<td>Ontology</td>
</tr>
<tr>
<td>Descriptiveness</td>
<td>Closed world</td>
<td>Open world</td>
</tr>
<tr>
<td>Representation</td>
<td>Syntax-based</td>
<td>Semantics-based</td>
</tr>
</tbody>
</table>
business exploitation of Semantic Web services. The implementation of the PSR involves the design and development of a semantics-based repository and an embedded RDF-based reasoning engine. The PSR enables and supports the storage, administration, and handling of the deployed Semantic Web services and their profiles in a single semantic repository. The semantic service profiles are annotated using internally stored domain ontologies, thus facilitating the ontology-based categorization of services. Finally, the semantic registry benefits from its powerful RDF-based query and inference engine to support effective service discovery and composition.
**FUSION Conceptual Framework**
**FUSION: Towards the Business Intelligent Semantic SOA**
The FUSION solution is an integration framework that facilitates the integration of heterogeneous enterprise applications that exist in the same organization or in different organizations. The design of the FUSION approach has been based on a layer-oriented architecture (see Figure 4), using several structural components and preexisting technologies (Web services, semantics, services registry, etc.) and benefiting from the typical advantages of each technology. This innovative, structured compilation of technologies and EAI techniques reduces the integration obstacles that each technology could face when applied alone to EAI scenarios, enabling the intelligent integration of business services.
*Figure 4. Layer-oriented EAI architecture*
In specific, FUSION framework involves:
- A Web services infrastructure, which provides an initial interoperability capability based on Web service interface and communication integration, serving as a common deployment basis for all the enterprise applications and business services. As the Web services infrastructure applies the notion of SOA to the proposed framework, the FUSION basis constitutes a pragmatic, applied SOA architecture.
- A semantic enrichment layer, which adds semantics to the technical and functional descriptions of the Web services, making the ontology-annotated Web services understandable and their profiles machine-interpretable. The semantic enrichment layer extends the notion of SOA with formal, well-defined semantics, moving towards a semantically enriched SOA.
- A semantic registry, which constitutes an implementation of the latest UDDI specification based on Semantic Web technologies, supporting and semantically extending the main functionalities of service registries (i.e., UDDI and ebXML registries): the storage, categorization, and discovery of the deployed business Web services. The FUSION semantic registry does not propose a new registry architecture and specification; rather, it constitutes an alternative implementation that benefits from intelligent ontology-based categorization, a strong RDF-based query language, and an inference engine.
- A business process layer, facilitating the design and execution of Web service processes and workflows. The designed workflows invoke the business services stored in the semantic registry, retrieving them by using the semantics-based services of the registry. The interaction of the process design and execution environment with the service registry facilitates automatic service discovery, composition, and invocation, supporting interoperability among previously incompatible enterprise applications.
- A business scenarios and rules layer, which defines and models typical business scenarios occurring within companies and/or across collaborating enterprises, using formal ontologies that conceptualize e-business and B2B transactions. The formal business rules are transformed into parameterized workflow models and are executed within the business process layer.
The upper two business-oriented layers, the business process layer and the business scenarios and rules layer, add business intelligence to the applied SOA, allowing the automated composition and orchestration of the deployed Web services and supporting the automatic integration of business services. Apart from the aforementioned layers, the FUSION framework involves an ontology-based layer, which interacts with most of the other integration layers. FUSION ontologies, which formalize
the concepts, the relations, and the events existing in an e-business environment, are separated into three main ontologies:
- The **business data ontology** defines the basic business data types and relations used in business services and transactions. The business data ontology is taken into consideration in the semantic enrichment of the deployed Web services, so as to define formally the data structure of the SOAP messages exchanged during a business transaction.
- The **business service ontology** conceptualizes the functionality of a given application that is used to annotate the functional profiles of Web services (during the semantic enrichment phase).
- The **business scenarios ontology** models the business rules identified by business analysts and consultants, during the business scenarios phase, in typical inter- and intra-organizational business scenarios. The defined ontology-based business rules are used in business process design to enable the composition of complex, aggregated Web services.
The next sections present the FUSION conceptual framework in detail, specify the several integration layers required for realizing a business-intelligent semantic SOA applied to inter- and intra-organizational and/or enterprise EAI scenarios, analyze how the FUSION ontologies extend the OWL-S upper ontology concepts, and define the OWL-S representation of services.
**FUSION Integration Layers**
*Web Services Infrastructure and Semantic Enrichment Layer*
The conceptual architecture of the FUSION integration approach is based on a Web services infrastructure (see Figure 5). The so-called *Web service-enabled SOA infrastructure* allows the deployment of Web service software instances of each business application and service, so as to provide a first integration layer addressing the interfacing (WSDL) and communication (SOAP) of initially incompatible business applications.
Although this first layer of abstraction, involving WSDL interfaces, provides a universal, standards-based, highly flexible and adaptable implementation of business application integration (Haller et al., 2005), the problem of documenting and understanding the semantics of these interfaces not only still exists but becomes a crucial issue. The significance of interpreting semantics in a machine-understandable way arises from the continuously increasing number of
Web services stored in the typical UDDI registries used in the Web service-enabled SOA approach, which makes it difficult for developers and/or software engineers to manually integrate and compose the suitable Web services. That is why the FUSION framework contains a second integration layer (see Figure 5) that "adds formal and well-defined business data and services functionality semantics in the Web services descriptions and interfaces," enlarging the notion of SOA and Web services by applying common-reference business ontologies.
This second integration layer supports the semantic enrichment of the Web service descriptions (WSDL files), taking into account two basic facets. First, we should provide a formal description of the functionality of the Web service in order to facilitate efficient categorization and discovery of Web services. Therefore, the business service ontology is needed to identify the events that could occur in an e-business and/or B2B environment and to organize the business logic of this domain, creating an ontology-based dictionary conceptualizing functionality aspects of potential services in the e-business domain.
As real-life business services contain several quite complex parameters and structures, we have recognized the need to develop the business data ontology, formalizing the types of data contained in WSDL interfaces as well as the structure of the information that Web services exchange through SOAP messages. So, the second FUSION integration layer provides the mechanism, the graphical interface, and the common-reference business ontologies to semantically annotate the Web
Figure 5. FUSION (Semantic) Web services-enabled SOA infrastructure
The annotation mechanism characterizes the Web services profiles using the appropriate functionality and data concepts, and creates semantically enriched OWL-S descriptions of the Web services software instances, applying and leveraging the use of *Semantic Web services in service-oriented architecture* deployed to business environments.
**Semantic Business Services Registry**
Once the Web services instances are deployed and their OWL-S semantic profiles are created, they should be categorized and published in business service registries in order to allow users (i.e., agents and humans) to discover, compose, and use, on demand, the services published there. As the most common service registries (i.e., UDDI and ebXML registries) do not support the storage and maintenance of ontologies and/or semantic profiles internally to the registry, methods have been developed to associate the set of semantics that characterizes a Web service with the service advertised through the business registry. A common drawback of all the existing techniques that try to add semantics to, or semantically enrich, predefined service registries is that the reference ontologies and the semantic profiles of the Web service instances are stored externally to the registry, using informal, complex mapping tables and association rules to support the basic UDDI and ebXML registry services. As a result, they fail to embed effectively the dynamic and flexible Semantic Web technologies in the main services powered by such registries: the categorization and discovery of Web services.
The FUSION approach has studied the methodologies and the lessons learned from research efforts focusing on the semantic enrichment of formal service registries, and takes a different, innovative orientation. As it seeks to benefit more from the emerging Semantic Web technologies and standards, it moves towards the implementation of a "pure" *FUSION semantic registry*, based on a fully functional RDF semantic repository (see Figure 6). The FUSION approach develops a "thin-UDDI" API, internal to the semantic registry, to realize the basic set of functions of traditional registries. In order for the proposed approach to be fully compliant with the dominant standards of the e-business domain (i.e., UDDI), FUSION transforms the XSD schema of the latest UDDI specification into an RDF Schema stored in the developed RDF repository, so as to preserve the widely known informational and relational infrastructure of the UDDI registry and to take advantage of its well-defined internal structure. This implementation benefits from the possibilities that the RDQL query language offers when combined with the reasoning and inference engine of the RDF repository. The FUSION semantic registry therefore supports the storage and lifecycle management of RDF files and reference ontologies internally, while it uses the query language and the inference engine to enable categorization and discovery services based on well-defined (formal) common semantics.
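The role the RDF repository plays in the registry can be pictured with a small, stdlib-only sketch: registry entries are stored as (subject, predicate, object) triples, and discovery is answered by matching a query pattern against them, in the spirit of an RDQL query. All service names, predicates, and concept URIs below are invented for illustration and are not part of the FUSION implementation.

```python
# Minimal sketch of a triple-based registry: entries stored as RDF-like
# triples, discovery via pattern matching in the spirit of RDQL.
# All URIs and names are illustrative placeholders.

triples = set()

def publish(service, category, concept):
    """Register a service with a taxonomy category and an ontology concept."""
    triples.add((service, "uddi:categorizedBy", category))
    triples.add((service, "fusion:annotatedWith", concept))

def match(pattern):
    """Return all triples matching a single (s, p, o) pattern; '?' is a variable."""
    s, p, o = pattern
    return [(ts, tp, to) for ts, tp, to in triples
            if (s == "?" or s == ts)
            and (p == "?" or p == tp)
            and (o == "?" or o == to)]

publish("svc:CheckStock", "cat:Inventory", "onto:ProductAvailability")
publish("svc:PlaceOrder", "cat:Purchasing", "onto:OrderCreation")

# Discovery: which services are annotated with the ProductAvailability concept?
hits = [s for s, _, _ in match(("?", "fusion:annotatedWith", "onto:ProductAvailability"))]
```

A real deployment would of course delegate storage and querying to the RDF repository and its inference engine rather than a Python set, but the discovery step reduces to the same pattern match.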
Furthermore, an upper layer of abstraction is needed in the FUSION approach to move EAI efforts that follow the SOA and Web services architectures a step forward, towards the vision of intelligent Web services and the business intelligent semantic SOA. This "ultimate" integration layer involves the use of business process-driven workflows and modeling, taking into account and analyzing the most typical e-business and/or B2B scenarios, so as to design workflows that model the behavior of the selected business services in a business process interaction.
The intelligent SOA allows the experience and knowledge of business consultants and experts to be conceptualized and embedded into typical business scenarios, facilitating the formal modeling and execution of business processes using the Business Process Execution Language for Web Services (BPEL4WS) workflow modeling language. While the business consultants develop and model the desirable business scenarios, they define the Web services required by referring to the functionality aspects of services and using the common reference business services ontology. As this service functionality-oriented ontology is also used to annotate, characterize, and categorize the deployed Web services in the common semantic registry, the execution of the defined workflow models realizes the automated composition of intelligent Web services and the orchestration of flexible, complex business services.
There have been a number of efforts to add semantics to the discovery process of Web services. An upper ontology for services, called OWL-S (formerly DAML-S), has already been developed within the Semantic Web services effort of the DAML program. The OWL-S upper service ontology provides three essential types of knowledge about a service, each characterized by the question it answers:
- What does the service provide for prospective clients? The answer to this question is given in the “profile,” which is used to advertise the service. To capture this perspective, each instance of the class Service presents a ServiceProfile (see Figure 7).
- How is it used? The answer to this question is given in the “process model.” This perspective is captured by the ServiceModel class. Instances of the class Service use the property describedBy to refer to the service’s ServiceModel.
- How does one interact with it? The answer to this question is given in the “grounding.” Grounding provides the needed details about transport protocols. Instances of the class Service have a supports property referring to a ServiceGrounding.
Figure 7. OWL-S service profile classes and properties
Generally speaking, the service profile provides the information needed for an agent to discover a service, while the service model and service grounding, taken together, provide enough information for an agent to make use of a service, once found.
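The three-way split above can be sketched as data: one Service instance linked to its profile, model, and grounding through the OWL-S properties named in the list. The service and resource names below are hypothetical, chosen only to illustrate the structure.

```python
# Sketch of the OWL-S upper ontology structure: a Service instance linked
# to its three descriptions via the properties named in the text.
# The service/profile/model/grounding names are hypothetical.

owl_s = {
    "svc:StockService": {
        "presents": "profile:StockServiceProfile",      # what it provides (discovery)
        "describedBy": "model:StockServiceModel",       # how it is used (planning/composition)
        "supports": "grounding:StockServiceGrounding",  # how to interact (invocation)
    }
}

def discovery_view(service):
    """An agent discovering a service only needs its profile."""
    return owl_s[service]["presents"]

def invocation_view(service):
    """Using a service, once found, needs the model and grounding together."""
    desc = owl_s[service]
    return (desc["describedBy"], desc["supports"])
```

The two accessor functions mirror the sentence above: the profile alone supports discovery, while model plus grounding support actual use of the service.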
The grounding concept in the OWL-S ontology provides information about how to access (invoke) the service, that is, details on the protocol, message formats, serialization, transport, and so forth. It is viewed as a mapping from an abstract to a concrete specification of those service description elements that are required for interacting with the service. OWL-S only defines such a grounding for WSDL and SOAP (see Figure 8), although additional groundings can be defined. A summary of the automation support that each upper-level concept (or its subconcepts) of the OWL-S ontology is intended to cover is given in Table 3.
Table 3. Purpose of OWL-S upper level concepts
<table>
<thead>
<tr>
<th>Upper level concept</th>
<th>Automation support</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Profile</strong></td>
<td>• Discovery</td>
</tr>
<tr>
<td><strong>Model</strong></td>
<td>• Planning<br>• Composition<br>• Interoperation<br>• Execution monitoring</td>
</tr>
<tr>
<td><strong>Grounding</strong></td>
<td>• Invocation</td>
</tr>
</tbody>
</table>
**Business-Oriented OWL-S Extension for Describing Web Services**
For complex business services, the service profile should clearly describe the functionality of the service to be used, the service model how the service operates, and the service grounding how messages are exchanged. As the OWL-S ontology provides a high abstraction layer for the semantic description of Web services, a business-oriented extension of the OWL-S service profile is needed (see Figure 9) to provide the ontology-based infrastructure enabling the semantic description of business services concerning three main aspects: (1) the business service provider entity, (2) the functionality of the Web service, and (3) the data types that the Web service exchanges.
This business-oriented OWL-S extension, called e-business and B2B ontology, provides the necessary semantics, concepts, classes, and interrelations, to characterize the Web services deployed by annotating the OWL-S profiles of services with formal, well-defined semantics.
**FUSION Ontologies**
For the realization of the business services ontology-based infrastructure presented in the previous paragraphs, we have developed three interconnected ontologies, called the FUSION ontologies, that describe the various entities and components
that participate in business transactions. The FUSION ontologies serve the objective of making the technical realization as declarative as possible.
The FUSION ontologies constitute the cornerstone for the semantic description and modeling of business-oriented web services. The core objective of these business ontologies is to facilitate efficient business collaboration and interconnection between heterogeneous, incompatible services supporting the semantic fusion of service-oriented business applications that exist within an enterprise or in several collaborating companies.
The FUSION ontologies conceptualize the identified attributes, concepts, and relationships of the service-oriented business applications and are developed in three layers, each of them referring to a significant business aspect: the service provider, the service functionality, and the services' data types. This multi-layer architecture of the FUSION ontologies provides a rich representation of service-oriented business applications, captures the significant requirements of both services' functionality and data, supports efficient representation of services at the intra- and inter-organizational level, and provides a flexible structure that can be easily refined and updated. The ontologies define:
- the basic description of the functionality that the business service provides to the end user (functional semantics), capturing the (semi-) formal representation of the functional capabilities of Web services in order to support the semantic-based discovery and automated composition of Web services, annotating the operations of service software instances, and providing preconditions and effects—the business service ontology provides this type of information;
- the data types and relevant semantics required for representing the message structures and information that the Web services exchange (data/information semantics), capturing the (semi-) formal definition of data in input and output messages of a Web service, supporting discovery and interoperability by annotating input and output data of Web services using data-oriented ontologies—this information is specified in the business data ontology;
- the processes and scenarios identified in typical intra- and inter-organizational business transactions using a rule-based modeling approach (process and execution semantics), facilitating the automated composition and orchestration of complex Web services and workflows—this information is formally defined by the business scenarios ontology; and
- the categorization of the business entities that provide the deployed Web service software instances—this information is provided by the service provider ontology.
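The four kinds of semantics listed above can be pictured as a simple annotation record attached to one Web service description. All concept URIs and the operation name below are invented placeholders, not actual FUSION ontology terms.

```python
# Sketch: a semantic profile annotating one WSDL operation with concepts
# drawn from the layered ontologies described in the text. All concept
# URIs are invented placeholders.

profile = {
    "operation": "wsdl:getStockLevel",
    "functional": "bso:QueryProductAvailability",        # business service ontology
    "inputs":  {"productId": "bdo:ProductIdentifier"},   # business data ontology
    "outputs": {"quantity":  "bdo:StockQuantity"},
    "provider": "spo:NationalHeadquarters",              # service provider ontology
}

def compatible(requested_function, offered_profile):
    """Semantic discovery reduces to matching on the functional concept."""
    return offered_profile["functional"] == requested_function
```

In this toy form, discovery by functionality is an exact concept match; with a reasoner over the real ontologies, subsumption between concepts could be used instead.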
During the development of the FUSION ontologies, we have taken into consideration and examined already available ontologies and e-business standards. As a result, we have reused and built on already established and widely used domain knowledge, eliminating the danger of “reinventing the wheel.” In particular, we have built on two dominant XML-based business standards: ebXML (the Core Components Technical Specification and the Catalog of Common Business Processes) and RosettaNet (the Technical Dictionary and the Business Dictionary), which define lists of terms that can be used in business documents, as well as in other formal business vocabularies and taxonomies.
**FUSION Technical Implementation**
The FUSION architecture is in line with the applied SOA architecture, targeting the smooth integration and dynamic service creation of services related to an ERP and a CRM system. Consequently, the basis of the architecture is the ERP and CRM software components. The *FUSION adoption guideline* requires the existence of:
- a standard set of *exported Web services* that facilitate the software’s functionality. These Web services will be used for dynamic service creation during a complex service composition;
- a functional ontology, which is a domain specific ontology used for the semantic annotation of exported Web services; and
- an annotation procedure that aims at the semantic enrichment of the Web services’ descriptions.
*Figure 10. FUSION technical architecture overview*
**FUSION Architecture**
An overview of FUSION architecture is presented in Figure 10.
As mentioned previously, the elementary component in a SOA approach is the Web service, since Web services provide a standard means of interoperating between different software applications running on a variety of platforms and/or frameworks. Web services are characterized by their interoperability and extensibility, as well as their machine-processable descriptions thanks to the use of XML, and they can be combined in a loosely coupled way in order to achieve complex operations. Consequently, the first step of the FUSION adoption guideline is the provision of simple services derived from ERP and CRM functionality (domain-specific functionality). This is an extremely crucial task, since simple services can interact with each other in order to deliver sophisticated added-value services. However, it is not a trivial task, because SOA is a complete overhaul impacting how systems are analyzed, designed, built, integrated, and managed.
The next step is the semantic annotation of the exported Web services, and more specifically the semantic annotation of their WSDL files. As mentioned previously, WSDL is an XML format for describing network services as a set of endpoints operating on messages containing either document-oriented or procedure-oriented information. The operations and messages are described abstractly and then bound to a concrete network protocol and message format to define an endpoint. Related concrete endpoints are combined into abstract endpoints (services). WSDL is extensible to allow the description of endpoints and their messages regardless of what message formats or network protocols are used to communicate; however, the only bindings defined in the WSDL specification cover SOAP 1.1, HTTP GET/POST, and MIME.
The cornerstone of FUSION architecture is, as expected, the *enterprise application server* which encapsulates the following modules:
- *semantic registry*, which is a variation of a classic Web services registry used for service discovery, and
- a *business process execution engine*, which executes Business Process Execution Language (BPEL) scenarios.
**Semantic Registry**
The extension of traditional Web services to Semantic Web services raises the need for semantic support in current Web services registries. A lot of effort has been put into this field. Research conducted with the aim of extending registries so that they can support semantic discovery can be classified into two groups:
- approaches that extend legacy Web services standards by adding semantic annotations to reinforce the discovery function in registries, and
- approaches that preserve semantic advertisements in legacy registries by mapping semantic information into the registry information model.
The FUSION approach aims to tackle this issue in a more unified way through the implementation of a PSR. The PSR is a variation of a classic registry (UDDI, ebXML) that can store additional semantic metadata accompanying the Web service description model. The PSR handles ebXML v.2.5 and UDDI v.3. First, all the entries of each registry are converted into OWL-S ontologies with additional classes. The persistence model of the PSR is based not on a database but on an integrated ontology. Service discovery within the ontology is performed using RDQL queries. The semantic registry utilizes Jena\(^\text{10}\) for storage and discovery. Jena is a Java framework for writing Semantic Web applications, developed under the HP Labs Semantic Web Programme. It features:
- statement-centric methods for manipulating an RDF model as a set of RDF triples,
- resource-centric methods for manipulating an RDF model as a set of resources with properties,
- cascading method calls for more convenient programming,
- built-in support for RDF containers—bag, alt, and seq,
- enhanced resources—the application can extend the behavior of resources,
- integrated parsers and writers for RDF/XML (ARP), N3, and N-TRIPLES, and
- support for typed literals.
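The first two bullets, statement-centric and resource-centric manipulation, can be contrasted in a few lines of stdlib-only Python that mimics (and does not reproduce) Jena's Java API; the triples and function names are illustrative.

```python
# Sketch contrasting the two views Jena offers onto the same RDF model:
# a flat list of triples (statement-centric) versus per-resource property
# maps (resource-centric). Not Jena's actual API; names are illustrative.

statements = [
    ("svc:CheckStock", "rdf:type", "owls:Service"),
    ("svc:CheckStock", "owls:presents", "profile:CheckStockProfile"),
    ("profile:CheckStockProfile", "rdfs:label", "Check product stock"),
]

def as_resources(stmts):
    """Resource-centric view: group property/value pairs under each subject."""
    resources = {}
    for s, p, o in stmts:
        resources.setdefault(s, {}).setdefault(p, []).append(o)
    return resources

resources = as_resources(statements)
# Statement-centric code iterates `statements`; resource-centric code asks
# a resource for one of its properties:
profile_of = resources["svc:CheckStock"]["owls:presents"][0]
```

Both views are derived from the same underlying statements, which is exactly why Jena can offer them side by side over one model.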
**BPEL Engine**
Since many organizations are moving from an object-oriented paradigm for managing business processes toward a service-oriented approach, services are becoming the fundamental elements of application development. At the same time, BPEL has become the de facto standard for orchestrating these services and managing the flawless execution of business processes. The confluence of these trends is presenting some
interesting opportunities for more flexible, cost-effective management of business processes.
ERP and CRM business processes contain multiple decision points. At these decision points, certain criteria are evaluated. Based on these criteria or business rules, business processes change their behavior. In essence, these business rules drive the business process. Frequently, these rules are embedded within the business process itself or inside custom Java code, which can cause several problems such as:
- Business rules change more often than the processes themselves, but changing and managing embedded business rules is a complex task beyond the abilities of most business analysts. Thus, as business rules change, programmers often have to commit expensive time to this task.
- Most organizations lack a central rules repository. Consequently, any organization-wide change in policy cannot be applied across all business processes.
- Business processes cannot reuse rules. Hence, IT personnel end up designing rules for each and every process, often leading to inconsistency or redundancy.
The best way to avoid these problems is to use a rules engine to separate business processes from business rules. In this approach, rules are exposed as services, and BPEL processes leverage these services by querying the engine when they reach decision points. This approach is much more flexible: instead of coding rules in programming languages or inside a process, rules can be manipulated graphically. Business users, given the right tools, can write rules themselves and make post-deployment rule changes without IT assistance. With business users doing most of the updates and enhancements, maintenance costs can be reduced substantially. Consequently, rule engines and BPEL are complementary technologies.
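The separation argued for above can be sketched in a few lines: the rules live in their own repository behind a single evaluation call, and the process merely queries the engine at a decision point. The rule name, the threshold, and the purchase process itself are invented for illustration; a production system would use a real rules engine and a BPEL process.

```python
# Sketch of rules separated from the process: the process queries the
# rule engine at a decision point instead of hard-coding the policy.
# Rule names, thresholds, and the process are invented for illustration.

rules = {}  # central repository: rules change without touching processes

def define_rule(name, condition):
    rules[name] = condition

def evaluate(name, facts):
    """The 'rules as a service' call a BPEL process would make."""
    return rules[name](facts)

# A business analyst edits this policy, not the process definition.
define_rule("needs_approval", lambda f: f["order_total"] > 10_000)

def purchase_process(order):
    """Simplified process: one decision point delegated to the engine."""
    if evaluate("needs_approval", order):
        return "route-to-manager"
    return "auto-approve"
```

Changing the approval threshold now means redefining one rule in the repository; the process definition, and every other process reusing the same rule, stays untouched.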
It is rather important to delineate rules from processes. Hence, a major decision in FUSION architecture is how to implement business policies, business processes, and supporting business logic. Business logic is spread across three different layers of the IT infrastructure: (1) business process, (2) Web services, and (3) rules (see Figure 11).
**Business Process Layer**
This layer is responsible for managing the overall execution of the business process. These business processes, implemented using BPEL, can be long running, transactional, and persistent. The BPEL engine supports audit and instrumentation of workflow and thus is well suited for:
- separating less volatile workflow steps from more volatile business rules,
- implementing line-of-business processes,
- implementing process flows requiring compensation,
- supporting large-scale instantiation of process flows,
- designing process flows that need auditing, and
- orchestrating heterogeneous technologies such as connectors, Web services, and Web Services Invocation Framework (WSIF)-enabled logic.
**Semantic Web Services Layer**
The Web services layer exposes the existing application layer functionality as services. Multiple business processes can then reuse these services, thereby fulfilling the promise of a SOA.
Web services implement functional and domain logic. Functional methods are typically stateless and medium grained. Web services may, for example, contain utility methods, entity operations, and inquiry methods for system data. Web services can be implemented using multiple technologies and hide differences among implementation platforms. Thus, this layer is well suited for:
- implementing medium-grained methods for a particular entity/domain area,
- integrating legacy code/third-party tools, and
- encapsulating logic, custom code, and implementation from the application layer.
**Rules Layer**
The rule engine is typically the home for complex logic that involves a number of interdependencies between entities and order-dependent logic calculation. Extracting business rules as a separate entity from business process leads to better decoupling of the system, which, in consequence, increases maintainability.
Rules engines allow for the evaluation of rule sets both in parallel and in a sequential order. In addition, rules have the ability to evaluate the values of input and intermediate data and determine whether a rule should be fired. This modular design provides a simpler and more maintainable solution than traditional Java procedural code.
Furthermore, rules are declarative and allow high-level graphical user interface (GUI) editing by business analysts. Modern rule engines execute extremely quickly and provide built-in audit logging. The typical traits of a rules layer are as follows:
- contains coupled and complex logic,
- supports efficient business logic evaluation using parallel execution,
- contains complex return structure built from multiple business rule evaluations,
- allows for translation of domain logic into simple rules, and
- implements highly volatile business policy.
Because rules are exposed as services in the Web services layer, they can be reused across all inter-enterprise applications, making the development of new applications and integrations easier.
Within the scope of the FUSION approach, BPEL4WS has been used. BPEL4WS provides a language for the formal specification of business processes and business interaction protocols. By doing so, it extends the Web services interaction model and enables it to support business transactions. BPEL4WS defines an interoperable integration model that should facilitate the expansion of automated process integration in both the intra-corporate and the B2B spaces. IBM BPWS4J has been utilized within the FUSION solution. BPWS4J includes a platform upon which business processes written in BPEL4WS can be executed, and a tool that validates
BPEL4WS documents. Additionally, the enterprise application server includes a scenario repository that stores already existing BPEL scenarios for future use.
**FUSION Adoption: Integration Scenario and Applying Methodology**
**Typical Integration Scenario: Multinational, Franchising Firms**
A typical use case scenario, applying the FUSION framework to solve EAI problems, refers to multi-national, franchising firms and is presented in the following section. Multi-national, franchising firms constitute a typical integration case because they involve several geographically distributed legacy systems that need to be integrated at one point, so as to facilitate the exchange of crucial business information among the networked franchising companies. As national systems work in isolation, any business interaction between headquarters is currently done by mail, phone, or fax. Today, most of the steps in international workflows require human participation and batch data exchange to complete. For example, phone calls and human conversations are instantiated to carry out simple product availability requests, and mails containing financial reports are exchanged for the purpose of financial auditing.
Humans, by making implicit interpretations of exchanged information, can reach a common understanding about things. Machines, on the contrary, require explicit and formal information interpretations in order to communicate. The company has concluded that manual execution of activities is expensive and does not allow jobs to be repeated as often as needed. Human conversations and batch data exchanges are point-to-point interactions restricted to proprietary information structures. Even a fully automated point-to-point connection requires specific meanings and tightly bounded ends, which implies large volumes of implementation effort.
**Franchising Firms Application Scenario**
Product, inventory, demand, and financial concepts must have consistent meanings throughout the national headquarters network. For example, product classifications will keep a unique identity and a set of well-defined properties for each product across the enterprise. Once a common repository of semantics has been established,
Web services can be formally described by using common meanings from that pool. Services can then be published in registries public to all national headquarters, thereby becoming available for process composition. Semantic description and publishing of Web services deliver interoperable business services, which means that services will exhibit consistent accessibility to any business process composite that wishes to use them. Both stock management and purchase management processes may use a service that returns product stock levels in sibling headquarters, and discovering and binding to that service will execute identically. Business operations planned for reengineering should be modeled from scratch, and the services recognized as their parts should be described and published. Product availability and product stock level requests are business services that already exist in current stock management and purchase management processes.
By enabling national headquarters to publish loosely coupled, commonly accessed Web services, the company becomes capable of composing highly automated business activities, thus avoiding human intervention. Services participating in a composite process of stock, purchase, or financial control are now selected from common pools (service registries). Therefore, no point-to-point connections are necessary, and the internals of the headquarters’ systems remain intact. Business processes are composed and executed at a higher semantic (abstract) level.
*Expected Results and Added Value: The Business Perspective*
The deployment of a business intelligent semantic service-oriented architecture to a multi-national, franchising firm, which requires several business transactions and information exchange, provides significant benefits to the firm, including:
- common access to all relevant information and functionality (interoperability), due to semantic networks and the common service registry (in place of “hard-wired” point-to-point connections),
- better quality of business services, due to standardization in service descriptions and publishing,
- business process reengineering (BPR) and analysis opportunities, due to changes that FUSION will bring in the very nature of business,
- faster response to market changes, due to BPR flexibility,
- savings in resources, time, and money, as processes will be modeled and run automatically, and
- centralized management capabilities.
The FUSION solution intends to provide the national headquarters with a semantic service infrastructure, which will enable semantic service-oriented integration and interoperability, towards a vision of gradually incorporating all headquarters of a multi-national franchising firm into a virtual enterprise environment. The franchising firm should follow the adoption framework described next, in order to apply the FUSION integration solution to its enterprise system environment.
**A Methodology for Applying the FUSION Solution**
As described in the previous sections, the FUSION solution allows the integration of heterogeneous enterprise applications that exist in the same organization or in different organizations. It involves the creation, administration, and deployment of Web services software instances of preselected features of the enterprise applications, and the development of their semantic descriptions (profiles) based on the annotation of the technical descriptions of the Web services with functionality concepts and semantics defined in the FUSION ontologies, which serve as a common reference allowing the semantic integration of the business applications. The deployed Web services instances and their developed profiles will be stored
and published in the business services registry (the pure semantic registry [PSR]), which constitutes a semantic-based implementation of the UDDI specification and supports its categorization and discovery services.
The step-by-step way in which we envision the software engineers and business analysts of cooperating enterprises and organizations (service providers) working with the FUSION solution (see Figure 12), in order to enable semantic interoperability based on business intelligence among formerly incompatible business services and applications, is presented as follows:
- **Step 1. “As is” analysis of the pilot experiments.** This constitutes an in-depth analysis of the current situation of the service providers. The business analysts identify the business systems and applications (e.g., legacy systems, ERP, CRM, SCM, etc.) existing within the environment of the service providers and select the specific features and services of the existing business systems to be semantically integrated. The business analysts specify the selected business services both technically and functionally.
- **Step 2. Deployment of Web service software instances.** The software engineers of the service provider company create and administrate Web services instances that realize the preselected features of the business applications.
- **Step 3. Web service semantic profile creation.** The business analysts identify the concepts (e.g., product, contact, order) that are related to the deployed Web services and use well-defined concept models (the business data and services ontologies) to enrich the technical descriptions of the Web services instances.
- **Step 4. Semantic profiles publishing.** The software engineers register the semantically enriched functional and technical profiles of the provided business services in the PSR. The registered Web services are published in the so-called “yellow pages” of the registry, which support fully functional ontology-based categorization and discovery services.
- **Step 5. Business concepts analysis.** The business analysts identify the typical business scenarios involving the preselected enterprise applications. They formally define the concepts and relations that exist within the identified scenarios and model these integration scenarios using a rule-based approach formalized in the developed business scenarios ontology.
- **Step 6. Services orchestration.** The software engineers design workflows that materialize the aforementioned identified business scenarios so as to support the semantic-driven orchestration of aggregated, complex compositions of Web services instances.
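Step 6 can be pictured as a small orchestration sketch: a workflow that looks up the services published in steps 2 through 4 by their functional concept and invokes them in sequence. The service functions, concept URIs, and the workflow itself are invented for illustration; in FUSION the workflow would be expressed in BPEL4WS, not Python.

```python
# Sketch of step 6: a workflow composed of services looked up by their
# functional concept in the registry. Everything here is illustrative;
# in FUSION the workflow would be expressed in BPEL4WS.

registry = {
    "bso:QueryProductAvailability": lambda req: {"in_stock": req["qty"] <= 100},
    "bso:OrderCreation": lambda req: {"order": f"ORD-{req['qty']}"},
}

def workflow(request):
    """Check availability first; create the order only if in stock."""
    check = registry["bso:QueryProductAvailability"](request)
    if not check["in_stock"]:
        return {"status": "rejected"}
    order = registry["bso:OrderCreation"](request)
    return {"status": "placed", **order}
```

Because each step is resolved through the registry by concept rather than bound to a concrete endpoint, a headquarters could swap in a sibling's service without changing the workflow, which is the point of the semantic-driven orchestration described above.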
A service provider, or a group of collaborating service providers, should proceed with the implementation of the activities described in these six phases in order to realize selected integration scenarios.
## Conclusion and Future Work
In this chapter, we have proposed a semantic integration framework, called FUSION, based on Web services and Semantic Web technologies. Our proposed approach introduces the deployment of SE-SOBAs that enlarge the notion of SOA by using ontologies to describe data structures and messages passed through Web service interfaces. We have also proposed the development of a pure semantic-based implementation of the UDDI specification, called Pure Semantic Registry.
The combination of SE-SOBAs with the pure semantic-based registry and the rule-based formalization of business scenarios and processes constitute a business-driven semantic integration framework applied to intra- and inter-organizational integration scenarios. Moreover, we have specified the FUSION adoption framework that constitutes a light, concrete methodology that supports enterprises and organizations to apply the FUSION integration solution to their enterprise system environment, as well as a typical integration scenario that uses the case of multinational, franchising firms.
The combination of Web services, Semantic Web technologies, and SOA results in the deployment of a semantic SOA architectural framework, which is based on machine-processable, and therefore usable for automation, semantic Web services, supporting a set of essential automated services regarding the use of the deployed SE-SOBAs: (1) automatic SE-SOBA discovery, (2) automatic composition of complex, aggregated SE-SOBAs, (3) automatic SE-SOBA invocation (execution), and (4) automatic SE-SOBA interoperation within and across organizational boundaries. The proposed semantic SOA framework, FUSION, enables the formalization and the documentation of the semantics related to the interfaces and the data structures of the deployed Web services, a capability that could not be supported by the current Web services-enabled SOA and technologies.
As the functional and technical FUSION architecture is already well specified and defined, the basic technical, structural components are being developed. However, a lot of work is still to be done towards the finalization of the integrated FUSION technical solution, its deployment in real enterprise scenarios, and the evaluation of the proposed semantic service-oriented architecture.
Acknowledgments
The work presented in this chapter constitutes the core conceptual and technical architecture and framework of a European Commission co-funded project, entitled FUSION. The FUSION project is a specific targeted research project that focuses on semantic interoperability, enterprise application integration, and B2B process fusion. Led by SAP AG, the FUSION consortium consists of 14 partners from five European countries (Germany, Poland, Greece, Hungary, Bulgaria), including research institutes, technology providers, innovation transfer bodies, as well as end users.
---
**Endnotes**
2. http://www.w3.org/TR/ws-arch/
3. http://www.ws-i.org/Profiles/Basic/2003-05/BasicProfile-1.0-WGAD.htm
4. http://www.w3.org/TR/wsdl/
http://www.w3.org/TR/SOAP/
http://www.uddi.org/
The Stencil Group (www.stencilgroup.com/ideas_scope_200106wsdefined.html)
http://jena.sourceforge.net/
<table>
<thead>
<tr>
<th>Term</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Business Process</td>
<td>A collection of related structural activities that produce something of value to the organization, its stakeholders or its customers. The recipe for achieving a commercial result.</td>
</tr>
<tr>
<td>Business Process Fusion</td>
<td>Business process fusion is the transformation of business activities that is achieved by integrating the interfaces of previously autonomous business processes by pipelining different middleware technologies and enabling the effective (semi-)automated exchange of information between various systems within a company or between enterprises</td>
</tr>
<tr>
<td>CRM</td>
<td>Customer Relationship Management (CRM) enables organizations to better serve their customers through the introduction of reliable processes and procedures for interacting with those customers.</td>
</tr>
<tr>
<td>EAI</td>
<td>Enterprise Application Integration is the use of software and architectural principles to bring together (integrate) a set of enterprise computer applications. The goal of EAI is to integrate and streamline heterogeneous business processes across different applications and business units.</td>
</tr>
<tr>
<td>ERP</td>
<td>Enterprise resource planning system is a management information system that integrates and automates many of the business practices associated with the operations or production aspects of a company.</td>
</tr>
<tr>
<td>Service</td>
<td>Service is the non-material equivalent of a good provided to customers.</td>
</tr>
<tr>
<td>Se-SOBA</td>
<td>Semantically-enriched Service-Oriented Business Applications (SE-SOBA) - a set of independently running services communicating with each other in a loosely coupled message-based manner using ontologies and semantic web mark-up languages to describe data structures and messages passed through their web service interfaces</td>
</tr>
<tr>
<td>SOA</td>
<td>Service Oriented Architecture - a software architectural concept that defines the use of services, which communicate with each other involving simple data passing, to support the requirements of software users.</td>
</tr>
<tr>
<td>SOBA</td>
<td>Service Oriented Business Applications - a set of independently running services communicating with each other in a loosely coupled message-based manner</td>
</tr>
<tr>
<td>Web Service</td>
<td>Web service is a software system designed to support interoperable machine-to-machine interaction over a network</td>
</tr>
</tbody>
</table>
Generalizing the Edge-Finder Rule for the Cumulative Constraint
Vincent Gingras
Université Laval, Québec, QC, Canada
Vincent.gingras.5@ulaval.ca
Claude-Guy Quimper
Université Laval, Québec, QC, Canada
Claude-Guy.Quimper@ift.ulaval.ca
Abstract
We present two novel filtering algorithms for the Cumulative constraint based on a new energetic relaxation. We introduce a generalization of the Overload Check and Edge-Finder rules based on a function computing the earliest completion time for a set of tasks. Depending on the relaxation used to compute this function, one obtains different levels of filtering. We present two algorithms that enforce these rules. The algorithms utilize a novel data structure that we call Profile and that encodes the resource utilization over time. Experiments show that these algorithms are competitive with the state-of-the-art algorithms, achieving stronger filtering with a faster runtime.
1 Introduction
Scheduling consists of deciding when a set of tasks needs to be executed on a shared resource. Applications can be found in economics [Buyya et al., 2005] or in industrial sequencing [Harjunkoski et al., 2014].
Constraint programming is an efficient way to solve scheduling problems. Many powerful filtering algorithms that prune the search space have been introduced for various scheduling problems [Baptiste et al., 2001]. These algorithms are particularly adapted for the cumulative problem in which multiple tasks can be simultaneously executed on a cumulative resource. Among these algorithms, we note the Time-Table [Beldiceanu and Carlsson, 2002], the Energetic Reasoning [Lopez and Esquirol, 1996], the Overload Check [Wolf and Schrader, 2006], the Edge-Finder [Mercier and Van Hentenryck, 2008] and the Time-Table Edge-Finder [Vilím, 2011].
Constraint solvers call filtering algorithms multiple times during the search, hence the need for them to be fast and efficient. Cumulative scheduling problems being NP-Hard, these algorithms rely on a relaxation of the problem in order to be executed in polynomial time. In this paper, we introduce a novel relaxation that grants a stronger filtering when applied in conjunction with known filtering algorithms.
In the next section, we formally define what a Cumulative Scheduling Problem (CuSP) is. Then, we present two state-of-the-art filtering rules: the Overload Check and Edge-Finder. We generalize these rules so that they become functions of the earliest completion time of a set of tasks. We introduce a novel function computing an optimistic value of the earliest completion time for a set of tasks, based on a more realistic relaxation of the CuSP. Along with this function, we present a novel data structure, named Profile, that we use to compute this function. We introduce two algorithms to enforce the generalized rules while using our own novel function. Finally, we present experimental results obtained while solving CuSP instances from two different benchmark suites.
2 The Cumulative Scheduling Problem
We consider the scheduling problem where a given set of tasks $I = \{1, \ldots, n\}$ must be executed, without interruption, on a cumulative resource of capacity $C$. A task $i \in I$ has an earliest starting time $est_i \in \mathbb{Z}$, a latest completion time $lct_i \in \mathbb{Z}$, a processing time $p_i \in \mathbb{Z}^+$, and a resource consumption value, commonly referred as height, $h_i \in \mathbb{Z}^+$. The energy of a task $i$ is given by $e_i = p_i h_i$. We denote the earliest completion time of a task $ect_i = est_i + p_i$ and the latest starting time $lst_i = lct_i - p_i$. Some of these parameters can be generalized for a set of tasks $\Omega \subseteq I$.
$$\begin{align*}
est_\Omega &= \min_{i \in \Omega} est_i \\
lct_\Omega &= \max_{i \in \Omega} lct_i \\
e_\Omega &= \sum_{i \in \Omega} e_i
\end{align*}$$
Let $S_i$ be the starting time of task $i$, and its domain be $\text{dom}(S_i) = [est_i, lst_i]$. The constraint $\text{CUMULATIVE}([S_1, \ldots, S_n], C)$ is satisfied if the total resource consumption of the tasks executing at any time $t$ does not exceed the resource capacity $C$, which is expressed as:
$$\forall t : \sum_{i \in I, S_i \leq t < S_i + p_i} h_i \leq C. \quad (1)$$
A solution to the $\text{CUMULATIVE}$ constraint is a solution to the Cumulative Scheduling Problem (CuSP). In addition to satisfying the $\text{CUMULATIVE}$ constraint, one usually aims at optimizing an objective function, such as minimizing the makespan, i.e. the time at which all tasks are completed.
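Condition (1) can be checked directly for a candidate assignment of starting times. The following sketch mirrors the definitions above (the `Task` container and its derived attributes `ect`, `lst`, and `e` are illustrative names, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Task:
    est: int  # earliest starting time
    lct: int  # latest completion time
    p: int    # processing time
    h: int    # resource consumption (height)

    @property
    def ect(self):  # earliest completion time: est + p
        return self.est + self.p

    @property
    def lst(self):  # latest starting time: lct - p
        return self.lct - self.p

    @property
    def e(self):    # energy: p * h
        return self.p * self.h

def is_feasible(tasks, starts, C):
    """Check condition (1): at every time t, the total height of the
    tasks running at t must not exceed the capacity C."""
    if not tasks:
        return True
    horizon = max(s + t.p for t, s in zip(tasks, starts))
    for t in range(min(starts), horizon):
        load = sum(task.h for task, s in zip(tasks, starts)
                   if s <= t < s + task.p)
        if load > C:
            return False
    return True
```

For example, two tasks of height 2 on a resource of capacity 2 are feasible sequentially but not concurrently.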
Such scheduling problems are NP-Hard [Garey and Johnson, 1979]; it is therefore NP-Hard to remove all inconsistent values from the domains of the starting time variables $S_i$. However, there exist many powerful filtering algorithms running in polynomial time for the $\text{CUMULATIVE}$ constraint. To execute in polynomial time, these algorithms rely on a relaxation of the original problem that generally revolves around a task property referred to as elasticity [Baptiste et al., 2001]. A task \( i \) becomes fully elastic if we allow its resource consumption to fluctuate (and even to be interrupted), as long as the amount of resource consumed in the interval \([\text{est}_i, \text{lct}_i]\) is equal to its energy \( e_i \).
3 Preliminaries
We present two filtering algorithms based on an energetic relaxation that we later improve using a novel relaxation.
3.1 Overload Check
The Overload Check is a test that detects inconsistencies in the problem and triggers backtracks in the search tree. The Overload Check rule enforces the condition that the energy consumption required by a set of tasks \( \Omega \) cannot exceed the resource capacity over the time interval \([\text{est}_\Omega, \text{lct}_\Omega]\).
\[
\exists \Omega \subseteq I : C(\text{lct}_\Omega - \text{est}_\Omega) < e_\Omega \Rightarrow \text{fail} \tag{2}
\]
This condition is necessary to the existence of a feasible solution to the problem. [Wolf and Schrader, 2006] present an algorithm enforcing this rule running in \( O(n \log n) \) time. More recently, [Fahimi and Quimper, 2014] presented an Overload Check algorithm that runs in \( O(n) \) time, using a data structure named Timeline. Although initially conceived for the DISJUNCTIVE constraint, it is demonstrated that the algorithm can be adapted for the CUMULATIVE constraint, while maintaining its running time complexity of \( O(n) \).
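Rule (2) only needs to be checked on task intervals: for every pair of values \( a = \text{est}_i \) and \( b = \text{lct}_j \), the relevant set \( \Omega \) contains the tasks whose window lies inside \([a, b]\). A naive cubic-time sketch (the cited algorithms obtain the same test in \( O(n \log n) \) or \( O(n) \)); tasks are `(est, lct, p, h)` tuples:

```python
def overload_check(tasks, C):
    """Naive Overload Check enforcing rule (2). Returns True when an
    inconsistency is detected (i.e. the check fails)."""
    for a in {est for est, lct, p, h in tasks}:
        for b in {lct for est, lct, p, h in tasks}:
            if b <= a:
                continue
            # energy of the tasks fully contained in [a, b]
            energy = sum(p * h for est, lct, p, h in tasks
                         if est >= a and lct <= b)
            if energy > C * (b - a):  # cannot fit in the rectangle
                return True
    return False
```

Note that this is the fully-elastic test: the first instance in the proof of Theorem 2 below passes it even though it is infeasible.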
3.2 Edge-Finder
The Edge-Finder algorithm filters the starting time variables. The algorithms by [Vilím, 2009] and [Kameugne et al., 2014] proceed in two phases: the detection and the adjustment.
The detection phase detects \( end \) before \( end \) temporal relations between the tasks. The relation \( \Omega < i \) indicates that the task \( i \) must finish after all tasks in \( \Omega \) are completed.
Given a set of tasks \( \Omega \subseteq I \), the Edge-Finder detection rule enforces the condition that if a task \( i \notin \Omega \) cannot be executed concurrently with the tasks in \( \Omega \) without one of them missing its deadline, then the tasks in \( \Omega \) must end before the task \( i \) ends, i.e. \( \Omega < i \). [Baptiste et al., 2001] present the following rule.
\[
e_{\Omega \cup \{i\}} > C(\text{lct}_\Omega - \text{est}_{\Omega \cup \{i\}}) \Rightarrow \Omega < i \tag{3}
\]
[Vilím, 2009] detects all precedences using this rule. He demonstrates that the rule does not need to be applied for all subsets. Let the left cut of task \( i \) be the set of tasks whose \( \text{lct} \) is no greater than \( \text{lct}_i \), i.e. \( \text{LCut}(I, i) = \{ j \in I \mid \text{lct}_j \leq \text{lct}_i \} \). Then the algorithm only needs to enforce the rule (3) for all distinct left cuts. Additionally, Vilím’s detection algorithm introduces the \( \Theta \)-\( \Lambda \)-tree data structure to achieve a \( O(n \log n) \) time complexity.
Assuming a precedence relation \( \Omega < i \) is found during the detection phase, the adjustment phase proceeds to filter the earliest starting time of task \( i \). The new value is computed by spending the energy \( e_\Omega \) in the time interval \([\text{est}_\Omega, \text{lct}_\Omega]\) as follows. A maximum amount of energy is spent on the interval with a restricted capacity of \( C - h_i \). The remaining energy, i.e. \( \max(0, e_\Omega - (\text{lct}_\Omega - \text{est}_\Omega)(C - h_i)) \), is spent as early as possible on the interval using the remaining capacity \( h_i \). The time when this remaining energy completes its execution is the new bound \( \text{est}_i \). [Baptiste et al., 2001] present the following rule for the adjustment phase.
\[
\Omega < i \Rightarrow \text{est}_i \geq \max_{\substack{\Omega' \subseteq \Omega \\ e_{\Omega'} - (C - h_i)(\text{lct}_{\Omega'} - \text{est}_{\Omega'}) > 0}} \left\{ \text{est}_{\Omega'} + \left\lceil \frac{e_{\Omega'} - (C - h_i)(\text{lct}_{\Omega'} - \text{est}_{\Omega'})}{h_i} \right\rceil \right\} \tag{4}
\]
The main difficulty of the adjustment phase is to compute the subset \( \Omega' \) that results in the optimal adjustment. [Vilím, 2009] introduces an adjustment algorithm running in \( O(kn \log n) \) time, where \( k \) is the number of distinct heights. His algorithm uses an extended \( \Theta \)-\( \Lambda \)-tree. In an orthogonal work, [Kameugne et al., 2014] introduce an adjustment algorithm running in \( O(n^2) \) time, based on notions of minimum slack and maximum density. Although their algorithm does not strictly dominate Vilím’s algorithm complexity, they demonstrate that it performs better in practice.
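The adjustment computation can be sketched by brute force: enumerate all nonempty subsets \( \Omega' \subseteq \Omega \) and keep the strongest bound whose remaining energy is positive. This exponential sketch is for clarity only; Vilím's and Kameugne et al.'s algorithms compute the same bound in polynomial time. Tasks are `(est, lct, p, h)` tuples:

```python
from itertools import combinations
from math import ceil

def adjust_est(omega, i, C):
    """Adjustment rule (4): given a detected precedence Omega < i,
    return the strongest lower bound on est_i."""
    best = i[0]  # current est_i
    h_i = i[3]
    for r in range(1, len(omega) + 1):
        for sub in combinations(omega, r):
            est_s = min(t[0] for t in sub)
            lct_s = max(t[1] for t in sub)
            e_s = sum(t[2] * t[3] for t in sub)
            # energy left after filling the band of height C - h_i
            rest = e_s - (C - h_i) * (lct_s - est_s)
            if rest > 0:
                best = max(best, est_s + ceil(rest / h_i))
    return best
```

For instance, with \( C = 2 \), a task \( (0, 2, 2, 2) \) saturates the resource on \([0, 2]\), so a task of height 1 that must end after it is pushed to start at time 2.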
4 Novel function of earliest completion time
The earliest completion time of a set of tasks \( \Omega \), denoted \( \text{ect}_\Omega \), is NP-Hard to compute [Garey and Johnson, 1979]. One normally uses a relaxation in order to compute an optimistic (smaller) value of the earliest completion time. This relaxation can be identified as the fully-elastic relaxation [Baptiste et al., 2001]. In order to differentiate the two functions of \( \text{ect} \) presented in this paper, we rename the function of \( \text{ect} \) presented by [Vilím, 2009] as \( \text{ect}^F_\Omega \). It is computed by spending a maximum amount of energy as early as possible without any regard to the heights of the tasks. [Vilím, 2009] uses the following formula to compute it.
\[
\text{ect}^F_\Omega = \left\lceil \frac{\max\{C \cdot \text{est}_{\Omega'} + e_{\Omega'} \mid \Omega' \subseteq \Omega\}}{C} \right\rceil \tag{5}
\]
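Formula (5) need not enumerate all subsets: the maximizing \( \Omega' \) can be taken as \( \{ i : \text{est}_i \geq \tau \} \) for some threshold \( \tau \), since adding any task with \( \text{est}_i \geq \tau \) only increases the energy term without decreasing \( \text{est}_{\Omega'} \). A short sketch; tasks are `(est, lct, p, h)` tuples:

```python
from math import ceil

def ect_fully_elastic(tasks, C):
    """Compute ect^F per formula (5), scanning the distinct est values
    as thresholds instead of enumerating all subsets."""
    best = float('-inf')
    for tau in sorted({t[0] for t in tasks}):
        energy = sum(t[2] * t[3] for t in tasks if t[0] >= tau)
        best = max(best, C * tau + energy)
    return ceil(best / C)
```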
We introduce a generalization of the Overload Check rule based on the function \( \text{ect}_\Omega \) or any of its relaxations. If a particular set of tasks cannot be completed before the deadline of this set, then the problem does not have a solution.
\[
\exists \Omega \subseteq I : \text{ect}_\Omega > \text{lct}_\Omega \Rightarrow \text{fail} \tag{6}
\]
Substituting the function \( \text{ect}_\Omega \) by the fully-elastic relaxation \( \text{ect}^F_\Omega \) (5) into rule (6) leads to the well-known form of the Overload Check rule (2). The demonstration follows from inequalities (7)-(10) being equivalent.
\[
\exists \Omega \subseteq I : \quad \text{ect}^F_\Omega > \text{lct}_\Omega \quad \Rightarrow \text{fail} \tag{7}
\]
\[
\exists \Omega \subseteq I : \quad \left\lceil \frac{\max\{C \cdot \text{est}_{\Omega'} + e_{\Omega'} \mid \Omega' \subseteq \Omega\}}{C} \right\rceil > \text{lct}_\Omega \quad \Rightarrow \text{fail} \tag{8}
\]
\[
\exists \Omega' \subseteq I : \quad C \cdot \text{est}_{\Omega'} + e_{\Omega'} > C \cdot \text{lct}_{\Omega'} \quad \Rightarrow \text{fail} \tag{9}
\]
\[
\exists \Omega' \subseteq I : \quad e_{\Omega'} > C(\text{lct}_{\Omega'} - \text{est}_{\Omega'}) \quad \Rightarrow \text{fail} \tag{10}
\]
However, since \( \text{ect}^F_\Omega \leq \text{ect}_\Omega \), rule (6) detects more failure cases than its fully-elastic relaxed version. This suggests finding stronger relaxations of the function \( \text{ect}_\Omega \) than \( \text{ect}^F_\Omega \).
From [Vilím, 2009], we generalize the Edge-Finder detection rule. A precedence is detected when a set of tasks Ω, executing along a task \( i \notin Ω \), cannot meet its deadline.
\[ \forall \Omega \subset I, \forall i \in I \setminus \Omega : \text{ect}_{\Omega \cup \{i\}} > \text{lct}_\Omega \Rightarrow \Omega < i \quad (11) \]
Since computing \( \text{ect}_\Omega \) is NP-Hard, one needs to use a relaxation. The fully-elastic relaxation results in rule (3). The demonstration is similar to the one for the Overload Check.
We introduce a stronger relaxation for the function \( \text{ect} \) that we call horizontally-elastic. With this relaxation, a task \( i \) is allowed to consume, at any time \( t \in [\text{est}_i, \text{lct}_i] \), between 0 and \( h_i \) units of resource. Unlike the fully-elastic relaxation, it cannot consume more than \( h_i \) units of resource. Given a set of tasks \( \Omega \), \( \text{ect}_\Omega^H \) is computed using the following formulas.
Let \( h_{\max}(t) \) represent the amount of resource that can be allocated to the tasks \( \Omega \) at time \( t \). A task \( i \) can consume at most \( h_i \) units of resource at any time in its execution window \([\text{est}_i, \text{lct}_i] \). The resource has a capacity \( C \).
\[ h_{\max}(t) = \min \Bigg( \sum_{i \in \Omega \,:\, \text{est}_i \leq t < \text{lct}_i} h_i,\; C \Bigg) \quad (12) \]
Let \( h_{\text{req}}(t) \) be the amount of resource required at time \( t \) by the tasks in \( \Omega \) if they were all starting at their earliest starting times. In this context, a task \( i \) consumes \( h_i \) units of resource throughout the interval \([\text{est}_i, \text{ect}_i] \).
\[ h_{\text{req}}(t) = \sum_{i \in \Omega \,:\, \text{est}_i \leq t < \text{ect}_i} h_i \quad (13) \]
We call overflow \( ov(t) \) the energy from \( h_{\text{req}}(t) \) that cannot be executed at time \( t \) due to the limited capacity \( h_{\max}(t) \). This overflow is accumulated over time and released when the resource is no longer saturated. Let \( h_{\text{cons}}(t) \) be the amount of resource that is actually consumed at time \( t \). This amount is given by \( h_{\text{req}}(t) \) to which we add the previously accumulated overflow. The resource consumed is limited by \( h_{\max}(t) \).
\[ h_{\text{cons}}(t) = \min(h_{\text{req}}(t) + ov(t - 1), h_{\max}(t)) \quad (14) \]
\[ ov(t) = ov(t - 1) + h_{\text{req}}(t) - h_{\text{cons}}(t) \quad (15) \]
\[ ov\Big(\min_{i \in \Omega} \text{est}_i - 1\Big) = 0 \quad (16) \]
The earliest completion time occurs when all tasks are completed. Figure 1 shows the distribution of the energy along the time line.
\[ \text{ect}^H_\Omega = \max \{ t \mid h_{\text{cons}}(t) > 0 \} + 1 \quad (17) \]
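Equations (12)-(17) can be implemented directly as a sweep over unit time points; the paper's Profile data structure exists precisely to avoid this unit-by-unit iteration, so the sketch below is for exposition only. Returning the leftover overflow also exposes the failure condition used later (overflow unspent beyond the horizon). Tasks are `(est, lct, p, h)` tuples:

```python
def ect_horizontally_elastic(tasks, C):
    """Time sweep implementing equations (12)-(17): returns the pair
    (ect^H, unspent overflow at the horizon)."""
    t0 = min(t[0] for t in tasks)
    horizon = max(t[1] for t in tasks)
    ov = 0
    ect = t0
    for t in range(t0, horizon):
        # (12) capacity usable at time t
        hmax = min(sum(h for est, lct, p, h in tasks if est <= t < lct), C)
        # (13) demand if every task started at its est
        hreq = sum(h for est, lct, p, h in tasks if est <= t < est + p)
        # (14)-(15) consume the demand plus the accumulated overflow
        hcons = min(hreq + ov, hmax)
        ov += hreq - hcons
        if hcons > 0:
            ect = t + 1  # (17)
    return ect, ov
```

On the first instance of the proof of Theorem 2 below, the sweep ends with one unit of overflow that can never be spent, which is how the horizontally-elastic Overload Check detects the inconsistency.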
Theorem 1 shows that the horizontally-elastic relaxation is stronger than the fully-elastic one.
**Theorem 1.** For all \( \Omega \subseteq I \), \( \text{ect}^F_\Omega \leq \text{ect}^H_\Omega \leq \text{ect}_\Omega \).

**Proof.** The fully-elastic relaxation requires a task \( i \) to consume \( e_i \) units of resource within the interval \([\text{est}_i, \text{lct}_i] \). The horizontally-elastic relaxation has the same constraint with the added restriction that no more than \( h_i \) units of resource can be consumed at any given time, which can only make a schedule finish later, hence \( \text{ect}^F_\Omega \leq \text{ect}^H_\Omega \). In the CuSP, a task must consume either 0 or \( h_i \) units of resource at time \( t \in [\text{est}_i, \text{lct}_i] \) and cannot be interrupted. These conditions are even stronger, hence \( \text{ect}^H_\Omega \leq \text{ect}_\Omega \). □
**Theorem 2.** The Overload Check and Edge-Finder based on the horizontally-elastic relaxation detect a superset of the inconsistencies and precedences detected by their respective versions based on the fully-elastic relaxation.
**Proof.** Since \( \text{ect}^F_\Omega \leq \text{ect}^H_\Omega \) for all \( \Omega \) (Theorem 1), the fail condition \( \text{ect}^F_\Omega > \text{lct}_\Omega \) implies the fail condition \( \text{ect}^H_\Omega > \text{lct}_\Omega \). Similarly, \( \text{ect}^F_{\Omega \cup \{i\}} > \text{lct}_\Omega \) implies the detection condition \( \text{ect}^H_{\Omega \cup \{i\}} > \text{lct}_\Omega \). Consider the instance with \( C = 2 \) and four tasks whose parameters \((\text{est}_i, \text{lct}_i, p_i, h_i)\) are \( (0, 4, 2, 1) \), \( (1, 4, 1, 2) \), \( (1, 4, 1, 2) \), and \( (1, 4, 1, 2) \). Only the Overload Check based on the horizontally-elastic relaxation fails. Now consider the instance with \( C = 2 \) and the tasks \( x : (0, 5, 2, 1) \), \( y : (1, 5, 2, 1) \), \( z : (1, 5, 2, 1) \), and \( w : (1, 10, 2, 1) \). The precedence \( \{x, y, z\} < w \) is only detected when using the horizontally-elastic relaxation. □
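The first instance in the proof can be verified with a short area computation: the fully-elastic relaxation compares the total energy against \( C \cdot (\text{lct}_\Omega - \text{est}_\Omega) \), while under the horizontally-elastic relaxation the consumable capacity at time \( t \) is bounded by equation (12), which yields a strictly smaller available area here:

```python
def fully_elastic_area(tasks, C):
    # energy available to the fully-elastic relaxation on [est, lct]
    est = min(t[0] for t in tasks)
    lct = max(t[1] for t in tasks)
    return C * (lct - est)

def horizontally_elastic_area(tasks, C):
    # integrate min(sum of active heights, C) over time (cf. eq. (12))
    est = min(t[0] for t in tasks)
    lct = max(t[1] for t in tasks)
    return sum(min(sum(h for e, l, p, h in tasks if e <= t < l), C)
               for t in range(est, lct))

# Instance from the proof: C = 2, tasks given as (est, lct, p, h)
C = 2
tasks = [(0, 4, 2, 1), (1, 4, 1, 2), (1, 4, 1, 2), (1, 4, 1, 2)]
energy = sum(p * h for _, _, p, h in tasks)
# energy = 8 fits in the fully-elastic area 2 * 4 = 8 (no failure),
# but only 1 + 2 + 2 + 2 = 7 units are consumable horizontally: fail.
```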
### 5 Resource Utilization Profile
To efficiently compute \( \text{ect}^H \), we introduce a data structure called Resource Utilization Profile, or simply Profile, that stores the resource utilization over time, as in Figure 1. Similarly to [Gay et al., 2015], we represent the Profile as an aggregation of juxtaposed rectangles of different lengths and heights. Rectangles are expressed with tuples \((\text{time}, \text{capacity}, \Delta_{\text{max}}, \Delta_{\text{req}})\) where \text{time} is the start time, \text{capacity} is the remaining capacity of the resource at the start time, \Delta_{\text{max}} and \Delta_{\text{req}} are two quantities initialized to zero. The ending time is the starting time of the next rectangle. These tuples are stored in a sorted linked list whose nodes are called time points, referring to the starting times of the rectangles.
The Profile is initialized with a time point of capacity \( C \) for every distinct value of \( \text{est} \), \( \text{ect} \), and \( \text{lct} \). We add a sufficiently large time point to act as sentinel. Finally, while initializing the data structure, pointers are kept so that \( T_{\text{est}_i} \), \( T_{\text{ect}_i} \), and \( T_{\text{lct}_i} \) return the time points associated to \( \text{est}_i \), \( \text{ect}_i \), and \( \text{lct}_i \).
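The time-point list described above can be sketched as a small data structure. This is a sketch with our own naming (the paper only specifies the tuples \( \langle \text{time}, \text{capacity}, \Delta_{\text{max}}, \Delta_{\text{req}} \rangle \), the sorted linked list, the sentinel, and the kept pointers):

```java
import java.util.*;

// One rectangle of the Profile: <time, capacity, deltaMax, deltaReq>,
// plus the per-time-point overflow field "ov" used later by ScheduleTasks.
class TimePoint {
    int time, capacity, deltaMax, deltaReq, ov;
    TimePoint next, previous;
    TimePoint(int time, int capacity) { this.time = time; this.capacity = capacity; }
}

// Sorted doubly-linked list of time points, one per distinct est/ect/lct
// value, each starting at the full capacity C, closed by a large sentinel.
class Profile {
    TimePoint first;
    final Map<Integer, TimePoint> byTime = new HashMap<>(); // T_est / T_ect / T_lct lookups

    Profile(SortedSet<Integer> times, int capacityC) {
        List<Integer> all = new ArrayList<>(times);
        all.add(Integer.MAX_VALUE / 2); // sentinel, larger than any real time
        TimePoint prev = null;
        for (int t : all) {
            TimePoint tp = new TimePoint(t, capacityC);
            byTime.put(t, tp);
            if (prev == null) first = tp;
            else { prev.next = tp; tp.previous = prev; }
            prev = tp;
        }
    }

    // Pointer kept at construction time: the time point of a given est/ect/lct.
    TimePoint at(int time) { return byTime.get(time); }
}
```

The hash-map lookup stands in for the paper's constant-time pointers \( T_{\text{est}_i} \), \( T_{\text{ect}_i} \), \( T_{\text{lct}_i} \).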
The algorithm ScheduleTasks schedules a set of tasks \( \Theta \) on the profile \( P \) in batch. The algorithm computes the functions \( \text{hreq}(t) \), \( \text{hmax}(t) \), \( \text{hcons}(t) \), and \( \text{ov}(t) \). To ensure a strongly polynomial running time complexity, the algorithm does not process individual time points, but time intervals on which the functions do not fluctuate. Line 17 sets the remaining capacity of a time point to the capacity of the resource minus the consumed capacity \( \text{hcons} \). For future use, line 7 stores the overflow at the moment of processing the time point, and the backward loop on lines 22–25 resets this overflow to the minimum overflow encountered in later time points.
Lemma 2. ScheduleTasks runs in \( O(n) \) time.
Proof. By Lemma 1, the Profile contains \( O(n) \) time points, and each loop of ScheduleTasks iterates once per time point. \( \square \)
7 Edge-Finder Detection
We introduce an algorithm that enforces the Edge-Finder rule (11) based on the horizontally-elastic relaxation.
Like Vilím's [2009] algorithm, Detection iterates over all tasks in non-increasing order of \( \text{lct} \). On each iteration, the function ScheduleTasks schedules on an empty Profile the left cut \( \Theta \) of the current task, and DetectPrecedences tests the tasks in \( \Lambda \) for precedence detection. The function DetectPrecedences returns all tasks \( j \in \Lambda \) for which the rule (11) detects the precedence \( \Theta < j \).
The algorithm OverloadCheck is essentially the same as Vilím's [2009], except that the value of \( \text{ect} \) is computed using ScheduleTasks. The algorithm also fails if some overflow was unspent beyond time \( \text{lct}_\Theta \).
Lemma 3. OverloadCheck runs in \( O(n^2) \) time.
Proof. The linear time algorithm ScheduleTasks is called \( n \) times. \( \square \)
Algorithm 1: ScheduleTasks(\( \Theta, c \))
1: for all time point \( t \) do \( t.\Delta_{\text{max}} \leftarrow 0,\ t.\Delta_{\text{req}} \leftarrow 0 \)
2: for \( i \in \Theta \) do
3: Increment \( T_{\text{est}_i}.\Delta_{\text{max}} \) and \( T_{\text{est}_i}.\Delta_{\text{req}} \) by \( h_i \)
4: Decrement \( T_{\text{lct}_i}.\Delta_{\text{max}} \) and \( T_{\text{ect}_i}.\Delta_{\text{req}} \) by \( h_i \)
5: \( t \leftarrow P.\text{first},\ ov \leftarrow 0,\ \text{ect} \leftarrow -\infty,\ S \leftarrow 0,\ h_{\text{req}} \leftarrow 0 \)
6: while \( t.\text{time} \neq \text{lct}_\Theta \) do
7: \( t.ov \leftarrow ov \)
8: \( l \leftarrow t.\text{next}.\text{time} - t.\text{time} \)
9: \( S \leftarrow S + t.\Delta_{\text{max}} \)
10: \( h_{\text{max}} \leftarrow \min(S, c) \)
11: \( h_{\text{req}} \leftarrow h_{\text{req}} + t.\Delta_{\text{req}} \)
12: \( h_{\text{cons}} \leftarrow \min(h_{\text{req}} + ov, h_{\text{max}}) \)
13: if \( 0 < ov < (h_{\text{cons}} - h_{\text{req}}) \cdot l \) then
14: \( l \leftarrow \max\left(1, \left\lceil \frac{ov}{h_{\text{cons}} - h_{\text{req}}} \right\rceil\right) \)
15: \( t.\text{insertAfter}(\langle t.\text{time} + l,\ t.\text{capacity},\ 0,\ 0 \rangle) \)
16: \( ov \leftarrow ov + (h_{\text{req}} - h_{\text{cons}}) \cdot l \)
17: \( t.\text{capacity} \leftarrow c - h_{\text{cons}} \)
18: if \( t.\text{capacity} < c \) then \( \text{ect} \leftarrow t.\text{next}.\text{time} \)
19: \( t \leftarrow t.\text{next} \)
20: \( t.ov \leftarrow ov \)
21: \( m \leftarrow \infty \)
22: while \( t \neq P.\text{first} \) and \( m > 0 \) do
23: \( m \leftarrow \min(m, t.ov) \)
24: \( t.ov \leftarrow m \)
25: \( t \leftarrow t.\text{previous} \)
26: return \( \text{ect}, ov \)
Algorithm 2: OverloadCheck(\( \mathcal{I}, C \))
1: \( \Theta \leftarrow \emptyset \)
2: for \( i \in \mathcal{I} \) in ascending order of \( \text{lct}_i \) do
3: \( \Theta \leftarrow \Theta \cup \{i\} \)
4: \( \text{ect, ov} \leftarrow \text{ScheduleTasks}(\Theta, C) \)
5: if \( \text{ect} > \text{lct}_i \) or \( \text{ov} > 0 \) then fail
Algorithm 3: Detection(\( \mathcal{I}, C \))
1: \( \text{Prec} \leftarrow \emptyset,\ \Theta \leftarrow \mathcal{I},\ \Lambda \leftarrow \emptyset \)
2: for \( t \in \{\text{lct}_i \mid i \in \mathcal{I}\} \) in desc. ord. do
3: \( \Theta \leftarrow \Theta \setminus \{j \mid \text{lct}_j = t\} \)
4: \( \Lambda \leftarrow \Lambda \cup \{j \mid \text{lct}_j = t\} \)
5: \( \text{ect}, ov \leftarrow \text{ScheduleTasks}(\Theta, C) \)
6: if \( ov > 0 \) then fail
7: for \( h \in \{h_i \mid i \in \Lambda\} \) do
8: \( \Lambda^h \leftarrow \{i \in \Lambda \mid h_i = h\} \)
9: \( \Omega \leftarrow \text{DetectPrecedences}(\Theta, \Lambda^h, h, \text{lct}_\Theta) \)
10: \( \text{Prec} \leftarrow \text{Prec} \cup \{\Theta < j \mid j \in \Omega\} \)
11: \( \Lambda \leftarrow \Lambda \setminus \Omega \)
12: return \( \text{Prec} \)
6 Overload Check
We present an algorithm that enforces the Overload Check rule based on the horizontally-elastic relaxation, i.e.:
\[
\exists \Omega \subseteq \mathcal{I} : \text{ect}^{H}_{\Omega} > \text{lct}_\Omega \Rightarrow \text{fail}
\]
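For contrast, a brute-force fully-elastic check can be written directly from the energy argument: fail whenever some window \([\text{est}, \text{lct}]\) must contain more energy than \( C \cdot (\text{lct} - \text{est}) \). This baseline sketch (ours, not the paper's algorithm, which uses the stronger horizontally-elastic \( \text{ect}^H \)) makes the relaxation gap concrete:

```java
// Brute-force fully-elastic overload check over all O(n^2) windows,
// using e_i = p_i * h_i. Baseline sketch only, not the paper's algorithm.
public class FullyElasticOverloadCheck {
    // est, lct, p, h: per-task parameters; C: resource capacity.
    // Returns true iff some window is overloaded under the fully-elastic relaxation.
    static boolean overloaded(int[] est, int[] lct, int[] p, int[] h, int C) {
        int n = est.length;
        for (int a = 0; a < n; a++)
            for (int b = 0; b < n; b++) {
                int lo = est[a], hi = lct[b];
                if (lo >= hi) continue;
                long energy = 0;
                for (int i = 0; i < n; i++)
                    if (est[i] >= lo && lct[i] <= hi)  // task fully inside the window
                        energy += (long) p[i] * h[i];
                if (energy > (long) C * (hi - lo)) return true;
            }
        return false;
    }
}
```

On the first instance of Theorem 2 this check does not fail (every window fits exactly), which is precisely why the horizontally-elastic check is strictly stronger there.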
Algorithm 4: DetectPrecedences(\( \Theta, \Lambda^h, h, \text{lct} \))
1: for all time point \( t \) do \( t.\Delta_{\text{max}} \leftarrow 0 \)
2: for \( i \in \Theta \) do
3: Decrement \( T_{\text{est}_i}.\Delta_{\text{max}} \) by \( h_i \)
4: Increment \( T_{\text{lct}_i}.\Delta_{\text{max}} \) by \( h_i \)
5: \( \text{minest} \leftarrow \min_{i \in \Lambda^h} \text{est}_i \)
6: \( t \leftarrow \text{getNode}(\text{lct}).\text{previous} \)
7: \( \Omega \leftarrow \emptyset,\ e \leftarrow 0,\ ov \leftarrow 0,\ c \leftarrow 0,\ h_{\text{max}} \leftarrow h \)
8: while \( t.\text{time} \geq \text{minest} \) do
9: \( l \leftarrow t.\text{next}.\text{time} - t.\text{time} \)
10: \( h_{\text{max}} \leftarrow h_{\text{max}} + t.\text{next}.\Delta_{\text{max}} \)
11: \( c \leftarrow \min(t.\text{capacity},\ h_{\text{max}} - \max(0, \min(ov, h - c))) \)
12: \( e \leftarrow e + l \cdot c \)
13: \( ov \leftarrow \max(0,\ ov + l \cdot (c - h)) \)
14: \( \Omega \leftarrow \Omega \cup \{ j \in \Lambda^h \mid \)
15: \( \text{est}_j = t.\text{time} \wedge e_j - \min(0, h_j \cdot (\text{ect}_j - \text{lct})) > e \} \)
16: \( t \leftarrow t.\text{previous} \)
17: return \( \Omega \)
Algorithm 5: Adjustment(\( \text{Prec}, C \))
1: for \( (\Theta < i) \in \text{Prec} \) do
2: \( \text{ect}, ov \leftarrow \text{ScheduleTasks}(\Theta, C - h_i) \)
3: \( \text{est}_i \leftarrow \max(\text{est}_i,\ \text{ComputeBound}(i, \Theta, ov)) \)
The term $\max(0, h_j(ect_j - lct))$ represents the energy of task $j$ that cannot be spent within $[est_j, lct]$.
Lemma 4. DetectPrecedences runs in $O(n)$ time.
Proof. All loops iterate $O(n)$ times, once per time point (Lemma 1). Tasks in $\Lambda$ are added at most once to $\Omega$ on line 15. Therefore DetectPrecedences runs in $O(n)$. □
Lemma 5. Detection runs in $O(n(n + k))$ time, where $k$ is the number of distinct heights.
Proof. Detection calls ScheduleTasks once per task and DetectPrecedences once per distinct height, each running in linear time. Hence a complexity of $O(n(n + k)) = O(n^2)$, since $k \leq n$. □
8 Edge-Finder Adjustment
We introduce a stronger adjustment algorithm that utilizes the strength of the horizontally-elastic relaxation. Given a precedence relation $\Theta < i$ discovered during the detection phase, Adjustment computes the new value for $est_i$.
Let the bottom (resp. upper) part of the resource be a portion with capacity $C - h_i$ (resp. $h_i$). Adjustment iterates through all detected precedences $\Theta < i$ and schedules the tasks in $\Theta$ on the bottom part of the resource. Because of this restriction on the capacity, the energy of the tasks is not fully scheduled which results in an overflow $ov$ returned by ScheduleTasks. This overflow is an accumulation of small overflows that ScheduleTasks encoded on the profile as follows. For each time point $t$, $t, ov$ indicates how much overflow, that contributed to the final overflow, was accumulated on the time interval $(-\infty, t, time)$. All this overflow must be scheduled on the upper part of the resource. ComputeBound simulates the execution of ScheduleTasks that schedules the tasks in $\Theta$. However, it only allows $t, ov$ units of energy on the time interval $(-\infty, t, time)$ to be scheduled on the upper part of the resource. Term $d_{total}$ represents the energy that was scheduled on the upper part in the previous iterations, while $d$ represents the energy that is about to be scheduled on the upper part in the current iteration. When $ov_{max}$ units of energy are scheduled on the upper part, the algorithm stops and returns the time when this event occurs. This is where $est_i$ is adjusted.
Lemma 6. Adjustment runs in $O(n^2)$ time.
Proof. ComputeBound iterates $O(n)$ times (Lemma 1) and each iteration executes in $O(1)$. ScheduleTasks also runs in linear time. Adjustment calls these algorithms $O(n)$ times for a total complexity of $O(n^2)$. □
Consider a detected precedence $\Theta < i$. Let $\text{adj}_i^F$ (resp. $\text{adj}_i^H$) be the earliest starting time of task $i$ after being adjusted using the classic rule (4) (resp. the algorithm Adjustment).
Theorem 4. For all precedences $\Theta < i$, $\text{adj}_i^F \leq \text{adj}_i^H$.
Proof. Both adjustments assign as much energy as possible on the bottom part of the resource and the remaining energy onto the upper part. But the horizontally-elastic relaxation
Algorithm 6: ComputeBound(\( i, \Theta, ov_{\text{max}} \))
1: for all time point \( t \) do \( t.\Delta_{\text{max}} \leftarrow 0,\ t.\Delta_{\text{req}} \leftarrow 0 \)
2: for \( j \in \Theta \) do
3: Increment \( T_{\text{est}_j}.\Delta_{\text{max}} \) and \( T_{\text{est}_j}.\Delta_{\text{req}} \) by \( h_j \)
4: Decrement \( T_{\text{lct}_j}.\Delta_{\text{max}} \) and \( T_{\text{ect}_j}.\Delta_{\text{req}} \) by \( h_j \)
5: \( t \leftarrow \min\{t \mid t.ov > 0\},\ ov \leftarrow 0,\ d_{\text{total}} \leftarrow 0 \)
6: \( S \leftarrow 0,\ h_{\text{req}} \leftarrow 0 \)
7: while \( t.\text{next} \neq \text{null} \) do
8: \( l \leftarrow t.\text{next}.\text{time} - t.\text{time} \)
9: \( S \leftarrow S + t.\Delta_{\text{max}} \)
10: \( h_{\text{max}} \leftarrow \min(S, C) \)
11: \( h_{\text{req}} \leftarrow h_{\text{req}} + t.\Delta_{\text{req}} \)
12: \( h_{\text{cons}} \leftarrow \min(h_{\text{req}} + ov, h_{\text{max}}) \)
13: if \( 0 < ov < (h_{\text{cons}} - h_{\text{req}}) \cdot l \) then
14: \( l \leftarrow \max\left(1, \left\lceil \frac{ov}{h_{\text{cons}} - h_{\text{req}}} \right\rceil\right) \)
15: \( d \leftarrow (h_{\text{cons}} - C + h_i) \cdot l \)
16: \( d \leftarrow \max(0,\ \min(d,\ ov_{\text{max}} - d_{\text{total}},\ t.\text{next}.ov - d_{\text{total}})) \)
17: if \( d_{\text{total}} + d = ov_{\text{max}} \) then return \( t.\text{time} + l \)
18: \( d_{\text{total}} \leftarrow d_{\text{total}} + d \)
19: \( ov \leftarrow ov + (h_{\text{req}} - h_{\text{cons}}) \cdot l \)
20: if \( t.\text{time} + l < t.\text{next}.\text{time} \) then
21: \( t.\text{time} \leftarrow t.\text{time} + l \)
22: else \( t \leftarrow t.\text{next} \)
23: return \( -\infty \)
limits to $h_i$ the amount of energy spent by a task $i$ at any given time, which shifts the energy later on the schedule. The fully-elastic relaxation consumes the bottom part of the resource entirely and packs the remaining energy as soon as possible on the upper part. The horizontally-elastic relaxation might not fully consume the bottom part and does not necessarily pack the remaining energy at the earliest time on the upper part. Consider the instance with $C = 3$ and five tasks whose parameters $(\text{est}_i, \text{lct}_i, p_i, h_i)$ are $x : \langle 0, 4, 2, 1 \rangle$, $y : \langle 1, 4, 1, 3 \rangle$, $z : \langle 2, 4, 1, 3 \rangle$, $w : \langle 2, 4, 1, 1 \rangle$, and $v : \langle 1, 10, 3, 1 \rangle$. We get $\text{adj}^F_v = 2 < \text{adj}^H_v = 3$ for the precedence $\{x, y, z, w\} < v$. □
9 Experimental results
We implemented the algorithms for the Choco 2 solver and evaluated them on the BL [Baptiste and Le Pape, 2000] and PSPLib [Kolisch and Sprecher, 1997] benchmarks of the Resource-Constrained Project Scheduling Problem (RCPSP). This problem has multiple resources of varied capacities on which the tasks, subject to precedence constraints, are simultaneously executed. We minimize the makespan. The base model has one starting time variable per task, one makespan variable, and one CUMULATIVE constraint per resource for which the Time-Table rule is enforced [Beldiceanu and Carlsson, 2002]. From this common core, we create two models that are compared against each other. In the horizontally-elastic model, we enforce the Overload Check and Edge-Finder rules as explained in the previous sections. In the fully-elastic model, we instead post our implementation of a constraint that enforces the Overload Check and Edge-Finder rules as described by [Vilím, 2009]. We used three different branching heuristics: a static variable and value ordering, DomOverWDeg [Boussemart et al., 2004], and Impact Based Search [Refalo, 2004]. All experiments were run on an Intel Xeon X5560 2.667GHz quad-core processor.
Figure 2 compares runtimes and backtracks for both models. We only report instances that were solved within 25 minutes. For the highly-cumulative instances from the BL benchmark, our method yields a speedup on 85%, 82%, and 63% of the instances using the static, DomOverWDeg, and Impact Based Search heuristics, respectively. Impact Based Search shows the smallest improvement since it tends not to branch on values that would be filtered. For the highly-disjunctive instances from the PSPLib benchmark, we notice a smaller improvement in runtimes since the instances are generally easier to solve. Our method still yields a speedup on 75% of the instances for all three heuristics.
The horizontally-elastic relaxation grants a significant improvement in the runtimes, although not as significant for the backtracks. This is explained by Vilím’s algorithm that processes a large number of detected precedences that do not lead to adjustments. We observed that our algorithm only discovers precedences that lead to adjustments. Moreover, our algorithm processes each precedence in linear time.
10 Conclusion
We generalized the Overload Check and Edge-Finder rules using a function of earliest completion time (ect). We introduced a stronger relaxation of ect. We presented an innovative data structure used in two new algorithms that enforce the Overload Check and Edge-Finder rules. Experimental results demonstrate the effectiveness of our method. In fact, our algorithms solved many RCPSP instances with fewer backtracks and in less time than state-of-the-art algorithms.
References
Classification Techniques Use to Empirically Validate Redundancy Metrics as Reliability Indicators based on Fault-proneness Attribute
Dalila Amara and Latifa Ben Arfa Rabai
Université de Tunis, Institut Supérieur de Gestion de Tunis, SMART Lab, Tunis, Tunisia
Keywords: Software Reliability, Software Redundancy Metrics, Software Metrics Validation, Fault-proneness.
Abstract: Software metrics are proposed as quantitative measures of internal quality factors like cohesion and complexity. External factors such as reliability and maintainability are usually predicted by means of various metrics of internal attributes. In this context, we have focused on a suite of four entropy-based software redundancy metrics considered as software reliability indicators. Despite their important purpose, they are manually computed and only theoretically validated. Hence, we have implemented an empirical approach for assessing these metrics, using a set of programs retrieved from real software projects. Given that software reliability, as an external attribute, cannot be directly evaluated, we employ other measurable quality factors representing direct reflections of this attribute. Among them, defect density and fault-proneness are widely used to measure and predict software reliability based on software metrics. The basic idea is to generate an empirical dataset embodying, for each program, the values of the redundancy metrics and the values of one of these measurable attributes. In our previous work, we studied their relationship with the defect density attribute in order to validate them as useful reliability indicators, and obtained promising results indicating the usefulness of these metrics as defect density indicators. Classifying modules (functions or classes) as defective or not defective is also an important reliability indicator: the literature shows that software reliability depends on its fault-prone modules, and more trusted software consists of less fault-prone units. Therefore, we propose in this paper an empirical approach to validate the redundancy metrics as significant reliability indicators. The validation is carried out using the accuracy measure, and results show that the fault-proneness attribute can be predicted using the redundancy metrics with a good accuracy rate of 0.82.
1 INTRODUCTION
One common way to verify and validate software quality is software testing, which consists in identifying software faults (Lyu et al., 1996). This process takes much time and requires a large amount of resources (Gondra, 2008). Therefore, methodologies for predicting software quality prior to the testing phase are required to increase the efficiency of time, effort, and cost usage. Software quality prediction requires the development of a measurement plan providing the needed data on software factors (Arvanitou et al., 2017). Software quality measurement consists in assigning numbers or symbols to software factors to evaluate their performance using software metrics (Nakai et al., 2016; Fenton and Bieman, 2014). These metrics provide quantitative values of different software factors related to the process and product entities. In addition, they are used to develop quality prediction models (Reddivari and Raman, 2019). Most software metrics were defined to evaluate internal quality attributes, including coupling and complexity (Chidamber and Kemerer, 1994). For external attributes like reliability and maintainability, measurement is usually determined by combining different metrics of internal characteristics (Briand and Wüst, 2002; Jabangwe et al., 2015). According to (Fenton and Bieman, 2014), external attributes are more difficult to understand than internal ones since they depend on the program behaviour and are available only at later phases of the development process. Thus, further studies focusing on the prediction of these attributes are still required. Authors in (Mili et al., 2014) proposed a suite of metrics to monitor software reliability by evaluating code redundancy based on the Shannon entropy measure. The main limitation of this suite is the lack of an empirical validation showing its utility
as a reliability indicator. As software reliability is an external attribute that cannot be directly evaluated, we have focused on measurable attributes that reflect it in order to address this issue. In our previous work (Amara et al., 2021), we studied the relationship between the redundancy metrics and the defect density attribute in order to validate them as useful reliability indicators. Promising results indicating the usefulness of these metrics as defect density indicators were obtained. Fault-proneness was also identified as an important reliability indicator (Gondra, 2008; Singh et al., 2018). Authors in (Verma and Kumar, 2017) noted that software reliability depends on its fault-prone modules; more trusted software consists of less fault-prone units. Thus, it is possible to monitor software reliability by predicting the number of fault-prone modules based on software metrics (Gyimothy et al., 2005; Olague et al., 2007; Jabangwe et al., 2015).
Therefore, we aim in this paper to use the fault-proneness attribute to answer this question: are redundancy metrics useful for software reliability prediction? To perform the empirical assessment and validation of the redundancy metrics, a data collection phase is required. For that step, the Apache Commons Mathematics Library was used in this research. Given its availability, two main elements of data are obtained:
- Different classes satisfying the redundancy metrics assumption (the metrics are computed at method level). These methods manipulate input and output variables; that is, programs with input states represented by the declared variables and output states represented by the modified states of these variables (Mili et al., 2014) are selected, in order to construct an empirical data set containing the values of these metrics.
- The bug information of the selected classes, needed to compute the values of the fault-proneness attribute, was unavailable. Therefore, a fault injection procedure was used to obtain it and to perform the empirical validation of the redundancy metrics. Thus, in this study, the dataset we used to perform our validation and to train and evaluate the classification models contains the values of the redundancy metrics for each function and the related fault-proneness attribute (0 or 1).
Different experiments based on classification techniques are conducted to address these issues. The validation is carried out using the accuracy measure and results confirm the predictive capability of the redundancy metrics for software fault prediction.
The paper is organized as follows: Section 2 summarizes the purpose of the redundancy metrics and provides an overview of software fault-proneness prediction. Section 3 presents the empirical validation approach and the data set collection and analysis procedures. Sections 4 and 5 describe the performed experiments and results. Finally, Section 6 includes the conclusion.
2 RELATED WORKS
In this section, we present the purpose of the redundancy metrics. We also provide an overview of software fault prediction using software metrics.
2.1 Software Reliability
Software reliability is an important software quality attribute defined as the probability of failure-free operation for a specified period of time in a specified environment. It can be described by other sub-characteristics like maturity, availability, fault tolerance, and recoverability (Febrero et al., 2016; Amara and Rabai, 2017). For (Bansiya and Davis, 2002), it is one of the high-level quality attributes that cannot be directly observed and measured.
Different models based on direct metrics were proposed to predict it (Catal and Diri, 2009; Radjenović et al., 2013). These models use software metrics (called independent variables) to evaluate measurable reliability attributes (called dependent variables) like defect density, fault-proneness, and defect count (Briand and Wüst, 2002). Authors in (Mili et al., 2014) also proposed a suite of four metrics to monitor program reliability based on redundancy. Different forms of software redundancy were defined, including information redundancy (code redundancy) (Shannon, 2001), functional redundancy (Asghari et al., 2018), and time redundancy (Dubrova, 2013). The redundancy metrics proposed by (Mili et al., 2014) assess the information redundancy provided by the different states of the program (Shannon, 2001). These states reflect the uncertainty about the outcome of the program's variables. The terminology related to program states includes (Mili et al., 2014):
- Software program state: is the set of values given by its variables which may change by one or more actions (functions) of the program.
- State space: is the set of values taken by the declared program variables.
- Initial state space: the state of the program represented by its input variables.
- Current state (actual state): the different states that the program may be in at any given point in the program.
- Final state space: the state of the program produced by its outputs for the relevant initial states.
- State redundancy: the extra range of values allowed by a program beyond what is needed to represent the program states. It is captured by the initial and final state redundancy metrics defined below.
Example 1 illustrates these definitions.
Example 1: Let a program (method) g defined by:
```java
int s;      /* state space of g */
s = 2;      /* initial state of g */
s = s + 1;  /* internal state 1 of g */
s = 2 * s;  /* internal state 2 of g */
s = s + 12; /* final state of g */
```
To compute the state redundancy (SR) metric (ISR and FSR), each data type is mapped to its width in bits. For instance, for Java language, the entropy of variable declarations of basic data types is illustrated in Table 1.
Table 1: Entropy for basic data types.

| Data type | Entropy (bits) |
|---|---|
| boolean | 1 |
| byte | 8 |
| char, short | 16 |
| int, float | 32 |
| double, long | 64 |
Example 2: Let a program (method) g defined by:
```java
int x, y, z; // the program state is represented by x, y and z variables
x= 21; // initial state of x
y= 90; // initial state of y
z=(x+y)/2; // final state
```
The declared space of this program is defined by three integer variables x, y, and z; hence, using the metrics definitions, \( H(S) = 96 \) bits, since 3 integer variables are used. Its initial state is defined by the three variables x, y, and z: the input variables x and y require respectively 5 and 7 bits to be stored, and the output variable z has a free range (32 bits). Hence \( H(\sigma_1) = 5 + 7 + 32 = 44 \) bits. The final state is determined by the state of the variable z (its entropy): \( H(\sigma_f) = H((21+90)/2) = 6 \) bits. Then: ISR = (96 − 44)/96 = 0.54 and FSR = (96 − 6)/96 = 0.93.
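The computation above can be sketched in Java. The helper names `bits`, `isr`, and `fsr` are ours; `bits` maps a concrete value to the minimum bit width needed to store it, following the ISR and FSR definitions:

```java
// Sketch of the Example 2 computation of state redundancy.
public class StateRedundancy {
    // Minimum number of bits needed to store a positive value.
    static int bits(long value) {
        return Math.max(1, 64 - Long.numberOfLeadingZeros(value));
    }
    // ISR = (H(S) - H(sigma_1)) / H(S); FSR = (H(S) - H(sigma_f)) / H(S)
    static double isr(double hS, double hInit)  { return (hS - hInit) / hS; }
    static double fsr(double hS, double hFinal) { return (hS - hFinal) / hS; }

    public static void main(String[] args) {
        int hS = 3 * 32;                      // H(S): three declared ints = 96 bits
        int hInit = bits(21) + bits(90) + 32; // x=21 -> 5 bits, y=90 -> 7 bits, z free -> 32
        int hFinal = bits((21 + 90) / 2);     // z = 55 -> 6 bits
        System.out.println(isr(hS, hInit));   // (96-44)/96
        System.out.println(fsr(hS, hFinal));  // (96-6)/96
    }
}
```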
2.2 Software Redundancy Metrics Suite
Redundancy metrics were defined based on the Shannon entropy measure of program code (Shannon, 2001). Four metrics were defined: initial state redundancy, final state redundancy, functional redundancy, and non-injectivity (Mili et al., 2014).
2.2.1 Initial and Final State Redundancy Metrics
The state redundancy represents the gap between the declared state and the actual (really used) state of a program (Mili et al., 2014; Ayad et al., 2018). For instance, the age of an employee is generally declared as an integer variable. However, only a restricted range, i.e., between 0 and 120, is really required. This means that 7 bits are sufficient to store the age variable, but the typical 32-bit size of an integer variable is used. The unused bits measure the code redundancy. The program moves from its initial states (\( \sigma_1 \)) to its final states (\( \sigma_f \)); hence two state redundancy measures, namely initial state redundancy (ISR) and final state redundancy (FSR), were defined by:
\[
ISR(g) = \frac{H(S) - H(\sigma_1)}{H(S)} \quad (1)
\]
\[
FSR(g) = \frac{H(S) - H(\sigma_f)}{H(S)} \quad (2)
\]
Notation:
- ISR is the gap between the declared state and the initial state of the program.
- FSR is the gap between the declared state and the final state of the program.
- \( S \) is the program's declared state, represented by all its declared variables.
- \( H(S) \) is the state space of the program, i.e., the maximum entropy (in bits) of its declared variables.
- \( \sigma_1 \) is the initial state of the program \( g \), represented by its input variables.
- \( H(\sigma_1) \) is the entropy of the program's initial state.
- \( \sigma_f \) is the final state of the program, given by its output variables.
- \( H(\sigma_f) \) is the entropy of the program's final state.
2.2.2 Functional Redundancy Metric (FR)
According to (Mili et al., 2014; Ayad et al., 2018), the functional redundancy metric reflects how initial states are mapped to final states. For a program (function) $g$, FR is the ratio of the output data delivered by $g$ to the input data received by $g$, and is given by:
$$FR = \frac{H(Y)}{H(X)}$$ (3)
Notation
- $X$ is a random variable representing the program's input data.
- $Y$ is a random variable representing the program's output data.
- $H(Y)$ is the entropy of the output data delivered by $g$.
- $H(X)$ is the entropy of input data passed through parameters, global variables, read statements, etc.
In Example 2, $H(S) = 96$ bits. The random variable $Y$ is defined by the integer variable $z$, represented by 32 bits. Then, $H(Y) = log_2(2^{32}) = 32$ bits. $H(X)$ is the input data received by $g$, represented by the two declared integer variables $x$ and $y$. Then, $H(X) = 2 \times log_2(2^{32}) = 64$ bits. FR is given by:
$$FR = \frac{32}{64} = 0.5$$
2.2.3 Non-injectivity (NI)
According to (Catal and Diri, 2009), a major source of program (function) redundancy is its non-injectivity. An injective function is a function whose value changes whenever its argument does. A function is non-injective when it maps several distinct arguments (initial states $\sigma_1$) onto the same image (final states $\sigma_f$). NI was defined by:
$$NI = \frac{H(\sigma_1 | \sigma_f)}{H(\sigma_1)} = \frac{H(\sigma_1) - H(\sigma_f)}{H(\sigma_1)}$$ (4)
In Example 2, NI is equal to $(44-6)/44=0.86$.
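Continuing with Example 2, FR and NI follow the same entropy bookkeeping. A minimal sketch with the bit counts from the example:

```python
import math

# Entropies from Example 2 (in bits).
H_X = 2 * math.log2(2 ** 32)   # input: two declared 32-bit integers x and y -> 64
H_Y = math.log2(2 ** 32)       # output: the 32-bit integer z -> 32
H_sigma_1 = 44                 # initial state (5 + 7 + 32 bits)
H_sigma_f = 6                  # final state

FR = H_Y / H_X                              # equation (3)
NI = (H_sigma_1 - H_sigma_f) / H_sigma_1    # equation (4)

print(FR)            # 0.5
print(round(NI, 2))  # 0.86
```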
2.3 Overview of Software Fault-proneness Prediction
Fault-proneness consists in classifying modules (functions or classes) as defective or not defective (Singh et al., 2018). For (Rathore and Kumar, 2017; Kumar et al., 2017), software fault prediction (SFP) consists in identifying faulty modules, i.e., software parts containing faults. This attribute is usually estimated and predicted using predictive models built from software metrics (Gondra, 2008). The early application of these models helps to reduce the testing effort (Singh et al., 2018), as the identified defect-prone parts are tested more rigorously than the others. In addition, effective resource allocation and reductions in cost and development time are obtained (Kalaivani and Beena, 2018).
Different software fault prediction models have been studied since the 1990s. These models are built using classification techniques, since the fault-proneness attribute consists in classifying modules (functions or classes) as defective or not defective. They play a crucial role in understanding, evaluating and improving the quality of software systems. According to (Singh et al., 2018), the early application of these models helps to reduce the testing effort, as testing activities can be planned in advance. Also, the parts of the software system identified as defect-prone will be tested more rigorously than the other parts (Gondra, 2008). In the same context, (Kalaivani and Beena, 2018) noted that the early identification of faulty software parts enables effective resource allocation and reduces the cost and time of software development. Numerous studies have predicted this attribute based on software metrics.
(Menzies et al., 2004) conducted an experiment where different fault prediction models were constructed using the CART, NB and J48 algorithms over different projects taken from the PROMISE repository. Results showed that the performance of NB is better than that of J48.
(Olague et al., 2007) investigated six different versions of the Mozilla Rhino project. The goal was to assess the ability of the C&K, QMOOD and MOOD suites of metrics to predict faulty modules. They applied univariate and multivariate binary logistic regression to the cited suites. The authors concluded that the C&K and QMOOD suites are very useful for fault prediction, in contrast to MOOD.
(Zhou et al., 2010) examined the C&K metrics suite for defect prediction models based on the LR, NB and RF algorithms. The data set under study consists of the KC1 project taken from the NASA data set. The objective was to predict the severity of faults. The authors concluded that the best fault prediction is achieved by most of the C&K metrics except NOC.
(Catal and Diri, 2009) conducted a comparative analysis to study the efficiency of the RF and NB algorithms in predicting fault-prone modules. The authors examined the C&K metrics suite taken from NASA data sets. Results showed that for large data sets, RF provides the best prediction, whereas for small data sets, NB provides the best results.
(He et al., 2015) compared the performance of LR, J48, NB, SVM, DT and BN algorithms in predicting faulty classes. They examined 34 releases obtained from 10 open source PROMISE projects. Authors concluded that SVM and DT perform well in predicting faulty classes.
(Kaur and Kaur, 2018) compared the performance of Bagging, J48, DT, RF and NB classifiers. They constructed different defect prediction models based on C&K and QMOOD metrics. Authors concluded that only Bagging and J48 are the best defect predictors.
(Lomio et al., 2021) also compared the performance of machine learning and deep learning models in predicting faults. They conducted a case study among 33 Java projects, and results showed that deep learning provides more accurate fault detection.
2.4 Formulation of the Research Hypothesis
The presented fault prediction studies highlighted the usefulness and effectiveness of classification techniques in fault-proneness prediction. Thus, to validate the redundancy metrics as reliability indicators using the fault-proneness attribute, we have designed the following hypotheses:
- **H1 (Alternative Hypothesis):** Redundancy metrics are significant indicators of software fault-proneness attribute.
- **H2 (Null Hypothesis):** There is no significant relationship between the redundancy metrics and fault-proneness attribute.
Through these hypotheses, we aim to verify whether a relationship between the different metrics and the fault-proneness attribute exists, in order to confirm their utility in monitoring software reliability.
3 Empirical Validation of Redundancy Metrics as Fault-proneness Indicators
3.1 Empirical Validation Approach
According to (Rathore and Kumar, 2017; Kumar et al., 2017), fault prediction is conducted in three main steps:
1. Data set collection and exploration. This step consists of collecting data related to software metrics and faults.
2. Data set analysis and model building. This step consists of analyzing the data set, splitting the data into training and test sets, and building the models.
3. Model performance evaluation. Numerous evaluation measures have been defined to evaluate the overall performance of the prediction models.
3.2 Data Set Collection
The development of fault prediction models starts with the data set collection phase, which requires two main elements: software metrics and software faults. Data related to these elements can come from similar software projects or from existing software metrics and historical fault data sets of previous projects (Turabieh et al., 2019). In this paper, the fault-proneness attribute, indicating whether a module is fault-free (0) or fault-prone (1), is considered to perform our validation work. As explained in our previous work (Amara et al., 2021), since the redundancy metrics are computed from the program states manipulated by its variables, software classes containing functions of input/output type were selected, i.e., programs (functions) with input states represented by the declared variables and output states represented by the modified states of these variables.
We have focused on the Apache Commons Math library (https://commons.apache.org/) (Kumar and Rathore, 2018) to select the different classes from which the metrics were computed.
To select the needed repository, we have considered the Apache Commons products library, which respects all our requirements and hypotheses. Then, from the selected repository, we have considered a set of 43 classes (see (Amara et al., 2021)) containing functions manipulating variables in the input and the output state. A description of each class and its related functions is available at http://commons.apache.org/proper/commonsmath/javadocs/api-3.6/. As this library contains only the source code and the associated unit tests, we have used a fault injection procedure to obtain the fault-proneness values.
One of the well-known fault injection techniques is mutation testing, which consists of automatically seeding a number of faults (or mutations) into each class's code. The fault injection procedure is used to obtain the fault data set. This prevents us from computing fault-proneness values at the class level, as all of the classes contain faults. Therefore, we ought to compute this attribute at the function level. The redundancy metrics will also be computed at this level, which increases the size of our data set. Details of the redundancy metrics and fault-proneness computation are described in the subsequent sub-sections.
3.2.1 Redundancy Metrics Collection
We have computed the redundancy metrics at the function level of each class, as all classes will contain faults. The process we used to compute these metrics consists of the following steps:
- For each class, we have considered each function separately to generate the different metrics.
- For each function, we have focused on its input and output variables. Then, we have computed the metrics for random inputs using their equations (1) to (4).
- The output of this process is an Excel file in which the four redundancy metrics values of the different functions of each class were saved.
These steps were performed using the Eclipse development environment (version: Neon.3 Release (4.6.3)). Details of metrics computing are available in (Amara et al., 2021).
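The collection steps above can be sketched as a small loop over functions. The function names and entropy values below are hypothetical stand-ins, and CSV is written in place of the Excel file for simplicity:

```python
import csv
import io

def redundancy_metrics(h_s, h_init, h_final, h_x, h_y):
    """Apply equations (1) to (4) to entropy values given in bits."""
    return {
        "ISR": (h_s - h_init) / h_s,
        "FSR": (h_s - h_final) / h_s,
        "FR": h_y / h_x,
        "NI": (h_init - h_final) / h_init,
    }

# Hypothetical measurements: (function, H(S), H(sigma_1), H(sigma_f), H(X), H(Y)).
functions = [("mean", 96, 44, 6, 64, 32)]

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["function", "ISR", "FSR", "FR", "NI"])
writer.writeheader()
for name, h_s, h_i, h_f, h_x, h_y in functions:
    metrics = redundancy_metrics(h_s, h_i, h_f, h_x, h_y)
    writer.writerow({"function": name, **{k: round(v, 2) for k, v in metrics.items()}})
print(out.getvalue())
```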
3.2.2 Fault Data Set Collection
The software fault-proneness attribute is a direct reflection of software reliability since, as noted by (Karimian and Babamir, 2017; Reddivari and Raman, 2019), more trusted software consists of less fault-prone units. Software fault prediction (SFP) consists of classifying modules (functions or classes) as defective or not defective by identifying the faulty modules, i.e., the software parts containing faults (Singh et al., 2018; Rathore and Kumar, 2017; Turabieh et al., 2019). According to (Gondra, 2008), this attribute can be estimated and predicted using prediction models based on software metrics.
The fault injection procedure is performed using automated mutation tools like MuJava, MuEclipse, PiTest and more (Delahaye and Du Bousquet, 2013). In our research work, PiTest is used within the Maven environment. To inject faults, we have adopted the following steps:
- All possible faults which are active by default in PiTest are injected into the source code of the selected classes. These faults include the replacement of binary arithmetic operations by other ones (+ by -, - by +, * by /, / by *), etc.
- PiTest runs, and related reports are generated. They indicate for each function, the type and the location of the injected fault.
- PiTest reports are analyzed to identify for each function whether it is fault-free or not. Thus, we have determined the value of fault-proneness attribute (1 or 0) as follows:
- If all injected faults are detected (killed), then the function is not defective and the value 0 (fault-free) is assigned to the fault-proneness attribute of this function. An example of a non-defective function is depicted in Figure 1.

- If at least one of the injected faults is masked (survived), then this function is considered defective and the value 1 is assigned to the fault-proneness attribute of this function. An example of a defective function is depicted in Figure 2.

The final data set contains, for each function, the values of the redundancy metrics and the associated fault-proneness attribute indicating whether this function contains faults (1) or not (0).
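The labelling rule above can be sketched as a small post-processing step over the PiTest results. The per-function outcome lists below are a hypothetical simplification of the actual PiTest reports:

```python
# Hypothetical parsed PiTest outcomes: function name -> status of each injected mutant.
pitest_results = {
    "evaluate": ["KILLED", "KILLED", "KILLED"],       # every mutant detected
    "interpolate": ["KILLED", "SURVIVED", "KILLED"],  # one mutant survived
}

def fault_proneness(outcomes):
    """1 (fault-prone) if at least one injected fault survived, else 0 (fault-free)."""
    return 1 if any(status == "SURVIVED" for status in outcomes) else 0

labels = {name: fault_proneness(outcomes) for name, outcomes in pitest_results.items()}
print(labels)  # {'evaluate': 0, 'interpolate': 1}
```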
3.3 Data Set Analysis
In this section, we perform the data exploration and correlation analysis. Data exploration is an important step required before applying classification techniques to analyze the data set. Thus, we visualize in Figure 3 the percentage of fault-prone (1) and non-fault-prone (0) functions. Figure 3 shows that 43% of the functions in the selected classes are defective and 57% are fault-free.
We have used the correlation matrix to identify the correlation between the independent variables ISR, FSR, FR, and NI. The objective is to keep only metrics which are not inter-correlated, in order to achieve better model accuracy. Results are illustrated in Figure 4, which shows a strong correlation between ISR and FSR, with a correlation coefficient of 0.93. FSR and NI are also significantly correlated, with a coefficient of 0.63. Therefore, FSR will be omitted in our prediction. The ISR and FSR metrics are strongly correlated because FSR is computed using the value of $H(\sigma_f)$, which in turn depends on the value of $H(\sigma_1)$ used to compute ISR (see equations (1) and (2)). Therefore, any change in the ISR values will lead to changes in the FSR and NI values, which explains their correlation.
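The inter-correlation check can be reproduced with a Pearson correlation over the metric columns. The sketch below uses a hand-rolled coefficient and made-up metric vectors (in the study, a correlation matrix over the full data set was used):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    std_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (std_a * std_b)

# Hypothetical metric vectors; FSR tracks ISR closely by construction.
isr = [0.54, 0.60, 0.30, 0.75, 0.48]
fsr = [0.93, 0.95, 0.70, 0.97, 0.90]

r = pearson(isr, fsr)
# Keep only one metric of each strongly correlated pair (the cutoff is a choice).
drop_fsr = abs(r) > 0.8
print(round(r, 2), drop_fsr)
```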
4 EXPERIMENTS AND RESULTS
This section summarizes the widely used software fault prediction techniques and presents the performed experiments.
4.1 Software Faults Prediction Techniques
The development of fault prediction models requires the use of software prediction techniques. To select which technique to use, we first focus on the response variable we aim to predict. In this paper, the output to predict is fault-proneness, which classifies modules (functions or classes) as fault-prone or fault-free. Therefore, classification techniques are suitable for predicting this attribute using the redundancy metrics. Different classification techniques have been defined, including Decision Trees (DT), Support Vector Machines (SVM), Naive Bayes (NB), Logistic Regression (LR), Random Forest (RF) and others (Prasad et al., 2015; Turabieh et al., 2019; Malhotra, 2015; Singh et al., 2018).
As discussed in Section 2, several studies have been proposed to predict the fault-proneness attribute based on these techniques, either to validate different software metrics or to compare the performance of the techniques themselves. Most of these studies showed the effectiveness of classification techniques in predicting the fault-proneness attribute. However, we have noted that different criteria, such as the size of the used data set (Catal and Diri, 2009) and the level at which the metrics are computed (Koru and Liu, 2005), cause variations in the performance of these techniques. As our main objective is to study the usefulness of the redundancy metrics in reflecting the fault-proneness attribute, and not to compare the classification techniques, we have applied several of them to address this question.
4.2 Experiments
To build the classification models, we have proceeded as follows:
1. To start with, the data exploration phase is performed as explained above. In addition, the required Python packages are imported.
2. Next, data set analysis and model building are performed. In this step, we have studied the correlation between the independent variables (the redundancy metrics) to keep only metrics that are not inter-correlated, as explained above. Also, the data is divided into two parts: training data (80%) and test data (20%). The different cited classification techniques are then used to build prediction models based on the training data.
3. Finally, the prediction is performed on the test data and evaluated based on different performance evaluation measures.
These steps are performed using appropriate modules and scripts available in the Python language, which are used to build the different considered classification techniques in order to test the stated hypotheses.
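The three steps can be sketched end to end. To keep the sketch dependency-free, a trivial one-threshold classifier stands in for the scikit-learn models actually used; only the 80/20 split and the evaluation loop mirror the procedure described above, and all data is synthetic.

```python
import random

random.seed(0)
# Hypothetical data set of 200 functions: (ISR value, fault-proneness label).
data = [(random.random(), random.randint(0, 1)) for _ in range(200)]

# Step 2: split the data into training (80%) and test (20%) parts.
random.shuffle(data)
cut = int(0.8 * len(data))
train, test = data[:cut], data[cut:]

# Stand-in "model": predict fault-prone (1) when ISR is below the training mean.
threshold = sum(isr for isr, _ in train) / len(train)
def predict(isr):
    return 1 if isr < threshold else 0

# Step 3: evaluate the prediction on the test data.
accuracy = sum(predict(isr) == label for isr, label in test) / len(test)
print(len(train), len(test), round(accuracy, 2))
```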
4.3 Results
In this section, we present the results of predicting faulty and non-faulty modules using the classification techniques, in order to answer the specified question: "Is there a significant correlation between the redundancy metrics and the fault-proneness attribute?". We then compare their performance based on different performance evaluation measures.
4.3.1 Common Performance Evaluation Measures
Various measures have been defined to evaluate the performance of classification techniques (Elish and Elish, 2008; Abaei and Selamat, 2014; Reddivari and Raman, 2019). A binary classifier uses the data instances in the test data to predict whether they are positive or negative. Four possible outcomes are then obtained: true positive (TP), false positive (FP), true negative (TN), and false negative (FN). These four outcomes are presented in a confusion matrix from which different measures are derived:
- **Precision**: indicates how many of the classes returned by a model are actually defect-prone. The best value of this measure is 1. A high precision value indicates fewer FP (correct elements incorrectly classified as defect-prone). This measure is defined by: Precision = TP / (TP + FP)
- **Recall**: indicates how many of the defect-prone classes are actually returned by a model. The best value of this measure is 1. A high recall value indicates a lower number of FN (defective classes not indicated by the model). It is defined by: Recall = TP / (TP + FN)
- **Accuracy**: indicates the rate of correct classification. It is the ratio of the number of correctly predicted modules to the total number of modules, defined by: Accuracy = (TP + TN) / (TP + TN + FP + FN)
- **Area under the curve (AUC)**: the area under the ROC curve, which plots the true positive rate (y-axis) against the false positive rate (x-axis).
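The first three measures can be checked directly against confusion-matrix counts. The counts below are illustrative only (a 40-instance test set, consistent with a 20% split of 200 functions):

```python
# Illustrative counts for the fault-prone class (label 1) on a 40-instance test set.
tp, fp, fn, tn = 9, 4, 8, 19

precision = tp / (tp + fp)                  # fewer FP -> higher precision
recall = tp / (tp + fn)                     # fewer FN -> higher recall
accuracy = (tp + tn) / (tp + tn + fp + fn)  # rate of correct classification

print(round(precision, 2), round(recall, 2), round(accuracy, 2))
```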
4.3.2 Results
The presented evaluation measures are used to evaluate the performance of the different used classification techniques. Results are illustrated in Tables 2 to 6.
Table 2: Results of DT prediction model.
(a) Performance measure
<table>
<thead>
<tr>
<th></th>
<th>Precision</th>
<th>Recall</th>
<th>F1-score</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.83</td>
<td>0.87</td>
<td>0.85</td>
</tr>
<tr>
<td>1</td>
<td>0.81</td>
<td>0.76</td>
<td>0.79</td>
</tr>
<tr>
<td>Accuracy</td>
<td>0.82</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
(b) Confusion matrix
Table 3: Results of LR prediction model.
(a) Performance measure
<table>
<thead>
<tr>
<th></th>
<th>Precision</th>
<th>Recall</th>
<th>F1-score</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.61</td>
<td>0.83</td>
<td>0.70</td>
</tr>
<tr>
<td>1</td>
<td>0.56</td>
<td>0.29</td>
<td>0.38</td>
</tr>
<tr>
<td>Accuracy</td>
<td>0.60</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
(b) Confusion matrix
Table 4: Results of NB prediction model.
(a) Performance measure
<table>
<thead>
<tr>
<th></th>
<th>Precision</th>
<th>Recall</th>
<th>F1-score</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.63</td>
<td>0.83</td>
<td>0.72</td>
</tr>
<tr>
<td>1</td>
<td>0.60</td>
<td>0.35</td>
<td>0.44</td>
</tr>
<tr>
<td>Accuracy</td>
<td>0.82</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
(b) Confusion matrix
Tables 2 to 6 illustrate the different evaluation measures obtained for the selected classification techniques.
Table 5: Results of SVM prediction model.
(a) Performance measure
<table>
<thead>
<tr>
<th></th>
<th>Precision</th>
<th>Recall</th>
<th>F1-score</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.70</td>
<td>0.83</td>
<td>0.76</td>
</tr>
<tr>
<td>1</td>
<td>0.69</td>
<td>0.53</td>
<td>0.60</td>
</tr>
<tr>
<td>Accuracy</td>
<td></td>
<td></td>
<td>0.70</td>
</tr>
</tbody>
</table>
(b) Confusion matrix
<table>
<thead>
<tr>
<th></th>
<th>Predicted 0</th>
<th>Predicted 1</th>
</tr>
</thead>
<tbody>
<tr>
<td>Actual 0</td>
<td>19</td>
<td>4</td>
</tr>
<tr>
<td>Actual 1</td>
<td>8</td>
<td>9</td>
</tr>
</tbody>
</table>
Table 6: Results of RF prediction model.
(a) Performance measure
<table>
<thead>
<tr>
<th></th>
<th>Precision</th>
<th>Recall</th>
<th>F1-score</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.80</td>
<td>0.87</td>
<td>0.83</td>
</tr>
<tr>
<td>1</td>
<td>0.80</td>
<td>0.71</td>
<td>0.77</td>
</tr>
<tr>
<td>Accuracy</td>
<td></td>
<td></td>
<td>0.80</td>
</tr>
</tbody>
</table>
(b) Confusion matrix
<table>
<thead>
<tr>
<th></th>
<th>Predicted 0</th>
<th>Predicted 1</th>
</tr>
</thead>
<tbody>
<tr>
<td>Actual 0</td>
<td>20</td>
<td>3</td>
</tr>
<tr>
<td>Actual 1</td>
<td>4</td>
<td>13</td>
</tr>
</tbody>
</table>
5 DISCUSSION, THREATS TO VALIDITY AND COMPARISON WITH RELATED WORKS
This section summarizes the results and presents the identified threats to validity. Also, a comparison with the related works is presented.
5.1 Overall Discussion of Results and Threats to Validity
We have experimented, using different popular classifiers, with the usefulness of the redundancy metrics as reliability indicators through the fault-proneness attribute. A set of 200 functions selected from the Apache Commons Math project is used. Considering accuracy as the evaluation parameter, results show that the fault-proneness attribute can be predicted using the redundancy metrics with a good accuracy rate of 0.82. This leads us to accept the stated H1 hypothesis, indicating that the redundancy metrics are useful indicators of the fault-proneness attribute, and to reject the null hypothesis of no relationship between these two variables. Therefore, these results can be used as first guidance to predict faulty modules based on the redundancy metrics.
We have obtained promising results, validating the ISR and NI redundancy metrics as significant reliability indicators for both the considered defect density and fault-proneness attributes. However, we have noted several threats to validity:
- First, the proposed redundancy metrics are semantic, as they depend on the program functionality; each program (function or class) has its state represented by the manipulated variables. Hence, each time the variables used in the program input state change, the output state changes, and the values of the redundancy metrics change too. Therefore, the computing process described in the previous work is not fully automated and is implemented separately for each program.
- Second, since larger training data sets and optimized model parameters improve prediction performance (Singh et al., 2018), our data set can be extended to enhance the performance of the proposed prediction model.
- Comparing the redundancy metrics with other existing metrics validated as fault-proneness indicators can strengthen their standing as significant quality indicators.
- Performing other experiments using the same data set and the same classification techniques, taking into account different metrics that capture internal attributes such as complexity and cohesion measured by the C&K metrics (Chidamber and Kemerer, 1994), and comparing the results with the entropy metrics.
5.2 Comparison between Our Proposed Approach and Related Works
In (Ayad et al., 2018), the authors proposed an empirical validation of the redundancy metrics using the survival rate of mutants as the quality attribute. We compare our work with theirs in Table 8.
<table>
<thead>
<tr>
<th>Criteria</th>
<th>(Ayad et al., 2018)</th>
<th>Our work</th>
</tr>
</thead>
<tbody>
<tr>
<td>Suite of metrics (Independent variables)</td>
<td>ISR, FSR, FR and NI</td>
<td>ISR, FSR, FR and NI</td>
</tr>
<tr>
<td>Quality attribute (Dependent variable)</td>
<td>Survival rate of mutants</td>
<td>Fault-proneness.</td>
</tr>
<tr>
<td>Data repository</td>
<td>Apache Commons Mathematics Library</td>
<td>Apache Commons Mathematics Library</td>
</tr>
<tr>
<td>Size of the used data set</td>
<td>- 19 functions</td>
<td>- 200 functions for fault- proneness attribute.</td>
</tr>
<tr>
<td>Quality attribute collection procedure</td>
<td>Fault injection procedure based on PiTest tool is used then PiTest reports are analyzed to obtain the values of the considered attribute.</td>
<td>Fault injection procedure based on PiTest tool is used then PiTest reports are analyzed to obtain the values of the considered attribute.</td>
</tr>
<tr>
<td>Statistical techniques</td>
<td>- Correlation analysis between the independent variables is not performed. - Linear multivariate regression technique is used.</td>
<td>- Correlation analysis between the independent variables is performed. - Different classification techniques are used.</td>
</tr>
<tr>
<td>Results</td>
<td>All redundancy metrics are identified as significant indicators of the survival rate of mutants.</td>
<td>Only ISR and NI are identified as significant indicators of defect density and fault-proneness attributes.</td>
</tr>
</tbody>
</table>
Only ISR, FR and NI are considered in our experimentation, because we identified a strong correlation between ISR and FSR, as shown in Figure 4, which led us to omit the FSR metric. Concerning the FR metric, we included it in our experimentation, but we observed that it has no effect on the results, contrary to ISR and NI.
As shown in Table 8, few works have been proposed to empirically validate the redundancy metrics as reliability predictors. The presented comparison shows that:
- The same validation approach was used. In both cases, the data set is first collected; then the data analysis, model building and performance evaluation steps are performed. In addition, the work described in (Ayad et al., 2018) is comparable to ours, as the same data repository is used to compute the metrics.
- The authors in (Ayad et al., 2018) showed that all of the redundancy metrics are significant predictors of the survival rate of mutants and of software reliability. However, in our validation work, only the ISR and NI metrics appeared to be adequate in predicting software reliability through the defect density and fault-proneness attributes. The lack of correlation tests between the independent variables in their study and the difference in the selected reliability quality attributes can explain these different results. On the other hand, the nature of the considered fault-proneness quality attribute as the dependent variable led us to use various classification techniques.
6 CONCLUSION AND PERSPECTIVES
Initial state redundancy, final state redundancy, non-injectivity, and functional redundancy metrics were proposed to assess code redundancy in order to monitor software reliability. However, all of these metrics were computed manually and presented theoretically. In this research, we aim at empirically validating these metrics as significant reliability indicators. We have used the fault-proneness attribute as a direct reflection of software reliability to reach our objective.
We have used an empirical database including a set of Java functions taken from the Commons Math Library, all related redundancy metrics’ values, and the fault-proneness attribute as a direct reliability indicator. Five classification techniques (LR, SVM, DT, RF, and NB) are then used to assess the relationship between these two variables. The obtained results can be used as first guidance to predict faulty modules based on the redundancy metrics. The primary contribution is to assess the capability of the redundancy metrics in predicting faulty modules.
As the initial state redundancy metric only measures the program redundancy in its initial and final states, without considering the redundancy of its internal states, we propose, in future work, to improve this metric by considering the internal states in order to reflect the overall program redundancy. In addition, replicated studies with large-sized software should be carried out so that generalized results can be obtained.
REFERENCES
Reductio ad Absurdum Argumentation in Normal Logic Programs
Luís Moniz Pereira and Alexandre Miguel Pinto
{lmp|amp}@di.fct.unl.pt
Centro de Inteligência Artificial (CENTRIA)
Universidade Nova de Lisboa
Quinta da Torre
2829-516 Caparica, Portugal
Abstract. This paper introduces a new method for defining the argumentative semantics of Normal Logic Programs. In doing so, our single and unified approach allows one to obtain the Stable Models [11] as a special case, or the more general Revision Complete Scenarios here defined.
Normal Logic Programs are approached as assumption-based argumentation systems. We generalize this setting by allowing both negative and positive assumptions. Negative assumptions are made maximal, consistent with existence of a semantics, and positive assumptions are adopted only insofar as they guarantee such existence. Our argumentation semantics thus extends the classical one of [7], and guarantees existence of semantics for any Normal Logic Program, whilst providing all the scenarios corresponding to Stable Models semantics.
Additionally, we provide equivalent and correct algorithms for incrementally computing our scenarios, with three variants. One starts by assuming all atoms as positive assumptions; another assumes them all negative; a third rests on a combination of the first two, and may start with any choice of assumptions. The latter may be employed to address the problem of finding those complete scenarios most compatible with an initial collection of complete scenarios. Consequently, argumentation can be put to collaborative use, not just an antagonistic one. Our results are achieved by generalizing the definitions of the classical approach, which allows only for negative hypotheses, and our definitions fall back on the classical ones when specialized to disallow positive hypotheses.
Finally, integrity constraints are introduced to prune undesired scenarios, whilst permitting these to be produced nevertheless.
Keywords: Argumentation, Reductio ad Absurdum, Logic Programs, Argument Revision
1 Introduction
After introducing in [15] and [14] the new Revised Stable Models semantics for Normal Logic Programs, further work using the Reductio ad Absurdum (RAA) principle has been developed, namely the Revised Well-Founded Semantics [16]. Considering an argument-based view of Logic Programs, we define a new semantics which inherits the RAA principle studied in [15, 14] and apply it to argumentation.
Logic Programs can be viewed as a collection of argumentative statements (rules) based on arguments (default negated literals) \[5, 2, 6, 17, 3, 9, 8, 7\]. In the quest for finding a Consistent and Complete argumentative scenario one can guess it and check its compliance with these properties; or, innovatively, start with an arbitrary scenario, calculate its consequences, and make revisions to the initial assumptions if necessary in order to achieve 2-valued Completeness and Consistency. This is the road we propose now, revision of assumptions justified by means of *Reductio ad Absurdum* reasoning.
This paper introduces a new method for defining the argumentative semantics of Normal Logic Programs. In doing so, our single and unified approach allows one to get the Stable Models [11] as a special case, or the more general Revision Complete Scenarios here defined.
Normal Logic Programs are approached as assumption-based argumentation systems. We generalize this setting by allowing both negative and positive assumptions. Negative assumptions are made maximal, consistent with existence of a semantics, and positive assumptions are adopted only insofar as they guarantee such existence. The justification of positive assumptions rests on the use of *reductio ad absurdum*, to the effect that replacing any one positive hypothesis (or assumption) by its negative counterpart, in a complete scenario, would result in its inconsistency. Hence, that complete 2-valued scenario must retain its positive assumptions. Our argumentation semantics thus extends the classical one of [7], and guarantees existence of semantics for any Normal Logic Program, whilst providing all the scenarios corresponding to Stable Models semantics.
Additionally, we provide equivalent and correct algorithms for incrementally computing our scenarios, with three variants. One starts by assuming all atoms as positive assumptions; another assumes them all negative; a third rests on a combination of the first two, and may start with any choice of assumptions. The latter may be employed to address the problem of finding those complete scenarios most compatible with an initial collection of complete scenarios. Consequently, argumentation can be put to collaborative use, not just an antagonistic one. Our results are achieved by generalizing the definitions of the classical approach, which allow only for negative hypotheses, and our definitions fall back on the classical ones when specialized to disallow positive hypotheses.
Finally, integrity constraints are introduced to prune undesired scenarios, whilst permitting these to be produced nevertheless.
In essence, our approach caters for the treatment of loops over an odd number of default negated literals, in that it assigns and justifies complete 2-valued models to any Normal Logic Program.
We start by presenting the general Motivation of this paper and, after introducing some needed Background Notation and Definitions, the more detailed Problem Description. We proceed by setting forth our proposal — the Revision Complete Scenarios—and show how it extends previous known results.
Before the Conclusions and Future Work, we show how our approach can enable Collaborative Argumentation, complementing the classical Competitive view of Argumentation.
1.1 Motivation
Ever since the beginning of Logic Programming the scientific community has sought to formally define, in several ways, the meaning, the semantics, of a Logic Program. Several semantics were defined, some 2-valued, some 3-valued, and even multi-valued semantics. The current standard 2-valued semantics for Normal Logic Programs — the Stable Models Semantics [11] — has been around for almost 20 years now, and it is generally accepted as the de facto standard 2-valued semantics for NLPs. This thoroughly studied semantics, however, lacks some important properties, among which the guarantee of Existence of a Model for every NLP.
In [14] we defined a 2-valued semantics — the Revised Stable Models — which extends the Stable Models Semantics, guarantees Existence of a Model for every Normal Logic Program, enjoys Relevancy (allowing for top-down query-driven proof-procedures to be built) and Cumulativity (allowing the programmer to take advantage of tabling techniques for speeding up computations).
Aiming to find a general perspective to seamlessly unify the Stable Models Semantics and the Revised Stable Models Semantics, we drew our attention to Argumentation as a means to achieve it. This is the main motivation of the work we present in this paper: by taking the Argumentation perspective we intend to show methods of identifying and finding a 2-valued complete Model for any NLP. The approach is unifying in the sense that it allows us to find the Stable Models and also some other Models needed to ensure the guarantee of Existence of a Model. In the process we extend the argumentation stance itself with the ability to incorporate positive hypotheses as needed.
Example 1. An invasion problem Some political leader thinks that “If Iran will have Weapons of Mass Destruction then we intend to invade Iran”, also “If we do not intend to invade then surely they will have Weapons of Mass Destruction”.
\[
\begin{align*}
\text{intend\_we\_to\_invade} & \leftarrow \text{iran\_will\_have\_WMD} \\
\text{iran\_will\_have\_WMD} & \leftarrow \text{not intend\_we\_to\_invade}
\end{align*}
\]
If we assume that “we do not intend to invade Iran” then, according to this program we will conclude that “Iran will have Weapons of Mass Destruction” and “we intend to invade Iran”. These conclusions, in particular “we intend to invade Iran”, contradict the initial hypothesis “we do not intend to invade Iran”. So, reasoning by \textit{Reductio ad Absurdum} in a 2-valued setting, we should “intend to invade Iran” in the first place.
This example gives a hint on how we resolve inconsistent scenarios in the rest of the paper.
Example 2. A vacation problem Another example puts together three friends that are discussing where they will spend their next joint vacations. John says “If I cannot go the mountains I’d rather go traveling”. Mary says “Well, I want to go to the beach, but if that’s not possible then I’d rather go to the mountains”. Finally, Michael says “I want to go traveling, and if that’s not possible then I want to go to the beach”.
We put together the three friends' statements formalized into a Normal Logic Program:
\[
\begin{align*}
\text{travel} & \leftarrow \text{not mountain} \\
\text{mountain} & \leftarrow \text{not beach} \\
\text{beach} & \leftarrow \text{not travel}
\end{align*}
\]
Now, because the three friends need to save money, they must minimize the number of places they will go on vacation. So they start by assuming they are going nowhere — the cheapest solution. That is, they assume \{not mountain, not beach, not travel\} as true. According to the program above, with these initial hypotheses the friends will conclude they will go traveling, to the beach and to the mountains; and this contradicts the initial hypotheses. They need to revise some of their initial assumptions. If they revise not mountain to mountain they will now conclude \{mountain, beach\} and if we put it together with the new set of hypotheses \{not beach, not travel, mountain\} we get the resulting set \{mountain, beach, not beach, not travel\}. We still have a contradiction on beach and not beach, which we can easily remove by transforming the hypotheses set into \{mountain, beach, not travel\}.
There are two more alternative solutions — \{beach, travel, not mountain\} and \{travel, mountain, not beach\} — which are symmetric to this one.
*Example 3.* A *time-out problem* John likes Mary a lot so he asked her out; he said “We could go to the movies”. Mary is more of a sports girl, so she replies “Either that, or we could go to the swimming pool”. “Now, that’s an interesting idea”, John thought. The problem is that John cannot swim because he hasn’t started learning to. He now thinks “Well, if I’m going to the swimming pool with Mary, and I haven’t learned how to swim, I might risk drowning! And if I’m risking drowning then I really should want to start learning to swim”.
Here is the Normal Logic Program corresponding to these sentences:
\[
\begin{align*}
\text{start\_learning\_to\_swim} & \leftarrow \text{risk\_drowning} \\
\text{risk\_drowning} & \leftarrow \text{go\_to\_pool, not start\_learning\_to\_swim} \\
\text{go\_to\_pool} & \leftarrow \text{not go\_to\_movies} \\
\text{go\_to\_movies} & \leftarrow \text{not go\_to\_pool}
\end{align*}
\]
If John is not willing to go to the swimming pool — assuming not go\_to\_pool — he just concludes go\_to\_movies and maybe he can convince Mary to join him.
On the other hand, if the possibility of having a nice swim with Mary is more tempting, John assumes he is not going to the movies not go\_to\_movies and therefore he concludes go\_to\_pool. In this case, since John does not know how to swim he could also assume not start\_learning\_to\_swim. But since John is going to the swimming pool, he concludes risk\_drowning. And because of risk\_drowning he also concludes start\_learning\_to\_swim. That is, he must give up the hypothesis of not start\_learning\_to\_swim in favor of start\_learning\_to\_swim because he wants to go to the swimming pool with Mary. As a nice side-effect he no longer risks drowning.
*Example 4.* *Middle Region Politics* In a Middle Region two factions are at odds. One believes that if terrorism does not stop then oppression will do it and hence become unnecessary.
\[ \text{oppression} \leftarrow \text{not end_of_terrorism} \quad \text{end_of_terrorism} \leftarrow \text{oppression} \]
The other faction believes that if oppression does not stop then terrorism will do it and hence become unnecessary.
\[ \text{terrorism} \leftarrow \text{not end_of_oppression} \quad \text{end_of_oppression} \leftarrow \text{terrorism} \]
According to these rules, if we assume not end_of_terrorism we conclude that there is oppression, which in turn will cause the end_of_terrorism. So, end_of_terrorism should be true in the first place, instead of not end_of_terrorism. The same happens with end_of_oppression. In spite of the peaceful resulting scenario we propose, \( \{\text{end_of_terrorism}, \text{end_of_oppression}\} \), there is no Stable Model for this program.
1.2 Background Notation and Definitions
**Definition 1. Logic Rule** A Logic Rule \( r \) has the general form
\[ L \leftarrow b_1, b_2, \ldots, b_n, \text{not } c_1, \text{not } c_2, \ldots, \text{not } c_m \]
where \( L \) is a literal, i.e., an atom \( h \) or its default negation not \( h \), and \( n, m \geq 0 \).
We call \( L \) the head of the rule — also denoted by head\( (r) \). And body\( (r) \) denotes the set \( \{b_1, b_2, \ldots, b_n, \text{not } c_1, \text{not } c_2, \ldots, \text{not } c_m\} \) of all the literals in the body of \( r \). Throughout this paper we will use ‘not’ to denote the default negation.
When the body of the rule is empty, we say the head of the rule is a fact and we write the rule as just \( h \) or not \( h \).
**Definition 2. Logic Program** A Logic Program (LP for short) \( P \) is a (possibly infinite) set of ground Logic Rules of the form presented in definition 1. If the heads of all the rules in \( P \) are positive literals, i.e., they are simple atoms and not default negated literals, we say we have a Normal Logic Program (NLP). If at least one head of a rule of \( P \) is a default negated literal, and there is no explicit negation in the program, we say we have a Generalized Logic Program (GLP). If there is explicit negation, besides default negation, in the program we say we have an Extended Logic Program (ELP).
**Definition 3. Atoms of a Logic Program** \( P \) — Atoms\( (P) \) Atoms\( (P) \) denotes the set of all atoms of \( P \). Formally,
\[
\text{Atoms}(P) = \{a : \exists r \in P \ (\text{head}(r) = a \lor a \in \text{body}(r) \lor \text{not } a \in \text{body}(r)) \}
\]
Throughout the rest of this paper we will focus solely on Normal Logic Programs hence, when we write just a Program or a Logic Program we mean a Normal Logic Program.
**Definition 4. Default negation of a set \( S \) of literals — not \( S \)** Throughout this paper we will sometimes use the notation not \( S \), where \( S \) is a set of literals, to denote the set resulting from default negating every literal of \( S \). Formally, not \( S = \{\text{not } a : a \in S\} \cup \{b : \text{not } b \in S\} \)
Definition 5. **Scenario** A scenario of a NLP $P$ is the Horn theory $P \cup H$, where $H = H^+ \cup H^-$, $H^+ \subseteq \text{Atoms}(P)$, $H^- \subseteq \text{not}\ \text{Atoms}(P)$, and $\text{not}\ H^+$ and $H^-$ are disjoint. $H$ is called a set of hypotheses, positive and negative.
Definition 6. **$\vdash$ operator** Let $P$ be a NLP and $H$ a set of hypotheses. $P'$ is the Horn theory obtained from $P$ by replacing every default literal of the form $\text{not}\ L$ in $P$ by the atom $not\_L$. $H'$ is likewise obtained from $H$ using the same replacement rule. By definition, $P' \cup H'$ is a Horn theory, and so it has a least model $M$. We define $\vdash$ in the following way, where $A$ is any atom of $P$:
$$P \cup H \vdash A \iff A \in M$$
$$P \cup H \vdash \text{not}\ A \iff not\_A \in M$$
Definition 7. **Consistent scenario** A scenario $P \cup H$ is consistent iff for all literals $L$, if $P \cup H \vdash L$ then $P \cup H \nvdash \text{not}\ L$, where $\text{not}\ \text{not}\ L \equiv L$.
Definition 8. **Consistent program** A Logic Program $P$ is consistent iff $P \cup \emptyset$ is a consistent scenario. NLPs are of course consistent.
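To make Definitions 6 and 7 concrete, here is a small executable sketch (our own encoding, not the paper's: rules are `(head, body)` pairs and each default literal not L is pre-renamed to the atom `not_L`, so the least model of the resulting Horn theory can be computed by naive forward chaining):

```python
def least_model(rules, hyps):
    """Least model of the Horn theory P' ∪ H' of Definition 6.
    Rules are (head, body) pairs over atoms; every default literal
    'not L' has been pre-renamed to the atom 'not_L'."""
    m = set(hyps)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in m and all(x in m for x in body):
                m.add(head)
                changed = True
    return m

def derives(rules, hyps, literal):
    """P ∪ H ⊢ literal (an atom A, or a default literal encoded as 'not_A')."""
    return literal in least_model(rules, hyps)

def consistent(rules, hyps, atoms):
    """Definition 7: no atom A such that both A and not_A are derived."""
    m = least_model(rules, hyps)
    return not any(a in m and "not_" + a in m for a in atoms)

# The program of Example 6 below:  k <- not t ;  t <- not t
P = [("k", ["not_t"]), ("t", ["not_t"])]
print(consistent(P, {"not_t"}, {"k", "t"}))  # False: assuming not t is inconsistent
print(consistent(P, {"not_k"}, {"k", "t"}))  # True
```

Assuming `not_t` derives both `t` and `not_t`, so that scenario is inconsistent, while assuming `not_k` derives nothing contradictory.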
2 Revision Complete Scenarios
In [4] the author proves that every Stable Model (SM) of a NLP is a 2-valued complete (total), consistent, admissible scenario. The author considers a scenario as a set of default negated literals — the hypotheses. However, not every NLP has a consistent, 2-valued complete scenario when one considers as hypotheses just default negated literals.
Also in [4], the author shows that preferred maximal (with maximum default negated literals) scenarios are always guaranteed to exist for NLPs. However, preferred maximal scenarios are, in general, 3-valued.
The problem we address now is to find a way to render 2-valued total a preferred maximal scenario. In this paper we take a step further from what was previously achieved in [4], extending its results. We allow a set of hypotheses to contain also positive literals, but only those absolutely necessary to guarantee Existence of a Model. These positive hypotheses are those which are justified true by a specific Reductio ad Absurdum reasoning we accept.
Before presenting the formal Definition of a Revision Complete Scenario we give a general intuitive idea to help the reader grasp the concept. For the formal definition of Revision Complete Scenario we will also need some preliminary auxiliary definitions.
2.1 Intuition
In [3] the authors prove that every SM of a NLP corresponds to a stable set of hypotheses which correspond in turn to a 2-valued complete, consistent, admissible scenario.
In order to guarantee the Existence of a 2-valued total Model for every NLP we allow positive hypotheses to be considered besides the usual negative hypotheses. Under this setting, the easiest way to solve the problem would be to accept every atom of a program as a positive hypothesis. However, we want our semantics to be as skeptical as possible while ensuring stratification compatibility among hypotheses.
To further keep the semantics skeptical we want to have the maximal possible negative hypotheses and the minimum non-redundant positive hypotheses. Intuitively, a positive hypothesis \( L \) is considered redundant if, by the rules of the program and the rest of the hypotheses, \( L \) is already determined true. The formal definition of this notion of non-redundancy of positive hypotheses is presented and explained below.
The formal notion of compatibility will also be depicted and explained below, but for now the intuitive idea is that one positive hypothesis \( L \) must not contradict other hypotheses.
### 2.2 Definition
**Definition 9. Evidence for a literal \( L \)** A set of negative hypotheses \( E \subseteq \text{not Atoms}(P) \) is evidence for a literal \( L \) in program \( P \) iff \( P \cup E \vdash L \). If \( P \) is understood we write \( E \leadsto L \). We also say \( E \) attacks not \( L \). Notice that we do not require an evidence to be consistent.
**Definition 10. Weakly Admissible set of hypotheses \( H^- \)** The notion of weakly admissible set presented here is in line with that of weak stability, first defined in [12].
Let \( P \) be a NLP, \( H^- \subseteq \text{not Atoms}(P) \) a set of negative hypotheses, not \( L \) a default negated literal in \( P \), and \( E \) an evidence for \( L \). We say \( H^- \) is weakly admissible iff
\[
\forall \text{not } L \in H^- \ \forall E \leadsto L \ \exists \text{not } A \in E : P \cup H^- \cup E \vdash A
\]
The classical notion of admissible set checks only if \( P \cup H^- \vdash A \). By doing this test with \( P \cup H^- \cup E \) we allow \( E \) to be inconsistent. It suffices to see that if \( P \cup H^- \nvdash A \) and \( P \cup H^- \cup E \vdash A \) it means that \( E \) is essential to derive \( A \) in the \( P \cup H^- \) context. Since we know not \( A \in E \) and \( P \cup H^- \cup E \vdash A \) we conclude that \( E \) is inconsistent.
There are some sets of hypotheses \( H^- \) which were not admissible according to the classical definition (with just \( P \cup H^- \)) and are weakly admissible — according to the definition using \( P \cup H^- \cup E \). These sets of hypotheses which are accepted as weakly admissible are just the ones where the adding of the evidence \( E \) was essential to derive \( A \), that is, where \( E \) is inconsistent.
Since the \( \vdash \) operator is monotonic, every admissible set of hypotheses according to the classical definition (using \( P \cup H^- \)) is also weakly admissible — according to the definition with \( P \cup H^- \cup E \).
**Example 5.** Weakly Admissible vs Non Weakly Admissible sets of negative hypotheses
Consider the following NLP:
\[
\begin{align*}
k & \leftarrow \text{not } t \\
t & \leftarrow a, b \\
a & \leftarrow \text{not } b \\
b & \leftarrow \text{not } a
\end{align*}
\]
In this program we can easily see that the bottom Even Loop Over Negation (ELON, for short) over \( a \) and \( b \) allows only one of them to be true — when we demand minimality of positive information. Under this setting we will never have \( t \) true for it needs both \( a \) and \( b \) to be true simultaneously to support its truthfulness. Therefore, \( k \) will always be true, since \( t \) is always false.
Let us analyze the different possible sets of hypotheses from an admissibility point of view. Consider the following two sets of negative hypotheses $H_1 = \{\text{not } b, \text{not } t\}$ and $H_2 = \{\text{not } b, \text{not } k\}$. The other two sets of negative hypotheses $H_3$ and $H_4$ are just symmetric to $H_1$ and $H_2$, respectively, on $\text{not } a$ and $\text{not } b$; therefore we are going to focus solely on $H_1$ and $H_2$.
$H_1$ is weakly admissible whereas $H_2$ is not. Let us see why. Analyzing $\text{not } b$ we verify that there is only one possible evidence $E = \{\text{not } a\}$ for $b$ and that $P \cup H_1 \cup E \vdash \text{a}$, i.e., $H_1 \cup E$ attacks (in the sense presented in definition 9) $\text{not } a$. In this particular case even just $H_1$ attacks $\text{not } a$.
Analyzing $\text{not } t$ we can see that there is only one evidence $E = \{\text{not } a, \text{not } b\}$ for $t$. $P \cup H_1 \cup E$ derives both $a$ and $b$, i.e., $P \cup H_1 \cup E \vdash \text{a}$ and $P \cup H_1 \cup E \vdash \text{b}$; hence $H_1$ is weakly admissible.
Let us see what happens with $H_2$. We have already seen $\text{not } b$, we just need to test $\text{not } k$. The only evidence for $k$ is $E = \{\text{not } t\}$. We can see however that $P \cup H_2 \cup E \not\vdash \text{t}$, which leads us to conclude that $H_2$ is not weakly admissible.
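Definition 10 can be checked mechanically on programs of this size. The sketch below (our own brute-force encoding; the helper names are illustrative) enumerates every candidate evidence $E$ and verifies the attack condition of the definition for the two hypothesis sets $H_1$ and $H_2$ of Example 5:

```python
from itertools import combinations

def least_model(rules, hyps):
    # Forward chaining on the Horn renaming of P (default literal not L -> atom not_L).
    m = set(hyps)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in m and all(x in m for x in body):
                m.add(head)
                changed = True
    return m

def weakly_admissible(rules, atoms, H_neg):
    """Definition 10, brute-forced: for every assumed-false L, every
    evidence E for L must contain some not_A with P ∪ H⁻ ∪ E ⊢ A."""
    neg = ["not_" + a for a in atoms]
    all_E = [set(E) for n in range(len(neg) + 1) for E in combinations(neg, n)]
    for notL in H_neg:
        L = notL[len("not_"):]
        for E in all_E:
            if L not in least_model(rules, E):
                continue  # E is not an evidence for L
            m = least_model(rules, H_neg | E)
            if not any(notA[len("not_"):] in m for notA in E):
                return False  # an unattacked evidence exists
    return True

# Example 5:  k <- not t ;  t <- a, b ;  a <- not b ;  b <- not a
P = [("k", ["not_t"]), ("t", ["a", "b"]), ("a", ["not_b"]), ("b", ["not_a"])]
atoms = {"k", "t", "a", "b"}
print(weakly_admissible(P, atoms, {"not_b", "not_t"}))  # H1: True
print(weakly_admissible(P, atoms, {"not_b", "not_k"}))  # H2: False
```

The run reproduces the analysis above: the only evidence for $k$ is $\{\text{not } t\}$, and $P \cup H_2 \cup \{\text{not } t\}$ does not derive $t$, so $H_2$ fails the test.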
**Example 6. Allowing Inconsistent Evidence** Consider the following NLP:
$$k \leftarrow \text{not } t \quad t \leftarrow \text{not } t$$
The set of hypotheses $H_1 = \{\text{not } t\}$ is admissible and weakly admissible. However, since $P \cup H_1$ is not a consistent scenario, no model exists with $\text{not } t$.
The only possible hypotheses left are the empty set and $H_2 = \{\text{not } k\}$. Considering the classical notion of admissible set (with $P \cup H$) $H_2$ is non-admissible; however, $H_2$ is weakly admissible. Notice that the evidence for $k$ is $E = \{\text{not } t\}$ and that $P \cup H_2 \cup E \vdash t$. $P \cup H_2$ is a consistent scenario, but it is not complete. Since we already know that $\text{not } t$ cannot be in any consistent model, in a 2-valued setting we would like to "complete" the scenario $P \cup H_2$ with $t$ in order to obtain a 2-valued complete and consistent model. In such case we say $\{t\}$ is our set of positive hypotheses.
**Definition 11. Non-redundant set $H^+$ of positive hypotheses** Let $P$ be a NLP, and $H = H^+ \cup H^-$ a set of positive and negative hypotheses, i.e., $(H^+ \subseteq \text{Atoms}(P))$ and $(H^- \subseteq \text{not } \text{Atoms}(P))$. We say $H^+$ is non-redundant iff $\forall L \in H^+, P \cup H \setminus \{L\} \not\vdash L$.
As just explained, we wish to allow some positive hypotheses when they are absolutely needed in order to obtain 2-valued complete and consistent scenarios. However, we require the set of positive hypotheses to be non-redundant, that is, no positive hypothesis may already be derivable from the other hypotheses. This is the purpose of definition 11 above.
**Example 7. Redundant positive hypotheses** Consider the following program $P$:
$$b \leftarrow a \quad a \leftarrow \text{not } a$$
In the previous example 6 we saw how a rule like $t \leftarrow \text{not } t$ forbids the negative hypothesis $\text{not } t$. By the same token, in this example’s program, the hypothesis $\text{not } a$ is also forbidden. Also $\{\text{not } b\}$ is not a weakly admissible set of negative hypotheses.
Since we are looking for 2-valued complete (total) and consistent scenarios, we would like one including both $a$ and $b$.
The question now is: should both $a$ and $b$ be considered positive hypotheses? Since we are looking for the minimum possible set of positive hypotheses (compatible with the negative ones), we answer no in this case, because assuming the positive hypothesis $a$ is enough to automatically determine the truth of $b$. That is why we say the set $\{a, b\}$ of positive hypotheses is redundant, whereas $\{a\}$ is not.
**Definition 12. Unavoidable set $H^+$ of positive hypotheses** Let $P$ be a NLP, and $H = H^+ \cup H^-$ a set of positive and negative hypotheses. We say $H^+$ is unavoidable iff $\forall L \in H^+,\ P \cup (H \setminus \{L\}) \cup \{\text{not } L\}$ is an inconsistent scenario.
In a nutshell, this definition imposes that every positive hypothesis must be accepted as true for the sake of consistency and completeness in the context of all the other hypotheses. We ensure this by demanding that if any positive hypothesis $L$ were to be considered false — i.e., $\text{not } L$ considered true — the whole scenario of $P$ with all the hypotheses except $L$, and including $\text{not } L$ instead (for the sake of 2-valued completeness), would be inconsistent. So, there is no consistent 2-valued way to avoid having $L$ true in the context of the remaining hypotheses. Conversely, one may read the condition as stating that, if the scenario with $\text{not } L$ is consistent, then $L$ is avoidable.
**Example 8. Unavoidable vs Avoidable sets of positive hypotheses** Let $P$ be the following NLP:
$$d \leftarrow \text{not } c \quad c \leftarrow \text{not } b \quad b \leftarrow \text{not } a \quad a \leftarrow \text{not } a$$
In this example we consider $H_1 = H_1^+ \cup H_1^-$, where $H_1^+ = \{a\}$ and $H_1^- = \{\text{not } b, \text{not } d\}$; and $H_2 = H_2^+ \cup H_2^-$, where $H_2^+ = \{a, b\}$ and $H_2^- = \{\text{not } c\}$.
For the same reason as in example 7, $\text{not } a$ cannot be in any $H^-$ and, in order to obtain a 2-valued total model with an $H$, $a$ must be accepted as true — in that sense we say $a$ is unavoidable.
**Definition 13. Revision Complete Scenarios** Let $P$ be a NLP and $H = H^+ \cup H^-$ a set of positive ($H^+$) and negative ($H^-$) hypotheses. We say $H$ is a Revision Complete Scenario iff
1. $P \cup H$ is a consistent scenario and least($P \cup H$) is a 2-valued complete model of $P$
2. $H^-$ is weakly admissible
3. $H^+$ is non-redundant
4. $H^+$ is unavoidable
□
2.3 The Exhaustive Model Generation Algorithm
Another method for finding the Revision Complete Scenarios proceeds in an iterative and incremental way.
Definition 14. **Inconsistency avoidance algorithm for generating the Revision Complete Scenarios (RCSs)**
1. Start with $i = 0$, $H_i^+ = \text{Atoms}(P)$ and $H_i^- = \emptyset$.
2. If $H_i^-$ is not weakly admissible then $H_i^+ \cup H_i^-$ is not a Revision Complete Scenario and the algorithm terminates unsuccessfully.
3. If $H_i^-$ is weakly admissible then:
4. If $H_i^+ = \emptyset$ then $H_i^+ \cup H_i^-$ is a RCS and the algorithm terminates successfully in this case.
5. If $H_i^+ \neq \emptyset$ then non-deterministically take one arbitrary $L \in H_i^+$ and check if $H_i^+$ is redundant on $L$. If it is then:
6. $H_{i+1}^+ = H_i^+ \setminus \{L\}$, $H_{i+1}^- = H_i^-$, and go back to step 4.
7. If $H_i^+$ is non-redundant then:
8. Check if $H_i^+$ is unavoidable and, if so, then $H_i^+ \cup H_i^-$ is a RCS and the algorithm terminates successfully.
9. If $H_i^+$ is not unavoidable and $L \in H_i^+$ is one of the positive hypotheses rendering $H_i^+$ non-unavoidable then $H_{i+1}^+ = H_i^+ \setminus \{L\}$ and $H_{i+1}^- = H_i^- \cup \{\text{not } L\}$ and go on to step 2 again.
This algorithm starts with all the possible positive hypotheses (all the atoms of the program) and no negative hypotheses. By construction, a scenario with such $H^+$ and $H^-$ is necessarily consistent and 2-valued complete. Along the execution of the algorithm, at each step, we either just remove one positive hypothesis because it is redundant, or non-deterministically remove one positive hypothesis and add its corresponding default negation to the set of negative hypotheses. By construction, the algorithm guarantees that $H = H^+ \cup H^-$ is consistent. When we just remove one positive hypothesis $L \in H^+$ the 2-valued completeness of the resulting scenario is guaranteed because $L$ was removed from $H^+$ only because $L$ was rendering $H^+$ redundant. When we remove $L$ from $H^+$ and add $\text{not } L$ to $H^-$, 2-valued completeness is naturally assured.
The requirement for weak admissibility of $H^-$ in step 3 ensures the resulting $H = H^+ \cup H^-$ corresponds to a consistent scenario. The different non-deterministic choices engender all the RCSs.
Example 9. **Generating RCSs by Inconsistency avoidance**
$a \leftarrow \text{not } a, \text{not } b$
$b \leftarrow \text{not } a, \text{not } b$
We start the algorithm with all the possible positive hypotheses and no negative ones:
- $H_0^+ = \{a, b\}$, $H_0^- = \emptyset$.
- $H_0^-$ is weakly admissible.
- $H_0^+ \neq \emptyset$ so we check if it is redundant. It is not, so we check if $H_0^+$ is unavoidable.
- $H_0^+$ is not unavoidable. We non-deterministically choose one atom from $H_0^+ = \{a, b\}$ which makes it non-unavoidable (in this case, both $a$ and $b$ are rendering $H_0^+$ non-unavoidable, so we can choose any one). Let us say we choose $b$. Then $H_1^+ = H_0^+ \setminus \{b\}$ and $H_1^- = H_0^- \cup \{\text{not } b\}$. And we go on to step 2 again.
– $H_1^-$ is weakly admissible.
– $H_1^+ \neq \emptyset$.
– $H_1^+$ is not redundant on any $L \in H_1^+$.
– $H_1^+$ is unavoidable and so $H_1 = H_1^+ \cup H_1^- = \{a, \text{not } b\}$ is a Revision Complete Scenario and the algorithm terminates successfully.
If we were to choose $a$ instead of $b$ in step 9, the resulting Revision Complete Scenario would be $\{\text{not } a, b\}$. There are no other Revision Complete Scenarios for this program besides these two.
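For a program this small, the Revision Complete Scenarios can also be found by brute force directly from Definition 13. The sketch below is our own encoding (condition 2, weak admissibility, is omitted: it holds for every candidate that survives the other tests on this program). It checks consistency, 2-valued completeness, non-redundancy and unavoidability over all four 2-valued hypothesis sets:

```python
from itertools import product

# Example 9:  a <- not a, not b ;  b <- not a, not b
RULES = [("a", ["not_a", "not_b"]), ("b", ["not_a", "not_b"])]
ATOMS = ["a", "b"]

def least_model(hyps):
    m = set(hyps)
    changed = True
    while changed:
        changed = False
        for head, body in RULES:
            if head not in m and all(x in m for x in body):
                m.add(head)
                changed = True
    return m

def consistent(hyps):
    m = least_model(hyps)
    return not any(a in m and "not_" + a in m for a in ATOMS)

def complete(hyps):
    m = least_model(hyps)
    return all(a in m or "not_" + a in m for a in ATOMS)

def non_redundant(H):
    # Definition 11: no positive hypothesis already follows from the rest.
    pos = [h for h in H if not h.startswith("not_")]
    return all(h not in least_model(H - {h}) for h in pos)

def unavoidable(H):
    # Definition 12: flipping any positive hypothesis yields inconsistency.
    pos = [h for h in H if not h.startswith("not_")]
    return all(not consistent((H - {h}) | {"not_" + h}) for h in pos)

# Enumerate every 2-valued hypothesis set over the atoms (tiny program).
rcss = []
for choice in product(*[(a, "not_" + a) for a in ATOMS]):
    H = set(choice)
    if consistent(H) and complete(H) and non_redundant(H) and unavoidable(H):
        rcss.append(sorted(H))
print(rcss)  # -> [['a', 'not_b'], ['b', 'not_a']]
```

Note how $\{a, b\}$ is rejected by the unavoidability test: flipping either atom to its negation still leaves a consistent scenario, so neither positive hypothesis is forced.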
**Theorem 1.** The sets $H = H^+ \cup H^-$ resulting from the execution of algorithm of definition 14 are the Revision Complete Scenarios.
**Proof.** Trivial, by construction of the algorithm. \qed
**Theorem 2.** Existence of Model. For any given NLP $P$ there is always at least one Revision Complete Scenario.
**Proof.** In the algorithm described above, when we need to non-deterministically choose one atom $L$ to remove from $H_i^+$ (and eventually add not $L$ to $H_i^-$), if no choice is ever repeated then the algorithm is necessarily guaranteed to terminate.
Moreover, if the first positive hypotheses to remove correspond to atoms upon which no other atoms depend, then removing those positive hypotheses causes no inconsistency, nor does it compromise 2-valued completeness. If each next positive hypothesis in the sequence to be removed guarantees that the consequences of its removal (and the eventual adding of its default negated counterpart to the set of negative hypotheses) do not change the truth value of positive hypotheses already removed, then the algorithm is necessarily guaranteed to find a Revision Complete Scenario.
Finally, it is always possible to find such a sequence of positive hypotheses to remove: the sequence just needs to be in reverse order of the stratification of the program. I.e., the first positive hypotheses in the sequence must be from the top stratum of the program, the next from the stratum directly below, and so on. The notion of stratification we are using here can be intuitively explained as: (1) atoms in a loop are all in the same stratum; (2) atoms which are not in a loop and are in the head of a rule are in a stratum directly above the atoms in the body of the rule. \qed
**Theorem 3.** $M$ is a Stable Model of a NLP $P$ iff there is some Revision Complete Scenario $H$ such that $M = \text{least}(P \cup H)$ with $H^+ = \emptyset$.
**Proof.** Let $H = H^+ \cup H^-$ be a set of positive and negative hypotheses. Let us consider the particular case where $H^+ = \emptyset$, therefore $H = H^-$. In [4], the author already proved that when $H = H^-$, $P \cup H$ is a consistent scenario and $M = \text{least}(P \cup H)$ is a 2-valued complete scenario iff $M$ is a Stable Model of $P$.
Stable Models are just a particular case of Revision Complete Scenarios. \qed
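Theorem 3 can be checked mechanically on a small program. Below is a sketch in our own (assumed) propositional encoding, not the paper's notation: it enumerates the sets $H = H^-$ for the even-loop program $a \leftarrow \text{not } b$, $b \leftarrow \text{not } a$, keeps those for which $P \cup H$ is a consistent, 2-valued complete scenario, and the surviving least models are exactly the two Stable Models.

```python
# Sketch: scenarios with H+ = {} whose least model is consistent and
# 2-valued complete coincide with the Stable Models (Theorem 3).
from itertools import combinations

P = [("a", [("b", False)]),   # a <- not b   (body literal (x, False) = "not x")
     ("b", [("a", False)])]   # b <- not a
atoms = {"a", "b"}

def least(rules, Hpos, Hneg):
    """Least model of P united with the hypotheses, taken as facts."""
    derived = set(Hpos)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(
                    (a in derived) if pos else (a in Hneg) for a, pos in body):
                derived.add(head)
                changed = True
    return derived

def stable_models():
    out = []
    for r in range(len(atoms) + 1):
        for neg in combinations(sorted(atoms), r):
            Hneg = set(neg)
            m = least(P, set(), Hneg)
            consistent = not (m & Hneg)                     # never A and not A
            complete = all(a in m or a in Hneg for a in atoms)
            if consistent and complete:
                out.append(m)
    return out

print(stable_models())   # [{'b'}, {'a'}]: the two stable models
```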
A variation of this algorithm, reversing the direction of the changes in $H^+$ and $H^-$, can also be devised. In such an algorithm we start with $H^- = \text{not Atoms}(P)$ and $H^+ = \emptyset$. 2-valued completeness is also assured at the starting point, although consistency of $P \cup H$ is not. The algorithm is:
**Definition 15. Inconsistency removal algorithm** for generating the Revision Complete Scenarios (RCSs)
1. Start with $i = 0$, $H_i^- = \text{not Atoms}(P)$ and $H_i^+ = \emptyset$.
2. If $P \cup H_i$ is a consistent scenario then $H_i$ is a RCS and the algorithm terminates successfully.
3. Check if $H_i^+$ is redundant:
4. If it is redundant then non-deterministically take one atom $L \in H_i^+$ such that $P \cup (H_i \setminus \{L\}) \vdash L$ and construct $H_{i+1}^+ = H_i^+ \setminus \{L\}$.
5. If $H_i^+$ is non-redundant construct $H_{i+1}^+ = H_i^+$.
6. Check if $H_{i+1}^+$ is unavoidable:
7. If $H_{i+1}^+$ is non-unavoidable then $H_{i+1}^+ \cup H_{i+1}^-$ is not a RCS and the algorithm terminates unsuccessfully.
8. If $H_{i+1}^+$ is unavoidable then check if $P \cup H_{i+1}$ is a consistent scenario:
9. If $P \cup H_{i+1}$ is a consistent scenario then:
10. Check if $P \cup H_{i+1}$ is also a 2-valued complete scenario, and if it is then $H_{i+1}$ is a RCS and the algorithm terminates successfully.
12. If $P \cup H_{i+1}$ is not a consistent scenario, take one not $L \in H_{i+1}^-$ such that $P \cup H_{i+1} \vdash L$ and $P \cup H_{i+1} \vdash \text{not } L$ (i.e., there is a contradiction in $L$ with $P \cup H_{i+1}$) and construct $H_{i+2}^- = H_{i+1}^- \setminus \{\text{not } L\}$ and $H_{i+2}^+ = H_{i+1}^+ \cup \{L\}$, i.e., we revise the assumption not $L$ to $L$, making it a positive hypothesis. Go on to step 3 again.
**Example 10. Generating RCSs by Inconsistency removal.** Let us revisit example 9 and see the Inconsistency removal version of it.
\[
\begin{align*}
a & \leftarrow \text{not } a, \text{not } b \\
b & \leftarrow \text{not } a, \text{not } b
\end{align*}
\]
We start the algorithm with all the possible negative hypotheses and no positive ones:
– $H_0^- = \{ \text{not } a, \text{not } b \}$, $H_0^+ = \emptyset$.
– $P \cup H_0$ is not a consistent scenario.
– $H_0^+ = \emptyset$ is non-redundant.
– $H_1^+ = H_0^+ = \emptyset$ is unavoidable.
– $P \cup H_1$ is not a consistent scenario.
– We non-deterministically choose one negative hypothesis not $L$ from $H_1^- = \{ \text{not } a, \text{not } b \}$ such that $P \cup H_1 \vdash L$ and $P \cup H_1 \vdash \text{not } L$. In this case both not $a$ and not $b$ qualify, so we can choose either. Let us say we choose not $a$. Then $H_2^+ = H_1^+ \cup \{a\}$ and $H_2^- = H_1^- \setminus \{\text{not } a\}$. And we go on to step 3 again.
– $H_2^+ = \{a\}$ is non-redundant.
– $H_3^+ = H_2^+ = \{a\}$ is unavoidable.
– $P \cup H_3$ is a 2-valued complete scenario, so $H_3 = H_3^+ \cup H_3^- = \{a\} \cup \{\text{not } b\} = \{a, \text{not } b\}$ is a Revision Complete Scenario and the algorithm terminates successfully.
If we were to choose not $b$ instead of not $a$ in step 12, the resulting Revision Complete Scenario would be $\{b, \text{not } a\}$. These Revision Complete Scenarios coincide with those produced by the algorithm in definition 14.
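The walkthrough above can be replayed programmatically. The sketch below uses our own (assumed) propositional encoding of the two rules; it checks that the initial all-negative hypotheses set is inconsistent, and that the revised set $\{a, \text{not } b\}$ is both consistent and 2-valued complete.

```python
# Sketch: replaying the key checks of Example 10 on the odd-loop program.
P = [("a", [("a", False), ("b", False)]),   # a <- not a, not b
     ("b", [("a", False), ("b", False)])]   # b <- not a, not b
atoms = {"a", "b"}

def least(rules, Hpos, Hneg):
    """Least model of P united with the hypotheses, taken as facts."""
    derived = set(Hpos)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(
                    (a in derived) if pos else (a in Hneg) for a, pos in body):
                derived.add(head)
                changed = True
    return derived

def consistent(Hpos, Hneg):
    return not (least(P, Hpos, Hneg) & Hneg)   # never both A and not A

def complete(Hpos, Hneg):
    m = least(P, Hpos, Hneg)
    return all(a in m or a in Hneg for a in atoms)

# H0 = ({}, {not a, not b}) is 2-valued complete but inconsistent.
assert complete(set(), {"a", "b"}) and not consistent(set(), {"a", "b"})
# Revising not a into the positive hypothesis a yields H3 = {a, not b},
# which is consistent and 2-valued complete: a Revision Complete Scenario.
assert consistent({"a"}, {"b"}) and complete({"a"}, {"b"})
print(sorted(least(P, {"a"}, {"b"})))   # -> ['a']
```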
**Theorem 4.** The sets $H = H^+ \cup H^-$ resulting from the execution of the algorithm of definition 15 are the Revision Complete Scenarios.
**Proof.** Trivial, by construction of the algorithm. $\square$
### 2.4 The Name of the Game
Why the name “Revision” Complete Scenarios? The “Revision” part of the name comes from the assumption revision we do when an assumption not A $\in H^-$ leads to a contradiction in P, i.e., $(P \cup H^- \vdash \{A, \text{not } A\}) \land (P \cup (H^- \setminus \{\text{not } A\}) \nvdash \{A, \text{not } A\})$.
In such a case we accept to revise not $A$ to its positive counterpart $A$. This is the specific form of reasoning by *Reductio ad Absurdum* we adopt here: if adding not $A$ to $P$ in the context of $H$ leads to self-inconsistency then, by absurdity, we should assume $A$ instead of not $A$. $A$ thus becomes one of the positive hypotheses.
### 3 Syntactic Perspective of Revision Complete Scenarios over Normal Logic Programs
In [3] the authors proved that every Stable Model of a NLP corresponds to a 2-valued complete, consistent and admissible scenario. In [10] the author shows that when a NLP has no SMs it is because the Normal Logic Program has Odd Loops Over Negation (OLONs) and/or Infinite Chains Over Negation (ICONs), although the author does not employ these designations. These designations are taken from [14].
For the sake of readability and self-containment we briefly present some examples of OLONs and ICONs. Intuitively, an OLON is a set of rules of a NLP which induce a cycle over some literals in the dependency graph. The cycle of an OLON has the characteristic of having an Odd number of default Negated arcs around it.
An example of an OLON is given in example 1. There we can see that the atom *we intend to invade* is in a cycle in the dependency graph, and that along that cycle there is only 1 (an Odd number) default negation.
Another example of an OLON is present in example 2. There the atom mountain is in a cycle with 3 default negations along the circular dependency graph. The same is true for travel and beach.
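The odd-negation-count criterion is easy to operationalise. The following sketch (our own encoding, not from the paper) walks the atom dependency graph and reports the simple cycles traversing an odd number of default-negation arcs:

```python
def olon_cycles(deps):
    """Return the simple dependency cycles traversing an odd number of
    default-negation arcs (OLONs).  deps maps each atom to a list of
    (atom it depends on, 1 if through "not", else 0) pairs."""
    found = []
    def dfs(start, node, path, negs):
        for nxt, neg in deps.get(node, []):
            if nxt == start:
                if (negs + neg) % 2 == 1:          # odd count: an OLON
                    found.append(path)
            elif nxt not in path and nxt > start:  # canonical start atom
                dfs(start, nxt, path + [nxt], negs + neg)
    for a in sorted(deps):
        dfs(a, a, [a], 0)
    return found

# Example 2's program: a 3-negation loop over travel/mountain/beach.
vacation = {"travel":   [("mountain", 1)],
            "mountain": [("beach", 1)],
            "beach":    [("travel", 1)]}
# An even loop (2 negations), which has Stable Models: not an OLON.
even = {"a": [("b", 1)], "b": [("a", 1)]}
print(olon_cycles(vacation))   # [['beach', 'travel', 'mountain']]
print(olon_cycles(even))       # []
```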
The classical example of an ICON was first presented in [10]. It goes as follows:
\[
p(X) \leftarrow p(s(X)) \quad p(X) \leftarrow \text{not } p(s(X))
\]
where \(X\) is a variable. The ground version of this program when there is only one constant 0 is the infinite program
\[
\begin{align*}
p(0) & \leftarrow p(s(0)) & p(0) & \leftarrow \text{not } p(s(0)) \\
p(s(0)) & \leftarrow p(s(s(0))) & p(s(0)) & \leftarrow \text{not } p(s(s(0))) \\
p(s(s(0))) & \leftarrow p(s(s(s(0)))) & p(s(s(0))) & \leftarrow \text{not } p(s(s(s(0)))) \\
& \quad \vdots & & \quad \vdots
\end{align*}
\]
This example in particular is the one to which every other possible variation of an ICON reduces (proven in [10]). As can easily be seen, there is an infinitely long chain of support for any \(p(X)\), with an infinite number of default negations along it.
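The shape of this ground chain can be generated mechanically; the short sketch below (a hypothetical helper of ours, textual output only) materialises its first levels:

```python
def ground_chain(k):
    """First k levels of the infinite grounding over the constant 0."""
    rules, term = [], "0"
    for _ in range(k):
        nxt = f"s({term})"
        rules.append(f"p({term}) <- p({nxt})")
        rules.append(f"p({term}) <- not p({nxt})")
        term = nxt
    return rules

for rule in ground_chain(3):
    print(rule)
```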
As we just said, in [10] the author proves that only OLONs and/or ICONs can prevent the existence of SMs in a NLP. Therefore, since our Revision Complete Scenarios guarantee the existence of a model for any given NLP, it follows that Revision Complete Scenarios deal with OLONs and ICONs in a way that the Stable Models semantics did not. This is achieved by means of the reasoning by Reductio ad Absurdum explained in subsection 2.4.
### 4 Collaborative Argumentation
The classical perspective on Argumentation is typically of a competitive nature: there are arguments and counter-arguments, all attacking each other and struggling for admissibility. Those which counter-attack all their attackers are admissible.
Typically, one takes one argument — a set of hypotheses \(H\) — and checks if it is admissible, and if \(P \cup H\) is a consistent scenario. If 2-valuedness is a requisite, then an extra test for 2-valued completeness is required.
We now generalize this approach in a constructive way, by building up a compromise Revision Complete Scenario starting from several conflicting 2-valued complete and consistent Models of \(P\) — each corresponding to an argument. This is what the algorithm below does.
First, we take all the conflicting models \(N_1, N_2, \ldots, N_n\) and calculate the set of all the possible positive hypotheses \(M^+ = \bigcup_{i=1}^{n} N_i^+\); and the set of all the possible negative hypotheses \(M^- = \bigcup_{i=1}^{n} N_i^-\). \(M^+\) and \(M^-\) will now be used to guide the algorithm below in order to ensure consensus, i.e., the resulting Revision Complete
Scenario \( H \) will have no positive hypotheses outside \( M^+ \), nor will it have negative hypotheses outside \( M^- \). The algorithm goes as follows:
**Definition 16. Revision Complete Scenario** \( H \) construction from conflicting models \( N_1, N_2, \ldots , N_n \)
1. Start with \( M = M^+ \cup M^- \). \( M_0 = M \) is inconsistent.
2. \( M_1^+ = M_0^+ \setminus \{ L \in M_0^+ : \text{not } L \in M_0^- \} \), and \( M_1^- = M_0^- \). \( M_1 \) is now consistent.
3. If \( M_i^- \) is not weakly admissible then non-deterministically select one \( L \) such that not \( L \in M_i^- \), there is an \( E \) such that \( E \sim L \), and there is some not \( a \in E \) such that \( P \cup M_i^- \cup E \nvdash a \). Construct \( M_{i+1}^- = M_i^- \setminus \{ \text{not } L \} \). Repeat this step.
4. If \( M_{i+1}^+ \) is avoidable then \( M_{i+2}^+ = M_{i+1}^+ \setminus \{ L \} \), where \( P \cup (M_{i+1} \setminus \{ L \}) \cup \{ \text{not } L \} \) is a consistent scenario. \( M_{i+2}^- = M_{i+1}^- \cup \{ \text{not } L \} \) only if not \( L \in M^- \), otherwise \( M_{i+2}^- = M_{i+1}^- \). Go on to step 3 again.
5. If \( P \cup M_{i+2} \) is not a consistent scenario then non-deterministically select one \( L \) such that \( P \cup M_{i+2} \vdash \{ L, \text{not } L \} \), and construct \( M_{i+3}^- = M_{i+2}^- \setminus \{ \text{not } L \} \). Go on to step 3 again.
6. If \( P \cup M_{i+2} \) is not a 2-valued complete scenario then \( M_{i+3}^+ = M_{i+2}^+ \cup \{ L \} \), where \( P \cup M_{i+2} \nvdash L \) and \( P \cup M_{i+2} \nvdash \text{not } L \) and \( L \in M^+ \), and go on to step 4 again.
7. \( P \cup M_{i+2} \) is a 2-valued complete and consistent scenario, where \( M_{i+2}^+ \) is non-redundant and unavoidable, and \( M_{i+2}^- \) is weakly admissible. By definition, \( M_{i+2} \) is a Revision Complete Scenario, therefore \( H = M_{i+2} \) and the algorithm terminates successfully.
In essence, this algorithm is a mixture of the Inconsistency Avoidance and Inconsistency Removal algorithms presented in subsection 2.3. We start with two sets \( M^+ \) and \( M^- \) containing, respectively, all the positive hypotheses and all the negative hypotheses that can be adopted in the final Revision Complete Scenario \( H \). Next, we remove from the set of positive hypotheses all those conflicting with the negative ones, in order to ensure consistency. Now we need to ensure weak admissibility of the current negative hypotheses \( M_i^- \). For that we check if \( M_i^- \) is weakly admissible and, if it is not, we non-deterministically select and remove from \( M_i^- \) one of the negative hypotheses causing \( M_i^- \) to fail this requirement. This step is repeated until \( M_i^- \) verifies weak admissibility. We then turn to the set of positive hypotheses \( M_i^+ \). If it is avoidable, we non-deterministically select and remove from \( M_i^+ \) one positive hypothesis \( L \) which contributes to \( M_i^+ \)'s avoidability. We also add the corresponding default negation not \( L \) to \( M_i^- \), but only if not \( L \) was already in \( M^- \), the initial set of all the adoptable negative hypotheses. This extra requirement ensures that the final compromise Revision Complete Scenario \( H \) is maximally compatible with all the initial models \( N_1, N_2, \ldots , N_n \). When we add not \( L \) to \( M_i^- \) we need to recheck its weak admissibility, so we go on to that step again. If \( M_i^+ \) was unavoidable, we need to check if the whole \( P \cup M_i \) is consistent. If this scenario fails consistency, we remove from \( M_i^- \) one of the negative hypotheses whose positive counterpart was also being produced by \( P \cup M_i \). Notice that when the resulting scenario is not consistent we resolve the inconsistency in favour of the positive hypothesis, since it was the presence of the corresponding negative one that produced the inconsistency. This is basically the mechanism of reasoning by Reductio ad Absurdum we use. Again we need to recheck weak admissibility, so we go on to that step again. If the scenario \( P \cup M_i \) was consistent, we need to check if it is 2-valued complete. If it is not, we non-deterministically select one adoptable positive hypothesis and add it to \( M_i^+ \). Now we need to recheck \( M_i^+ \)'s unavoidability, so we go on to that step again. Finally, if \( P \cup M_i \) was 2-valued complete, then \( H = M_i \) is a Revision Complete Scenario and the algorithm terminates successfully.
**Example 11. Example 2 revisited — A vacation problem.** Recall the example 2 presented earlier. The program is:
\[
\text{travel} \leftarrow \text{not mountain} \quad \text{mountain} \leftarrow \text{not beach} \quad \text{beach} \leftarrow \text{not travel}
\]
Now assume that one of the three friends going on vacation could not be present when they got together to decide their vacation destination. So only John (the one who prefers going to the mountains; otherwise traveling it is) and Mary (she prefers going to the beach; otherwise going to the mountains is ok) are deciding.
John’s opinion is $J = \{\text{mountain, not travel, not beach}\}$, while Mary’s is $Z = \{\text{beach, not mountain, not travel}\}$. We can already see that on at least one thing they agree: not travel. We now find the largest set of positive hypotheses we can consider, $M^+ = J^+ \cup Z^+ = \{\text{mountain, beach}\}$, and the largest set of negative hypotheses we can consider, $M^- = J^- \cup Z^- = \{\text{not travel, not beach, not mountain}\}$. And now the algorithm starts:
$M = M^+ \cup M^- = \{\text{mountain, beach, not mountain, not beach, not travel}\}$
Going through the steps of the algorithm we have:
- $M_0 = M$.
- $M_1^+ = M_0^+ \setminus \{\text{mountain, beach}\} = \emptyset$, $M_1^- = M_0^-$.
- $M_1^-$ is not weakly admissible, so we non-deterministically select one $L$ such that not $L \in M_1^-$ is one of the causes for $M_1^-$ not complying with the weak admissibility condition: for example, $L = \text{mountain}$. $M_2^- = M_1^- \setminus \{\text{not mountain}\} = \{\text{not beach, not travel}\}$. We repeat this step and now must remove not beach from $M_2^-$: $M_3^- = M_2^- \setminus \{\text{not beach}\} = \{\text{not travel}\}$.
- $M_1^+ = M_2^+ = M_3^+ = \emptyset$ is unavoidable.
- $P \cup M_3$ is a consistent scenario.
- $P \cup M_3$ is not a 2-valued complete scenario. So $M_4^+ = M_3^+ \cup \{\text{mountain}\}$, because mountain is the only literal which verifies $P \cup M_3 \nvdash \text{mountain}$ and $P \cup M_3 \nvdash \text{not mountain}$. Now we go on to step 4 of the algorithm again.
- $M_4^+$ is unavoidable.
- $P \cup M_4$ is consistent.
- $P \cup M_4$ is 2-valued complete, so $H = M_4^+ \cup M_4^- = \{\text{mountain, not travel}\}$ and the algorithm terminates successfully.
In the end, the resulting model is $\text{least}(P \cup H) = \{\text{mountain, beach, not travel}\}$.
Notice that beach is just a consequence of not travel in $P$; it does not have to be a hypothesis. If other atoms were chosen at step 3, other alternative solutions would be found.
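The run of Example 11 can be reproduced with the same kind of propositional machinery (again, an assumed encoding of ours rather than the paper's notation):

```python
# Sketch: replaying the compromise construction of Example 11
# and the final model least(P u H).
P = [("travel",   [("mountain", False)]),   # travel   <- not mountain
     ("mountain", [("beach", False)]),      # mountain <- not beach
     ("beach",    [("travel", False)])]     # beach    <- not travel

def least(rules, Hpos, Hneg):
    """Least model of P united with the hypotheses, taken as facts."""
    derived = set(Hpos)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(
                    (a in derived) if pos else (a in Hneg) for a, pos in body):
                derived.add(head)
                changed = True
    return derived

# John's and Mary's models, split into positive and negative hypotheses.
Jpos, Jneg = {"mountain"}, {"travel", "beach"}
Zpos, Zneg = {"beach"}, {"mountain", "travel"}
Mpos, Mneg = Jpos | Zpos, Jneg | Zneg

# Step 2: drop every positive hypothesis clashing with a negative one.
M1pos = {a for a in Mpos if a not in Mneg}
assert M1pos == set()          # both mountain and beach are contested

# Compromise found in the walkthrough: H = {mountain, not travel}.
model = least(P, {"mountain"}, {"travel"})
print(sorted(model))           # ['beach', 'mountain']
```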
### 5 Integrity Constraints
**Example 12. Middle Region Politics Revisited.** Recall the example 4 presented earlier. We are now going to add extra complexity to it.
We already know the two factions which are at odds and their thinking.
\[
\text{oppression} \leftarrow \text{not end of terrorism} \quad \text{end of terrorism} \leftarrow \text{oppression} \\
\text{terrorism} \leftarrow \text{not end of oppression} \quad \text{end of oppression} \leftarrow \text{terrorism}
\]
We now combine these two sets of rules with the two following Integrity Constraints (ICs), which guarantee that \text{oppression} and \text{end of oppression} are never simultaneously true, and likewise for \text{terrorism} and \text{end of terrorism}:
\[
\text{falsum} \leftarrow \text{oppression, end of oppression, not falsum} \\
\text{falsum} \leftarrow \text{terrorism, end of terrorism, not falsum}
\]
So far so good: there is still a single joint set of hypotheses resulting in a consistent scenario, \{\text{end of oppression, end of terrorism}\}. Still, there is no SM for this program. But introducing either one or both of the next two rules makes it impossible to satisfy the ICs:
\[
\text{oppression} \leftarrow \text{not terrorism} \quad \text{terrorism} \leftarrow \text{not oppression}
\]
In this case all the consistent and 2-valued complete scenarios contain the atom \text{falsum}. There are still no Stable Models for the resulting program. The semantics we propose allows two models for this program, which correspond to the 2-valued complete consistent scenarios, both containing \text{falsum}. We can discard them on this account or examine their failure to satisfy the ICs.
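This claim is small enough to verify by brute force. The sketch below (our own encoding; atom and helper names are ours) enumerates every way of placing each atom in $H^+$, in $H^-$, or in neither, keeps the consistent 2-valued complete scenarios, and confirms that every one of them contains falsum:

```python
# Sketch: every consistent 2-valued complete scenario of the extended
# program contains falsum, i.e. no scenario can satisfy the ICs.
from itertools import product

P = [("oppression",        [("end_of_terrorism", False)]),
     ("end_of_terrorism",  [("oppression", True)]),
     ("terrorism",         [("end_of_oppression", False)]),
     ("end_of_oppression", [("terrorism", True)]),
     ("falsum", [("oppression", True), ("end_of_oppression", True),
                 ("falsum", False)]),
     ("falsum", [("terrorism", True), ("end_of_terrorism", True),
                 ("falsum", False)]),
     ("oppression", [("terrorism", False)]),    # the troublesome rules
     ("terrorism",  [("oppression", False)])]
atoms = sorted({"oppression", "terrorism", "end_of_oppression",
                "end_of_terrorism", "falsum"})

def least(rules, Hpos, Hneg):
    """Least model of P united with the hypotheses, taken as facts."""
    derived = set(Hpos)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(
                    (a in derived) if pos else (a in Hneg) for a, pos in body):
                derived.add(head)
                changed = True
    return derived

models = []
for signs in product("+-0", repeat=len(atoms)):   # +: in H+, -: in H-
    Hpos = {a for a, s in zip(atoms, signs) if s == "+"}
    Hneg = {a for a, s in zip(atoms, signs) if s == "-"}
    m = least(P, Hpos, Hneg)
    if not (m & Hneg) and all(a in m or a in Hneg for a in atoms):
        models.append(m)

assert models and all("falsum" in m for m in models)
print(f"{len(models)} complete consistent scenarios, all contain falsum")
```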
### 6 Conclusions and Future Work
We have managed to assign a complete 2-valued semantics to every Normal Logic Program, by employing an argumentation framework that readily extends the argumentation framework of Stable Models semantics. We also presented three algorithms for finding the Revision Complete Scenarios of any Normal Logic Program. Every Stable Model of a Normal Logic Program corresponds to a Revision Complete Scenario and, in that sense, our algorithms allow for a different perspective on Stable Models semantics: any Stable Model can be seen as the result of an iterative process of Inconsistency Removal or Inconsistency Avoidance. In this case, Stable Models are the final result of such inconsistency removal/avoidance in which no positive hypotheses remain in the end. In the process, we have extended argumentation with *Reductio ad Absurdum* reasoning for that purpose, and shown how Collaborative Argumentation can be defined in that context.
Future work concerns the extension to Generalized Logic Programs and Extended Logic Programs, and the seamless merging with more general belief revision in Logic Programs.
Some of the applications enabled by this improved semantics of Normal Logic Programs concern the ability to guarantee that diverse programs, e.g. those arising from Semantic Web usage, always have a semantics. Similarly, we can ensure this property whenever updating programs, including the case where an autonomous program evolves through self-updating [1]. Such applications will be enabled by the ongoing implementation.
**Acknowledgments.** We deeply thank Robert A. Kowalski for his crucial help in clarifying our ideas and their presentation.
References
Software Concerns for Execution on Heterogeneous Platforms
Hugo Sica de Andrade
Division of Software Engineering
Department of Computer Science & Engineering
Chalmers University of Technology and University of Gothenburg
Gothenburg, Sweden, 2018
Software Concerns for Execution on Heterogeneous Platforms
Hugo Sica de Andrade
Copyright ©2018 Hugo Sica de Andrade except where otherwise stated.
All rights reserved.
Technical Report No 189L
ISSN 1652-876X
Department of Computer Science & Engineering
Division of Software Engineering
Chalmers University of Technology and University of Gothenburg
Gothenburg, Sweden
This thesis has been prepared using \LaTeX.
Printed by Chalmers Reproservice,
Gothenburg, Sweden 2018.
To my father Hélio, my mother Gina, and my sister Débora.
Abstract
Context: Heterogeneous computing, i.e., computing performed on different types of execution units such as CPUs, GPUs, and FPGAs, has been shown to be a feasible path towards higher performance and lower energy consumption. Heterogeneous platforms are specialized in specific types of computation, e.g., parallel computing. However, this approach imposes a number of challenges on the software side. One such challenge is related to software deployment, in which applications must be prepared to be executed on different target architectures. Further, the approach demands a robust inter-process communication solution, since these systems inherently distribute computation.
Objective: The objective of this thesis is twofold. First, to provide an overview of the state-of-the-art of software deployment on heterogeneous platforms, with emphasis to goals, concerns, challenges, and identification of topics of importance for further research. Second, to investigate the communication problem and propose a novel method that improves inter-process communication in distributed systems.
Method: Six papers were written as results of four studies: (i) a literature review in the form of a systematic mapping study on software deployment on heterogeneous platforms; (ii) a systematic evaluation of deployment methods in the context of a self-driving heavy vehicle; (iii) an investigation on data marshalling approaches and how they perform in the context of a cyber-physical system; and (iv) a novel message restructuring approach, also in the context of cyber-physical systems.
Results and Conclusions: The mapping study discussed the (i) concerns on the topic such as scheduling and software quality; the (ii) approaches available, such as frameworks; and the (iii) architecture solutions used, such as styles and principles. In the second study, we found that the performance decay is negligible when using sandboxed environments for deployment. In the third and fourth studies, we proposed and evaluated a data marshalling approach that decreases bandwidth consumption.
Future work: We intend to identify challenges that are currently faced in an industrial setting. In particular, a migration from a non-heterogeneous platform to a heterogeneous one can be studied in the context of a modern software development process, such as DevOps.
Keywords
Software Deployment, Software Architecture, Heterogeneous Platforms, Inter-process Communication
Acknowledgment
Big thanks to my supervisor, Ivica Crnkovic, for all his words of wisdom throughout the last years. He has provided me not only with great guidance and opportunities on my professional path, but also with new and valuable perspectives on life. It is no wonder that everyone who has been in contact with Ivica gets amazed by his outstanding character.
Thanks to my co-supervisor, Christian Berger, from whom I have learned a lot about work processes, organizing, and teaching. His highly dedicated approach to being a researcher/engineer/teacher has shown me what hard work can get me in return.
Thanks to my fellow office sharing colleagues Federico Giaimo, Yue Kang and previously Hang Yin. My appreciation also goes out to all my Software Engineering division co-workers. I feel extremely privileged and honoured being able to work among such talented individuals.
I am very thankful to my parents and sister, who have managed to continue by my side even when I chose to be in different continents (multiple times).
Thanks to my North American host family, who made me part of their family 14 years ago. This mental geographic triangulation takes nothing away from my feelings towards my family. I always have all of them in my thoughts. Always.
Thanks to my girlfriend Anna, who has shown incredible support throughout my latest challenges. She has provided me with valuable encouragement and managed to find ways to keep me on track. Tack så mycket, gatinha!
List of Publications
Appended publications
This thesis is based on the following publications:
Other publications
The following publications are not appended to this thesis, because their contents either overlap with those of the appended publications or are not related to the thesis.
[a] H. Andrade “Investigating Software Deployment on Heterogeneous Platforms”
In submission to the Information and Software Technology journal (IST).
Personal Contribution
I was the main driver in Papers A and B, from the composition of the review protocol to writing the manuscripts. Throughout the study, I had multiple iterations with my supervisor, and part of the review process was aided by a colleague.
Paper C is derived from a master thesis that I closely co-supervised. I helped in the definition of goals, research design, literature review, and writing of the paper.
The studies leading to Papers D, E and F, were jointly conducted by the authors, who equally contributed to all phases of the research. From drawing sketches on the whiteboard to reviewing the text, we held several meetings and worked as a team.
# Contents
<table>
<thead>
<tr>
<th>Section</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td>Abstract</td>
<td>v</td>
</tr>
<tr>
<td>Acknowledgement</td>
<td>vii</td>
</tr>
<tr>
<td>List of Publications</td>
<td>ix</td>
</tr>
<tr>
<td>Personal Contribution</td>
<td>xi</td>
</tr>
<tr>
<td>1 Introduction</td>
<td>1</td>
</tr>
<tr>
<td>1.1 Background</td>
<td>2</td>
</tr>
<tr>
<td>1.1.1 Heterogeneous platforms</td>
<td>2</td>
</tr>
<tr>
<td>1.1.2 Software deployment</td>
<td>3</td>
</tr>
<tr>
<td>1.1.3 Automotive domain & Inter-process communication</td>
<td>4</td>
</tr>
<tr>
<td>1.2 Research Goal</td>
<td>5</td>
</tr>
<tr>
<td>1.3 Research Methodology</td>
<td>6</td>
</tr>
<tr>
<td>1.3.1 Systematic literature reviews</td>
<td>6</td>
</tr>
<tr>
<td>1.3.2 Controlled experiments</td>
<td>7</td>
</tr>
<tr>
<td>1.3.3 Design science</td>
<td>7</td>
</tr>
<tr>
<td>1.4 Contributions</td>
<td>8</td>
</tr>
<tr>
<td>1.4.1 Contribution 1: An overview of the main concerns and approaches of software deployment on heterogeneous platforms</td>
<td>8</td>
</tr>
<tr>
<td>1.4.2 Contribution 2: A systematic evaluation of sandboxed software deployment strategies</td>
<td>9</td>
</tr>
<tr>
<td>1.4.3 Contribution 3: A data marshalling approach for reducing bandwidth consumption</td>
<td>9</td>
</tr>
<tr>
<td>1.4.4 Contribution 4: A message restructuring approach for improving resource usage</td>
<td>10</td>
</tr>
<tr>
<td>1.5 Publications</td>
<td>10</td>
</tr>
<tr>
<td>1.6 Threats to Validity</td>
<td>14</td>
</tr>
<tr>
<td>1.6.1 Construct validity</td>
<td>14</td>
</tr>
<tr>
<td>1.6.2 Internal validity</td>
<td>15</td>
</tr>
<tr>
<td>1.6.3 External validity</td>
<td>15</td>
</tr>
<tr>
<td>1.6.4 Reliability</td>
<td>15</td>
</tr>
<tr>
<td>1.7 Conclusion</td>
<td>16</td>
</tr>
<tr>
<td>1.8 Future Work</td>
<td>16</td>
</tr>
</tbody>
</table>
2 Paper A
2.1 Introduction .................................................. 18
2.2 Background .................................................... 19
2.2.1 Heterogeneous computing and platforms ................. 19
2.2.2 Software deployment ........................................ 20
2.3 Research methodology ........................................ 20
2.3.1 Research questions ......................................... 21
2.3.2 Conduction of search ....................................... 22
2.3.3 Screening of papers ......................................... 23
2.3.4 Data Extraction .............................................. 24
2.4 Study results ................................................... 25
2.4.1 The meaning of the term “heterogeneous” ................ 25
2.4.2 Main purpose of the studies and research type classification .... 25
2.4.3 Primary studies’ meta-data .................................. 27
2.5 RQ1 - The main Concerns ..................................... 29
2.5.1 Scheduling ................................................... 30
2.5.1.1 Load balancing ............................................. 31
2.5.1.2 Scheduling executable units ............................. 31
2.5.1.3 Utilizing resources ....................................... 32
2.5.2 Software quality ............................................. 32
2.5.2.1 Performance ................................................. 33
2.5.2.2 Portability .................................................. 33
2.5.2.3 Efficiency ................................................... 33
2.5.2.4 Maintainability ............................................ 33
2.5.2.5 Scalability ................................................ 34
2.5.3 Software architecture ....................................... 34
2.5.3.1 Efficient data and memory management ............. 34
2.5.3.2 Real-time constraints ..................................... 35
2.5.4 Development process ....................................... 35
2.5.4.1 Efficiency in the process ............................... 35
2.5.4.2 Parallel programming & complexity .................... 35
2.5.5 Hardware-related concerns ................................ 36
2.5.5.1 Energy consumption ...................................... 36
2.5.5.2 Hardware constraints ..................................... 36
2.5.5.3 Design and maintenance ................................. 37
2.5.5.4 Components malfunctioning ............................. 37
2.5.6 System evaluation ............................................ 37
2.5.6.1 Performance analysis ..................................... 37
2.5.6.2 Heterogeneous system visualization .................... 38
2.5.7 Simulation ................................................... 38
2.5.7.1 Simulating heterogeneous systems ..................... 38
2.5.8 Summary - Concerns (RQ1) ................................ 38
2.6 RQ2 - The Approaches ......................................... 40
2.6.1 General practices ........................................... 40
2.6.1.1 Frameworks ................................................. 40
2.6.1.2 Load balancing techniques ............................... 42
2.6.1.3 Scheduling algorithms .................................... 43
2.6.2 Design time practices ....................................... 44
2.6.2.1 Modeling software and hardware ....................... 44
2.6.2.2 Definition of configurations ........................................ 45
2.6.2.3 Activities .................................................................. 46
2.6.3 Runtime practices .............................................................. 46
2.6.3.1 Own scheduler .............................................................. 46
2.6.3.2 Profiling .................................................................... 47
2.6.3.3 Task queuing ................................................................. 47
2.6.3.4 Current job state table ................................................... 47
2.6.4 Summary - Approaches (RQ2) .............................................. 48
2.7 Discussion .......................................................................... 49
2.8 Threats to validity ................................................................. 50
2.9 Related work ........................................................................ 52
2.10 Conclusion and Future Work .................................................. 53
3 Paper B
3.1 Introduction ......................................................................... 56
3.2 Background .......................................................................... 57
3.3 Research Method ................................................................. 58
3.3.1 Research question ............................................................. 58
3.3.2 Conduction of search ......................................................... 59
3.3.3 Screening of papers ............................................................ 60
3.3.4 Keywording using abstracts ............................................... 61
3.3.5 Data extraction and mapping process ................................. 61
3.4 Results .................................................................................. 62
3.4.1 Classification scheme ......................................................... 62
3.4.2 Which architecture solutions enable/support deployment strategies for heterogeneous platforms? ............ 65
3.4.2.1 Architectural styles ......................................................... 65
3.4.2.2 Architectural principles .................................................... 66
3.5 Discussion ............................................................................. 67
3.6 Threats to Validity .................................................................. 68
3.7 Related Work ....................................................................... 68
3.8 Conclusion ............................................................................ 69
4 Paper C
4.1 Introduction ......................................................................... 72
4.1.1 Problem Domain & Motivation .......................................... 72
4.1.2 Research Goal & Research Questions ................................. 73
4.1.3 Contributions of the Article .............................................. 73
4.1.4 Structure of the Article ....................................................... 74
4.2 Methodology ........................................................................ 74
4.3 Literature Review ................................................................. 74
4.3.1 Outcomes of the Review .................................................... 75
4.4 Experiments ......................................................................... 76
4.5 Results .................................................................................. 77
4.6 Analysis & Discussion ............................................................ 80
4.6.1 Threats to Validity ............................................................. 81
4.7 Conclusion & Future Work ..................................................... 82
Chapter 1
Introduction
The demands for computing performance continue to increase in science and engineering. A clear rise in the amount of data processed by computer systems is evident in multiple domains. The scenario is particularly challenging in the case of embedded systems, which are often limited in resources and subject to real-time and interface constraints [1].
In the past, the increasing requirements for hardware performance were fulfilled by (i) boosting the frequency of processing units (PUs) and/or by (ii) adding transistors onto processors. Since the frequency cannot be further increased with today's technology [2], performance is primarily boosted by an increased transistor count. However, as the number of transistors built on chips has reached several billions (cf. the Intel (Altera) Stratix 10, featuring over 30 billion transistors), it is increasingly difficult to make effective use of so many of them. On the software side, the added complexity in such platforms should be accounted for as early as the design phase, when different software deployment strategies may be modeled and analyzed.
One way to fulfill these high demands for performance is to consider a heterogeneous platform, i.e., a hardware platform consisting of different types of computational units and technologies. Heterogeneous platforms may contain, for instance, a combination of multi-core Central Processing Units (CPUs), Graphics Processing Units (GPUs) and Field-Programmable Gate Arrays (FPGAs), creating the impression of dedicated units that can be adapted to a wide range of application domains. These dedicated units can significantly increase the overall system’s performance and energy management through, for instance, optimizing the workload distribution according to the types of data to be executed.
Accelerators such as GPUs and FPGAs are gaining popularity because they offer performance improvements in many applications. However, despite the fact that the difficulty in programming for such platforms is decreasing [3], there are multiple concerns that must be addressed in order to handle the inherent complexity of both hardware and software in such environments.
In this project, we focus on the software engineering side, based on the need to provide support for software development on heterogeneous platforms.
The remainder of this introductory chapter of the thesis is organized as follows. In Section 1.1, we introduce the background. In Section 1.2, we describe the research goal and research questions of this thesis. In Section 1.3, we describe the research methodologies used in this research. In Section 1.4, we summarize the contributions of this thesis. In Section 1.5, we present the publications appended to this thesis. In Section 1.6, we describe the threats to validity of this work. Then, in Section 1.7, we discuss the conclusions. Finally, in Section 1.8, we present our intentions for future work.
1.1 Background
In this section, we provide the background for the main topics covered in this thesis: heterogeneous platforms (heterogeneous computing), software deployment, the automotive domain and inter-process communication.
1.1.1 Heterogeneous platforms
During our investigation, we discovered multiple studies that refer to the term "heterogeneous platform" in different ways. Besides denoting different types of processors, we found that the term also refers to platforms containing processors of the same type but with different capacities. For instance, a system that includes two CPUs with different numbers of cores and/or clock frequencies is often called heterogeneous. Another situation in which the term is commonly found is when the types and further characteristics of the processors are omitted and only the difference in capacity of the PUs is discussed, for example, in strictly combinatorial problems that consider a cost formula and a few performance attributes of the processors in order to determine the best deployment strategy. In this thesis, we adopted the definition used in [4]. The author defines heterogeneous computing as complex systems composed of different kinds of processing units which use different processing paradigms and are designed for different types of tasks, working together in order to provide the best processing performance for diverse computing needs. In this sense, we consider a "heterogeneous platform" to be a hardware set consisting of at least two different types of processors that are specialized in different types of tasks.
An example of heterogeneous hardware architectures is shown in Figure 1.1. In Figure 1.1(a), the single-chip Cell Broadband Engine Architecture (CBEA) is depicted, consisting of a traditional CPU core and eight single-instruction multiple data (SIMD) accelerator cores. Each core can run separate programs and communicate through a fast on-chip bus. Its main design criterion is to maximize performance while consuming minimum power. Figure 1.1(b) illustrates a GPU with 30 highly multi-threaded SIMD accelerator cores in combination with a standard multicore CPU. The GPU has superior bandwidth and computational performance. It is designed for high-performance graphics, where throughput of data is key. In Figure 1.1(c), a standard multi-core CPU is paired with an FPGA consisting of an array of logic blocks. FPGAs can also incorporate regular CPU cores on-chip, making it a heterogeneous chip by itself. FPGAs offer fully deterministic performance and are designed for high throughput, for example, in telecommunication applications.
1.1.2 Software deployment
The concept of deployment also varies according to the context in which the study is performed. For business research, it may refer to strategies for update releases of a mobile app. For technology research, deployment may refer to the tools that are used to facilitate and enable deployment, e.g., Docker [5]. For fundamental research, it may refer to the mathematical strategies to optimize load balance in a heterogeneous environment.
In the context of software engineering, software deployment comprises a set of activities resulting in a system that is available for use [6]. These activities can be very diverse and include a wide range of processes, such as users training, integration of new features into the existing system, the actual installation of software on the underlying hardware, etc. The scope of this project is limited to the activities involved in the process of installing software on hardware, including the decision about the units in which software components will be executed (component allocation). The activities of partitioning the software system into components and planning their execution on different processing units are considered in this work. Further, we focus on deployment from the software perspective rather than from the hardware perspective. We do not discuss organizational/business issues around deployment, the stakeholders involved in the process, or the training of prospective users.
As we conducted this work, we realized that the activities performed in the typical deployment stage are heavily influenced by activities in previous stages of the software process. For instance, we learned that one common way to realize deployment onto heterogeneous platforms is by using a development framework, which needs to be applied as early as the architecture phase. For this reason, we extend the concept of deployment to include all activities that are relevant throughout the software engineering process to successfully execute software on a heterogeneous platform.
One generic representation is shown in Figure 1.2, where a deployment scenario is depicted. On the hardware side, there is a heterogeneous platform consisting of an FPGA, N CPUs and M GPUs that are available for processing data, and these units have interfaces with different types of sensors and actuators. The software is decomposed into components that can be deployed according to different configurations, while the following assumptions might be relevant: (i) vehicle data instructions might execute in a shorter time on the FPGA when compared to CPUs or GPUs; however, programming for FPGAs can be complex and more time-consuming; (ii) two dependent applications might be executed faster on different execution units, but the communication between them might be compromised by the available bandwidth; (iii) allocating components and running image processing applications in parallel on a single execution unit (e.g., GPU) might be less complex, but could also compromise energy efficiency. Other aspects may also be considered, such as the impact of the technology used to encapsulate processes for software deployment, or the underlying environment in which the application is executed.
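Trade-offs like these can be made concrete with a simple cost model. The sketch below is a purely hypothetical illustration, not taken from the thesis: the component names, the cost figures, and the weighted-sum objective are all invented. It exhaustively scores every allocation of software components to processing units:

```python
from itertools import product

# Hypothetical per-unit costs (execution_time, energy) for each component;
# a real system would obtain such numbers by profiling or measurement.
COSTS = {
    "lane_detector": {"CPU": (40, 10), "GPU": (12, 18), "FPGA": (8, 6)},
    "path_planner":  {"CPU": (15, 5),  "GPU": (20, 12), "FPGA": (25, 4)},
}

def score(allocation, time_weight=1.0, energy_weight=1.0):
    """Weighted sum of execution time and energy over all components."""
    return sum(time_weight * COSTS[c][u][0] + energy_weight * COSTS[c][u][1]
               for c, u in allocation.items())

def best_allocation(components, units=("CPU", "GPU", "FPGA")):
    """Exhaustively search every component-to-unit mapping (fine for tiny systems)."""
    candidates = (dict(zip(components, choice))
                  for choice in product(units, repeat=len(components)))
    return min(candidates, key=score)

print(best_allocation(list(COSTS)))
```

Exhaustive search grows exponentially with the number of components, which is one reason the literature surveyed in Paper A relies on scheduling heuristics and load-balancing techniques instead.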
### 1.1.3 Automotive domain & Inter-process communication
In the automotive domain, a modern vehicle comprises a set of Electronic Control Units (ECUs) interconnected via a network to provide functionalities such as stability control, measurement of essential fluids, and infotainment systems. In the context of self-driving vehicles, these ECUs are also responsible for handling data obtained by sensing the surrounding environment. Given this interaction between software/electronic capabilities and the physical world, these types of systems are also called cyber-physical systems.
From the software perspective, self-driving vehicles require a middleware that bridges high-level and low-level data and ideally enables seamless interaction between the different hardware components that are part of the system. In the context of this research, we use the OpenDaVINCI software environment, which provides middleware functionalities to enable communication between all the distributed software modules of a self-driving vehicle. The software comprises interfaces to the supported hardware components and devices, such as cameras, sensors, and laser scanners.
OpenDaVINCI is currently used in three different settings, as follows.
**ReVeRe.** In the facilities provided by Chalmers University of Technology’s vehicle laboratory “ReVeRe” (Resource for Vehicle Research), multiple projects are conducted in collaboration between academia and industrial partners.
The main goal of this research setting is to provide the infrastructure that allows researchers to create breakthrough technology in the area of self-driving vehicles and collision avoidance. The lab equipment comprises, among others, an SUV (Volvo XC90), and a truck tractor (Volvo FH16).
**Miniature vehicles.** Within the academic context, miniature vehicles on a 1:10 scale are used as (i) experimental platforms for our research, and (ii) educational platforms in university courses. These vehicles contain hardware components, including an ESC, sensors, and a camera, that are used to perform a number of activities without human intervention. For instance, in the context of the B.Sc. course, the students work on a software implementation that enables the miniature vehicle to perform lane-following, parking, and overtaking.
**Simulation environment.** OpenDaVINCI includes a simulation environment in which it is possible to experiment with self-driving algorithms with the help of generated sensor data. It is possible to model virtual tracks and add obstacles that emulate real-life scenarios in a driving context.
In addition to the hardware capabilities for handling data, we have observed that one influential aspect of the overall performance of embedded systems is the communication between different components (i.e., data marshalling). Because software deployed on heterogeneous platforms is distributed in essence, the exchange of messages between different components/processes plays a key role in determining whether or not satisfactory results in terms of performance will be achieved. Especially in the context of embedded systems (or cyber-physical systems), it is important to define an effective communication strategy, because resources are typically limited and several applications are safety-critical (e.g., self-driving vehicles).
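To illustrate why the marshalling strategy matters, the sketch below compares the payload size of the same reading marshalled as self-describing JSON text and as a fixed-layout binary struct. It is illustrative only: the field names, values, and layout are invented and unrelated to OpenDaVINCI’s actual message format.

```python
import json
import struct

# A hypothetical sensor reading exchanged between two processes
# (field names and values are invented for illustration).
reading = {"timestamp": 1650000000, "speed": 13.9, "heading": 271.5}

# Textual marshalling: self-describing, but every character travels on the wire.
json_payload = json.dumps(reading).encode("utf-8")

# Binary marshalling: a fixed layout agreed on by sender and receiver
# (little-endian unsigned 32-bit timestamp, then two 32-bit floats).
binary_payload = struct.pack("<Iff",
                             reading["timestamp"],
                             reading["speed"],
                             reading["heading"])

print(len(json_payload), len(binary_payload))

# The receiver unpacks with the same layout string.
ts, speed, heading = struct.unpack("<Iff", binary_payload)
```

The binary payload is a fixed 12 bytes here, several times smaller than the textual one; on bandwidth-limited in-vehicle networks this difference accumulates quickly at high message rates.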
### 1.2 Research Goal
The main goal of this thesis is to investigate software concerns for execution on heterogeneous platforms. By concerns, we mean topics that practitioners must be aware of and ultimately address when deploying software on hardware containing more than one type of processor. Throughout the conduction of the research, we discovered that the concept of deployment varies according to the context in which the study is performed. In this work, we consider the software engineering perspective, which covers methods, processes, and techniques that enable and lead to the execution of software on heterogeneous platforms.
Based on the research goal, we formulated three research questions, as follows.
**RQ1:** What are the main concerns and approaches in software deployment on heterogeneous platforms?
As the first step towards understanding the area of research, we aimed to investigate the state-of-the-art of software deployment on heterogeneous platforms, focusing on the main concerns and approaches that can be found in the literature. The purpose of RQ1 is to obtain an overview of the body of
knowledge in the area by revealing concerns and approaches that are relevant when deploying software on heterogeneous platforms.
**RQ2:** What is the impact of different deployment strategies on non-functional properties when designing embedded systems?
With RQ2, we aimed to study the impact of different deployment strategies on the system's performance. The idea is to measure the possible trade-offs in utilizing common deployment strategies, such as sandboxed environments. In some critical domains (e.g., self-driving vehicles), performance is crucial and must be guaranteed throughout the execution process, despite the limited resources available on such embedded systems.
**RQ3:** How can the communication between different computational resources be improved?
Further in the context of self-driving vehicles, we have observed that one of the main issues that may hinder performance is the inter-process communication between computational resources. With RQ3, we aimed to investigate the problem and propose solutions to this issue, which commonly represents a bottleneck to such systems due to the increased amounts of data that are transmitted.
In summary, RQ1 and RQ2 focus on software deployment, while RQ3 focuses on inter-process communication.
### 1.3 Research Methodology
We used three different research methodologies to address the formulated research questions, as follows.
#### 1.3.1 Systematic literature reviews
To address RQ1, we conducted a systematic literature review in the form of a mapping study. Mapping Studies differ from classic Systematic Literature Reviews in their broadness and depth [9][10]. Instead of rigorously searching, analyzing and assessing studies, selected information is extracted from the primary studies in order to obtain an overview of the current state-of-the-art of research in a particular field.
We aimed to follow a systematic approach in order to increase the reliability of the study and enable its reproducibility in the future. The search included popular academic databases and followed a set of predefined inclusion and exclusion criteria. After the selection of studies from the libraries, we performed the snowballing procedure [11] to also cover related papers. The review selected 146 primary studies, and therefore collected a large amount of data to be analyzed and discussed. We followed the standard rigorous procedure for mapping studies, complemented with a bottom-up approach to find common characteristics of the studies. Finally, we provided different classifications in order to achieve a better understanding of the area.
To address RQ2, we conducted a smaller literature review followed by two controlled experiments. The review followed the same principles as in the
guidelines for performing systematic mapping studies, although it contained only one research question and searched only one (major) digital library. We also performed the snowballing procedure to cover related papers that were possibly neglected when selecting studies from the digital library.
1.3.2 Controlled experiments
Controlled experiments help to investigate a testable hypothesis where one or more independent variables are manipulated to measure their effect on one or more dependent variables [12].
The first controlled experiment was conducted in the context of RQ2, in which we measured the scheduling precision and I/O performance of sample applications when deployed in different environments. Through a sequence of controlled steps, the sample applications were executed in four different execution environments, obtained by combining (i) executing the sample applications natively or sandboxed within a Docker container with (ii) executing them on a target system with a vanilla or a real-time-enabled kernel.
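The four execution environments form the cross product of the two factors. A minimal sketch of this 2x2 design (the label strings are ours, purely illustrative):

```python
from itertools import product

# Factor 1: how the sample application is packaged.
packaging = ["native", "docker"]
# Factor 2: which kernel the target system runs.
kernel = ["vanilla", "real-time"]

# The four execution environments of the 2x2 design.
environments = [f"{p}/{k}" for p, k in product(packaging, kernel)]
print(environments)
```

Running the same benchmark in each of the four environments isolates the effect of each factor and of their interaction on scheduling precision and I/O performance.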
Finally, the second experiment used a real-world application that was at the time deployed on a self-driving truck. It was designed in a way that the findings from the first experiment could be validated.
For RQ3, we conducted controlled experiments to compare our approach with existing solutions using evaluation scenarios. We studied the existing approaches and assessed their performance under various, application-independent conditions. We ran an experiment that collected data in a systematic way, using fixed parameters and different message attribute combinations.
1.3.3 Design science
The design science methodology focuses on the design and investigation of artifacts in a given context [13]. In concrete terms, the methodology comprises an iterative process in which researchers engage in three main activities: (i) identification of problems and opportunities; (ii) development of solutions; and (iii) evaluation of the proposed solutions in a given context in order to determine whether they effectively address the problem.
In the context of RQ3, we followed the design science approach [14] to identify the problem of bandwidth limitation in cyber-physical systems. From our experience, this aspect represented a bottleneck in our systems and thus an opportunity for improvement. We then proposed a solution based on data marshalling approaches, in which bandwidth consumption is decreased by optimizing the composition of the messages to be transmitted. The approach was implemented in the context of self-driving cars, including both simulation environments and real-world scenarios. Finally, we evaluated the proposed approach by conducting experiments to compare the performance of existing approaches against ours. In the case of our message restructuring proposal, we also formally validated the model's correctness using a widely known model-checking technology.
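The message restructuring approach itself is more elaborate than this, but one elementary way in which the composition of a message affects its size is field ordering under aligned (C-style) layouts. The sketch below is our own illustration of that general effect, not the thesis' technique:

```python
import struct

# Native ("@") mode aligns each field, inserting padding bytes.
# A char/int/char layout wastes bytes on alignment padding ...
mixed_order = struct.calcsize("@cic")
# ... while placing the wider field first avoids the padding entirely.
sorted_order = struct.calcsize("@icc")

print(mixed_order, sorted_order)
```

On a typical 64-bit platform the char/int/char layout occupies 9 bytes against 6 for the reordered one; the same general idea, applied across whole messages, is one lever for reducing the number of bytes transmitted.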
1.4 Contributions
In this section, we present a summary of the four research contributions, followed by their relation to the papers included in this thesis and my personal contribution to them. These contributions are the outcome of our investigations of the topics described above, aiming to answer the research questions. We started by conducting a systematic literature review in the form of a mapping study to obtain an overview of the area, leading to Contribution 1. This study revealed several approaches that can be used for deploying software on heterogeneous platforms. Then, from observing our current projects in the lab, we came across the possibility of adopting a widely known deployment tool, which led to Contribution 2. Also from observations in our cyber-physical systems domain, we explored the area of data marshalling approaches, leading to Contributions 3 and 4.
The relation between research questions, contributions, papers, and their main topics is shown in Table 1.1.
1. An overview of the main concerns and approaches of software deployment on heterogeneous platforms;
2. A systematic evaluation of sandboxed software deployment strategies;
3. A data marshalling approach for reducing bandwidth consumption;
4. A message restructuring approach for improving resource usage.
Table 1.1: Relation between contributions, research questions, papers, and main topics of the papers.
<table>
<thead>
<tr>
<th>Contributions</th>
<th>Questions</th>
<th>Papers</th>
<th>Main topic</th>
</tr>
</thead>
<tbody>
<tr>
<td>Contribution 1</td>
<td>RQ1</td>
<td>Paper A, Paper B</td>
<td rowspan="2">Deployment</td>
</tr>
<tr>
<td>Contribution 2</td>
<td>RQ2</td>
<td>Paper C</td>
</tr>
<tr>
<td>Contribution 3</td>
<td rowspan="2">RQ3</td>
<td rowspan="2">Paper D, Paper E, Paper F</td>
<td rowspan="2">Communication</td>
</tr>
<tr>
<td>Contribution 4</td>
</tr>
</tbody>
</table>
1.4.1 Contribution 1: An overview of the main concerns and approaches of software deployment on heterogeneous platforms
We systematically searched and analyzed the literature in order to obtain an overview of the research area, discovering gaps and trends. We considered papers indexed by trusted libraries in computer science and followed a pre-defined process to formulate the research questions, conduct the research, screen the papers, and extract data from them.
This study led to two papers, one focusing on the main concerns and approaches (Paper A [15]), and another focusing on architectural aspects (Paper B [16]) of software deployment on heterogeneous platforms.
**Personal contribution:** I was the main driver of all steps in this study. The main idea was formulated by Ivica Crnkovic, and the study was conducted mainly by me, with the help of Jan Schröder in the paper screening process. Weekly meetings were held between Ivica and me to align the focus of the review, resolve disagreements, and plan future steps.
1.4.2 Contribution 2: A systematic evaluation of sandboxed software deployment strategies
In this study, we aimed at systematically evaluating the influence of sandboxed execution environments on applications in the automotive domain. We were particularly interested in studying the impact on two quality attributes of the system: scheduling precision and input/output performance. We elected Docker as the deployment tool to be evaluated due to (i) its growing popularity, and (ii) our interest in adopting Docker for our ReVeRe project for self-driving vehicles.
This study led to Paper C [17].
**Personal contribution:** This study was based on the master thesis by Philip Masek and Magnus Thulin at the Chalmers | University of Gothenburg [18], whom I co-supervised with Christian Berger. The concept, scope, and setting of the project were designed by Christian and me. The experiments were conducted by Philip and Magnus under my close supervision, and facilitated in the lab by Ola Benderius. The paper was jointly written by all co-authors.
1.4.3 Contribution 3: A data marshalling approach for reducing bandwidth consumption
We proposed and evaluated our concept of self-adaptive data marshalling, which aimed at reducing bandwidth usage. The main idea was to improve inter-process communication by only sending the difference (i.e., the *delta*) between the current message and the previous one. The communication protocol is transparent and was designed following the publish/subscribe architectural pattern. Our approach was then evaluated against well-established data marshalling approaches: LCM [19] and Google Protobuf [20].
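The delta idea can be sketched as follows (a simplified illustration with hypothetical message fields; the actual protocol also handles type adaptation and serialization):

```python
def encode_delta(previous, current):
    """Publisher side: keep only the fields whose values changed since
    the previous message; unchanged fields are omitted from the payload."""
    if previous is None:
        return dict(current)  # first message: no baseline, send everything
    return {k: v for k, v in current.items() if previous.get(k) != v}

def decode_delta(previous, delta):
    """Subscriber side: reconstruct the full message by applying the
    received delta on top of the last reconstructed message."""
    merged = dict(previous or {})
    merged.update(delta)
    return merged

# Hypothetical sensor messages: only the changed field travels on the wire.
m1 = {"heading": 0.12, "speed": 8.4, "lane_offset": 0.03}
m2 = {"heading": 0.12, "speed": 8.6, "lane_offset": 0.03}
delta = encode_delta(m1, m2)          # only {"speed": 8.6} is transmitted
assert decode_delta(m1, delta) == m2  # the subscriber recovers the full message
```

When consecutive messages differ in only a few fields, as our domain analysis indicated, the transmitted payload shrinks accordingly.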
This study led to two papers: one focusing on the proposal of our approach (Paper D [21]), and another focusing on its evaluation against existing approaches (Paper E [22]).
**Personal contribution:** This study was conducted jointly by Federico Giaimo, Christian Berger, and me. From the conceptual discussions to writing the papers, we equally shared the workload and held frequent discussions to reach agreements and define the next steps.
1.4.4 Contribution 4: A message restructuring approach for improving resource usage
We continued to explore inter-process communication issues by proposing a model-based approach for adaptive message restructuring. The approach aims at reducing message latency by dynamically restructuring messages according to both design-time and runtime properties. Design-time parameters included, e.g., the composition of fields in a message, while runtime parameters included, e.g., message transmission latency, timeouts, and message arrival/drop rates.
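As a simplified illustration of the idea (field names, sizes, and update rates are hypothetical, and the actual approach is model-based with formal verification rather than this greedy heuristic), restructuring can be thought of as regrouping message fields into fixed-size packets so that frequently updated data occupies as few packets as possible:

```python
def restructure(fields, packet_size):
    """Greedily group fields into fixed-size packets, placing frequently
    updated fields first so that 'hot' data occupies as few packets as
    possible. `fields` maps a field name to (size_in_bytes, update_rate_hz)."""
    ordered = sorted(fields.items(), key=lambda kv: -kv[1][1])
    packets, current, used = [], [], 0
    for name, (size, _rate) in ordered:
        if used + size > packet_size and current:
            packets.append(current)  # current packet is full: start a new one
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        packets.append(current)
    return packets

# Hypothetical message fields for a self-driving vehicle.
layout = restructure(
    {"speed": (8, 100.0), "gps_position": (24, 10.0), "diagnostics": (4, 1.0)},
    packet_size=32,
)
# layout == [["speed", "gps_position"], ["diagnostics"]]
```

Recomputing the layout at runtime, as observed latencies and drop rates change, is what makes the restructuring adaptive.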
This study led to Paper F [23].
**Personal contribution:** This study was conducted jointly by Hang Yin, Federico Giaimo, Christian Berger, and me. We equally shared all the workload related to designing the concept, conducting the validation, and writing the paper.
1.5 Publications
In this section, we list the main publications related to this thesis. The complete versions of the papers can be found in Chapters 2 to 7. They have been reformatted in order to comply with the layout of this thesis.
Paper A
Software Deployment on Heterogeneous Platforms: A Systematic Mapping Study
H. Andrade, J. Schröder, I. Crnkovic
Submitted to the IEEE Transactions on Software Engineering journal (TSE).
The main goal in Paper A was to investigate the state-of-the-art of software deployment on heterogeneous platforms. We systematically searched and analyzed the literature in order to obtain an overview of the research area, discovering gaps and trends. We considered papers indexed by trusted libraries in computer science and followed a pre-defined process to formulate the research questions, conduct the research, screen the papers, and extract data from them.
Abstract: Context: Multiple types of processing units (e.g., CPUs, GPUs and FPGAs) can be used jointly to achieve better performance in computational systems. However, these units are built with fundamentally different characteristics and demand attention especially towards software deployment. Objective: The goal of this work is to summarize the state of the art of software deployment on heterogeneous platforms. We provide an overview of the research area by searching for and categorizing relevant studies, as well as discussing gaps and trends of the field. We are interested in the main concerns (RQ1) and the approaches used (RQ2) when deploying software on heterogeneous platforms. Method: In order to achieve our goal, we performed a systematic mapping study, which refers to a method for reviewing literature based on predefined search strategies and a multi-step selection process. Results: We
selected and analyzed 146 primary studies from multiple sources, and found that the area of research is dominated by solution proposals. The majority of the studies discussed concerns about scheduling, the quality of the software, and its architecture. A large number of studies focused on the problem of scheduling tasks and processes. We found approaches that are applied at different binding times (i.e., design time, runtime, orthogonal). Conclusion: The evaluation of the proposed solutions in an industrial context is missing. Also, the proposed methods have not been evaluated in development processes. Most of the methods address a particular concern, or a few concerns, while there is a lack of a holistic approach.
Paper B
A Review on Software Architectures for Heterogeneous Platforms
H. Andrade, I. Crnkovic
The systematic mapping study covered a large area of research, so we decided to split the results into Paper A and Paper B. While Paper A focused on concerns and approaches, Paper B focused on software architecture.
Abstract: The increasing demands for computing performance have been a reality regardless of the requirements for smaller and more energy efficient devices. Throughout the years, the strategy adopted by industry was to increase the robustness of a single processor by increasing its clock frequency and mounting more transistors so more calculations could be executed. However, it is known that the physical limits of such processors are being reached, and one way to fulfill such increasing computing demands has been to adopt a strategy based on heterogeneous computing, i.e., using a heterogeneous platform containing more than one type of processor. This way, different types of tasks can be executed by processors that are specialized in them. Heterogeneous computing, however, poses a number of challenges to software engineering, especially in the architecture and deployment phases. In this paper, we conduct an empirical study that aims at discovering the state-of-the-art in software architecture for heterogeneous computing, with focus on deployment. We conduct a systematic mapping study that retrieved 28 studies, which were critically assessed to obtain an overview of the research field. We identified gaps and trends that can be used by both researchers and practitioners as guides to further investigate the topic.
Paper C
Systematic Evaluation of Sandboxed Software Deployment for Real-time Software on the Example of a Self-Driving Heavy Vehicle
P. Masek, M. Thulin, H. Andrade, C. Berger, O. Benderius
In Paper C, we aimed at systematically evaluating the influence of sandboxed execution environments for applications from the automotive domain. We were particularly interested in studying the impact on two quality attributes of the system: scheduling precision and input/output performance.
Abstract: Companies developing and maintaining software-only products like web shops aim for establishing persistent links to their software running in the field. Monitoring data from real usage scenarios allows for a number of improvements in the software life-cycle, such as quick identification and solution of issues, and elicitation of requirements from previously unexpected usage. While the processes of continuous integration, continuous deployment, and continuous experimentation using sandboxing technologies are becoming well established in said software-only products, adopting similar practices for the automotive domain is more complex mainly due to real-time and safety constraints. In this paper, we systematically evaluate sandboxed software deployment in the context of a self-driving heavy vehicle that participated in the 2016 Grand Cooperative Driving Challenge (GCDC) in The Netherlands. We measured the system’s scheduling precision after deploying applications in four different execution environments. Our results indicate that there is no significant difference in performance and overhead when sandboxed environments are used compared to natively deployed software. Thus, recent trends in software architecting, packaging, and maintenance using microservices encapsulated in sandboxes will help to realize similar software and system engineering for cyber-physical systems.
Paper D
Improving Bandwidth Efficiency with Self-Adaptation for Data Marshalling on the Example of a Self-Driving Miniature Car
F. Giaimo, H. Andrade, C. Berger, I. Crnkovic
In Paper D, we proposed and evaluated our concept of self-adaptive data marshalling, that aimed at reducing bandwidth usage. The main idea was to improve inter-process communication by only sending the difference (i.e., the delta) between the current message and the previous. The communication protocol is transparent and designed in the publish/subscribe architectural pattern.
Abstract: Publish/subscribe communication is a common architectural design pattern in component-based software systems used in many of today’s cyber-physical systems to exchange information between distributed software components. These systems typically deal with an increased number of data transfers, with a risk of lacking resources. Our recent domain analysis for a lane-following algorithm of a self-driving miniature car unveiled that the
actual “information increment” between two subsequently sent packets is often small. Such a scenario enables possibilities for a more efficient data exchange by avoiding redundant and/or unnecessary information transfer. In this paper, we propose and evaluate our concept for “self-adaptive data marshalling” that transparently adapts data types in messages to be exchanged by analyzing the actual information increment. The approach could reduce the bandwidth usage by more than 50% in comparison to the current approach, and by approximately 33% compared to the use of the general-purpose compression library zlib.
Paper E
Systematic Evaluation of Three Data Marshalling Approaches for Distributed Software Systems
H. Andrade, F. Giaimo, C. Berger, I. Crnkovic
Proceedings of the Workshop on Domain-Specific Modeling (DSM @ SPLASH). Pittsburgh, USA, 27 October, 2015.
As a follow-up to Paper D, we studied data marshalling approaches further by systematically evaluating three approaches: Google Protobuf, LCM, and our self-adaptive delta approach. We selected Google Protobuf and LCM, two popular approaches, as a way to further assess the efficiency of our proposed approach.
Abstract: Cyber-physical systems like robots and self-driving vehicles comprise complex software systems. Their software is typically realized as distributed agents that are responsible for dedicated tasks like sensor data handling, sensor data fusion, or action planning. The modular design allows a flexible deployment as well as algorithm encapsulation to exchange software modules where needed. The distributed software exchanges data using a data marshalling layer to serialize and deserialize data structures between a sending and receiving entity. In this article, we are systematically evaluating Google Protobuf, LCM, and our self-adaptive delta marshalling approach by using a generic description language, of which instances are composed at runtime. Our results show that Google Protobuf performs well for small messages composed mainly by integral field types; the self-adaptive data marshalling approach is efficient if four or more fields of type double are present, and LCM outperforms both when a mix of many integral and double fields is used.
Paper F
Adaptive Message Restructuring using Model-Driven Engineering
H. Yin, F. Giaimo, H. Andrade, C. Berger, I. Crnkovic
In Paper F, we continued to explore inter-process communication issues by proposing a model-based approach for adaptive message restructuring. The approach aims at reducing message latency by dynamically restructuring messages according to both design-time and runtime properties. Design-time parameters included, e.g., the composition of fields in a message, while runtime parameters included, e.g., message transmission latency, timeouts, and message arrival/drop rates.
Abstract: Message exchange between distributed software components in cyber-physical systems is a frequent and resource-demanding activity. Existing data description languages simply map user-specified messages literally to the system implementation creating the data stream that is exchanged between the software components; however, our research shows that the exchanged information is often redundant and would allow for runtime optimization. In this paper, we propose a model-based approach for adaptive message restructuring. Taking both design-time properties and runtime properties into account, we propose to dynamically restructure user-specified messages to achieve better resource usage (e.g., reduced latency). Our model-based workflow also includes formal verification of adaptive message restructuring in the presence of complex data flow. This is demonstrated by an automotive example.
1.6 Threats to Validity
In this section, we provide an overview of the threats to validity of this work. They are structured according to the approach shown in [24], which classifies validity threats into construct, internal, external, and reliability threats. Detailed threats to validity for each included paper are discussed in their respective chapters.
1.6.1 Construct validity
The literature study leading to RQ1 covered a broad research area. The data extraction process was extensive and manually conducted, and thus subject to interpretation. Further, the granularity of the selected categories, and consequently the answers to the questions and the conclusions drawn, were based on our judgment. In an attempt to reduce bias, two researchers performed the inclusion/exclusion process individually and then discussed the results, refining the categories over several iterations. Disagreements were resolved by a third researcher. We used common principles to guide our research, such as keeping the same or a similar abstraction level across categories, as well as their orthogonality and completeness.
For RQ2, the study was realized within an established middleware to ensure a high degree of software correctness and completeness, meeting the design requirements for real-time systems. The experiments were conducted under the close supervision of expert practitioners, who validated both the research questions and the research procedures.
For RQ3, the design of the evaluation followed the regulations of the actual self-driving vehicles competition. The design of the studies used a generic message representation that can be systematically defined, allowing for a scenario-independent analysis of the performance of the evaluated approaches.
In Paper F, we designed the properties of our validation model based on our experience from previous automotive projects.
1.6.2 Internal validity
In order to minimize the internal validity threats in addressing RQ1, we followed standard systematic mapping study methods that included a pre-defined set of inclusion/exclusion criteria, pilot searches, and the calibration of the search string. Standard methods were also followed for RQ2, in which we systematically compared our current implementation and our proposed method with existing methods that could be used alternatively.
For RQ2, the execution of the sample applications was carried out by a script to ensure precise reproducibility of the experiments; all peripherals, such as networking, were detached; and data was collected via serial communication to limit additional load on the system.
For RQ3, we based our work on the guidelines for design science [25] in the development of the self-driving vehicle platform.
1.6.3 External validity
The results of the study leading to RQ2 can be applied to time-sensitive applications with respect to the hardware and software used. The hardware we used is industrial grade, making the experiment reproducible and the results relatable to similar contexts. The evaluation of our self-adaptive data marshalling approach against Google Protobuf and LCM can be considered relevant, as both of the other approaches are widely used and have shown their applicability in the domain of cyber-physical systems. Thus, the findings presented in this study have an impact on the design of such systems.
Regarding RQ3, our results were obtained using simulation. While this allows us to conduct our study in a repeatable way, the expected savings might differ in reality, as the manufacturing quality of the different sensors as well as environmental factors can influence the volatility of the data, resulting in significant differences between consecutive packets.
In Paper F, we considered the automotive domain as the evaluation context, and only the runtime properties need to be further evaluated using case studies from other domains.
1.6.4 Reliability
We rigorously followed well-established guidelines for conducting systematic literature reviews in order to minimize the reliability threat in RQ1. The concepts, goals, and directions were specified in a review protocol and complied with by all researchers involved. Frequent meetings were held to resolve disagreements and answer questions that arose individually. Extensive information on the process and all data collected are available online to enable verification and possible reproducibility.
For RQ3, the range of the floats used in the evaluation was inspired by applications in the self-driving vehicles domain. As the goal of the delta approach was to address high-frequency data exchanges, the increment values were motivated by the small numeric difference between values in consecutive packets.
1.7 Conclusion
This thesis presents the motivation, procedures, and findings of our research conducted in the area of software deployment on heterogeneous platforms. The overall goal of the Ph.D. project is to understand the role of software deployment in achieving certain non-functional properties when a heterogeneous platform is available. In this licentiate thesis in particular, we focused on two main topics: deployment and communication.
Under deployment, we conducted an extensive systematic literature study to obtain an overview of the area by “mapping out” the field of research, obtaining domain knowledge, and identifying gaps that indicated possibilities for future investigation. We identified several concerns and approaches for software deployment on heterogeneous platforms. Further on the deployment topic, we systematically evaluated a widely known technology for deploying sandboxed software in the context of cyber-physical systems.
Regarding the communication topic, we performed studies to understand how communication is typically done between software components and proposed a communication protocol that enables reduced bandwidth consumption. We compared the results of our approach with well-known data marshalling approaches, and pointed out the cases in which each approach offers better performance. Finally, we proposed a novel model-based approach based on the restructuring of messages, enabling the content of the communication packets to be reorganized. The approach reduces bandwidth consumption by optimizing the use of each fixed-size packet, thus reducing the number of packets that are transmitted.
1.8 Future Work
As future work, we intend to continue investigating software deployment on heterogeneous platforms by considering the following possibilities:
**Exploring industrial settings.** Especially with the popularization of artificial intelligence techniques, heterogeneous platforms can help in improving the performance of software systems in industry. It would be relevant to investigate how heterogeneous platforms can be useful in the context of modern, typically industrial software development processes, such as DevOps.
**Studying the migration to heterogeneous platforms.** In the context of cyber-physical systems, it would be possible to study the feasibility of adopting a heterogeneous computing strategy on our self-driving vehicles’ project. The study would be able to identify the advantages and drawbacks of deploying software onto heterogeneous platforms in the context of a cyber-physical system.
**Performing evaluation studies.** As discovered in the mapping study, there are very few studies evaluating existing techniques, while the vast majority of them propose solutions to a given problem. One could, for instance, set up an experiment to observe a variety of quality attributes when using different frameworks for heterogeneous computing (e.g., OpenCL, CUDA, OpenMP).
Trace++: A Traceability Approach to Support Transitioning to Agile Software Engineering
Felipe Furtado¹, ², Andrea Zisman¹
¹Computing Department, The Open University, Milton Keynes, UK
²Educational Department, CESAR (Recife Center for Advanced Studies and Systems), PE, Brazil
furtado.fs@gmail.com, andrea.zisman@open.ac.uk
Abstract—Agile methodologies have been introduced as an alternative to traditional software engineering methodologies. However, despite the advantages of using agile methodologies, the transition between traditional and agile methodologies is not an easy task. There are several problems associated with the use of agile methodologies. Examples of these problems are related to (i) lack of metrics to measure the amount of work that occurs per sprint, (ii) interruption of a project after several iterations, (iii) changes in the requirements, (iv) lack of documentation, and (v) lack of management control. In this paper we present Trace++, a traceability technique that extends traditional traceability relationships with extra information in order to support the transition between traditional and agile software development. The use of Trace++ has been evaluated in two real projects of different software development companies to measure the benefits of using Trace++ to support agile software development.
Index Terms—Traceability, agile methods, hybrid process.
I. INTRODUCTION
In the last twenty years, several agile methodologies have been proposed to support software development [3][8][12][15][20][28][32][36]. Agile methodologies bring several advantages to the software development life-cycle including, but not limited to, lightweight development processes, small number of documents, frequent deliveries, customer satisfaction, and close communication among stakeholders.
However, despite the advantages of agile methodologies, the transition from ‘traditional’ to ‘agile’ methodologies in software development organizations is not an easy task. In this paper, we use the term ‘traditional software engineering’ to refer to methodologies that place more emphasis on processes, tools, contracts, and plans. We consider two traditional paradigms: Unified Process (UP) [27] and Project Management Body of Knowledge (PMBOK) [33].
Several surveys have been presented that analyse the advantages and challenges of agile methodologies [1][2][14][29][39][40]. As outlined in [14], 64% of the 200 industrial participants in that survey found the transition to agile methodologies confusing, hard, and slow. The participants stated that understanding the necessary amount of rework to be executed is essential for the success and overall cost of using agile methodologies. Other identified challenges in agile adoption were concerned with the lack of team alignment, documentation, and focus; constant changes in the development cycle; and cultural acceptance. In [1][14][40], the participants pointed out issues of the agile methodology related to communication problems, loss of management control, ability to scale agile, and regulatory compliance.
Another problem that has been flagged is related to the adoption of agile management practices [1]. For example, as outlined in [4], “in the development of large systems, the ‘just enough’ documentation goes beyond the traditional set recommended by the agile methods, due to the diversity of elements to be considered, for instance geographic distribution of the teams, necessity to comply with industry regulations, strict IT governance programs, integration of the system being developed with others, or even the presence of not-so-agile people in the teams”. According to [6], agile methodologies are well known for early and frequent releases. However, in some cases agile practitioners are not aware of how changes in functional requirements may affect non-functional requirements, which could cause breaches of security and performance in a system.
In this paper we present Trace++, a traceability approach to assist with the transition from traditional to agile methodologies. Traceability of software systems has been recognized as an important activity in software system development [10][11][24][38]. Traceability relations can support several software development activities such as evolution, reuse, validation, rationale, understanding, and change impact analysis. Traceability relations can improve the quality of the products being developed and reduce time and cost of development.
The use of traceability techniques in agile projects has been advocated in [5][9][16][41]. In agile projects traceability can help with change impact analysis, product conformance, process compliance, project accountability, baseline reproducibility, and organisational learning [9]. In some cases, traceability is seen as a heavy process by agile developers [4][5].
The work presented in this paper complements the work on traceability for agile projects and proposes an extension of traceability relations to represent extra information, in order to assist with the transition from traditional to agile methodologies. More specifically, we concentrate on four main problems related to the adoption of agile methodologies and show how Trace++ can assist with these problems. The work has been developed based on observations and analysis of some real-world agile projects, for which we propose the necessary information to be represented in traceability relations. The approach has been evaluated in two other real-world agile projects.
The remainder of this paper is structured as follows. In Section II, we describe the Trace++ approach, including the types of artifacts and traceability relations used in the work, and the different problems tackled by the approach. In Section III, we present an evaluation of the approach in two different agile projects. In Section IV, we discuss related work. Finally, in Section V, we present some conclusions and future work.
II. Trace++
A. Overview of the Approach
In order to support the transition from traditional to agile methodologies, we propose a traceability approach, called Trace++, between documents generated during traditional software development and agile methodologies. The approach consists of extending traceability relations with extra information. The work concentrates on four problems associated with the transition from traditional software development to agile methodologies. These problems are concerned with (i) the amount of rework that occurs per sprint, (ii) understanding of the high-level scope of a project before beginning the sprints, (iii) lack of non-functional requirements documentation, and (iv) loss of management control (see Subsection II.C).
We extend the standard definition of traceability relations [10] with the notion of information set. The information set contains information necessary to support the four different problems of our concern. For example, in a certain type of Trace++ relation, the information set may contain the percentage of rework in story points variations, in order to assist with the problem of absence of metrics to indicate the amount of rework that occurs in a sprint. More formally, the main elements of Trace++ are:
- \( P \): an agile-related problem;
- \( \Lambda \): trace relations, each composed of a source artifact, a target artifact, a set of additional information, and a relation type;
- \( S = \{S_1, S_2, \ldots, S_j\} \): the set of all source artifacts;
- \( T = \{T_1, T_2, \ldots, T_i\} \): the set of all target artifacts;
- \( I = \{I_1, I_2, \ldots, I_m\} \): the set of all additional information;
- \( Y \): the relation type.
A Trace relation (\( \lambda \)) for a problem \( P \) is given as:
\[
\lambda(P) = \left\{ s_k,\, t_k,\, \{i_{k^*}\},\, y_{k^*} \;\middle|\; \{i_{k^*}\} \subseteq I \right\} \quad (1)
\]
where \( s_k \in S = \{s_1, s_2, \ldots, s_j\} \), \( t_k \in T = \{t_1, t_2, \ldots, t_i\} \), and \( i_{k^*} \in I = \{i_1, i_2, \ldots, i_m\} \), with \( \{i_{k^*}\} \subseteq I \) containing the maximum number of elements necessary to provide information to each traceability relation. In some cases, a complex problem may require various traceability relations. This can be represented as the union of these traceability relations, as shown below:
\[
\Lambda(P) = \lambda_1(P) \cup \lambda_2(P) \cup \ldots \cup \lambda_n(P)
\]
In order to use the proposed Trace++ approach, for each problem \( P \), the development team should collect the elements represented in equation (1), and use these elements to analyse the problem. Examples of the types of elements to be represented in equation (1) are shown in Subsection II.C. Guidelines for the analysis of the elements are described in Section III.
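To make the formal model concrete, the elements of equation (1) can be sketched as a small data structure. This is an illustrative sketch only, not part of Trace++ itself; the names `TraceRelation` and `trace_set` are our own:

```python
from dataclasses import dataclass

# Illustrative only: the paper does not prescribe an implementation.
@dataclass(frozen=True)
class TraceRelation:
    """One lambda(P) tuple from equation (1)."""
    source: str        # s_k: a source artifact, e.g. a user story id
    target: str        # t_k: a target artifact, e.g. a requirement id
    info: frozenset    # {i_k*} subset of I: the additional information set
    rel_type: str      # y: the relation type, e.g. "<implemented by>"

def trace_set(*relations: TraceRelation) -> set:
    """Lambda(P): the union of the individual relations for a problem P."""
    return set(relations)

# Two hypothetical relations addressing Problem P1 (rework metrics):
r1 = TraceRelation("US001", "FR007", frozenset({"rework: 160%"}),
                   "<is related with>")
r2 = TraceRelation("US001", "Class1", frozenset({"rework: 250%"}),
                   "<implemented by>")
p1 = trace_set(r1, r2)
print(len(p1))  # 2
```

Modelling \( \Lambda(P) \) as a set gives the union in the formula above for free: duplicate relations collapse automatically.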
B. Artifact Types
The aim of the approach is to provide traceability relations between artifacts generated in both traditional and agile methodologies. We analysed various types of artifacts that are generated when using different types of agile methods, and artifacts that are generated when using traditional software engineering methods, in order to identify the artifacts that are relevant to the transition between both methods.
**TABLE I. AGILE METHODS AND ARTIFACTS (E: ENGINEERING-RELATED; M: MANAGEMENT-RELATED)**
<table>
<thead>
<tr>
<th>Method</th>
<th>Name</th>
<th>E</th>
<th>M</th>
<th>Author</th>
</tr>
</thead>
<tbody>
<tr>
<td>APM</td>
<td>Product Vision</td>
<td>X</td>
<td>X</td>
<td>[21]</td>
</tr>
<tr>
<td></td>
<td>Product Roadmap</td>
<td>X</td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Release Plan</td>
<td>X</td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Performance Card</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Project Datasheet</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Project Charter</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td>Scrum</td>
<td>Product Backlog</td>
<td>X</td>
<td>X</td>
<td>[36]</td>
</tr>
<tr>
<td></td>
<td>Sprint Backlog</td>
<td>X</td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Task Board</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Impediment List</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Retrospective Timeline</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Release Burndown/up Chart</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Sprint Burndown/up Chart</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Product Burndown/up Chart</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td>FDD</td>
<td>Feature Cards</td>
<td></td>
<td>X</td>
<td>[28]</td>
</tr>
<tr>
<td></td>
<td>Domain Model (UML-color)</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Parking Lot Chart</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Feature Breakdown Structure</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td></td>
<td>Development Plan</td>
<td></td>
<td>X</td>
<td></td>
</tr>
<tr>
<td>DSDM</td>
<td>Functional Prototype</td>
<td>X</td>
<td></td>
<td>[15]</td>
</tr>
<tr>
<td></td>
<td>Theme</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Epic</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>User Stories</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Spike/Research Stories</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Acceptance Criteria</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Theme Screening Matrix</td>
<td>X</td>
<td></td>
<td>[8]</td>
</tr>
<tr>
<td></td>
<td>High Level Design</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Architectural Spike</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Code Refactored</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Unit Tests</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Acceptance Tests</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Lean/ Kanban</td>
<td>Kanban System Board</td>
<td>X</td>
<td></td>
<td>[32][3]</td>
</tr>
<tr>
<td></td>
<td>Visual Card</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Cumulative Flow Diagram</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Common</td>
<td>Kano Matrix</td>
<td>X</td>
<td></td>
<td>[19]</td>
</tr>
<tr>
<td></td>
<td>Persona</td>
<td>X</td>
<td></td>
<td>[30]</td>
</tr>
<tr>
<td></td>
<td>Wireframe/Mockups</td>
<td>X</td>
<td></td>
<td>[22]</td>
</tr>
<tr>
<td></td>
<td>User Story Mapping</td>
<td>X</td>
<td></td>
<td>[31]</td>
</tr>
<tr>
<td></td>
<td>Risk Burndown/up Chart</td>
<td>X</td>
<td></td>
<td></td>
</tr>
<tr>
<td></td>
<td>Risk Radar Chart</td>
<td>X</td>
<td></td>
<td>[13]</td>
</tr>
<tr>
<td></td>
<td>Risk Backlog</td>
<td>X</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Table I shows the agile methods and the respective artifacts that were used in this analysis. We have classified the artifacts into two groups, based on the definition proposed in [37], namely (i) engineering-related artifacts (E) such as requirements, design, coding, and testing specifications; and (ii) management-related artifacts (M) such as project management, measurement and analysis, and management processes. These groups are important to support traceability relations in the various stages of the software development lifecycle.
The agile artifacts listed in Table I are mapped to artifacts generated during traditional software engineering development, in order to identify possible target artifacts. For this mapping, we have considered artifacts based on several criteria: (a) artifacts that belong to the Unified Process (UP) [27] or the PMBOK [33] paradigms; (b) artifacts that are related to engineering and management groups, as per the definition in [37]; and (c) artifacts that appear in all the different stages of the software development lifecycle.
C. Agile Problems
We have conducted a study involving work reported in 23 papers [6][9][16][23][41] and six industrial reports [1][2][14][29][39][40]. Based on this study, we identified several problems and challenges that undermine the adoption of agile methods. Examples of these problems and challenges are:
- Absence of the use of metrics to indicate the amount of rework that occurs in each sprint [14];
- Abandonment of the project after several iterations due to misunderstanding of the high-level scope of the project before beginning sprints [14];
- Constant changes in requirements [14][40];
- Large projects with distributed teams [1][40];
- Lack of sufficient documentation [29][40];
- Communication failure between the various stakeholders of the project on the evolution of requirements [14][39][40];
- Loss of management control [29][39][40];
- Low quality of software maintainability in formal projects in certain industries, e.g. financial services, healthcare, telecom and government [40].
The work in this paper tackles four of these problems. We have selected problems that are often cited in the literature [6][16][26][41], and that involve tracking information between different types of artifacts generated during the software development life-cycle. In the following we describe the four problems of our concern in terms of their context, associated source and target artifacts, additional information set, and proposal to establish traceability relations. Please note that these problems are not only related to agile methodologies, but also appear during the transition from traditional software development.
Problem 1 (P1): Absence of metrics to indicate the amount of rework that occurs in each sprint.
Context: One of the premises of agile methods is that the cost and schedule of a project are fixed, while the scope varies. This gives Product Owners (PO) the flexibility to prioritize backlog items in the way that best meets their business needs, without exceeding the time and cost of development [36][32], and it is important for producing items of high business value. For example, the PO may prioritize at the beginning of a project a set of requirements that later change during a sprint due to market needs; since these changes do not interfere with the schedule and cost of the project, the manager may substitute them with other backlog items that do not require contract changes. However, when there are no metrics that indicate the amount of rework caused in each sprint, multiple backlog items are moved to the bottom of the list as having lowest priority, and may not be implemented until the end of the project, causing customer dissatisfaction.
Source artifact: User story (US).
Target artifact: Software requirements specification, persona, wireframe, class diagram.
Additional information: Percentage of rework (story points variation), rework (business value variation), rework (new personas, wireframes and classes involved in user story);
Proposal: At the end of the execution of each sprint or at the next sprint planning meeting, the team and the Scrum Master should calculate the percentage change of story points (and/or business value) between what has been planned at the beginning of the sprint and what actually was delivered at the end of the sprint. In addition, the team members should calculate how much was spent on rework activities due to changes requested along the sprint. The same can be done with the number of new personas, wireframes, and classes identified and created along the sprint to meet business goals.
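The core calculation in this proposal, the percentage variation between planned and delivered story points (or effort), can be sketched as follows. The function name and the numbers are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch of the I1/I2 rework metric: percentage variation
# between the work planned at sprint planning and the work actually
# delivered at the end of the sprint.

def rework_percentage(planned: float, actual: float) -> float:
    """Percentage increase of actual over planned work for one user story."""
    if planned <= 0:
        raise ValueError("planned work must be positive")
    return (actual - planned) / planned * 100.0

# A user story planned at 10 story points that ends up costing 26
# shows a 160% rework variation:
print(rework_percentage(10, 26))  # 160.0
```

The same function applies unchanged to business value (metric I2) or to counts of new personas, wireframes, and classes (metrics I8 to I10).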
Problem 2 (P2): Lack of understanding about the high-level scope of a project before starting a sprint.
Context: In some cases a user story is developed during a sprint, but its prioritization changes or it becomes an epic due to its size. In this case, traceability relations could help in providing more details about the user story and the scope of activities to be implemented. Moreover, these traceability relations will help with impact analysis of changes in user stories, which can cause impacts on architecture or test scenarios of the system. The lack of understanding of the scope can cause abandonment of the project after a few iterations.
Source artifact: User story.
Target artifact: Class diagram, sequence diagram, use case diagram.
Additional information: Identification of the preceding sprints in which the user story has been implemented.
Proposal: During the stage of sprint planning, the development team will have access to all information about user stories from previous sprints, as well as to traceability relations between diagrams (e.g., class, sequence, and use case diagrams).
Problem 3 (P3): Lack of documentation about non-functional requirements (NFR).
Context: The lack of documentation and incomplete information on non-functional requirements before starting a sprint
---
Footnote: Due to space limitations, we reference here some of the main papers and industrial reports. A complete list of these papers can be found at http://bit.ly/1RK7T4f.
may delay important architectural decisions and/or prevent the identification of important development tasks.
**Source artifact:** User story, Story acceptance test.
**Target artifact:** Architecture design diagrams, test design;
**Additional information:** Story acceptance criteria (e.g., performance, security, usability).
**Proposal:** During the stage of sprint planning, the development team will have access to traceability relations between user stories, architecture documents, test scenarios, and the list of acceptance criteria related to performance, security, and usability, among others. Access to this information during the sprint planning will give the team members the opportunity to make architectural changes as soon as possible, to better define the user story acceptance criteria, to validate test scenarios, and to identify technical tasks that would only be identified in development life cycle. This will avoid delays in completing the sprint or avoid increasing costs to a project.
**Problem 4 (P4):** Loss of management control when the project size set in the contract is measured in function points or use case points.
**Context:** Normally, in projects involving public companies or more traditional institutions such as financial and telecom organisations, software contracts take into account standard measures to define the size of a project. Typically, this is done based on function point analysis [17] or use case points [25]. In these projects, during the transition from traditional to agile methods, conflicts may arise when accounting for these measures against agile ones (e.g., story points, ideal days).
**Source artifact:** User story, epic, theme.
**Target artifact:** Software requirements specification, use case specifications.
**Additional information:** Story points implemented, function points or use case points implemented per sprint.
**Proposal:** During the stage of sprint planning, the development team and the Scrum Master shall estimate the scope of the sprint in story points and function points (or use case points). At the end of the sprint, the same team must conduct a recount in order to understand the variation of the size of the project that has been implemented in relation to the planned project. For every new sprint, these values should be presented to the product owner and a comparison with the original value of the contract is made in order to renegotiate previously agreed conditions. This comparison will be based on the requirements document traced with user stories, epics, and themes.
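The recount step of this proposal can be sketched as a simple comparison of cumulative function points against the contracted total. All names and figures in the sketch below are illustrative assumptions:

```python
# Hypothetical sketch of the P4 recount: per sprint, record both the agile
# measure (story points) and the contractual measure (function points),
# then compare the cumulative function points against the contract.

CONTRACT_FP = 1100  # function points agreed in a fixed-price contract

sprints = [
    {"story_points": 54, "function_points": 124},
    {"story_points": 51, "function_points": 132},
    {"story_points": 50, "function_points": 115},
]

delivered_fp = sum(s["function_points"] for s in sprints)
remaining_fp = CONTRACT_FP - delivered_fp
print(delivered_fp, remaining_fp)  # 371 729
```

Presenting `delivered_fp` and `remaining_fp` to the product owner after each sprint is what enables the renegotiation of previously agreed contract conditions.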
**D. Traceability Relations**
Based on the artifact mappings described in Subsection II.B and the agile problems described in Subsection II.C, a large number of traceability relations can be generated by combining the various artifacts. However, not every traceability relation can support the transition from traditional to agile software engineering processes. For example, a traceability relation between the project data sheet and the project charter artifact, or a traceability relation between the risk burndown chart and the risk management data sheet, cannot assist with the transition process.
Figure 1 shows the artifacts and their relations that are relevant to the work in this paper. In the approach, we propose

different types of traceability relations and different types of information sets, as follows:
- **Types of traceability relations between artifacts**: `<is attened>`, `<has>`, `<is one>`, `<is part of>`, `<is related with>`, `<done when pass>`, `<is tested by>`, `<uses>`, `<serve>`, `<constrained by>`, `<implemented by>`
- **Types of information sets**: `{actors}`, `{business rules}`, `{test case}`, `{functional requirement Id}`, `{class name}`, `{scenario}`, `{flow}`, among others.
Table II summarises the various types of traceability relations in Trace++ with respect to the agile problems (P1 to P4) relevant to this paper. The traceability relations were created based on the elements of equation (1).
### III. Evaluation
Trace++ has been evaluated in two real projects (projects A and B) in one telecom and in one banking organisation in Brazil, located in Porto Digital\(^2\) in Recife. The main goal of the evaluation was to analyse how Trace++ contributes to alleviate the four agile transition problems described in this paper. More specifically, the evaluation analysed if Trace++ can:
(a) improve decision-making on how to prioritize backlog items and better visualize possible items that may not be implemented (Problem P1);
(b) improve understanding of the project scope before the beginning of the sprint, in order to prevent abandonment of the project after a few iterations (Problem P2);
---
\(^2\) \url{http://www.portodigital.org}
(c) improve understanding of the non-functional requirements before the beginning of a sprint, in order to avoid delays in important architectural decisions and/or failure to identify important development tasks (Problem P3); and (d) provide better control of the project scope and minimize the conflicts that arise when traditional software development measures are used instead of agile measures (Problem P4).
**Problem P1:** Absence of metrics to indicate the amount of rework that occurs in each sprint.
**Goal:** improve decision-making on how to prioritize the backlog items and better visualize what are the risk items;
**Question:** how much rework occurred along the sprint because of requirement change requests?
**Metrics** (as per Table II):
- I1 - % Rework (story points variation);
- I2 - % Rework (business value variation);
- I8 - % Rework (new personas involved in user story);
- I9 - % Rework (new wireframes involved in user story);
- I10 - % Rework (new classes involved in user story).
**Receiver:** Product Owner (PO);
**Supplier:** Scrum Master / Team;
**Periodicity:** every sprint (weekly, fortnightly or monthly);
**Collection time:** at the end of each sprint;
**Where it will be stored:** XML format;
**How will the metrics be collected:** the Scrum Master will have a form to fill in the data, which should be collected in four phases: sprint planning 1 and 2, the daily meeting, and the end of the sprint:
- **Sprint Planning 1:** for each user story selected by the PO, the team calculates the amount of story points;
- **Sprint Planning 2:** the team breaks down each user story into smaller tasks, preferably at most 16 hours;
- **Daily meeting:** the Scrum Master registers in the bug tracking tool the changes in each user story (source) and the number of classes, personas, and wireframes (targets) that have been added, as well as the amount of additional effort;
- **At the end of sprint:** the team calculates the amount of story points, the effort for each user story and the percentage increase.
**How will the metrics be analyzed:** during the planning meeting, at which the backlog is prioritized, the PO will know how much of the backlog has already been consumed. The PO will therefore have more elements to support decision-making and to identify the backlog risk items.
**Evaluation criteria:** Given the rework percentage presented at every new sprint, the PO will be asked if the approach provides more visibility of the increase of the project scope and, therefore, if it is helpful to replan the backlog from previous sprints.
Fig. 2. Evaluation Guideline for Problem 1
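The guideline above states that the collected metrics are stored in XML format, but it does not prescribe a schema. A minimal sketch of such storage, with element and attribute names that are our own assumptions, could look like:

```python
import xml.etree.ElementTree as ET

# The guideline only says the metrics are stored "in XML format"; the
# schema below (element and attribute names) is an assumed example.

def metrics_to_xml(sprint_id: str, metrics: dict) -> str:
    """Serialize per-sprint rework metrics (e.g. I1, I2) to an XML string."""
    root = ET.Element("sprint", id=sprint_id)
    for metric_id, value in metrics.items():
        elem = ET.SubElement(root, "metric", id=metric_id)
        elem.text = f"{value}%"
    return ET.tostring(root, encoding="unicode")

xml = metrics_to_xml("sprint-30", {"I1": 160, "I2": 40})
print(xml)
```

A record like this, produced at the end of each sprint, is what the Scrum Master would hand to the PO at the next planning meeting.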
Trace++ was evaluated based on the guidelines proposed in [35], following four main steps: (i) planning, (ii) data collection, (iii) analysis of collected data, and (iv) recording of the results. The planning step was executed based on the GQM (Goal-Question-Metric) paradigm [7]. For each agile problem we describe a conceptual level (goal), an operating level (question), and a quantitative level (metric). As an example, consider the guideline for Problem P1 described in Figure 2. The guidelines for the other problems are available at http://bit.ly/1RK7T4f.
Table III summarises information about the companies and projects (A and B) used in the evaluation, as well as the methods used in these projects, the types of contracts, and the respective agile problems associated with each project. For each project, Table IV shows information about the number of sprints, user stories, tasks, and function points that have been collected during the evaluation phase. As shown in Table III, project A was developed by a team of nine members, using hybrid methodologies, and is related to problems P1, P2, and P3; while project B was developed by three parallel teams, with a total of nine people, and is related to problem P4. Project A started in 2010 and it is still under development and maintenance; Project B started recently and it is also under development.
**TABLE III. EVALUATED PROJECTS**
<table>
<thead>
<tr>
<th>Project</th>
<th>Area</th>
<th>Team</th>
<th>Methods</th>
<th>Contract type</th>
<th>Problem evaluated</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>Telecom</td>
<td>9 people in the project team (project manager, team leader, software engineers, designers and test engineers)</td>
<td>Hybrid (UP, Scrum and XP)</td>
<td>Fixed price and schedule; Flexible scope.</td>
<td>P01, P02, P03</td>
</tr>
<tr>
<td>B</td>
<td>Bank</td>
<td>3 parallel teams with 6 software engineers, 2 testers and 1 designer</td>
<td>Hybrid (UP and Scrum)</td>
<td>Fixed price; 1100 functions points.</td>
<td>P04</td>
</tr>
</tbody>
</table>
Project “A” is run by a company that develops solutions for a telecom company in Brazil. The project is about the development and maintenance of a billing system. The software development process used by the client is based on the Unified Process (UP), while the team's process uses a hybrid approach of Scrum, XP, and UP. The types of artifacts generated by the project are: requirements document, use cases, user stories (documented in the JIRA tool\(^1\) and on the team task board), acceptance criteria, class diagrams, sequence diagrams, test plan, test design, and unit tests.
Project “B” is run by another company that develops solutions for public sectors in banking. The project is about the development of a new system for a public bank, with a contract based on function points. The software development process used by the three parallel teams is a hybrid approach of Scrum and UP. The project has a public bidding contract of 1100 function points, which required documentation based on traditional processes. It consists of three subsystems developed by three parallel teams (six developers, two testers, one designer). The user stories were prioritized so that each subsystem would not last more than 30 days, and each subsystem was divided into three sprints of 10 days each.
---
1 https://www.atlassian.com/software/jira
Given the data and documents available for projects A and B, 69 Trace++ traceability relations were manually created for the two projects used in the evaluation. Examples of these traceability relations for each type of problem are shown in Figure 3. In the figure, US is an identifier used by the project to represent a user story, NFR stands for non-functional requirements, and FR stands for functional requirements. A complete list of all traceability relations can be found at http://bit.ly/1RK7T4f.
**TABLE IV. PROJECT INFORMATION**
<table>
<thead>
<tr>
<th>Project</th>
<th>Sprints</th>
<th>User Stories</th>
<th>Tasks</th>
<th>Function Points</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>3 sprints, 15 days each</td>
<td>9 user stories totaling 39, 29 and 32 story points per sprint, respectively</td>
<td>264 hours of development effort</td>
<td>Not applicable</td>
</tr>
<tr>
<td>B</td>
<td>3 sprints, 10 days each</td>
<td>24 user stories divided into three subsystems totaling 54, 51 and 50 story points per subsystem</td>
<td>Not collected</td>
<td>124, 132 and 115 function points per subsystem for a total of 1100 FP</td>
</tr>
</tbody>
</table>
**Problem 1**
P1(US001, FR007, 160%) ∪ P1(US001, (Class1, Class2), 250%)
This example shows the highest effort variation (250%) during one sprint, caused by user story US001, functional requirement FR007, and the additional classes “Class1, Class2”.
**Problem 2**
P2(US001, (Class-domain-model, Class1, Class2), (30, 31, 37)) ∪ P2(US001, (Class3, Class4), (30, 31, 37))
This example shows previous sprints (30, 31, 37) involving user story US001 and related UML diagrams.
**Problem 3**
P3(US001, (NFR2-1, NFR4-2-3, NFR4-2-4), (AC1, AC2, AC3, AC4, AC5))
This example shows a traceability relation between user story US001 and a non-functional requirement (NFR2-1), which requires automation scenarios related to SOAP and HTTP APIs, with an acceptance criterion (AC1) that also needs to maintain automation scenarios.
**Problem 4**
This example shows three user stories (US001A, US002A, US003A) related to functional requirements FR001, FR002, FR006, FR008, and FR013, representing 18 story points (5+5+8), with the whole sprint concluded with 41 function points.
Fig. 3. Example of Traceability Relations
In the following we present the results of the analysis of the collected data for each problem.
**Problem 1 (P1): Absence of metrics to indicate the amount of rework that occurs in each sprint.**
Figure 4 shows the story point variation in terms of size per user story, and Figure 5 shows the task effort variation in terms of hours per user story, for project A. The graph in Figure 4 shows a decrease in the range of changes between the first and the other user stories (from 160% to 60%) when using the traceability relations provided by Trace++. As shown in the figure, some user stories did not have variations in size. This is because the scope of these user stories had already been well defined during sprint planning. Although 60% is still a high value for changes, the approach provides a better view of the amount of rework required, and mechanisms to reduce the rework over the next sprints in order to minimize the risk items in the product backlog.
A similar situation occurs in Figure 5, in which the effort variation in the first user story was reduced from 250% to 46%. This reduction was possible due to the amount of rework specified in the traceability relations, which was not known before. In this particular case, the remaining variation peaks (100% and 133%, respectively) were related to low complexity in the user stories (eight and three story points, respectively), which in absolute numbers represented eight hours of additional work.
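One possible reading of the variation percentages above is extra effort expressed as a fraction of the planned effort. The sketch below is our interpretation, not a formula given in the paper:

```python
def variation_pct(planned_hours, actual_hours):
    """Rework variation as a percentage of the planned effort
    (our reading of the percentages reported in this section)."""
    return round((actual_hours - planned_hours) / planned_hours * 100)

# Under this reading, a low-complexity story planned at 8 hours that
# needed 8 extra hours of rework shows up as a 100% variation peak:
print(variation_pct(8, 16))  # 100
```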
**Analysis:** The use of traceability relations from Trace++ meant that in every sprint planning meeting, at which the backlog was prioritized, the product owner (PO) knew how much of the total backlog had been consumed. This gave the PO more information to support decision-making and, therefore, to identify the risk items that could be left out of the project. In addition, the PO confirmed that the approach provided more visibility of increases in the scope of the project, assisting with the replanning of the backlog in relation to previous sprints. It was also confirmed that it is not necessary to wait until the end of the project to complete the analysis phase.
Another advantage of the approach concerned the analysis of the consolidated data for all the sprints. In this case, the PO noticed that there was a decrease in the size of the rework variation after the first sprint was evaluated (from 70% to 38%). The same occurred in relation to the task effort, for which the rework effort variation decreased after the first sprint was evaluated (from 45% to 35%).
**Other benefits highlighted by team:** The team that participated in the evaluation and in the development of project A, highlighted the following benefits of the Trace++ approach:
1. **“The traces are created iteratively, at the end of each sprint. Thus, it avoids an additional effort of creating a traceability matrix around the legacy system”;**
2. **“The percentages are presented and this has helped in making the decision of what will be prioritized between the choice of new items and changes in current items”;**
3. **“The graphics with the percentage variation (story points and effort) help at the time to replan the backlog and to identify the elements that are most affected by the changes. For example, if a particular class is heavily affected by the changes, it may be appropriate to hold a refactoring activity to optimize it, to establish a pair-programming rotation so that more people know its contents, or to convince the team to perform TDD (Test Driven Development) to create more test classes in order to automate the regression tests”**.
**Problem 2 (P2): Lack of understanding about the high-level scope before starting the sprint.**
**Analysis:** In the case of problem P2, the team also agreed that the Trace++ approach provides more visibility about the scope of the sprint. The team affirmed that in some cases the approach could influence problem P1, since the improved understanding of the scope of the project may reduce the amount of rework variation. This behavior was observed between two consecutive sprints, where the effort variation was reduced from 45% to 35%. In addition, during the second sprint planning meeting, in which the tasks in the sprint backlog are detailed, the team was able to get a better picture of the traceability information involving user stories and class and sequence diagrams, helping with the detailing of the tasks that were part of the sprint backlog.
**Other benefits highlighted by team:** The participants highlighted the following: “Considering the specific context of the project, where the requirements have evolved over the last five years, some more experienced members of the team have not made much use of the information related to the class and sequence diagrams. However, due to high staff turnover, such information can be essential to improve understanding of the scope of each sprint backlog, as was the case for two developers who recently joined the team”.
**Problem 3 (P3): Lack of documentation about non-functional requirements (NFR).**
**Analysis:** In the case of problem P3, the use of Trace++ provided the team with information about user stories and non-functional requirements previously defined in the project architecture document. Access to this information during the sprint planning stage gave the team the opportunity to review whether there was a need for architectural changes in the system, and to better define the acceptance criteria of user stories. This helps to avoid delays in completing the sprint and increases in project costs.
The team also agreed that this approach supports the alignment between the constraints and quality attributes defined in the project architecture document and the acceptance criteria of user stories.
**Other benefits highlighted by team:** The participants highlighted the following: “In the specific case of the selected user stories, although few architectural impacts were identified, the approach was also helpful for new members of the team, who could question alternative ways to meet certain non-functional requirements, for example, the need to automate some scenarios using the JUnit framework⁴”.
**Problem 4 (P4): Loss of management control when the project size set in the contract is measured in function points or use case points.**
**Analysis:** In the case of problem P4, in every new sprint of project B the percentage deviation in the number of function points was presented to the PO for comparison with the original value specified in the contract. This was important to allow the PO to plan each new sprint considering the consumption of accumulated function points. In this exercise, the original requirements document was used as a basis, considering the requirements and user stories indicated in each traceability relation. The traceability relations helped the team to see that the requirements were driving more changes along the sprints, and the use of the relations helped the team gain more control over changes in the project scope.
The PO also commented that this helped him to renegotiate the scope of the project with his senior manager, since every 10 days (the sprint size) he was shown the consumption of the 1100 contracted function points and was therefore able to avoid surprises at the end of the project. Figure 6 shows the variation of function points per sprint. As shown in the figure, at the end of the third sprint the PO could be notified that an extra 69 function points had been developed. This was used in the planning of the next subsystem, which would initially have had 805 function points available, but was reduced to 736.
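The bookkeeping described above is simple arithmetic: overruns in one subsystem shrink the function-point budget available to the next. A minimal sketch, using the numbers from the text (variable names are ours):

```python
# Function-point budget bookkeeping for Problem 4 (numbers from the text).
contract_total_fp = 1100       # total FP fixed in the contract
extra_fp_sprint3 = 69          # extra FP developed by the end of sprint 3
next_subsystem_planned = 805   # FP initially available for the next subsystem

# The overrun is deducted from the next subsystem's budget:
remaining = next_subsystem_planned - extra_fp_sprint3
print(remaining)  # 736
```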
**Other benefits highlighted by team:** The participants highlighted the following: “This approach allows the team to keep using story points as normal to track the progress of the sprint backlog through the burndown chart, and only at the end of the sprints do we count the function points. This helps the PO and Scrum Master with the visibility of the project scope variation”.
4 http://junit.org
IV. RELATED WORK
Several approaches and techniques have been proposed to support software traceability [10][11][18][24][38]. However, the majority of existing approaches mainly discuss the use of traceability techniques in traditional software development processes and not in agile projects. More recently, some approaches have been proposed to use traceability in agile projects [4][6][9][18][23][26][34][41].
The work in [9] provides general guidelines for using traceability in different types of agile projects, depending on the size, longevity, complexity, and criticality of the project. According to [18], the application of traceability concepts to agile projects is still in its infancy. In their work, the authors proposed a roadmap providing guidelines to simplify traceability tools and provide traceability relations relevant to agile projects. The work focuses on the Scrum [36] and XP [8] methodologies and proposes different types of traceability relations between: Stakeholder-User story, User story-User story, User story-Acceptance criteria, User story-Test cases, and Test cases-Refactoring. The different traceability relations were not seen with the same level of importance by the various agile developers, and some stakeholders created more relations than others.
In [23], the author introduces the concept of traceability types identified through interviews conducted with developers, testers, configuration managers, product owners and Scrum Masters from multiple agile projects. The author concluded that the following traceability types are important: Stakeholder-Requirement; Requirement-User story; Requirement-Code; Requirement-Test cases; User evaluation-Version.
The work in [4] describes a traceability management tool to ensure traceability among user stories, traditional requirements documents, test specifications, architecture design, and source code. However, to guarantee a non-invasive traceability, the authors understand that “traceability techniques should be minimally intrusive, in the sense that people should be able to keep using the tools they are used to for creating the artifacts and still be able to maintain traceability among the artifacts produced”.
In [6], the authors present a Traceability Process Model (TPM), which is compatible to agile development processes such as Scrum and FDD, to support traceability of non-functional requirements. In [34] the authors propose an auditing model for ISO 9001 traceability requirements that is applicable in agile (XP) environments. The work in [41] analyses the benefits against the challenges of using traceability in agile software projects. The authors advocate the use of traceability to help software companies with more focused customers. The work in [26] integrates traceability within Scrum development process.
Despite some advances in the topic, the works involving traceability and agile processes are still immature. Our Trace++ approach contributes to the area by providing traceability relations between artifacts generated during traditional and agile software development, and by assisting the problem of transitioning from traditional to agile projects, since some organisations use hybrid processes.
V. CONCLUSIONS AND FUTURE WORK
In this paper we present Trace++, a traceability technique that extends traditional traceability relationships in order to support the transition from traditional to agile software development. We concentrate on four real problems that exist in agile projects, namely (i) lack of metrics to measure the amount of rework that occurs per sprint, (ii) lack of understanding about the high level scope of a project before starting the sprint, (iii) lack of documentation about non-functional requirements, and (iv) lack of management control. The work has been evaluated in two agile projects involving two organisations. The results of the evaluation are discussed in the paper.
Currently, we are extending the work to support the generation of Trace++ relations automatically and to evaluate the cost of generating these relations. We also plan to evaluate the use of Trace++ with respect to different characteristics of a project, such as size, complexity, and clarity, as proposed in [9]. Another area concerns the evaluation of when, in the life-cycle of agile projects, traceability relations should be created. The work is also being extended to support other types of problems relevant to industry. Moreover, the approach should be extensible, so that the definition of a new problem does not require the identification of new traceability relation types but instead allows for the reuse of existing Trace++ information.
ACKNOWLEDGMENT
This research work was supported by the Brazilian National Research Council (CNPq) of the Ministry of Science, Technology and Innovation of Brazil, process #206556/2014-4. The international cooperation with the Open University was part of the Science without Borders’ program.
REFERENCES
[23] M. Jacobsson, “Implementing traceability in agile software development”, Department of Computer Science, Faculty of Engineering (LTH), Lund University, SE-221 00 Lund, Sweden.
Middle East Technical University
Department of Computer Engineering
CENG490 Computer Engineering Design
Fall 2010
Initial Software Design Document
for
watch & touch
by DialecTech
Giray Havur (1630870)
Melike Ercan (1560135)
Utku Şirin (1560838)
Yaman Umuroğlu (1560614)
Table of Contents
1. Introduction
   1.1. Problem Definition
   1.2. Purpose
   1.3. Scope
   1.4. Overview
   1.5. Definitions, Acronyms and Abbreviations
   1.6. References
2. System Overview
   2.1. The Interactive Whiteboard Client
   2.2. The Collaboration Client
3. Design Considerations
   3.1. Design Assumptions, Dependencies and Constraints
      3.1.1. Interactive Whiteboard Client
      3.1.2. Collaboration Client
   3.2. Design Goals and Guidelines
4. Data Design
   4.1. Data Description
      4.1.1. Description of data entities
      4.1.2. Annotation data storage
      4.1.3. Supported content file formats
   4.2. Data Dictionary
5. System Architecture
   5.1. Architectural Design
      5.1.1. Architecture of Interactive Whiteboard Client
      5.1.2. Architecture of Collaboration Client
   5.2. Description of Modules
      5.2.1. TaskManagement module
      5.2.2. ContentDisplay module
      5.2.3. Drawing module
      5.2.4. Sessions module
      5.2.5. WiimoteInput module
      5.2.6. Configuration module
6. User Interface Design
   6.1. Overview of User Interface
      6.1.1. User Interfaces for the IWBC
      6.1.2. User Interfaces for the Collaboration Client
   6.2. Screen Images
   6.3. Screen Objects and Actions
7. Libraries and Tools
   7.1. Qt
      7.1.1. Description
      7.1.2. Usage in Watch & Touch
1. Introduction
This Initial Software Design Document for Watch & Touch provides the definitions needed to conceptualize and further formalize the design of the software whose requirements and functionalities were summarized in the previous requirements analysis report. The aim is to provide a guide to a design that could be easily implemented by any designer reading this report.
1.1. Problem Definition
The aim of Watch & Touch is to create a complete, free & open source, multi-platform, interactive whiteboard system which supports multi-touch gestures, complemented by an in-classroom collaborative working client to increase student participation in classroom activities. Hardware and software solutions targeting these goals have existed for many years, especially as an integrated part of “smart class” solutions, but deployment is not widespread (only 2 smart classes in METU) due to their high cost: around a thousand dollars[2] for interactive whiteboard units, not including the projector, and much more for complete smart class solutions. A review of existing solutions can be found in the 1.4. User and Literature Survey section of the Watch & Touch SRS[3]. Given the existing solutions and methods, it is desired to create a complete, free & open source, multi-platform, interactive whiteboard system that can be built at the relatively low cost of its available off-the-shelf hardware components, as well as an in-classroom collaborative working environment to complement the classroom activity when the necessary hardware is available.
1.2. Purpose
In the Watch & Touch SRS[3] document, Watch & Touch’s desired features and requirements were stated. This SDD is intended to provide a software system design which will satisfy the given functional and non-functional requirements in line with the provided assumptions and constraints. As the design of the system is the core of its development, the document is intended to be viewed primarily by the DialecTech team which will be developing Watch & Touch, and will be presented to the Department of Computer Engineering faculty as part of the Senior Computer Engineering Design (CENG491) course.
1.3. Scope
The scope of this ISDD is to provide information about the design of the project. This document covers the architectural design, data design, procedural design, design constraints, and the development schedule. The hardware and software requirements and the working environment are also explained.
1.4. Overview
This document provides information about Watch & Touch software system.
As explained in section 2, the system consists of two separate but interconnected software applications, named the Interactive Whiteboard Client (IWBC) and the Collaboration Client (CBC). This document explains these two components in detail. In section 3, design issues such as assumptions, dependencies and constraints, and the design goals and guidelines that are used, are explained. In section 4, information about the data design and the data entities in the system is provided. Section 5 is about the system’s overall architecture; the architectural design of the software applications and a detailed description of the modules are explicated. Section 6 provides design details on the user interfaces. Section 7 covers the libraries and tools that will be used during software development. In section 8, time planning and scheduling issues are presented in a Gantt chart. The document ends with the conclusion given in section 9. Additional UML diagrams are provided for further clarification in section 10.
1.5. Definitions, Acronyms and Abbreviations
Definitions, Acronyms and Abbreviations in Watch & Touch Software are explained in the following table.
<table>
<thead>
<tr>
<th>Term</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Wiimote</td>
<td>The hardware controller designed by Nintendo for use with the Nintendo Wii gaming console, that offers the tracking of movements of up to 4 infrared points</td>
</tr>
<tr>
<td>Watch & Touch</td>
<td>The software system hereby documented, including the IWBC and CBC clients</td>
</tr>
<tr>
<td>machine</td>
<td>A computing device, used either by the instructor or one of the students, that fulfills the Watch & Touch requirements</td>
</tr>
<tr>
<td>IWB</td>
<td>Interactive Whiteboard, the interactive projected display created using the Watch & Touch IWBC</td>
</tr>
<tr>
<td>IWBC</td>
<td>Interactive Whiteboard Client, the piece of software installed on the instructor’s machine which allows interaction on the projected display using the IR pen and the Wiimote, and provides annotation features.</td>
</tr>
<tr>
<td>CBC</td>
<td>The Watch & Touch Collaboration Client, the piece of software installed on student machines that allows them to work collaboratively</td>
</tr>
<tr>
<td>collaborative working</td>
<td>A process in which several users are able to simultaneously make modifications to a common document and see the modifications done by others</td>
</tr>
<tr>
<td>collaboration hub</td>
<td>The machine which will serve as a hub for in-classroom collaboration, which is determined as the instructor’s machine</td>
</tr>
<tr>
<td>annotation</td>
<td>The ability to create user content such as hand-made drawings, shapes or text on top of other content on the screen</td>
</tr>
<tr>
<td>multi-touch gesture</td>
<td>A predefined movement involving multiple points of interaction</td>
</tr>
<tr>
<td>pinch gesture</td>
<td>A multi-touch gesture in which two points along a line move towards, or away from, their center</td>
</tr>
<tr>
<td>rotation gesture</td>
<td>A multi-touch gesture in which two points rotate clockwise or counterclockwise along a fixed axis</td>
</tr>
<tr>
<td>swipe gesture</td>
<td>A multi-touch gesture in which two points move towards the same horizontal direction</td>
</tr>
<tr>
<td>IR pen</td>
<td>Infrared pen, a pen-shaped object whose tip radiates Wiimote-detectable infrared light when desired</td>
</tr>
<tr>
<td>IR ring</td>
<td>An IR pen that can be worn on a finger; several can be used for multi-touch gestures</td>
</tr>
<tr>
<td>content</td>
<td>Visual content in presentation, video or webpage form.</td>
</tr>
<tr>
<td>SDD</td>
<td>Software Design Document</td>
</tr>
<tr>
<td>SVG</td>
<td>Scalable Vector Graphics</td>
</tr>
</tbody>
</table>
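The three multi-touch gestures defined above (pinch, rotation, swipe) can be distinguished from the motion of two tracked IR points. The following sketch is purely illustrative: the thresholds and function names are ours, not part of the Watch & Touch design.

```python
import math

def classify_gesture(p1_start, p1_end, p2_start, p2_end, tol=0.15):
    """Classify the motion of two tracked points as one of the
    multi-touch gestures defined above (illustrative sketch only)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    d0, d1 = dist(p1_start, p2_start), dist(p1_end, p2_end)
    # Pinch: the two points move towards, or away from, their center.
    if d1 > d0 * (1 + tol):
        return "pinch-out"
    if d1 < d0 * (1 - tol):
        return "pinch-in"
    # Swipe: both points move in the same horizontal direction.
    dx1, dx2 = p1_end[0] - p1_start[0], p2_end[0] - p2_start[0]
    if abs(dx1) > tol and dx1 * dx2 > 0:
        return "swipe"
    # Rotation: the angle between the points changes while the
    # distance stays roughly fixed.
    a0 = math.atan2(p2_start[1] - p1_start[1], p2_start[0] - p1_start[0])
    a1 = math.atan2(p2_end[1] - p1_end[1], p2_end[0] - p1_end[0])
    if abs(a1 - a0) > tol:
        return "rotation"
    return "none"

# Two points moving apart along a line: a pinch-out (zoom-in) gesture.
print(classify_gesture((0.4, 0.5), (0.2, 0.5), (0.6, 0.5), (0.8, 0.5)))
```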
1.6. References
[1] A common software engineering standard providing guidance and recommended approaches for specifying software design descriptions
[2] **Software Requirements Specification for Watch & Touch**
The SRS document for Watch & Touch, prepared according to the **IEEE Std 830-1998: IEEE Recommended Practice for Software Requirements Specifications**
[3] **The KISS Principle in design process**
[4] **Designing QT-Style C++ APIs Guideline**
2. System Overview
Watch & Touch consists mainly of two separate but interconnected software applications, named the Interactive Whiteboard Client (IWBC) and the Collaboration Client (CBC).
2.1. The Interactive Whiteboard Client
The IWBC is an application that is capable of turning any projected surface into a multi-touch-input whiteboard. To provide interaction, it calculates the position of the infrared pen on the surface (by utilizing the input from a Wiimote) in terms of screen coordinates and generates mouse events accordingly at the selected point, and it interprets the movements of multiple IR sources as multi-touch gestures to enhance the general user experience. An important consequence of this capability is the annotation features provided: the instructor is able to display presentations, web pages and videos from inside the IWBC and take handwritten notes on top of this visual content when desired. The IWBC also allows the instructor to actively partake in classroom collaboration activities, in a way similar to the facilities provided to the students by the CBC.
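The camera-to-screen coordinate calculation can be sketched with a simplified calibration. The sketch below assumes the Wiimote views the surface head-on, so a linear rescale between two calibrated corners suffices; a real deployment with the Wiimote mounted at an angle would need a perspective (homography) correction. The function names and calibration values are illustrative, not from the document; the 1024x768 grid is the Wiimote IR camera's reported resolution.

```python
def make_mapper(cam_top_left, cam_bottom_right, screen_w, screen_h):
    """Map Wiimote IR-camera coordinates to screen coordinates.
    Simplified sketch: a linear rescale between the calibrated corners.
    (A Wiimote mounted at an angle would need a homography instead.)"""
    (x0, y0), (x1, y1) = cam_top_left, cam_bottom_right

    def to_screen(cam_x, cam_y):
        sx = (cam_x - x0) / (x1 - x0) * screen_w
        sy = (cam_y - y0) / (y1 - y0) * screen_h
        return round(sx), round(sy)

    return to_screen

# The Wiimote IR camera reports positions on a 1024x768 grid; calibrate
# against the projected display's corners, then map a detected pen tip:
to_screen = make_mapper((112, 84), (912, 684), 1280, 800)
print(to_screen(512, 384))  # center of the calibrated area -> (640, 400)
```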
2.2. The Collaboration Client
The CBC is a smaller application intended for use by students on their own machines, and allows them to engage in classroom collaboration by creating collaborative drawings. The CBC establishes collaborative links to other machines which also have the CBC installed. The collaborative link exists not only between students but also between students and the instructor; students can see the information on the instructor’s projected screen on their own machines, and (with the instructor’s permission) make drawings on the content shown on the instructor’s screen, to ask questions or express themselves by visually emphasizing or annotating the currently displayed content.
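The collaboration hub described above (the instructor's machine, per section 1.5) follows a relay pattern: each drawing event is re-broadcast to every other connected client. A minimal in-memory sketch; the class, method names and event format are our own illustration, as the document does not specify a wire protocol.

```python
# Illustrative hub-relay sketch: the instructor's machine acts as the
# collaboration hub and re-broadcasts each drawing event to every other
# connected client. All names and the event format are our own.

class CollaborationHub:
    def __init__(self):
        self.clients = {}          # client_id -> log of received events

    def connect(self, client_id):
        self.clients[client_id] = []

    def submit(self, sender_id, event):
        # Relay the event to everyone except the sender.
        for cid, log in self.clients.items():
            if cid != sender_id:
                log.append(event)

hub = CollaborationHub()
for cid in ("instructor", "student1", "student2"):
    hub.connect(cid)
hub.submit("student1", {"type": "stroke", "points": [(0, 0), (5, 5)]})
print(len(hub.clients["instructor"]), len(hub.clients["student1"]))  # 1 0
```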
3. Design Considerations
Special design issues which need to be addressed or resolved before attempting to devise a complete design solution are noted here.
3.1. Design Assumptions, Dependencies and Constraints
Watch & Touch is assumed to operate under the presence of certain factors with regard to hardware, system software and general properties of the operating environment. These factors are stated, separately for the IWBC and the CBC, in the following two subsections 3.1.1. and 3.1.2.
3.1.1. Interactive Whiteboard Client
- An operational Wiimote is assumed to be present and connected via Bluetooth to the instructor’s machine
- The IWBC must be installed and launch successfully on the instructor’s machine
- At least one IR pen with the necessary characteristics for being detected by the Wiimote must be present and functional. For multi-touch gestures, at least two such IR sources are required.
- A flat projected surface is required to be turned into an interactive whiteboard.
- The Wiimote must be positioned in such a way that all four corners of the projected display are within the line of sight of the Wiimote infrared camera.
3.1.2. Collaboration Client
- At least several student machines should be present. Ideally there would be one machine per student, but this is not a requirement.
- The Watch & Touch Collaboration Client must be installed and launch successfully on all the machines intended for collaboration.
- The machines intended for collaboration, including the instructor’s machine, must be connected by a Local Area Network.
- The machines must have the necessary two-dimensional input hardware (such as a mouse or touch surface) to provide the input for drawing.
3.2. Design Goals and Guidelines
• Both software applications in the system (IWBC and CBC) must be capable of working with full functionality on multiple platforms and operating systems. Towards this end, the Qt framework will be used in the development of the project. Microsoft Windows XP® and Ubuntu 10.04 have been chosen as the testing platforms due to their widespread use.¹
• Since Watch & Touch is heavily centered on its user interfaces:
• the user interfaces must be consistent, clean and easy to use in general
• the Cornell University Ergonomic Guidelines for User-Interface Design¹² will serve as the guideline during the user interface design process
• existing Qt widgets providing UI functionality should be preferred as much as possible instead of re-creating user interface components, since re-creating them would consume more time and be less stable
• where possible, a smaller degree of abstraction should be preferred between the underlying business logic and the user interface, to keep a simpler software structure
• During the design process, the KISS principle will serve as the primary guideline for Watch & Touch. The KISS principle¹³ states that simplicity should be a key goal in design, and that unnecessary complexity should be avoided.
• During design and development, the principles mentioned in the Designing Qt Style C++ APIs [4] document will be followed to obtain source code which is easier to understand for developers, and more suitable for Qt’s style.
• To keep Watch & Touch as free and open¹⁴ as possible, use of external libraries and tools which are not available under the LGPL or GPL should be avoided.
• The Scalable Vector Graphics (SVG) format¹⁵ should be used to store annotation and drawing data, for the following reasons:
○ Qt has good built-in support for SVG
○ vector graphics allow for easier processing of individual annotation elements
○ scaling the annotations does not cause any loss of quality
○ the format is suitable for extensions and metadata storage since it is XML-based
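As an illustration of the last two points, an annotation layer stored as SVG can carry both the vector drawing elements and extension metadata in the same XML document. The fragment below is purely illustrative (the actual files are generated by Qt’s QSvgGenerator, as described in section 4.1.2.2):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="800" height="600">
  <!-- a freehand annotation stroke, stored as a scalable vector path -->
  <path d="M 100 150 L 180 140 L 260 170" stroke="red" fill="none"/>
  <!-- being XML-based, the format leaves room for extension metadata -->
  <metadata>layer 0</metadata>
</svg>
```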
4. Data Design
4.1. Data Description
This section contains a description of the information domain for Watch & Touch. First, a description of the structure of defined data entities is provided, followed by two sections providing an overview of how the annotation storage and content display work.
Watch & Touch does not make extensive use of relational data, thus no databases are utilized. In fact, the only persistent data is the annotations (whose storage is described in section 4.1.2) and the configuration data (4.1.1.8).
4.1.1. Description of data entities
4.1.1.1. AnnotationBase data entity
The AnnotationBase data entity describes the basic content of an annotation and forms the base of the other *Annotation data entities.
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Data Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>ParentID</td>
<td>String</td>
<td>The MD5 hash of the parent content</td>
</tr>
<tr>
<td>Geometry</td>
<td>Rectangle</td>
<td>Position and size of annotation</td>
</tr>
<tr>
<td>LayerCount</td>
<td>Integer</td>
<td>Number of items in the Layers field</td>
</tr>
<tr>
<td>Layers</td>
<td>Array of SVGData</td>
<td>Annotation SVG data is contained in these items</td>
</tr>
</tbody>
</table>
4.1.1.2. PresentationAnnotation data entity
The PresentationAnnotation data entity describes the annotation on a single presentation slide.
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Data Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>SlideNumber</td>
<td>Integer</td>
<td>Which slide the annotation belongs to</td>
</tr>
<tr>
<td>Data</td>
<td>AnnotationBase</td>
<td>Base annotation data entity</td>
</tr>
</tbody>
</table>
4.1.1.3. WebpageAnnotation data entity
The WebpageAnnotation data entity describes the annotation on a single webpage anchor.
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Data Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>AnchorName</td>
<td>String</td>
<td>A string identifying the webpage anchor on which the annotation was made</td>
</tr>
<tr>
<td>Data</td>
<td>AnnotationBase</td>
<td>Base annotation data entity</td>
</tr>
</tbody>
</table>
4.1.1.4. VideoAnnotation data entity
The VideoAnnotation data entity describes the annotation on a single video frame.
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Data Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>TimelinePosition</td>
<td>String</td>
<td>The position in the video timeline to which the annotation belongs (hh:mm:ss)</td>
</tr>
<tr>
<td>Data</td>
<td>AnnotationBase</td>
<td>Base annotation data entity</td>
</tr>
</tbody>
</table>
4.1.1.5. CollaborativeSession data entity
The CollaborativeSession data entity describes a group of participants working on a collaborative drawing.
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Data Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>ParticipantCount</td>
<td>Integer</td>
<td>Number of participants in the session</td>
</tr>
<tr>
<td>Participants</td>
<td>Array of CollaborativeParticipant</td>
<td>Information on participants in the session</td>
</tr>
<tr>
<td>SessionName</td>
<td>String</td>
<td>The name of the session, visible to all</td>
</tr>
<tr>
<td>SessionPassword</td>
<td>String</td>
<td>MD5 hash of session password</td>
</tr>
<tr>
<td>TheDrawing</td>
<td>CollaborativeDrawing</td>
<td>The collaborative drawing being created by this session</td>
</tr>
<tr>
<td>CreationDate</td>
<td>DateTime</td>
<td>When the session was created</td>
</tr>
</tbody>
</table>
4.1.1.6. CollaborativeParticipant data entity
The CollaborativeParticipant data entity describes a participant in the collaborative drawing session.
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Data Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>NameSurname</td>
<td>String</td>
<td>Publicly visible name and surname of participant</td>
</tr>
<tr>
<td>EMail</td>
<td>String</td>
<td>E-mail address of the participant</td>
</tr>
<tr>
<td>IsInstructor</td>
<td>Boolean</td>
<td>Whether the participant has instructor privileges</td>
</tr>
</tbody>
</table>
4.1.1.7. CollaborativeDrawing data entity
The CollaborativeDrawing data entity describes a single collaborative drawing.
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Data Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Session</td>
<td>CollaborativeSession</td>
<td>The collaborative drawing session which is creating this drawing.</td>
</tr>
<tr>
<td>LastModified</td>
<td>DateTime</td>
<td>When the drawing was last modified</td>
</tr>
<tr>
<td>LayerCount</td>
<td>Integer</td>
<td>Number of layers in the drawing</td>
</tr>
<tr>
<td>Layers</td>
<td>Array of SVGData</td>
<td>Actual drawing SVG data in layers</td>
</tr>
</tbody>
</table>
4.1.1.8. Configuration data entity
The Configuration data entity describes the current configuration of the IWBC. It is made persistent in XML form, where the field names serve as tag names.
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Data Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>WiimoteData</td>
<td>WiimoteConfig</td>
<td>Information pertaining to the currently connected Wiimote.</td>
</tr>
<tr>
<td>ScreencastingBackend</td>
<td>String</td>
<td>Which screencasting backend is to be used (“ffmpeg” or “vlc”)</td>
</tr>
<tr>
<td>GestureData</td>
<td>Array of GestureMap</td>
<td>The matching of gestures to actions</td>
</tr>
</tbody>
</table>
4.1.1.9. WiimoteConfig data entity
The WiimoteConfig data entity describes the current configuration of the Wiimote controller. It is made persistent in XML form, where the field names serve as tag names.
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Data Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>UniqueID</td>
<td>Integer</td>
<td>The unique identifier assigned to the Wiimote by the wiiuse library</td>
</tr>
<tr>
<td>CalibrationData</td>
<td>Array of (Float, Float)</td>
<td>The raw IR coordinates of the four IR calibration points received</td>
</tr>
</tbody>
</table>
4.1.1.10. GestureMap data entity
The GestureMap data entity describes a single gesture-action mapping. It is made persistent in XML form, where the field names serve as tag names.
<table>
<thead>
<tr>
<th>Field Name</th>
<th>Data Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>GestureID</td>
<td>GestureEnum</td>
<td>The identifier of the predefined gesture</td>
</tr>
<tr>
<td>ActionID</td>
<td>ActionEnum</td>
<td>The identifier of the predefined action</td>
</tr>
</tbody>
</table>
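Since each GestureMap entry is persisted in XML with the field names serving as tag names, its serialization is straightforward. The sketch below illustrates the idea in plain C++; the enum values shown are hypothetical placeholders for GestureEnum/ActionEnum, and the real implementation would more likely use Qt’s QXmlStreamWriter.

```cpp
#include <sstream>
#include <string>

// Hypothetical placeholder values; the actual GestureEnum and
// ActionEnum identifiers are defined elsewhere in the IWBC.
enum class GestureEnum { Pinch = 0, Swipe = 1 };
enum class ActionEnum  { Zoom = 0, NextSlide = 1 };

// Serialize one gesture-action mapping to XML, using the field
// names of the GestureMap data entity as tag names (section 4.1.1.10).
std::string gestureMapToXml(GestureEnum g, ActionEnum a) {
    std::ostringstream out;
    out << "<GestureMap>"
        << "<GestureID>" << static_cast<int>(g) << "</GestureID>"
        << "<ActionID>"  << static_cast<int>(a) << "</ActionID>"
        << "</GestureMap>";
    return out.str();
}
```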
4.1.2. Annotation data storage
This section gives a general description of how the annotations made on visual content are stored and retrieved.
4.1.2.1. Storage logic
The matching of content and annotations is made through a unique identifier. Each piece of visual content has a unique identifier, which represents the current state of the content file. This identifier is generated by running the MD5 algorithm on the content file. This ensures that the identifier is not modified by trivial changes such as renaming the content file, and prevents mismatches between the annotation and the content file in case the content changes.
All annotations created by the IWBC are stored under the annotations/ directory, located under the IWBC application directory. For each content which is annotated by the user, a sub-directory under annotations/ is created. This sub-directory is named as the unique identifier of the annotated content, and inside it are all the annotation data files which belong to this content.
Each annotation data file contains the annotation made in a different part of the content, and is named according to the following convention:
annotation_%subcontextid_%layerno.wta
The %subcontextid identifies which part of the content the annotation is related to. It is generated differently for different content types:
- **Presentations**: slide number, starting from 0
- **Webpages**: anchor name (such as \#section1) if exists, nothing otherwise
- **Videos**: timeline position (hh:mm:ss) in the video
The %layerno identifies the layer number of the annotation. Currently, all annotation data is stored in layer number 0; if additional layers are needed in the future (numbered 1, 2, 3...), they will be stacked in front of layer 0.
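The storage scheme above can be summarized in a small helper that builds the full path of one annotation data file. This is a sketch, not the actual IWBC code: the content identifier is taken as an already-computed MD5 string (in practice it could be produced with Qt’s QCryptographicHash), and the subcontext id is the slide number, anchor name or timeline position depending on the content type.

```cpp
#include <sstream>
#include <string>

// Build the storage path of one annotation data file following the
// convention of section 4.1.2.1:
//   annotations/<contentId>/annotation_<subcontextid>_<layerno>.wta
// contentId: MD5 hash of the content file, computed elsewhere.
// subContextId: slide number, webpage anchor or video timeline position.
std::string annotationFilePath(const std::string& contentId,
                               const std::string& subContextId,
                               int layerNo) {
    std::ostringstream path;
    path << "annotations/" << contentId << "/annotation_"
         << subContextId << "_" << layerNo << ".wta";
    return path.str();
}
```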
4.1.2.2. File format
All annotations will be stored using the Scalable Vector Graphics (SVG) format, an XML-based file format for describing two-dimensional vector graphics⁸. See section 3.2 for the reasons why SVG was chosen as the annotation storage format.
Since the actual vector data will be generated using Qt’s QSvgGenerator and rendered using QSvgRenderer, no further details are provided here as to how the annotation vector data is generated.
4.1.3. Supported content file formats
This section provides a list of the supported content file formats for presentations, videos and webpages. As long as the general content format is preserved (numbered slides for presentations, vertical scrolling for webpages and an hour/minute/second partitioned timeline for videos), support for other file formats can be added simply by adding the corresponding viewer code.
4.1.3.1. Presentations
Portable Document Format (PDF) will be the primary supported format for presentations. These files will be displayed using the Poppler Qt4 interface library¹⁰. Since the viewing is done by this external library, no further details on the PDF file format structure are provided here.
Additionally, support for Microsoft PowerPoint and OpenDocument Presentation formats should also be provided if time allows. This support can be provided by converting these formats to PDF with an external command-line tool.
4.1.3.2. Videos
The video playback functionality will be provided by the Phonon multimedia framework¹¹, which is a part of Qt. All seek-enabled video formats which are supported by the current Phonon backend are supported. Since the video display is handled by Phonon, no further details on the video file format structures are provided here.
4.1.3.3. Webpages
The displaying of webpages will be done using the integrated WebKit support¹² in Qt; thus, all HTML subformats supported by WebKit are also supported. The webpages are assumed to be only vertical-scrolling (horizontal scrolling is not supported) and static in terms of geometry (interacting with links etc. does not change the width/height of the webpage). Since the webpage display is handled by WebKit, no further details on the HTML file format structure are provided here.
4.2. Data Dictionary
<table>
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Refer to Section</th>
</tr>
</thead>
<tbody>
<tr>
<td>AnnotationBase</td>
<td>data entity</td>
<td>4.1.1.1</td>
</tr>
<tr>
<td>AnnotationWidget</td>
<td>component</td>
<td>5.2.3</td>
</tr>
<tr>
<td>BaseDrawingWidget</td>
<td>component</td>
<td>5.2.3</td>
</tr>
<tr>
<td>ClassroomSession</td>
<td>component</td>
<td>5.2.4</td>
</tr>
<tr>
<td>CollaborativeDrawing</td>
<td>data entity</td>
<td>4.1.1.7</td>
</tr>
<tr>
<td>CollaborativeDrawingTask</td>
<td>component</td>
<td>5.2.1</td>
</tr>
<tr>
<td>CollaborativeDrawingWidget</td>
<td>component</td>
<td>5.2.3</td>
</tr>
<tr>
<td>CollaborativeParticipant</td>
<td>data entity</td>
<td>4.1.1.6</td>
</tr>
<tr>
<td>CollaborativeSession</td>
<td>data entity</td>
<td>4.1.1.5</td>
</tr>
<tr>
<td>Configuration</td>
<td>data entity</td>
<td>4.1.1.8</td>
</tr>
<tr>
<td>Configuration</td>
<td>module</td>
<td>5.2.6</td>
</tr>
<tr>
<td>ConfigurationData</td>
<td>component</td>
<td>5.2.6</td>
</tr>
<tr>
<td>ConfigurationManager</td>
<td>component</td>
<td>5.2.6</td>
</tr>
<tr>
<td>ConfigurationTask</td>
<td>component</td>
<td>5.2.1</td>
</tr>
<tr>
<td>ContentDisplay</td>
<td>module</td>
<td>5.2.2</td>
</tr>
<tr>
<td>ContentDisplay</td>
<td>component</td>
<td>5.2.2</td>
</tr>
<tr>
<td>ContentDisplayTask</td>
<td>component</td>
<td>5.2.1</td>
</tr>
<tr>
<td>ContentMatcher</td>
<td>component</td>
<td>5.2.3</td>
</tr>
<tr>
<td>ContentSelector</td>
<td>component</td>
<td>5.2.2</td>
</tr>
<tr>
<td>ContextRecognizer</td>
<td>component</td>
<td>5.2.3</td>
</tr>
<tr>
<td>Drawing</td>
<td>module</td>
<td>5.2.3</td>
</tr>
<tr>
<td>DrawingData</td>
<td>component</td>
<td>5.2.3</td>
</tr>
<tr>
<td>EmailSender</td>
<td>component</td>
<td>5.2.4</td>
</tr>
<tr>
<td>EventGenerator</td>
<td>component</td>
<td>5.2.5</td>
</tr>
<tr>
<td>FileSystemBrowser</td>
<td>component</td>
<td>5.2.2</td>
</tr>
<tr>
<td>GestureMap</td>
<td>data entity</td>
<td>4.1.1.10</td>
</tr>
<tr>
<td>GestureMapper</td>
<td>component</td>
<td>5.2.6</td>
</tr>
<tr>
<td>GoogleDocsAccess</td>
<td>component</td>
<td>5.2.2</td>
</tr>
<tr>
<td>IContentSelection</td>
<td>interface</td>
<td>5.2.2</td>
</tr>
<tr>
<td>IDisplayedSelection</td>
<td>interface</td>
<td>5.2.2</td>
</tr>
<tr>
<td>InputCalibration</td>
<td>component</td>
<td>5.2.5</td>
</tr>
<tr>
<td>InputReceiver</td>
<td>component</td>
<td>5.2.5</td>
</tr>
<tr>
<td>ITask</td>
<td>interface</td>
<td>5.2.1</td>
</tr>
<tr>
<td>MessageTransciever</td>
<td>component</td>
<td>5.2.4</td>
</tr>
<tr>
<td>PresentationAnnotation</td>
<td>data entity</td>
<td>4.1.1.2</td>
</tr>
<tr>
<td>PresentationDisplay</td>
<td>component</td>
<td>5.2.2</td>
</tr>
<tr>
<td>PresentationDisplayTask</td>
<td>component</td>
<td>5.2.1</td>
</tr>
<tr>
<td>RecentlyUsed</td>
<td>component</td>
<td>5.2.2</td>
</tr>
<tr>
<td>ScreenCapture</td>
<td>component</td>
<td>5.2.3</td>
</tr>
<tr>
<td>Screencasting</td>
<td>component</td>
<td>5.2.2</td>
</tr>
<tr>
<td>Session</td>
<td>component</td>
<td>5.2.4</td>
</tr>
<tr>
<td>SessionManagementTask</td>
<td>component</td>
<td>5.2.1</td>
</tr>
<tr>
<td>SessionManager</td>
<td>component</td>
<td>5.2.4</td>
</tr>
<tr>
<td>Sessions</td>
<td>module</td>
<td>5.2.4</td>
</tr>
<tr>
<td>SessionUser</td>
<td>component</td>
<td>5.2.4</td>
</tr>
<tr>
<td>SketchingTask</td>
<td>component</td>
<td>5.2.1</td>
</tr>
<tr>
<td>SketchingWidget</td>
<td>component</td>
<td>5.2.3</td>
</tr>
<tr>
<td>SnapshotRetriever</td>
<td>component</td>
<td>5.2.4</td>
</tr>
<tr>
<td>Task</td>
<td>component</td>
<td>5.2.1</td>
</tr>
<tr>
<td>TaskManagement</td>
<td>module</td>
<td>5.2.1</td>
</tr>
<tr>
<td>TaskManager</td>
<td>component</td>
<td>5.2.1</td>
</tr>
<tr>
<td>VideoAnnotation</td>
<td>data entity</td>
<td>4.1.1.4</td>
</tr>
<tr>
<td>VideoDisplay</td>
<td>component</td>
<td>5.2.2</td>
</tr>
<tr>
<td>VideoDisplayTask</td>
<td>component</td>
<td>5.2.1</td>
</tr>
<tr>
<td>WebpageAnnotation</td>
<td>data entity</td>
<td>4.1.1.3</td>
</tr>
<tr>
<td>WebpageDisplay</td>
<td>component</td>
<td>5.2.2</td>
</tr>
<tr>
<td>WebPageDisplayTask</td>
<td>component</td>
<td>5.2.1</td>
</tr>
<tr>
<td>WiimoteConfig</td>
<td>data entity</td>
<td>4.1.1.9</td>
</tr>
<tr>
<td>WiimoteInput</td>
<td>module</td>
<td>5.2.5</td>
</tr>
<tr>
<td>WiimoteManager</td>
<td>component</td>
<td>5.2.5</td>
</tr>
<tr>
<td>wiiuse</td>
<td>component</td>
<td>5.2.5</td>
</tr>
</tbody>
</table>
5. System Architecture
A general description of the Watch & Touch software system architecture is presented in the following sections, in a top-down manner.
5.1. Architectural Design
Watch & Touch will include two separate but related software applications, the Interactive Whiteboard Client and the Collaboration Client, whose functional descriptions and requirements are provided in sections 2.2 and 3.2.1 of the Watch & Touch SRS [2]. The deployment diagram below illustrates the topmost-level view of the Watch & Touch architecture by presenting a view of how the applications will be deployed in the usage environment.

5.1.1. Architecture of Interactive Whiteboard Client
The IWBC, which provides the primary interactive whiteboard functionality such as content display and annotation, as well as collaboration-related functionality, will be composed of six top-level modules, each formed of components grouped according to their functional similarity. The following subsections give a brief overview of the responsibilities of each of these modules. The component diagram below presents a simplified overview of the IWBC architecture, intended to show how these modules are connected together; not all components and associations inside the modules are shown, so consult the following subsections for more detailed views of the internal module structures.
5.1.2. Architecture of Collaboration Client
The CBC, which provides only collaboration-related functionality, will be composed of three top-level modules, each formed of components grouped according to their functional similarity. The component diagram below presents a simplified overview of the CBC architecture, intended to show how these modules are connected together; not all components and associations inside the modules are shown.
As can be noted from the above diagram, the CBC architecture is essentially a minimized subset of the IWBC architecture: the same components are reused with a lesser degree of functionality, so separate module descriptions for the CBC are not provided.
5.2. Description of Modules
5.2.1. TaskManagement module
5.2.1.1. Processing narrative
The TaskManagement module is responsible for the logical and visual management of instances of the multiple tasks which the IWBC is capable of: creating new task instances, switching between and terminating existing instances, and forming a basis for the user interface elements related to these tasks. The TaskManager is responsible for the IWBC Main Menu, and each QWidget-derived *Task component is responsible for its own user interface elements.
5.2.1.2. Interface description
ITask: allows performing the user-interface-related tasks (set window geometry, minimize, maximize, show/hide...) and is realized by the set of QWidget member functions of the component it is associated with.
5.2.1.3. Processing detail
The TaskManager component creates new *Task instances, stores the existing instances in an internal structure, and performs the necessary operations on them when requested through the user interface.
5.2.1.4. Dynamic behavior
Each Task component is associated with a QWidget-derived component, which realizes the user interface and specific functionality for that particular task. The mapping is as follows:
- PresentationDisplayTask to ContentDisplay::PresentationDisplay
- VideoDisplayTask to ContentDisplay::VideoDisplay
- WebpageDisplayTask to ContentDisplay::WebpageDisplay
- SketchingTask to Drawing::SketchingWidget
- SessionManagementTask to Sessions::SessionManager
- CollaborativeDrawingTask to Drawing::CollaborativeDrawingWidget
- ConfigurationTask to Configuration::ConfigurationManager
5.2.2. ContentDisplay module

5.2.2.1. Processing narrative
The ContentDisplay module provides the necessary functionality for displaying the three primary kinds of visual content the IWBC supports: presentations (PresentationDisplay), videos (VideoDisplay) and webpages (WebpageDisplay). These display-related components provide the graphical basis for annotation, and they can access the external Screencasting component to start/stop the screencasting operation. The business logic for choosing which particular file will be displayed is also handled here, by the RecentlyUsed, FileSystemBrowser and GoogleDocsAccess components.
5.2.2.2. Interface description
*IContentSelection*: allows placing type restrictions on the selectable files and retrieving the selected file name.
*IDisplayedContent*: provides info on the current status of the displayed content (current slide number for presentations, timeline location for videos, scroll position for webpages), used for recognizing the annotation context
5.2.2.3. Processing detail
The *ContentSelector* component retrieves the selected filename from one of the components implementing the *IContentSelection* interface, and supplies the preferred *ContentDisplay*-derived display component (one of *PresentationDisplay*, *VideoDisplay* and *WebpageDisplay*) with the selected file. The display component then renders and displays the selected file, enabling the user to browse through the content (next/previous slide for presentations, playback control for videos, scrolling and link following for webpages) through its own user interface.
5.2.2.4. Dynamic behavior
- The *ContentDisplay* component sends the current status of the displayed content to *Drawing::AnnotationWidget* so it can display the relevant annotations [Sequence Diagram 10.1.3]
- When requested, the display component can access the external *Screencasting* component to start/stop the screencasting operation.
5.2.3. Drawing module

*Figure 5.2.3.a: Component diagram for the Drawing module*
5.2.3.1. Processing narrative
The Drawing module contains the components responsible for realizing the drawing operations which play a prime part in Watch & Touch’s functionality. The three components AnnotationWidget, SketchingWidget and CollaborativeDrawingWidget offer annotation, sketching (on blank pages) and collaborative drawing capabilities, respectively. The DrawingData component is responsible for storing the drawing data and importing/exporting it into files when requested. The ScreenCapture component offers user-directed screen capturing capabilities, whose raster output can be used as sketching backgrounds later on. The ContentMatcher and ContextRecognizer components are responsible for recognizing the current state of the displayed content and fetching the relevant drawing data for annotations.
5.2.3.2. Interface description
IDisplayedContent: see section 5.2.2.2
All other inter-component links are associations, meaning that mutual access to both connected components’ public properties and functions is possible.
5.2.3.3. Processing detail
Drawing is performed on the user interface of the BaseDrawingWidget and derived components, and the drawing data is kept in the DrawingData component which can export or import this data as SVG images when requested. In addition to these basic drawing operations:
- The AnnotationWidget uses the ContextRecognizer to identify the current status of the displayed content, and then the ContentMatcher component to retrieve the annotations which were previously made in this display state of the content, and display these annotations.
- The SketchingWidget can use the ScreenCapture component to retrieve a snapshot of the desired state of the screen, and use this as a background overlay to draw on [Sequence Diagram 10.1.4]
- The CollaborativeDrawingWidget updates the displayed drawing when other users in the collaboration session make changes to their own image.
5.2.3.4. Dynamic behavior
- The AnnotationWidget negotiates the permission status of a student’s IWB access via their own CBC with the Sessions::ClassroomSession component, allows modification on the current annotation state when permission is granted, and provides the current state of the displayed content and its annotation to the Sessions::ClassroomSession [Sequence Diagram 10.1.6]
- The AnnotationWidget receives the status of the currently displayed content from the ContentDisplay::ContentDisplay component [Sequence Diagram 10.1.3]
- The CollaborativeDrawingWidget gets updated by the Sessions::Session component when changes are made by other users in the session, and the Session is also notified when the user makes modifications to their own drawing [Sequence Diagram 10.1.5]
5.2.4. Sessions module

*Figure 5.2.4.a: Component diagram for the Sessions module*
5.2.4.1. Processing narrative
The Sessions module includes the components related to the administration and functioning of the collaborative drawing sessions. The MessageTransciever component handles the sending and receiving of the network messages which enable communication between the IWBC and the CBCs, on behalf of the other Sessions components. The SessionManager is the façade component for the Sessions module and handles the creation and administration of sessions and users, thus allowing the IWBC to function as the collaboration hub. SessionUsers represent the participants of Sessions, and the EmailSender component is responsible for delivering e-mail copies of the collaborative drawing to the members of the session.
5.2.4.2. Interface description
No specific or constrained interfaces are defined; the SessionManager component is the façade component for the module and thus serves as the access interface.
5.2.4.3. Processing detail
When new users log in to the collaboration system and join or leave sessions, the SessionManager is notified by the MessageTransciever component. The SessionManager then creates and modifies the corresponding Session or SessionUser objects as needed. When users enter or exit sessions or modify the collaborative drawing, the other CBCs in the session are notified by the MessageTransciever, which is also capable of sending network messages. The SnapshotRetriever saves local copies of the current state of all collaborative drawings when requested, and the EmailSender sends the local state of the collaborative drawing to all session members.
5.2.4.4. Dynamic behavior
- The Drawing::AnnotationWidget negotiates the permission status of a student’s IWB access via their own CBC with the ClassroomSession component, allows modification on the current annotation state when permission is granted, and provides the current state of the displayed content and its annotation to the ClassroomSession [Sequence Diagram 10.1.6]
- The Drawing::CollaborativeDrawingWidget gets updated by the Session component when changes are made by other users in the session, and the Session is also notified when the user makes modifications to their own drawing [Sequence Diagram 10.1.5]
5.2.5. WiimoteInput module

Figure 5.2.5.a: Component diagram for the WiimoteInput module
5.2.5.1. Processing narrative
The WiimoteInput module forms the core of the IR-pen input functionality of the IWBC; it communicates with the Wiimote via the wiiuse library to receive changes of the IR point position via the InputReceiver component, which forwards them to the InputCalibration component that generates screen coordinates from the IR point coordinates. The EventGenerator component generates the system mouse events from the received coordinates and, if movements of multiple points are involved, the corresponding multi-touch gesture events as well. The WiimoteManager is the façade component for the module and provides access to the functionality of the other components, as well as handling the initial connection to the Wiimote.
5.2.5.2. Interface description
No specific or constrained interfaces are defined; the WiimoteManager component is the façade component for the module and thus serves as the access interface.
5.2.5.3. Processing detail
Initially, the WiimoteManager uses the wiiuse library to find and connect to the Wiimote. Once the connection is established, the four-point calibration data as provided by the user interface is given to the InputCalibration component. When the IR source changes position, the InputReceiver gets the current position info via a callback function from wiiuse, and forwards the pure IR coordinates to the InputCalibration component, which translates them to screen coordinates. Finally, the EventGenerator creates mouse events based on the information received - including multi-touch gesture events if multiple points are detected to be forming a gesture.
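The coordinate translation step performed by InputCalibration can be sketched as follows. This is a deliberately simplified illustration, not the actual component: it assumes a roughly axis-aligned Wiimote placement and linearly interpolates between the left/right and top/bottom calibration extremes, whereas the real four-point calibration would also correct for perspective distortion.

```cpp
#include <utility>

// Map a raw Wiimote IR reading to screen coordinates using the
// extremes of the four calibration points (section 5.2.5.3).
// irLeft/irRight and irTop/irBottom are the raw IR coordinate
// bounds recorded during calibration; screenW/screenH is the
// target screen resolution.
std::pair<double, double> irToScreen(double irX, double irY,
                                     double irLeft, double irRight,
                                     double irTop, double irBottom,
                                     double screenW, double screenH) {
    double x = (irX - irLeft) / (irRight - irLeft) * screenW;
    double y = (irY - irTop)  / (irBottom - irTop) * screenH;
    return {x, y};
}
```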
5.2.5.4. Dynamic behavior
- The InputCalibration component saves and loads calibration data using the Configuration::ConfigurationData component.
- The EventGenerator passes multi-touch coordinates to Configuration::GestureMapper, which recognizes any multi-touch gestures and sends info about them back to the EventGenerator.
- The WiimoteManager sends battery status and calibration info to the Configuration::ConfigurationManager when requested.
- The InputReceiver and WiimoteManager use the wiiuse library’s API to access the Wiimote.
- see also: [Sequence Diagram 10.1.1] [Sequence Diagram 10.1.2]
5.2.6. Configuration module
Figure 5.2.6.a: Component diagram for the Configuration module
5.2.6.1. Processing narrative
The Configuration module is responsible for keeping the user preferences regarding the functioning of the IWBC. The ConfigurationData component keeps all the system configuration, including the Wiimote calibration data, screencasting preferences, and the mapping of gestures and actions. The GestureMapper component handles the recognition of multi-touch gestures using the Wiimote input data. The ConfigurationManager component is the façade component for the module and provides access to the other components, enabling the changing of configuration through its user interface, as well as displaying information on the currently connected Wiimote.
5.2.6.2. Interface description
No specific or constrained interfaces are defined; the ConfigurationManager component is the façade component for the module and thus serves as the access interface.
5.2.6.3. Processing detail
Through its graphical user interface, the ConfigurationManager displays and modifies the current configuration information stored in the ConfigurationData. The GestureMapper keeps the current mapping of gestures and actions, and performs gesture recognition and identifying the mapped action when presented with multi-touch point data.
5.2.6.4. Dynamic behavior
- The **ConfigurationData** component receives updated Wiimote calibration data from **WiimoteInput::InputCalibration** when calibration is performed.
- The **GestureMapper** recognizes any multi-touch gestures and sends info about them to the **WiimoteInput::EventGenerator** when requested.
- The **ConfigurationManager** receives Wiimote calibration and battery info from **WiimoteInput::WiimoteManager** when requested.
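The lookup the GestureMapper performs when resolving a recognized gesture to its configured action might look like the following sketch; the class shape and method names here are illustrative assumptions, not the actual component interface:

```cpp
#include <map>
#include <string>

// Hypothetical sketch of the GestureMapper lookup: a recognized gesture
// name is resolved to the configured action, or "none" if unmapped.
class GestureMapper {
public:
    void setMapping(const std::string& gesture, const std::string& action) {
        mapping_[gesture] = action;
    }
    std::string actionFor(const std::string& gesture) const {
        auto it = mapping_.find(gesture);
        return it != mapping_.end() ? it->second : "none";
    }
private:
    std::map<std::string, std::string> mapping_;
};
```

The ConfigurationData component would persist this table, so the user's gesture-to-action choices survive between sessions.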
6. User Interface Design
6.1. Overview of User Interface
6.1.1. User Interfaces for the IWBC
This section describes the graphical user interfaces offered by the Watch & Touch IWBC.
1. **Connect to Wiimote**: The user will be prompted to put the Wiimote in discoverable mode by simultaneously pressing 1+2 on the Wiimote, and will be informed about the progress of the connection.

*6.1.1: Connect to Wiimote*
2. **Calibration:** Four circles on the four corners of the screen will be displayed, each revealing the next one as it is touched. Once all four are touched, the user will be able to test the new calibration settings on the screen by scribbling around the page, and indicating whether to keep the settings or to repeat the calibration by a button press.

6.1.2: Calibration
3. **IWBC Main Menu:** The main menu will follow a modified version of the desktop metaphor to allow the instructor to launch the various tasks offered by the IWBC. Several independent instances of the same task can be launched for certain types of tasks, while for others only a single instance will be allowed. Only a single task can be active at a time; all tasks take up the maximal area and cannot be resized. Switching between tasks and instances, or closing them, will also be done via the main menu interface. A toolbar will provide access to the functionality listed below:
a. Display Presentation - launch new instance of presentation viewer
b. Display Webpage - launch new instance of webpage viewer
c. Display Video - launch new instance of video viewer
d. Sketch - launch new instance of sketching application
e. Collaboration - open the single-instance collaboration application
f. Configuration - open the single-instance configuration application
g. Exit - exit the IWBC
6.1.3.a: IWBC Main Menu – Annotation Menu On
6.1.3.b: IWBC Main Menu – Task Manager
4. **File Selection**: The IWBC applications that work with user content will require the selection of a file to be displayed (presentations, webpages, videos, sketches). This interface will provide three options to make this selection: the native file selection dialogue from the operating system, a list of recently used files in this category, and an option to select the file from the instructor’s Google Docs account. The user will be prompted to login to Google Docs if not already logged in. Cancelling the file selection operation to terminate the task instance and go back to the IWBC Main Menu is also possible.
5. **Display and Annotate Content**: The main interface for displaying presentations, webpages and videos, it will offer the standard display control options for each kind of content (next/previous buttons for presentations, a browser bar for webpages, playback controls for videos). If the loaded content has been annotated before, the annotation info will appear in the previously determined location (screen position for presentations and webpages, timeline position for videos). The user will also be able to annotate the content further by simply “writing” on the content when annotation is enabled from the annotation menu, or exit the interface to go back to the IWBC main menu. The CBC may ask to be granted permission to annotate the IWB; in such a case a notification will be displayed in the lower left-hand corner. The user can choose to deny or grant this permission. Once the permission is granted, a small window will be visible in the lower left-hand corner, which can be used to revoke the permission of the CBC at any time.
6.1.5.a: IWBC Presentation
6.1.5.d: IWBC Web
6.1.5.e: IWBC Multimedia
6. **Annotation Menu**: Contains context-dependent annotation facilities: enabling/disabling annotation, changing pen style and color, eraser, exporting the annotated content, clearing all annotations and starting / stopping the screencast operation. The menu can be reached by performing the Open Context Menu multi-touch gesture determined in the Configuration section, or by long-pressing the IR pen (which is equivalent to a right click). (See 6.1.1.3 for the visual)
7. **Sketch**: Provides general drawing facilities similar to those found in painting applications. A pre-existing image can be loaded into the sketching area, and created sketches can be saved as images. Alternatively, the user can ask to work on a capture of the screen - in this case the IWBC will be minimized until the user presses the “Print Screen” key, which will capture the current screen image and bring it to the Sketch application as an underlay to be worked on.

8. **Sketching Menu**: Contains a variety of sketching-related tools: changing pen style and color, eraser, drawing elementary geometric shapes, saving/loading the image and capturing the screen. The menu can be reached by performing the Open Context Menu multi-touch gesture determined in the Configuration section, or by long-pressing the IR pen (which is equivalent to a right click). (See 6.1.1.3, 6.1.1.8 for the visual)
9. **Session Manager**: The portal for collaboration on the IWBC side, this interface provides the user with a list of currently existing collaborative drawing sessions. The instructor will be able to view the names of participants in each list. The user will also be able to create a new session by specifying a name and a password, or join an existing session (without providing the session password - this is a privilege of the IWBC user). Another feature accessible from this interface is the option to gather snapshots of all current collaborative drawings.
10. **Collaborative Drawing**: In this interface, the user will be able to create drawings together with the other members of the collaborative drawing session. Each user will be able to modify the drawing and see the modifications done by others as soon as the modifications are made, see the list of session members and change the currently used drawing tools, export or e-mail the current state of the drawing, or import a previously made drawing into the current session via a toolbar. Making local (not collaborative, not accessible to others) drawings is possible on a separate tab of the interface.
11. **Configuration:** This interface will allow the user to change the mapping of predetermined multi-touch gestures to actions, re-calibrate the Wiimote and see the current battery level and calibration status of the Wiimote.
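As an illustration of the collaborative drawing updates described above, a stroke modification could be serialized into a simple line-based message before the Message Transceiver forwards it to the other session members. The wire format, struct, and field names below are hypothetical, not the actual protocol:

```cpp
#include <sstream>
#include <string>

// Hypothetical sketch of a drawing-update payload: a stroke segment is
// serialized into a simple space-separated text line before sending.
struct StrokeSegment {
    int x1, y1, x2, y2;   // segment endpoints in drawing coordinates
    std::string color;    // pen color name
    int penWidth;         // pen width in pixels
};

std::string serializeDrawingMessage(const std::string& sessionName,
                                    const StrokeSegment& s) {
    std::ostringstream out;
    out << "DRAWING " << sessionName << ' '
        << s.x1 << ' ' << s.y1 << ' ' << s.x2 << ' ' << s.y2 << ' '
        << s.color << ' ' << s.penWidth;
    return out.str();
}
```

On receipt, each client would parse such a message and apply the stroke to its local copy of the session drawing, which keeps all members in sync.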
6.1.2. User Interfaces for the Collaboration Client
1. **Login**: To be able to access the collaborative working system, the user must provide his name and surname, (optionally e-mail address and student number). This simple interface allows the user to provide these credentials and login to the system.
2. **CBC Main Menu**: The portal for collaboration on the CBC side, this interface provides the user with a list of currently existing collaborative drawing sessions. The user will also be able to create a new session by specifying a name and a password, or join an existing session by providing the session password. On the CBC side, the user is able to join the special Classroom Session directly from this interface (no password required).
3. **Collaborative Drawing**: *identical to the IWBC Collaborative Drawing interface, see section 6.1.1.10 for description*
4. **Classroom Session**: This interface will be identical to the Collaborative Drawing interface in appearance, with some extra features. The background of the drawing will always reflect the
current content displayed by the IWB, and the drawings that the CBC user makes on top will not be reflected on the IWB unless the CBC asks for special permission via a button on the toolbar.
6.2. Screen Images
![6.2.a: IWBC – Annotation Menu, File Operations]
### 6.3. Screen Objects and Actions
1. **Menu Size and Menu Location Manipulation:**

**6.3.1.a: Annotation Menu**
The user can zoom the menu in and out, and change its location on the screen, if it has a small light-blue submenu.
- a. Zoom-In menu
- b. Zoom-Out menu
- c. Change location of the menu
2. Multimedia Control Panel
- a. Continuous multimedia annotation
While the time indicator is on the green pieces, a continuous annotation is shown along with the multimedia content.
- b. Point multimedia annotation
When the time indicator is on the yellow piece, an instantaneous annotation is shown for a while along with the multimedia content.
- c. Time Indicator and Manipulator
A small circle indicates the current position on the timeline and can be dragged to change it.
- d. Decrease time scale
By pressing ‘d’, one zooms out the timeline.
- e. Increase time scale
By pressing ‘e’, one zooms in the timeline.
3. Screencasting Indicator
![Session Manager Menu]
If screencasting is started, ‘a’ is shown on the bottom left side of the screen. It turns to ‘a’’ if screencasting is stopped.
7. Libraries and Tools
Watch & Touch will make use of several existing software libraries and tools, both during development to achieve a faster production cycle and during runtime to obtain extra functionality. These libraries and tools are described in the following subsections.
7.1 Qt
7.1.1. Description
Qt is a cross-platform application framework that is widely used for developing application software with a graphical user interface (GUI), in which role it is referred to as a widget toolkit, and also for developing non-GUI programs such as command-line tools and consoles for servers. Qt is most notably used by Autodesk, Google Earth, KDE, Adobe Photoshop Album, the European Space Agency, OPIE, Skype, VLC media player, Samsung, Philips, Panasonic and VirtualBox. It is produced by Nokia’s Qt Development Frameworks division, which came into being after Nokia’s acquisition of the Norwegian company Trolltech, the original producer of Qt. Qt uses standard C++ but makes extensive use of a special code generator (called the Meta Object Compiler, or moc) together with several macros to enrich the language. Qt can also be used in several other programming languages via language bindings. It runs on all major platforms and has extensive internationalization support. Non-GUI features include SQL database access, XML parsing, thread management, network support, and a unified cross-platform API for file handling. Distributed under the terms of the GNU Lesser General Public License (among others), Qt is free and open source software. All editions support a wide range of compilers, including the GCC C++ compiler and the Visual Studio suite.
7.1.2. Usage in Watch & Touch
By the nature of its primary feature set (annotation, multi-touch gesture manipulation, collaborative drawing), Watch & Touch is quite user-interface centric. This heavy emphasis on graphical user interfaces makes their ease of construction a primary factor in design. Its feature-rich and diverse library of UI elements (including a multimedia framework for viewing videos, an HTML renderer for webpages and SVG support), multi-touch gesture capabilities and the Qt Creator IDE make Qt an excellent choice of framework for Watch & Touch. Another important factor is the framework’s cross-platform support; with Qt, building applications for a range of operating systems is often a simple matter of recompiling.
7.2 wiiuse
7.2.1. Description
wiiuse is a library written in C that can connect to several Nintendo Wii remotes. It supports motion sensing, IR tracking, the Nunchuk, the Classic Controller, and the Guitar Hero 3 controller. Being single-threaded and non-blocking makes for a lightweight and clean API.
7.2.2. Usage in Watch & Touch
The wiiuse library is used in Watch & Touch as a layer of abstraction between the IWBC and the Wiimote; it handles the Bluetooth communication between the Wiimote and the instructor’s machine and exposes a number of methods for accessing the Wiimote’s functionality without going into the details of the communication protocol.
7.3 Google Documents API
7.3.1. Description
The Google Documents List API allows client applications to programmatically access and manipulate user data stored with Google Documents. Here are some of the things you can do with the API:
- Discovery: Retrieve documents that match specific keywords, categories, or metadata.
- Download: export documents in common formats such as pdf, rtf, doc, xls, ppt, and more.
- Sharing (ACLs): Modify the sharing permissions of documents and folders. The API allows sharing to individuals, group emails, or across an entire Google Apps domain.
- Create/upload/copy documents: Create online backups of local word processor documents, spreadsheets, presentations, and PDFs.
- Revisions: Review, download, or publish a document’s complete revision history.
- File documents: Create folders and move documents/folders in and out of folders.
- Spreadsheets: While the Documents List API can be used to create and retrieve a list of Google Spreadsheets, it cannot be used to modify the data within a Spreadsheet. For that, you can use the Google Spreadsheets API.
### 7.3.2. Usage in Watch & Touch
In the Watch & Touch IWBC, the Google Documents List API is used to provide an alternative input source for presentations - instead of using local files, the user can provide Google Account login details to access the Google Docs presentations stored in this account. The ClientLogin authorization method is used to gain access to the account, followed by API calls that retrieve the list of presentations and download a given presentation in the PDF format. The details of the API are abstracted from the rest of the application by the GoogleDocsAccess component, which provides direct methods for logging in, getting the list of presentations and downloading a given presentation as PDF.
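The ClientLogin step mentioned above is a plain HTTPS POST with a form-encoded body. A minimal sketch of building that body is shown below; the parameter names (`accountType`, `Email`, `Passwd`, `service`, `source`) are the documented ClientLogin fields and `writely` is the Documents List service name, but the `source` value is a placeholder and URL-escaping of special characters is omitted for brevity:

```cpp
#include <string>

// Sketch of the form-encoded ClientLogin request body for the Google
// Documents List service. URL-escaping is omitted; the "source"
// application identifier is a placeholder.
std::string clientLoginBody(const std::string& email,
                            const std::string& password) {
    return "accountType=HOSTED_OR_GOOGLE&Email=" + email +
           "&Passwd=" + password +
           "&service=writely&source=DialecTech-WatchTouch-1.0";
}
```

The GoogleDocsAccess component would POST this body to the ClientLogin endpoint, extract the returned authorization token, and attach it to the subsequent list and download requests.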
### 7.4 Screencasting tools
#### 7.4.1. Description
A screencast is a digital recording of computer screen output, also known as a video screen capture, often containing audio narration. The term screencast compares with the related term screenshot; whereas screenshot is a picture of a computer screen, a screencast is essentially a movie of the changes over time that a user sees on a computer screen, enhanced with audio narration. Screencasting tools are the software tools that allow the recording of such screen activity videos.
#### 7.4.2. Usage in Watch & Touch
A variety of screencasting applications exist; however, keeping in mind that Watch & Touch aims to be open-source and cross-platform compatible, programs which fulfill these two constraints would be more fitting, so the two suitable candidates have been determined as VLC and FFmpeg. The user is able to specify which of these tools will be used during the screencasting operation in the IWBC Configuration screen.
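For the FFmpeg option, starting the screencast amounts to assembling a command line such as the one sketched below. The `x11grab` input device and these flags exist in FFmpeg (on Linux/X11), but the exact options are a configuration choice and the values here are illustrative:

```cpp
#include <sstream>
#include <string>

// Illustrative sketch: build an FFmpeg command line that captures the
// X11 screen at the given size and frame rate into an output file.
std::string screencastCommand(int width, int height, int fps,
                              const std::string& outFile) {
    std::ostringstream cmd;
    cmd << "ffmpeg -f x11grab -video_size " << width << "x" << height
        << " -framerate " << fps << " -i :0.0 " << outFile;
    return cmd.str();
}
```

The IWBC would launch this command as a child process when screencasting is started and terminate it when the user stops the recording; a VLC-based command would be assembled analogously.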
7.5. Poppler
7.5.1. Description
Poppler (or libpoppler) is a free software library used to render PDF documents. It is used by the PDF viewers of the open source GNOME and KDE desktop environments, and its development is supported by freedesktop.org. The project was started by Kristian Høgsberg with two goals in mind: To provide PDF rendering functionality as a shared library, in order to centralize maintenance effort, and to go beyond the goals of Xpdf, and integrate with functionality provided by modern operating systems. Poppler itself is a fork of the Xpdf-3.0 PDF viewer developed by Derek Noonburg of Glyph and Cog, LLC.
7.5.2. Usage in Watch & Touch
Watch & Touch uses the libpoppler Qt4 interface, which allows rendering of PDF files directly at the desired resolution (as well as numerous other operations) from inside Qt to display PDF content. The following code segment illustrates how easily this can be accomplished:
```cpp
Poppler::Document* document = Poppler::Document::load(filename);
if (document && !document->isLocked()) {
    Poppler::Page* pdfPage = document->page(pageNumber);
    // Generate a QImage of the rendered page
    QImage image = pdfPage->renderToImage(xres, yres, x, y, width, height);
    // ... use image ...
    delete pdfPage;
}
delete document;
```
8. Time Planning (Gantt Chart)
8.1. Term 1 Gantt Chart
9. Conclusion
This document states the design-level approach taken by the DialecTech team for the Watch & Touch project. In this document, a fair amount of elaboration has been done on the project scenario, pointing out most of the important details. The goals for the final product have become more apparent as the scenario and the desired user interface are visually explained. Additionally, this document is the first document that explains somewhat deep technical details. The architecture of the system is discussed with an overview and illustrated with component diagrams. Further information on the technical design is given with detailed explanations of the modules, which are supported with sequence diagrams. Finally, the progress made by the project team has been summarized.
10. Appendix
10.1. Sequence Diagrams
10.1.1. Wiimote Initialization sequence diagram
![Wiimote Initialization sequence diagram]
10.1.2. Wiimote Calibration sequence diagram
![Wiimote Calibration sequence diagram]
10.1.3. Content Display and Annotation sequence diagram
![Content Display and Annotation sequence diagram]
10.1.4. Sketching sequence diagram
![Sketching sequence diagram]
10.1.5. Collaborative Drawing overview sequence diagram
![Collaborative Drawing overview sequence diagram]
10.1.6. Classroom Session sequence diagram
10.2. Detailed Use-Case Diagrams
10.2.1. Calibration use-case diagram
10.2.2. Log in and Log out use-case diagrams
10.2.3. Create Student Account and Instructor Account use-case diagrams
![Create Student Account and Instructor Account use-case diagrams]
10.2.4. Annotation use-case diagram
![Annotation use-case diagram]
10.2.5. Collaboration use-case diagram
10.2.6. File Manipulation use-case diagram
![File Manipulation use-case diagram]
10.2.7. Presentation Manipulation use-case diagram
10.2.8. Video Manipulation use-case diagram
10.2.9. Webpage Manipulation
10.3. Activity diagrams
10.3.1. IWBC activity diagrams
10.3.1.1. Initialization and configuration
10.3.1.2. Content display and annotation
10.3.1.3. Sketching
10.3.1.4. Collaboration
10.3.2. CBC activity diagrams
10.3.2.1. CBC login
[Diagram showing the flow of activity diagrams for CBC login, including steps for opening CBC, logging in UI, providing required information, and handling successful or unsuccessful login.]
10.3.2.2. CBC collaboration
![CBC collaboration activity diagram]
OPC Overview Definitions and Interfaces
Version 1.0
September 24, 1998
Synopsis:
This is the specification of rules, design criteria and interfaces that are common to developers of OPC clients and OPC servers. The specification is a result of an analysis and design process to develop a standard interface to facilitate the development of servers and clients by multiple vendors that shall inter-operate seamlessly together.
Trademarks:
Most computer and software brand names have trademarks or registered trademarks. The individual trademarks have not been listed here.
Required Runtime Environment:
This specification requires Windows 95, Windows NT 4.0 or later
NON-EXCLUSIVE LICENSE AGREEMENT
The OPC Foundation, a non-profit corporation (the “OPC Foundation”), has established a set of standard OLE/COM interface protocols intended to foster greater interoperability between automation/control applications, field systems/devices, and business/office applications in the process control industry.
The current OPC specifications, prototype software examples and related documentation (collectively, the “OPC Materials”), form a set of standard OLE/COM interface protocols based upon the functional requirements of Microsoft’s OLE/COM technology. Such technology defines standard objects, methods, and properties for servers of real-time information like distributed process systems, programmable logic controllers, smart field devices and analyzers in order to communicate the information that such servers contain to standard OLE/COM compliant technologies enabled devices (e.g., servers, applications, etc.).
The OPC Foundation will grant to you (the “User”), whether an individual or legal entity, a license to use, and provide User with a copy of, the current version of the OPC Materials so long as User abides by the terms contained in this Non-Exclusive License Agreement (“Agreement”). If User does not agree to the terms and conditions contained in this Agreement, the OPC Materials may not be used, and all copies (in all formats) of such materials in User’s possession must either be destroyed or returned to the OPC Foundation. By using the OPC Materials, User (including any employees and agents of User) agrees to be bound by the terms of this Agreement.
LICENSE GRANT:
Subject to the terms and conditions of this Agreement, the OPC Foundation hereby grants to User a non-exclusive, royalty-free, limited license to use, copy, display and distribute the OPC Materials in order to make, use, sell or otherwise distribute any products and/or product literature that are compliant with the standards included in the OPC Materials.
All copies of the OPC Materials made and/or distributed by User must include all copyright and other proprietary rights notices included on or in the copy of such materials provided to User by the OPC Foundation.
The OPC Foundation shall retain all right, title and interest (including, without limitation, the copyrights) in the OPC Materials, subject to the limited license granted to User under this Agreement.
WARRANTY AND LIABILITY DISCLAIMERS:
User acknowledges that the OPC Foundation has provided the OPC Materials for informational purposes only in order to help User understand Microsoft’s OLE/COM technology. THE OPC MATERIALS ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, WARRANTIES OF PERFORMANCE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. USER BEARS ALL RISK RELATING TO QUALITY, DESIGN, USE AND PERFORMANCE OF THE OPC MATERIALS. The OPC Foundation and its members do not warrant that the OPC Materials, their design or their use will meet User’s requirements, operate without interruption or be error free.
IN NO EVENT SHALL THE OPC FOUNDATION, ITS MEMBERS, OR ANY THIRD PARTY BE LIABLE FOR ANY COSTS, EXPENSES, LOSSES, DAMAGES (INCLUDING, BUT NOT LIMITED TO, DIRECT, INDIRECT, CONSEQUENTIAL, INCIDENTAL, SPECIAL OR PUNITIVE DAMAGES) OR INJURIES INCURRED BY USER OR ANY THIRD PARTY AS A RESULT OF THIS AGREEMENT OR ANY USE OF THE OPC MATERIALS.
OPC Common Definitions
GENERAL PROVISIONS:
This Agreement and User’s license to the OPC Materials shall be terminated (a) by User ceasing all use of the OPC Materials, (b) by User obtaining a superseding version of the OPC Materials, or (c) by the OPC Foundation, at its option, if User commits a material breach hereof. Upon any termination of this Agreement, User shall immediately cease all use of the OPC Materials, destroy all copies thereof then in its possession and take such other actions as the OPC Foundation may reasonably request to ensure that no copies of the OPC Materials licensed under this Agreement remain in its possession.
User shall not export or re-export the OPC Materials or any product produced directly by the use thereof to any person or destination that is not authorized to receive them under the export control laws and regulations of the United States.
The Software and Documentation are provided with Restricted Rights. Use, duplication or disclosure by the U.S. government is subject to restrictions as set forth in (a) this Agreement pursuant to DFARs 227.7202-3(a); (b) subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARs 252.227-7013; or (c) the Commercial Computer Software Restricted Rights clause at FAR 52.227-19 subdivision (c)(1) and (2), as applicable. Contractor/manufacturer is the OPC Foundation, P.O. Box 140524, Austin, Texas 78714-0524.
Should any provision of this Agreement be held to be void, invalid, unenforceable or illegal by a court, the validity and enforceability of the other provisions shall not be affected thereby.
This Agreement shall be governed by and construed under the laws of the State of Minnesota, excluding its choice of law rules.
This Agreement embodies the entire understanding between the parties with respect to, and supersedes any prior understanding or agreement (oral or written) relating to, the OPC Materials.
Table of Contents
1. INTRODUCTION
1.1 READERS GUIDE
2. OPC DESIGN FUNDAMENTALS
2.1 INTERFACE DEFINITIONS
2.1.1 Required Interface Definition
2.1.2 Optional Interface Definition
2.1.3 Which interface should the client application use.
2.2 UNICODE, NT AND WIN95
2.3 THREADS AND MULTITASKING
3. OPC COMMON INTERFACE ISSUES
3.1 COMMON INTERFACE ISSUES
3.1.1 Custom vs. Automation Interface
3.1.2 Required vs Optional Interface Definition
3.1.3 Ownership of memory
3.1.4 Null Strings and Null Pointers
3.1.5 Returned Arrays
3.1.6 Errors and return codes
4. SHUTDOWN OF OPC SERVERS
4.1 ICONNECTIONPOINTCONTAINER (ON OPCSERVER)
4.1.1 IConnectionPointContainer::EnumConnectionPoints
4.1.2 IConnectionPointContainer::FindConnectionPoint
4.2 IOPCSHUTDOWN
4.2.1 IOPCShutdown::ShutdownRequest
5. IOPCCOMMON
5.1.1 IOPCCommon::SetLocaleID
5.1.2 IOPCCommon::GetLocaleID
5.1.3 IOPCCommon::QueryAvailableLocaleIDs
5.1.4 IOPCCommon::GetErrorString
5.1.5 IOPCCommon::SetClientName
6. INSTALLATION AND REGISTRATION ISSUES
6.1 COMPONENT CATEGORIES
6.1.1 Component Categories Registration
6.2 REGISTRY ENTRIES FOR THE PROXY/STUB DLL
6.3 CREATING THE REGISTRY ENTRIES
6.4 VERSION CONVENTION
6.5 INSTALLING OPC BINARIES
7. OPC SERVER BROWSER
7.1 OVERVIEW
7.2 INFORMATION FOR USERS
7.3 INFORMATION FOR SERVER PROGRAMMERS
7.4 INFORMATION FOR CLIENT PROGRAMMERS
7.5 IOPCSERVERLIST REFERENCE
7.5.1 IOPCServerList::EnumClassesOfCategory
7.5.2 IOPCServerList::GetClassDetails
7.5.3 IOPCServerList::CLSIDFromProgID
8. APPENDIX A – OPC COMMON IDL SPECIFICATION
9. APPENDIX B – SAMPLE STRING FILTER FUNCTION
1. Introduction
1.1 Readers Guide
This document contains common rules and design criteria and the specification of interfaces which are common for several topics.
Specific interface specifications to develop OPC clients and/or OPC Servers (e.g., for DataAccess, Alarm&Event Handling or Historical DataAccess) are available as separate documents.
Chapter 1 is this Readers Guide.
Chapter 2 describes the fundamentals of the design and characteristics of OPC components.
Chapter 3 describes issues that are common to all OPC interfaces.
Chapter 4 specifies the shutdown capability of OPC Servers.
Chapter 5 specifies IOPCCommon, an interface that is also “common” to all types of OPC Servers.
Chapter 6 gives general information about OPC Server registration.
Chapter 7 specifies the interface for OPC Server Browsing.
Appendix A contains the IDL of the common interfaces.
Finally, Appendix B specifies a sample string filter function. It defines the minimum filtering required on various methods of the OPC Server Interfaces.
2. OPC Design Fundamentals
OPC is based on Microsoft’s OLE/COM technology.
2.1 Interface Definitions
OPC specifications always contain two sets of interfaces: Custom Interfaces and Automation Interfaces. This is shown in Figure 2-1.
An OPC client application communicates to an OPC server through the specified custom and automation interfaces. OPC servers must implement the custom interface, and optionally may implement the automation interface. In some cases the OPC Foundation provides a standard automation interface wrapper. This “wrapperDLL” can be used for any vendor-specific custom-server.
2.1.1 Required Interface Definition
OPC server developers must implement all functionality of required interfaces. An OPC client communicates to an OPC server by calling functions from the OPC required interfaces.
2.1.2 Optional Interface Definition
OPC server developers may implement the functionality of the optional interfaces.
An optional interface is one that the server developer may elect to implement. When an OPC Server supports an optional interface, all functions within that optional interface must be implemented, even if the function just returns E_NOTIMPL. An OPC client that wishes to use the functionality of an optional interface will query the OPC server for the optional interface. The client must be designed to not require that this optional interface exist.
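The query-and-degrade pattern described above can be sketched as follows. This is a minimal stand-alone sketch: `HRESULT`, the error codes, the `IOptionalBrowse` interface, and the `QueryOptional` helper are simplified stand-ins for the real COM/OPC definitions (normally obtained from `<windows.h>` and `QueryInterface`), so it compiles outside Windows.

```cpp
#include <cassert>

// Simplified stand-ins for the real COM definitions (normally from <windows.h>).
typedef int HRESULT;
const HRESULT S_OK          = 0;
const HRESULT E_NOTIMPL     = (HRESULT)0x80004001;
const HRESULT E_NOINTERFACE = (HRESULT)0x80004002;
inline bool Succeeded(HRESULT hr) { return hr >= 0; }

// Hypothetical optional interface.
struct IOptionalBrowse {
    virtual HRESULT BrowseItems() = 0;
};

// Hypothetical server. Per the spec, if the optional interface is supported,
// every function on it must exist, even if it only returns E_NOTIMPL.
struct Server : IOptionalBrowse {
    bool supportsBrowse;
    HRESULT QueryOptional(IOptionalBrowse** pp) {   // stands in for QueryInterface
        *pp = supportsBrowse ? this : nullptr;      // NULL out-pointer on failure
        return supportsBrowse ? S_OK : E_NOINTERFACE;
    }
    HRESULT BrowseItems() override { return S_OK; } // could legally be E_NOTIMPL
};

// Client side: query for the optional interface and do not require it to exist.
bool ClientUsesOptional(Server& srv) {
    IOptionalBrowse* opt = nullptr;
    if (!Succeeded(srv.QueryOptional(&opt)) || opt == nullptr)
        return false;                               // interface absent: fall back
    HRESULT hr = opt->BrowseItems();                // may still return E_NOTIMPL
    return Succeeded(hr) && hr != E_NOTIMPL;
}
```

The key point is that the client checks both the query result and each method result, since a supported optional interface may still stub individual functions with E_NOTIMPL.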
2.1.3 Which interface should the client application use.
In general, client programs which are created using scripting languages will use the automation interface. Client programs which are created in C++ will find it easiest to use the custom interface for maximum performance.
2.2 UNICODE, NT and WIN95
All string parameters to the OPC Interfaces are UNICODE, because the native OLE APIs are all UNICODE. Microsoft Visual Basic 4.0 and higher is UNICODE internally and, while it normally
converts strings to ANSI when calling a DLL, it will pass strings directly as UNICODE where a corresponding TYPELIB indicates this should be done (as it will for OPC).
At the time of this writing, MIDL 3.0 or later is required in order to correctly compile the IDL code and generate proxy/stub software. Microsoft Windows NT 4.0 (or later), or Windows 95 with DCOM support is required to properly handle the marshaling of OPC parameters.
Note that in order to implement OPC servers which will run on both Microsoft Windows NT and Microsoft Windows 95 it is necessary for these servers to test the platform at runtime. In the case of Microsoft Windows 95, conversion of any strings to be passed to Win32 from UNICODE to ANSI needs to be done.
2.3 Threads and Multitasking
This specification does NOT require any particular threading model for the server.
The topic of multiple threads and their relationship to OLE is important. While these issues are also difficult to summarize, the performance gains for a medium to large scale server are worth the investment.
For OPC Servers
For servers, the default handling of threads by OLE is very simplistic. OLE will use one thread per local or remote server to handle all requests for all clients. An alternate approach is referred to as ‘Apartment Model Threading’, where all OLE calls into an OLE server are guaranteed to be serialized. The apartment model simplifies the issues surrounding multiple client access.
An advantage to this single threaded approach is that it simplifies implementation of servers with respect to reentrancy issues. Since all method calls are serialized automatically by the message loop, methods are never reentered or interrupted by other methods. Another advantage is that it ensures (as required by COM) that all access to an object is done by the thread that created the object.
The major disadvantage of this single threaded approach is that all method calls must run to completion without significant delay. Any delay by a call prevents execution of the message loop and dispatch of additional requests, thus blocking all clients of the server. This means that a data read or write will need to be buffered so as not to seriously compromise speed. In particular, this means that physical communications (unless they are very fast) should be handled by a separate thread within the server (clearly logic related to data handling by this thread would need to be thread safe). This in turn makes write verification and error handling for writes more difficult. These issues are reflected in the design of the interfaces, particularly in the areas of ‘allowed behavior’. It will be noted later that the design allows for optional Read and Write modes where the data is read or written directly to the device.
For OPC Clients
It is currently a requirement of COM that an object be accessed only by the thread that created it. This applies both to the actual objects in the server and to any ‘proxy’ objects represented by a marshaling stub or handler. Note that there are ways to partially relax this constraint (e.g. through the use of CoMarshallInterThreadInterfaceInStream()) however, this simply routes all method calls back through the thread that created the object and this involves considerable overhead. In addition, no matter how many threads attempt to access the objects in parallel, they will all be gated by the operation of the dispatch loop in the thread owning the object which will tend to negate any performance improvement.
Note the general OLE rule that code within asynchronous OLE methods (e.g. OnDataChange) cannot make synchronous or asynchronous OLE calls.
3. OPC Common Interface Issues
3.1 Common Interface Issues
This section describes issues which are common to all interfaces, and some background information about how the designers of OPC expected these interfaces to be implemented and used.
3.1.1 Custom vs. Automation Interface
OPC specifications always contain two sets of interfaces: Custom Interfaces and Automation Interfaces. It has been found that it is not possible to define a single (dual-automation) interface which is both highly efficient and provides the look-and-feel of typical automation servers, like Excel.
In general, client programs which are created using scripting languages, like Visual Basic (or VBA) will use the automation interface. Client programs which are created in C++ will find it easiest to use the custom interface for maximum performance.
OPC servers must implement the custom interface, and optionally may implement the automation interface. The OPC Foundation provides a standard automation interface wrapper. This “wrapperDLL” can be used for any vendor-specific custom-server.
3.1.2 Required vs Optional Interface Definition
OPC server developers must implement all functionality of required interfaces. An OPC client communicates to an OPC server by calling functions from the OPC required interfaces.
OPC server developers may implement the functionality of the optional interfaces. An optional interface is one that the server developer may elect to implement. When an OPC Server supports an optional interface, all functions within that optional interface must be implemented, even if the function just returns E_NOTIMPL. An OPC client that wishes to use the functionality of an optional interface will query the OPC server for the optional interface. The client must be designed to not require that this optional interface exist.
3.1.3 Ownership of memory
Per the COM specification, clients must free all memory associated with ‘out’ or ‘in/out’ parameters. This includes memory that is pointed to by elements within any structures. This is very important for client writers to understand, otherwise they will experience memory leaks that are difficult to find. See the IDL files to determine which parameters are out parameters. The recommended approach is for a client to create a subroutine that is used for freeing each type of structure properly.
Independent of success/failure, the server must always return well defined values for ‘out’ parameters. Releasing the allocated resources is the client’s responsibility.
Note: If the error result is any FAILED error such as E_OUTOFMEMORY, the OPC server should return NULL for all ‘out’ pointers (this is standard COM behavior). This rule also applies to the error arrays (ppErrors) returned by many of the functions below. In general, a robust OPC client should check each out or in/out pointer for NULL prior to freeing it.
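The ownership rule can be illustrated with a small sketch. Here plain `malloc`/`free` stand in for the COM task allocator (`CoTaskMemAlloc`/`CoTaskMemFree`) so the sketch runs anywhere, and `GetName` is a hypothetical server method, not part of any OPC interface.

```cpp
#include <cassert>
#include <cstdlib>
#include <cwchar>

typedef int HRESULT;
const HRESULT S_OK          = 0;
const HRESULT E_OUTOFMEMORY = (HRESULT)0x8007000E;
inline bool Succeeded(HRESULT hr) { return hr >= 0; }

// Hypothetical server method: allocates the 'out' string. On failure it must
// still leave the out pointer with a well-defined value (NULL).
HRESULT GetName(bool fail, wchar_t** ppName) {
    *ppName = nullptr;                       // well-defined even on failure
    if (fail) return E_OUTOFMEMORY;
    const wchar_t src[] = L"OPCServer";
    *ppName = (wchar_t*)std::malloc(sizeof src);   // CoTaskMemAlloc in real code
    std::wmemcpy(*ppName, src, sizeof src / sizeof(wchar_t));
    return S_OK;
}

// Client side: the client owns (and must free) every out parameter,
// checking for NULL first, as the text recommends.
void FreeName(wchar_t* p) { if (p) std::free(p); }  // CoTaskMemFree in real code
```

A client would typically wrap such freeing logic in one subroutine per structure type, as recommended above.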
3.1.4 Null Strings and Null Pointers
Both of these terms are used. They are NOT the same thing. A NULL Pointer is an invalid pointer (0) which will cause an exception if used. A NUL String is a valid (non zero) pointer to a 1 character array where that character is a NUL (i.e. 0). If a NUL string is returned from a method as an [out] parameter (or as an element of a structure) it must be freed, otherwise the memory containing the
NUL will be lost. Also note that a NULL pointer cannot be passed for an [in,string] argument due to COM marshalling restrictions. In this case a pointer to a NUL string should be passed to indicate an omitted parameter.
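The distinction can be made concrete with a short sketch (again with `malloc` standing in for the COM task allocator):

```cpp
#include <cassert>
#include <cstdlib>

// A NUL string is a valid allocation holding exactly one L'\0' character;
// a NULL pointer is simply 0 and must never be dereferenced.
wchar_t* MakeNulString() {
    wchar_t* s = (wchar_t*)std::malloc(sizeof(wchar_t));  // CoTaskMemAlloc in real code
    s[0] = L'\0';
    return s;
}
```

Because the NUL string is a real allocation, a client receiving one as an [out] parameter must free it like any other returned string.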
3.1.5 Returned Arrays
You will note the syntax `size_is(dwCount)` in the IDL of several interfaces used in combination with pointers to pointers. This indicates that the returned item is a pointer to an actual array of the indicated type, rather than a pointer to an array of pointers to items of the indicated type. This simplifies marshaling, creation, and access of the data by the server and client.
3.1.6 Errors and return codes
The OPC specifications describe interfaces and corresponding behavior that an OPC server implements, and an OPC client application depends on. A list of errors and return codes is contained in each specification. For each method described a list of all possible OPC error codes as well as the most common OLE error codes is included. It is likely that clients will encounter additional error codes such as RPC and Security related codes in practice and they should be prepared to deal with them.
In all cases ‘E’ error codes will indicate FAILED type errors and ‘S’ error codes will indicate at least partial success.
4. Shutdown of OPC Servers
The shutdown capability allows an OPC Server to request that all clients disconnect from the server. It is provided for all types of OPC Servers (DataAccess, Alarm&Event, ...).
The functionality is available via a Connection point on the Server object and a corresponding Client side IOPCShutdown interface. Clients should make use of this feature to support graceful shutdown.
4.1 IConnectionPointContainer (on OPCServer)
This interface provides access to the connection point for IOPCShutdown.
The general principles of ConnectionPoints are not discussed here as they are covered very clearly in the Microsoft Documentation. The reader is assumed to be familiar with this technology.
Likewise the details of the IEnumConnectionPoints, IConnectionPoint and IEnumConnections interfaces are well defined by Microsoft and are not discussed here.
Note: OPC Compliant servers are not required to support more than one connection between each Server and the Client. Given that servers are client specific entities it is expected that a single connection will be sufficient for virtually all applications. For this reason (as per the COM Specification) the EnumConnections method for IConnectionPoint interface for the IOPCShutdown is allowed to return E_NOTIMPL.
4.1.1 IConnectionPointContainer::EnumConnectionPoints
HRESULT EnumConnectionPoints(
IEnumConnectionPoints **ppEnum
);
Description
Create an enumerator for the Connection Points supported between the OPC Server and the Client.
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>ppEnum</td>
<td>Where to save the pointer to the connection point enumerator. See the Microsoft documentation for a discussion of IEnumConnectionPoints.</td>
</tr>
</tbody>
</table>
HRESULT Return Codes
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>S_OK</td>
<td>The function was successful.</td>
</tr>
<tr>
<td></td>
<td>For other codes see the OLE programmers reference</td>
</tr>
</tbody>
</table>
Comments
OPCServers must return an enumerator that includes IOPCShutdown. Additional vendor specific callbacks are also allowed.
4.1.2 IConnectionPointContainer::FindConnectionPoint
HRESULT FindConnectionPoint(
REFIID riid,
IConnectionPoint **ppCP
);
Description
Find a particular connection point between the OPC Server and the Client.
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>ppCP</td>
<td>Where to store the Connection Point. See the Microsoft documentation for a discussion of IConnectionPoint.</td>
</tr>
<tr>
<td>riid</td>
<td>The IID of the Connection Point. (e.g. IID_IOPCShutdown)</td>
</tr>
</tbody>
</table>
HRESULT Return Codes
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>S_OK</td>
<td>The function was successful.</td>
</tr>
<tr>
<td></td>
<td>For other codes see the OLE programmers reference</td>
</tr>
</tbody>
</table>
Comments
OPCServers must support IID_IOPCShutdown. Additional vendor specific callbacks are also allowed.
4.2 IOPCShutdown
In order to use this connection point, the client must create an object that supports both the IUnknown and IOPCShutdown interfaces. The client would pass a pointer to the IUnknown interface (NOT the IOPCShutdown) to the Advise method of the proper IConnectionPoint in the server (as obtained from IConnectionPointContainer::FindConnectionPoint or EnumConnectionPoints). The Server will call QueryInterface on the client object to obtain the IOPCShutdown interface. Note that the transaction
must be performed in this way in order for the interface marshalling to work properly for Local or Remote servers.
The ShutdownRequest method on this interface will be called when the server needs to shut down. The client should release all connections and interfaces for this server.
A client which is connected to multiple OPC Servers (for example Data Access and/or other servers such as Alarms and events servers from one or more vendors) should maintain separate shutdown callbacks for each object since any server can shut down independently of the others.
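The sink wiring can be sketched as follows. This sketch deliberately collapses the real ceremony (advising an IUnknown pointer which the server QueryInterfaces for IOPCShutdown) into a direct interface pointer, and the `ClientSink`/`ShutdownConnectionPoint` names are illustrative, not from the specification:

```cpp
#include <cassert>

// Simplified shutdown callback interface (stands in for IOPCShutdown).
struct IShutdownSink {
    virtual void ShutdownRequest(const wchar_t* szReason) = 0;
    virtual ~IShutdownSink() {}
};

// Hypothetical client sink: on a shutdown request it releases everything it
// holds on this server (UnAdvise connections, remove groups, release interfaces).
struct ClientSink : IShutdownSink {
    bool disconnected = false;
    void ShutdownRequest(const wchar_t* /*szReason*/) override {
        disconnected = true;   // stands in for the real cleanup work
    }
};

// Hypothetical server-side connection point for the shutdown callback.
struct ShutdownConnectionPoint {
    IShutdownSink* sink = nullptr;
    void Advise(IShutdownSink* s) { sink = s; }
    void Unadvise() { sink = nullptr; }
    void FireShutdown(const wchar_t* reason) {
        if (sink) sink->ShutdownRequest(reason);
    }
};
```

A client talking to several server objects would hold one such sink per object, since each can request shutdown independently.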
4.2.1 IOPCShutdown::ShutdownRequest
HRESULT ShutdownRequest ( [in] LPWSTR szReason );
Description
This method is provided by the client so that the server can request that the client disconnect from the server. The client should UnAdvise all connections, Remove all groups and release all interfaces.
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>szReason</td>
<td>An optional text string provided by the server indicating the reason for the shutdown. The server may pass a pointer to a NUL string if no reason is provided.</td>
</tr>
</tbody>
</table>
HRESULT Return Codes
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>S_OK</td>
<td>The client must always return S_OK.</td>
</tr>
</tbody>
</table>
Comments
The shutdown connection point is on a ‘per COM object’ basis. That is, it relates to the object created by CoCreate… If a client connects to multiple COM objects then it should monitor each one separately for shutdown requests.
5. IOPCCommon
This interface is used by all OPC Server types (DataAccess, Alarm&Event, Historical Data). It provides the ability to set and query a LocaleID which would be in effect for the particular client/server session. That is, the actions of one client do not affect any other clients.
As with other interfaces such as IUnknown, the instance of this interface for each server is unique. That is, an OPC Data Access server object and an OPC Alarms and Events server object might both provide an implementation of IOPCCommon. A client which is maintaining connections to both servers would, as with any other interface, use the interfaces on these two objects independently.
5.1.1 IOPCCommon::SetLocaleID
HRESULT SetLocaleID(
[in] LCID dwLcid
);
Description
Set the default LocaleID for this server/client session. This LocaleID will be used by the GetErrorString method on this interface. It should also be used as the ‘default’ LocaleID by any other server functions that are affected by the LocaleID. Other OPC interfaces may provide additional LocaleID capability by allowing this LocaleID to be overridden either via a parameter to a method or via a property on a child object.
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>dwLcid</td>
<td>The default LocaleID for this server/client session</td>
</tr>
</tbody>
</table>
Return Codes
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>E_FAIL</td>
<td>The operation failed.</td>
</tr>
<tr>
<td>E_INVALIDARG</td>
<td>An argument to the function was invalid. (For example, the LocaleID specified is not valid.)</td>
</tr>
<tr>
<td>S_OK</td>
<td>The operation succeeded.</td>
</tr>
</tbody>
</table>
Comments
The default value for the server should be LOCALE_SYSTEM_DEFAULT.
5.1.2 IOPCCommon::GetLocaleID
HRESULT GetLocaleID(
[out] LCID *pdwLcid
);
Description
Return the default LocaleID for this server/client session.
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>pdwLcid</td>
<td>Where to return the default LocaleID for this server/client session</td>
</tr>
</tbody>
</table>
Return Codes
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>E_FAIL</td>
<td>The operation failed.</td>
</tr>
<tr>
<td>E_INVALIDARG</td>
<td>An argument to the function was invalid. (For example, the passed pointer is not valid.)</td>
</tr>
<tr>
<td>S_OK</td>
<td>The operation succeeded.</td>
</tr>
</tbody>
</table>
Comments
5.1.3 IOPCCommon::QueryAvailableLocaleIDs
HRESULT QueryAvailableLocaleIDs(
[out] DWORD *pdwCount,
[out, size_is(,*pdwCount)] LCID **pdwLcid
);
Description
Return the available LocaleIDs for this server/client session.
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>pdwCount</td>
<td>Where to return the LocaleID count</td>
</tr>
<tr>
<td>pdwLcid</td>
<td>Where to return the LocaleID list.</td>
</tr>
</tbody>
</table>
Return Codes
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>E_FAIL</td>
<td>The operation failed.</td>
</tr>
<tr>
<td>E_INVALIDARG</td>
<td>An argument to the function was invalid. (For example, the passed pointer is not valid.)</td>
</tr>
<tr>
<td>S_OK</td>
<td>The operation succeeded.</td>
</tr>
</tbody>
</table>
Comments
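Per the Returned Arrays convention in section 3.1.5, the server returns one flat allocation of LCIDs (not an array of pointers), and per section 3.1.3 the client must free it. A hedged sketch, with `malloc` standing in for `CoTaskMemAlloc` and hypothetical locale values:

```cpp
#include <cassert>
#include <cstdlib>

typedef int HRESULT;
typedef unsigned long DWORD;
typedef unsigned long LCID;
const HRESULT S_OK = 0;

// Sketch of a server's QueryAvailableLocaleIDs: it allocates a single flat
// array of LCIDs sized by *pdwCount (the size_is convention).
HRESULT QueryAvailableLocaleIDs(DWORD* pdwCount, LCID** ppdwLcid) {
    static const LCID avail[] = { 0x0409, 0x0407 };   // en-US, de-DE (illustrative)
    *pdwCount = 2;
    *ppdwLcid = (LCID*)std::malloc(sizeof avail);     // CoTaskMemAlloc in real code
    (*ppdwLcid)[0] = avail[0];
    (*ppdwLcid)[1] = avail[1];
    return S_OK;
}
```

The client frees the one returned block; there are no per-element pointers to release.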
5.1.4 IOPCCommon::GetErrorString
HRESULT GetErrorString(
[in] HRESULT dwError,
[out, string] LPWSTR *ppString
);
Description
Returns the error string for a server specific error code.
Parameters
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>dwError</td>
<td>A server specific error code that the client application had returned from an interface function from the server, and for which the client application is requesting the server’s textual representation.</td>
</tr>
<tr>
<td>ppString</td>
<td>Pointer to pointer where server supplied result will be saved</td>
</tr>
</tbody>
</table>
Return Codes
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>E_FAIL</td>
<td>The operation failed.</td>
</tr>
<tr>
<td>E_OUTOFMEMORY</td>
<td>Not enough memory</td>
</tr>
<tr>
<td>E_INVALIDARG</td>
<td>An argument to the function was invalid. (For example, the error code specified is not valid.)</td>
</tr>
<tr>
<td>S_OK</td>
<td>The operation succeeded.</td>
</tr>
</tbody>
</table>
Comments
The expected behavior is that this will include handling of Win32 errors as well (such as RPC errors). Client must free the returned string.
It is recommended that the server put any OPC specific strings into an external resource to simplify translation.
Note that if this method is being called via DCOM then it is very possible that RPC or other network related errors will be returned. For this reason it is probably good practice for the client to attempt to call a local Win32 function such as FormatMessage if this function fails.
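The recommended server-first, local-fallback pattern can be sketched like this. The server method and error values here are hypothetical, and the local fallback merely stands in for the Win32 `FormatMessage` call mentioned above:

```cpp
#include <cassert>
#include <string>

typedef int HRESULT;
const HRESULT S_OK         = 0;
const HRESULT E_INVALIDARG = (HRESULT)0x80070057;
inline bool Succeeded(HRESULT hr) { return hr >= 0; }

// Hypothetical server: it can only describe its own vendor-specific code.
HRESULT ServerGetErrorString(HRESULT dwError, std::wstring* out) {
    if (dwError == (HRESULT)0xC0040001) {            // illustrative vendor code
        *out = L"Vendor specific failure";
        return S_OK;
    }
    return E_INVALIDARG;
}

// Client pattern: ask the server first; if that call fails (e.g. the DCOM call
// itself died with an RPC error), fall back to a local description.
std::wstring DescribeError(HRESULT dwError) {
    std::wstring s;
    if (Succeeded(ServerGetErrorString(dwError, &s)))
        return s;                                    // client must free in real COM
    return L"<local FormatMessage fallback>";        // stand-in for the Win32 call
}
```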
5.1.5 IOPCCommon::SetClientName
HRESULT SetClientName(
[in, string] LPCWSTR szName
);
Description
Allows the client to optionally register a client name with the server. This is included primarily for debugging purposes. The recommended behavior is that the client set its node name and EXE name here.
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>szName</td>
<td>An arbitrary string containing information about the client task.</td>
</tr>
</tbody>
</table>
Return Codes
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>E_FAIL</td>
<td>The operation failed.</td>
</tr>
<tr>
<td>E_INVALIDARG</td>
<td>An argument to the function was invalid. (For example, the pointer specified is not valid.)</td>
</tr>
<tr>
<td>S_OK</td>
<td>The operation succeeded.</td>
</tr>
</tbody>
</table>
**Comments**
6. Installation and Registration Issues
This section describes all installation issues which are common to all OPC Servers (no matter which interfaces they implement). Specific installation and registration issues will be described in the interface-specific documents.
It is assumed that the server vendor will provide a SETUP.EXE to install the needed components for their server. This will not be discussed further. Other than the actual components, the main issue affecting OLE software is management of the Windows Registry and Component Categories. The issues here are (a) what entries need to be made and (b) how they can be made.
6.1 Component Categories
With the possibly huge amount of available components on a single computer system, their management becomes increasingly difficult. OPC Clients often need to enumerate the OPC Servers that they want to use in a certain context. In its first version, OPC specified a sub-key called OPC to tag the OPC Server entries in the registry. Clients have to browse for this subkey. This method is inefficient as it requires browsing all CLSID entries in the registry. Name collisions may occur. And finally, access to remote registries will be restricted in NT5.0.
For all server specifications past DataAccess 1.0A, OPC uses Component Categories as a way to categorize OPC Servers by their implemented functionality. Clients can use the new interface IOPCServerList to obtain a list of servers with the required functionality. See the following chapter for the specification of this interface.
OPC defines “implemented categories” for each version of each OPC Interface specification. Each category is identified by a globally unique identifier (GUID), the CATID. CATIDs are specified in the registry section of each specification. It is expected that a server will first create any category it uses and then will register for that category. Unregistering a server should cause it to be removed from that category. See the ICatRegister documentation for additional information.
A single server may belong to more than one category. I.e., it may support DataAccess Versions 1.0A and 2.0 and in addition Alarm & Event Handling.
6.1.1 Component Categories Registration
During the registration process, each OPC Server must register itself with the Component Categories Manager, a Microsoft supplied system COM object. OPC Clients will query the Component Categories Manager to enumerate the CLSIDs of all registered OPC Servers.
6.1.1.1 Server Registration
To Register with the Component Categories Manager, a server should first register the OPC defined Category ID (CATID) and the OPC defined Category Description by calling ICatRegister::RegisterCategories(), and then register its own CLSID as an implementation of the CATID with a call to ICatRegister::RegisterClassImplCategories().
To get an interface pointer to ICatRegister, call CoCreateInstance() as in this example from the Alarm & Events Sample Server:
```c
#include <comcat.h>
CoCreateInstance(CLSID_StdComponentCategoriesMgr, NULL, CLSCTX_INPROC_SERVER, IID_ICatRegister, (void**)&pcr);
```
The OPC Alarm & Events Sample Server code uses helper functions defined in CATHELP.CPP to make the actual calls to ICatRegister. Here is how the sample server registers and un-registers the component categories:
```cpp
#include "cathelp.h"
#include "opc_ae.h"
#include "opcaedef.h"
void RegisterServer()
{
// register component categories
HRESULT hr;
// IID_OPCEventServerCATID is the Category ID (a GUID) defined in opc_ae.idl.
// OPC_EVENTSERVER_CAT_DESC is the category description defined in opcaedef.h
// All servers should register the category this way
hr = CreateComponentCategory( IID_OPCEventServerCATID, OPC_EVENTSERVER_CAT_DESC);
// CLSID_OPCEventServer is the CLSID for this sample server. Each server
// will need to register its own unique CLSID here with the component manager.
hr = RegisterCLSIDInCategory( CLSID_OPCEventServer, IID_OPCEventServerCATID );
}
void UnregisterServer()
{
UnRegisterCLSIDInCategory( CLSID_OPCEventServer, IID_OPCEventServerCATID );
}
```
6.1.1.2 Client Enumeration
Clients will use the Interface IOPCServerList to obtain a list of servers either locally or on a remote host. This interface basically provides the functionality of the Component Categories Manager. It has been defined by OPC, because access to the Component Categories Manager does not work for remote machines.
See the following chapter for the specification of IOPCServerList.
6.2 Registry Entries for the Proxy/Stub DLL
The proxy/stub DLLs are used for marshalling interfaces to LOCAL or REMOTE servers. It is generated directly from the IDL code and should be the same for every OPC Server. In general the Proxy/Stub will use self registration. (Define REGISTER_PROXY_DLL during the build). Since this is completely automatic and transparent it is not discussed further. Also note that a prebuilt and tested proxy/stub DLL will be provided at the OPC Foundation Web site making it unnecessary for vendors to rebuild this DLL.
Although vendors are allowed to add their own interfaces to OPC objects (as with any COM object) they should NEVER modify the standard OPC IDL files or Proxy/Stub DLLs to include such interfaces. Such interfaces should ALWAYS be defined in a separate vendor specific IDL file and should be marshalled by a separate vendor specific Proxy/Stub DLL.
6.3 Creating the Registry Entries
COM defines a “self-registration” mechanism that enables you to encapsulate registry needs into a DLL or EXE, providing clients and servers an easy way to make sure that any given module is fully and accurately registered. In addition, COM also includes “unregistration” so that a server can remove all of its registry entries when the DLL or EXE is removed from the file system, thereby keeping the registry clean from useless entries.
OPC Common Definitions
When asked to self-register, a server must create all entries for every component that it supports, including any entries for type libraries. When asked to “un-register” the server must remove those entries that it created in its self-registration.
For a DLL server, these requests are made through calls to the exported functions DllRegisterServer and DllUnregisterServer, which must exist in the DLL under these exact names. Both functions take no arguments and return an HRESULT to indicate the result. The two applicable error codes are SELFREG_E_CLASS (failure to register/unregister CLSID information) and SELFREG_E_TYPELIB (failure to register/unregister TypeLib information).
If the server is packaged in an EXE module, then the application wishing to register the server launches the EXE server with the command-line argument /RegServer or -RegServer (case-insensitive). If the application wishes to unregister the server, it launches the EXE with the command-line argument /UnregServer or -UnregServer. The self-registering EXE detects these command-line arguments and invokes the same operations as a DLL would within DllRegisterServer and DllUnregisterServer, respectively, registering its module path under LocalServer32 instead of InprocServer32 or InprocHandler32.
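The switch handling described above can be sketched in isolation. The helper below is hypothetical (not part of the specification); it only classifies the registration switch, accepting either the `/` or `-` prefix case-insensitively as COM requires:

```cpp
#include <cctype>

// Hypothetical helper: classify the self-registration command-line
// switch an EXE server was launched with. Both "/RegServer" and
// "-RegServer" (any casing) must be accepted; likewise for UnregServer.
enum RegAction { REG_NONE, REG_REGISTER, REG_UNREGISTER };

static bool EqualsIgnoreCase(const char* a, const char* b)
{
    while (*a && *b) {
        if (std::tolower((unsigned char)*a) != std::tolower((unsigned char)*b))
            return false;
        ++a; ++b;
    }
    return *a == *b;   // both strings must end together
}

RegAction ParseRegSwitch(int argc, const char* argv[])
{
    for (int i = 1; i < argc; ++i) {
        const char* arg = argv[i];
        if (arg[0] == '/' || arg[0] == '-') {
            if (EqualsIgnoreCase(arg + 1, "RegServer"))
                return REG_REGISTER;
            if (EqualsIgnoreCase(arg + 1, "UnregServer"))
                return REG_UNREGISTER;
        }
    }
    return REG_NONE;   // normal launch: run the server
}
```

A real server would, on `REG_REGISTER`/`REG_UNREGISTER`, perform the same work as `DllRegisterServer`/`DllUnregisterServer` and then exit without showing a UI.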
The server must register the full path to the installation location of the DLL or EXE module for their respective InprocServer32, InprocHandler32, and LocalServer32 keys in the registry. The module path is easily obtained through the Win32 API function GetModuleFileName.
**NOTE:** The server should NOT register the proxy/stub interfaces. They should be registered by the proxy/stub DLL as discussed earlier.
The registry entries for proxy interfaces can be easily generated when compiling the proxy dll. Simply define the constant REGISTER_PROXY_DLL during compilation, and export DllRegisterServer and DllUnregisterServer during the link. One can now populate the registry by executing regsvr32 and passing the proxy dll name as an argument.
The following are the Microsoft COM required registry entries for a local server (EXE) shown in Registry File (.reg) format:
```regedit
HKEY_CLASSES_ROOT\MyVendor.ServerName.1 = My OPC Server Description
HKEY_CLASSES_ROOT\MyVendor.ServerName.1\CLSID = {Your Server’s unique CLSID}
HKEY_CLASSES_ROOT\CLSID\{Your Server’s unique CLSID} = My OPC Server Description
HKEY_CLASSES_ROOT\CLSID\{Your Server’s unique CLSID}\ProgID = MyVendor.ServerName.1
HKEY_CLASSES_ROOT\CLSID\{Your Server’s unique CLSID}\LocalServer32 = c:\FULLPATH\MyOPCServer.exe
```
The following are the Microsoft COM required registry entries for an Inproc server (DLL) shown in Registry File (.reg) format:
```regedit
HKEY_CLASSES_ROOT\MyVendor.ServerName.1 = My OPC Server Description
HKEY_CLASSES_ROOT\MyVendor.ServerName.1\CLSID = {Your Server’s unique CLSID}
HKEY_CLASSES_ROOT\CLSID\{Your Server’s unique CLSID} = My OPC Server Description
HKEY_CLASSES_ROOT\CLSID\{Your Server’s unique CLSID}\ProgID = MyVendor.ServerName.1
HKEY_CLASSES_ROOT\CLSID\{Your Server’s unique CLSID}\InprocServer32 = c:\FULLPATH\MyOPCServer.dll
```
The following are the OPC required registry entries for all Data Access 1.0 servers shown in Registry File (.reg) format. **Only servers that support the Data Access 1.0 interface should make these entries:**
```regedit
```
1. SELFREG_E_CLASS and SELFREG_E_TYPELIB are defined in the OLE Control’s header OLECTL.H.
6.4 Version Convention
All OPC provided runtime files (DLLs and EXEs) will contain version information embedded in the file’s resource. By convention, the version number will use the following format:
**MM.mm.bb**
Where:
- MM == Major Version
- mm == Minor Version
- bb == Build Number
The version resource provides two version numbers, one for file and one for product. The same version number will be used for both fields. In the resource, the version numbers are represented by four comma delimited integers. To represent our three-part version number, the third integer will always be zero. For example, if the version is 5.2.41 then the version resource (in the source .RC file) will look like this:
```plaintext
VS_VERSION_INFO VERSIONINFO
 FILEVERSION 5,2,0,41
 PRODUCTVERSION 5,2,0,41
 FILEFLAGSMASK 0x3FL
#ifdef _DEBUG
 FILEFLAGS 0x1L
#else
 FILEFLAGS 0x0L
#endif
 FILEOS 0x40004L
 FILETYPE 0x2L
 FILESUBTYPE 0x0L
BEGIN
    BLOCK "StringFileInfo"
    BEGIN
        BLOCK "040904b0"
        BEGIN
            VALUE "CompanyName", "OPC Foundation "
            VALUE "FileDescription", "OPC Alarm and Event Server Proxy/Stub"
            VALUE "FileVersion", "5.2.41"
            VALUE "InternalName", "opc_aeps"
            VALUE "LegalCopyright", "Copyright © 1997 OPC Foundation"
            VALUE "OriginalFilename", "opc_aeps.dll"
            VALUE "ProductName", "OPC Alarm and Event Server Proxy/Stub"
            VALUE "ProductVersion", "5.2.41"
        END
    END
    BLOCK "VarFileInfo"
    BEGIN
        VALUE "Translation", 0x409, 1200
    END
END
```
The version information will be used to ensure that during installation, an older version of a file will not overwrite a newer version.
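The "never overwrite a newer file" rule amounts to a lexicographic comparison of the four-integer version quads. The helper below is a hypothetical portable sketch (not part of the specification, which uses `GetFileVersionInfo` on Windows):

```cpp
#include <array>

// Hypothetical helper: compare two FILEVERSION quads (MM, mm, 0, bb).
// An installer should copy its file only when NewerThan(candidate, existing).
using VersionQuad = std::array<unsigned, 4>;

bool NewerThan(const VersionQuad& a, const VersionQuad& b)
{
    for (int i = 0; i < 4; ++i) {
        if (a[i] != b[i])
            return a[i] > b[i];   // first differing field decides
    }
    return false;                 // identical versions: do not overwrite
}
```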
6.5 Installing OPC Binaries
All OPC vendors will need to install the appropriate OPC Foundation provided components (proxy/stub DLLs, Automation wrappers etc.) to work with their components.
Since multiple vendors will be installing identical OPC Foundation components, it is imperative that all vendors follow these installation instructions exactly without deviation:
All OPC Foundation binaries must be installed and registered in the Windows Systems directory. This is the directory returned by the WIN32 function GetSystemDirectory. If a given file already exists in this directory, the program should overwrite it with your application file only if your file is a more recent version. The GetFileTime, GetFileVersionInfo, and GetFileInformationByHandle functions can be used to determine which file is more recent.
All OPC Foundation binaries must be installed/uninstalled with reference counting. After copying a file, your installation program must make sure to increment the usage counter for that file in the registry. When removing an application, it should decrement the use counter. If the result is zero, the user should be given the option of unregistering and deleting the file. The user should be warned that other applications may actually use this file and will not work if it is missing. The registry key used for reference counting of all files is:
```
\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SharedDLLs
```
The following example shows a reference count of 5 for OPCPROXY.DLL and a reference count of 3 for OPCENUM.EXE:
```
\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SharedDLLs
C:\WINNT\System32\OPCPROXY.DLL=5
C:\WINNT\System32\OPCENUM.EXE=3
```
Most installation utilities like InstallShield handle the installation of shared, version checked files easily.
7. OPC Server Browser
7.1 Overview
The OPC Foundation supplied Server Browser OPCENUM.EXE can reside on any machine, will access the local Component Categories Manager and provides a new interface IOPCServerList which can be marshaled and used by remote clients. This server has a published classid (see below) and can be installed once on any machine which hosts OPC servers. The client still needs to know the nodename of the target machine; however, it can now create this object remotely and use its IOPCServerList interface to determine what types and brands of servers are available on that machine.
7.2 Information for Users
The OPC Server Browser (OPCENUM.EXE) and the required proxy/stub (OPCCOMN_PS.DLL) can be obtained from the OPC Foundation Web Site. The EXE and DLL should be copied to the main WINDOWS directory (see the section “Installing OPC Binaries”, above).
The EXE is installed by running
```
OPCENUM /RegServer
```
or
```
OPCENUM /Service
```
to install the server as a service on Windows NT.
The DLL is installed by running
```
REGSVR32 OPCCOMN_ps.dll
```
No further user action is required. Doing the steps above will allow Client programs you have purchased which support this server browser capability to function properly. Note that the OPC Server Browser is designed to allow access by any user regardless of the DCOM security setup.
7.3 Information for Server Programmers
Note that the OPC Foundation provides the OPC Browser Object. OPC Servers should NOT implement this interface. OPC Servers should simply register themselves with the appropriate component category as described on the appropriate OPC Specification.
7.4 Information for Client Programmers
Client programmers should create the OPC Server Browser Object on the target machine by passing its class id (CLSID_OPCServerList as defined in opc_cats.c) to CoCreateInstanceEx. They should obtain the OPCServerList interface (IID_IOPCServerList as defined in opccomn_i.c). They can then use this interface to obtain lists of the available servers for particular component categories. The OPC Component categories for the various OPC Server types are defined in opc_cats.c. The marshalling for this interface is included in the OPCComn_ps.dll.
7.5 **IOPCServerList Reference**
The interface is designed to be as simple as possible to use. It is similar to the standard ICatInformation but has been simplified and also modified so that it can work remotely. It provides just the minimum functionality required for this particular application. It provides the methods which are described in more detail later.
7.5.1 **IOPCServerList::EnumClassesOfCategories**
```c
HRESULT EnumClassesOfCategories(
[in] ULONG cImplemented,
[in, size_is(cImplemented)] CATID rgcatidImpl[],
[in] ULONG cRequired,
[in, size_is(cRequired)] CATID rgcatidReq[],
[out] IEnumGUID** ppenumClsid);
```
**Description**
Returns a standard EnumCLSID containing the CLSIDs of the servers that implement any of the listed categories on the target machine. This method is similar to the method of the same name provided in ICatInformation, except that the caller should use a value of 0 instead of –1 for the cImplemented and cRequired arguments to include classes regardless of which categories they implement or require (respectively).
Note that the easiest way to use this method is to pass in a single CATID (such as an OPC Data Access 2.0 Server) and to pass a 0 for Required IDs. This will give you an enumeration of the CLSIDs of the servers that implement the specified category.
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>cImplemented</td>
<td>The number of category IDs in the <code>rgcatidImpl</code> array. Pass 0 (see description, above).</td>
</tr>
<tr>
<td>rgcatidImpl</td>
<td>An array of category identifiers.</td>
</tr>
<tr>
<td>cRequired</td>
<td>The number of category IDs in the <code>rgcatidReq</code> array. Pass 0 (see description, above).</td>
</tr>
<tr>
<td>rgcatidReq</td>
<td>An array of category identifiers.</td>
</tr>
<tr>
<td>ppenumClsid</td>
<td>The location in which to return an <code>IEnumGUID</code> interface that can be used to enumerate the CLSIDs of the classes that implement category <code>rgcatid</code>.</td>
</tr>
</tbody>
</table>
**Return Codes**
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>E_FAIL</td>
<td>The operation failed.</td>
</tr>
<tr>
<td>REGDB_E_CLASSNOTREG</td>
<td>Unable to create an instance of the Component Categories Manager on the remote machine.</td>
</tr>
<tr>
<td>E_INVALIDARG</td>
<td>One or more arguments are incorrect.</td>
</tr>
<tr>
<td>E_OUTOFMEMORY</td>
<td>Insufficient memory to create and return an enumerator object.</td>
</tr>
<tr>
<td>S_OK</td>
<td>The operation succeeded.</td>
</tr>
</tbody>
</table>
7.5.2 **IOPCServerList::GetClassDetails**
```c
HRESULT GetClassDetails(
    [in] REFCLSID clsid,
    [out] LPOLESTR* ppszProgID,
    [out] LPOLESTR* ppszUserType);
```
**Description**
Given a class ID, obtain the ProgID and the User Readable Name of the associated server.
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>clsid</td>
<td>One of the CLSIDs returned by EnumClassesOfCategory (above).</td>
</tr>
<tr>
<td>ppszProgID</td>
<td>[out] ProgID for the specified CLSID.</td>
</tr>
<tr>
<td>ppszUserType</td>
<td>[out] User Readable Name for the specified CLSID.</td>
</tr>
</tbody>
</table>
Return Codes
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>E_FAIL</td>
<td>The operation failed.</td>
</tr>
<tr>
<td>REGDB_E_CLASSNOTREG</td>
<td>There is no CLSID registered for the class object.</td>
</tr>
<tr>
<td>REGDB_E_READREGDB</td>
<td>There was an error reading the registry.</td>
</tr>
<tr>
<td>OLE_E_REGDB_KEY</td>
<td>The ProgID = MainUserTypeName or CLSID = MainUserTypeName keys are missing from the registry.</td>
</tr>
<tr>
<td>E_INVALIDARG</td>
<td>One or more arguments are incorrect.</td>
</tr>
<tr>
<td>E_OUTOFMEMORY</td>
<td>Insufficient memory to create and return an enumerator object.</td>
</tr>
<tr>
<td>S_OK</td>
<td>The operation succeeded.</td>
</tr>
</tbody>
</table>
### 7.5.3 IOPCServerList::CLSIDFromProgID
```cpp
HRESULT CLSIDFromProgID(
[in] LPOLESTR szProgId,
[out] LPCLSID clsid);
```
**Description**
Given a ProgID, which is a string, return the corresponding CLSID, which is a GUID. This is useful when the client (e.g. an Automation Wrapper DLL) already knows the ProgID of the target server on a remote machine. A ProgID is a string and thus easy to deal with; however, it must be translated to a CLSID before it can be passed to CoCreateInstanceEx.
<table>
<thead>
<tr>
<th>Parameters</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>szProgId</td>
<td>ProgID string for which to read the CLSID.</td>
</tr>
<tr>
<td>clsid</td>
<td>[out] CLSID which is registered for the given ProgID.</td>
</tr>
</tbody>
</table>
**Return Codes**
<table>
<thead>
<tr>
<th>Return Code</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>E_FAIL</td>
<td>The operation failed.</td>
</tr>
<tr>
<td>REGDB_E_CLASSNOTREG</td>
<td>There is no CLSID registered for the class object.</td>
</tr>
<tr>
<td>REGDB_E_READREGDB</td>
<td>There was an error reading the registry.</td>
</tr>
<tr>
<td>OLE_E_REGDB_KEY</td>
<td>The <code>ProgID = MainUserTypeName</code> or <code>CLSID = MainUserTypeName</code> keys are missing from the registry.</td>
</tr>
<tr>
<td>E_INVALIDARG</td>
<td>One or more arguments are incorrect.</td>
</tr>
<tr>
<td>E_OUTOFMEMORY</td>
<td>Insufficient memory to create and return an enumerator object.</td>
</tr>
<tr>
<td>S_OK</td>
<td>The operation succeeded.</td>
</tr>
</tbody>
</table>
8. Appendix A – OPC Common IDL Specification
The current files require MIDL compiler 3.00.15 or later and the WIN NT 4.0 release SDK.
Use the command line MIDL /ms_ext /c_ext /app_config opccomn.idl.
The resulting **OPCCOMN.H** file should be **included** in all clients and servers.
The resulting **OPCCOMN_I.C** file defines the interface IDs and should be **linked** into all clients and servers.
**NOTE:** This IDL file and the Proxy/Stub generated from it should NEVER be modified in any way. If you add vendor specific interfaces to your server (which is allowed) you must generate a SEPARATE vendor specific IDL file to describe only those interfaces and a separate vendor specific ProxyStub DLL to marshall only those interfaces.
```idl
// OPCCOMN.IDL
// REVISION: 04/06/98 08:00 PM (EST)
// VERSIONINFO 1.0.0.0
//
// 04/09/98 acc import unknwn.idl rather than oaidl.idl
// 06/15/98 acc add 'library' object at end to allow typelib generation
// 06/19/98 acc change V2 uuids prior to final release
// to avoid conflict with 'old' OPCDA Automation uuids
// 09/18/98 acc add OPCServerList IDL (with help from Gary Klassen)
//
import "unknwn.idl";
import "comcat.idl";
//****************************************************
// All servers except OPCDA1.0 have the ability to
// make callbacks into the client on shutdown via
// IOPCShutdown
//****************************************************
[object,
uuid(F31DFDE1-07B6-11d2-B2D8-0060083BA1FB),
pointer_default(unique)
]
interface IOPCShutdown : IUnknown
{
HRESULT ShutdownRequest ( [in, string] LPCWSTR szReason );
}
//****************************************************
// All servers except OPCDA1.0 support IOPCCommon
//****************************************************
[object,
uuid(F31DFDE2-07B6-11d2-B2D8-0060083BA1FB),
pointer_default(unique)
]
interface IOPCCommon : IUnknown
{
HRESULT SetLocaleID (
[in] LCID dwLcid
);
HRESULT GetLocaleID (
[out] LCID *pdwLcid
);
HRESULT QueryAvailableLocaleIDs (
[out] DWORD *pdwCount,
[out, size_is(*pdwCount)] LCID **pdwLcid
);
HRESULT GetErrorString(
[in] HRESULT dwError,
[out, string] LPWSTR *ppString
);
HRESULT SetClientName (
[in, string] LPCWSTR szName
);
}
interface IOPCServerList : IUnknown
{
HRESULT EnumClassesOfCategories(
[in] ULONG cImplemented,
[in, size_is(cImplemented)] CATID rgcatidImpl[],
[in] ULONG cRequired,
[in, size_is(cRequired)] CATID rgcatidReq[],
[out] IEnumGUID** ppenumClsid);
HRESULT GetClassDetails(
[in] REFCLSID clsid,
[out] LPOLESTR* ppszProgID,
[out] LPOLESTR* ppszUserType);
HRESULT CLSIDFromProgID(
[in] LPCOLESTR szProgId,
[out] LPCLSID clsid);
}
//****************************************************
// This TYPELIB is generated as a convenience to users of high level tools
// which are capable of using or browsing TYPELIBs.
// 'Smart Pointers' in VC5 is one example.
//***************************************************************************
[
uuid(B28EEDB1-AC6F-11d1-84D5-00608CB8A7E9),
version(1.0),
helpstring("OPCCOMN 1.0 Type Library")
]
library OPCCOMN
{
importlib("stdole32.tlb");
importlib("stdole2.tlb");
interface IOPCCommon;
interface IOPCShutdown;
interface IOPCServerList;
};
9. Appendix B – Sample String Filter Function
This function provides essentially the same functionality as the LIKE operator in Visual Basic.
MatchPattern
Syntax
BOOL MatchPattern( LPCTSTR string, LPCTSTR pattern, BOOL bCaseSensitive )
Return Value
If string matches pattern, the return value is TRUE; if there is no match, the return value is FALSE. If either string or pattern is Null, the return value is FALSE.
Parameters
string String to be compared with pattern.
pattern Any string conforming to the pattern-matching conventions described in Remarks.
bCaseSensitive TRUE if comparison should be case sensitive.
Remarks
A versatile tool used to compare two strings. The pattern-matching features allow you to use wildcard characters, character lists, or character ranges, in any combination, to match strings. The following table shows the characters allowed in pattern and what they match:
<table>
<thead>
<tr>
<th>Characters in pattern</th>
<th>Matches in string</th>
</tr>
</thead>
<tbody>
<tr>
<td>?</td>
<td>Any single character.</td>
</tr>
<tr>
<td>*</td>
<td>Zero or more characters.</td>
</tr>
<tr>
<td>#</td>
<td>Any single digit (0-9).</td>
</tr>
<tr>
<td>[charlist]</td>
<td>Any single character in charlist.</td>
</tr>
<tr>
<td>[!charlist]</td>
<td>Any single character not in charlist.</td>
</tr>
</tbody>
</table>
A group of one or more characters (charlist) enclosed in brackets ([ ]) can be used to match any single character in string and can include almost any character code, including digits.
Note To match the special characters left bracket ( [ ), question mark (?), number sign (#), and asterisk (*), enclose them in brackets. The right bracket ( ] ) can't be used within a group to match itself, but it can be used outside a group as an individual character.
By using a hyphen (-) to separate the upper and lower bounds of the range, charlist can specify a range of characters. For example, [A-Z] results in a match if the corresponding character position in string contains any uppercase letters in the range A-Z. Multiple ranges are included within the brackets without delimiters.
Other important rules for pattern matching include the following:
- An exclamation point (!) at the beginning of charlist means that a match is made if any character except the characters in charlist is found in string. When used outside brackets, the exclamation point matches itself.
- A hyphen (-) can appear either at the beginning (after an exclamation point if one is used) or at the end of charlist to match itself. In any other location, the hyphen is used to identify a range of characters.
- When a range of characters is specified, they must appear in ascending sort order (from lowest to highest). [A-Z] is a valid pattern, but [Z-A] is not.
- The character sequence [] is considered a zero-length string ("").
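In isolation, the charlist rules above can be sketched with a small portable helper. This is hypothetical illustration code (plain `char` instead of `TCHAR`, case sensitive, and not the specification's MatchPattern below); `set` is the text between the brackets, e.g. "A-Z0-9" or "!a-c":

```cpp
// Hypothetical sketch of the charlist rules: an optional leading '!'
// negates the set, a hyphen at the start or end matches itself, and an
// interior hyphen denotes an ascending range of characters.
bool MatchCharList(char c, const char* set)
{
    bool negate = false;
    if (*set == '!') { negate = true; ++set; }
    bool found = false;
    char prev = 0;                       // left bound of a pending range
    for (; *set; ++set) {
        if (*set == '-' && prev != 0 && set[1] != '\0') {
            // interior hyphen: range prev .. set[1]
            if (c >= prev && c <= set[1])
                found = true;
            prev = 0;
            ++set;                       // skip the high bound of the range
        } else {
            if (c == *set)               // literal character (incl. '-' at ends)
                found = true;
            prev = *set;
        }
    }
    return negate ? !found : found;
}
```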
Here is the code:
```c
inline int ConvertCase( int c, BOOL bCaseSensitive )
{
    return bCaseSensitive ? c : toupper(c);
}

//*********************************************************************************
// return TRUE if String Matches Pattern --
// -- uses Visual Basic LIKE operator syntax
// CAUTION: Function is recursive
//*********************************************************************************
BOOL MatchPattern( LPCTSTR String, LPCTSTR Pattern, BOOL bCaseSensitive )
{
    TCHAR c, p, l;
    for (; ;)
    {
        switch (p = ConvertCase( *Pattern++, bCaseSensitive ))
        {
        case 0:                                     // end of pattern
            return *String ? FALSE : TRUE;          // if end of string, TRUE

        case _T('*'):                               // match zero or more char
            while (*String)
            {
                if (MatchPattern( String++, Pattern, bCaseSensitive ))
                    return TRUE;
            }
            return MatchPattern( String, Pattern, bCaseSensitive );

        case _T('?'):                               // match any one char
            if (*String++ == 0)
                return FALSE;                       // not end of string
            break;

        case _T('['):                               // match char set
            if ( (c = ConvertCase( *String++, bCaseSensitive )) == 0)
                return FALSE;                       // syntax
            l = 0;
            if (*Pattern == _T('!'))                // match a char if NOT in set []
            {
                ++Pattern;
                while ( (p = ConvertCase( *Pattern++, bCaseSensitive )) != _T('\0') )
                {
                    if (p == _T(']'))               // if end of char set, then
                        break;                      // no match found
                    if (p == _T('-'))               // check a range of chars?
                    {
                        p = ConvertCase( *Pattern, bCaseSensitive ); // get high limit of range
                        if (p == 0 || p == _T(']'))
                            return FALSE;           // syntax
                        if (c >= l && c <= p)
                            return FALSE;           // in range, return FALSE
                    }
                    l = p;
                    if (c == p)                     // if char matches this element
                        return FALSE;               // return FALSE
                }
            }
            else                                    // match if char is in set []
            {
                while ( (p = ConvertCase( *Pattern++, bCaseSensitive )) != _T('\0') )
                {
                    if (p == _T(']'))               // if end of char set, then
                        return FALSE;               // no match found
                    if (p == _T('-'))               // check a range of chars?
                    {
                        p = ConvertCase( *Pattern, bCaseSensitive ); // get high limit of range
                        if (p == 0 || p == _T(']'))
                            return FALSE;           // syntax
                        if (c >= l && c <= p)
                            break;                  // in range, move on
                    }
                    l = p;
                    if (c == p)                     // if char matches this element
                        break;                      // move on
                }
                while (p && p != _T(']'))           // got a match in char set
                    p = ConvertCase( *Pattern++, bCaseSensitive ); // skip to end of set
            }
            break;

        case _T('#'):
            c = *String++;
            if ( !_istdigit( c ) )                  // not a digit
                return FALSE;
            break;

        default:
            c = ConvertCase( *String++, bCaseSensitive );
            if (c != p)                             // check for exact char
                return FALSE;                       // not a match
            break;
        }
    }
}
```
|
{"Source-Url": "http://www.dia.uniroma3.it/autom/Reti_e_Sistemi_Automazione/PDF/OPCCOMN.pdf", "len_cl100k_base": 14472, "olmocr-version": "0.1.53", "pdf-total-pages": 33, "total-fallback-pages": 0, "total-input-tokens": 61629, "total-output-tokens": 15256, "length": "2e13", "weborganizer": {"__label__adult": 0.00024235248565673828, "__label__art_design": 0.0004088878631591797, "__label__crime_law": 0.0003464221954345703, "__label__education_jobs": 0.0005292892456054688, "__label__entertainment": 5.620718002319336e-05, "__label__fashion_beauty": 0.00010406970977783204, "__label__finance_business": 0.000659942626953125, "__label__food_dining": 0.0001461505889892578, "__label__games": 0.0006847381591796875, "__label__hardware": 0.0021114349365234375, "__label__health": 0.00011873245239257812, "__label__history": 0.0001417398452758789, "__label__home_hobbies": 7.814168930053711e-05, "__label__industrial": 0.0005435943603515625, "__label__literature": 0.0001480579376220703, "__label__politics": 0.00018286705017089844, "__label__religion": 0.0002651214599609375, "__label__science_tech": 0.0099639892578125, "__label__social_life": 2.9802322387695312e-05, "__label__software": 0.0207977294921875, "__label__software_dev": 0.9619140625, "__label__sports_fitness": 0.00010991096496582033, "__label__transportation": 0.0003199577331542969, "__label__travel": 0.00011336803436279296}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 64950, 0.01578]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 64950, 0.21055]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 64950, 0.74349]], "google_gemma-3-12b-it_contains_pii": [[0, 73, false], [73, 669, null], [669, 4104, null], [4104, 6048, null], [6048, 12099, null], [12099, 12336, null], [12336, 13365, null], [13365, 15251, null], [15251, 18918, null], [18918, 22237, null], [22237, 23512, null], [23512, 25254, 
null], [25254, 26993, null], [26993, 28479, null], [28479, 30365, null], [30365, 31327, null], [31327, 32782, null], [32782, 33959, null], [33959, 37078, null], [37078, 39888, null], [39888, 43351, null], [43351, 45189, null], [45189, 46825, null], [46825, 49072, null], [49072, 52148, null], [52148, 53873, null], [53873, 55108, null], [55108, 56981, null], [56981, 57966, null], [57966, 58549, null], [58549, 60657, null], [60657, 63040, null], [63040, 64950, null]], "google_gemma-3-12b-it_is_public_document": [[0, 73, true], [73, 669, null], [669, 4104, null], [4104, 6048, null], [6048, 12099, null], [12099, 12336, null], [12336, 13365, null], [13365, 15251, null], [15251, 18918, null], [18918, 22237, null], [22237, 23512, null], [23512, 25254, null], [25254, 26993, null], [26993, 28479, null], [28479, 30365, null], [30365, 31327, null], [31327, 32782, null], [32782, 33959, null], [33959, 37078, null], [37078, 39888, null], [39888, 43351, null], [43351, 45189, null], [45189, 46825, null], [46825, 49072, null], [49072, 52148, null], [52148, 53873, null], [53873, 55108, null], [55108, 56981, null], [56981, 57966, null], [57966, 58549, null], [58549, 60657, null], [60657, 63040, null], [63040, 64950, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 64950, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 64950, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 64950, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 64950, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 64950, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 64950, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 64950, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 64950, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 
5000, false], [5000, 64950, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 64950, null]], "pdf_page_numbers": [[0, 73, 1], [73, 669, 2], [669, 4104, 3], [4104, 6048, 4], [6048, 12099, 5], [12099, 12336, 6], [12336, 13365, 7], [13365, 15251, 8], [15251, 18918, 9], [18918, 22237, 10], [22237, 23512, 11], [23512, 25254, 12], [25254, 26993, 13], [26993, 28479, 14], [28479, 30365, 15], [30365, 31327, 16], [31327, 32782, 17], [32782, 33959, 18], [33959, 37078, 19], [37078, 39888, 20], [39888, 43351, 21], [43351, 45189, 22], [45189, 46825, 23], [46825, 49072, 24], [49072, 52148, 25], [52148, 53873, 26], [53873, 55108, 27], [55108, 56981, 28], [56981, 57966, 29], [57966, 58549, 30], [58549, 60657, 31], [60657, 63040, 32], [63040, 64950, 33]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 64950, 0.15405]]}
|
olmocr_science_pdfs
|
2024-12-10
|
2024-12-10
|
59f4a94e63da13567e77bb1795675f75eecc0d2e
|
Using Continuous Code Change Analysis to Understand the Practice of Refactoring
Stas Negara, Nicholas Chen, Mohsen Vakilian, Ralph E. Johnson, Danny Dig
University of Illinois at Urbana-Champaign
Urbana, IL 61801, USA
{snegara2, nchen, mvakili2, rjohnson, dig}@illinois.edu
Abstract—Despite the enormous success that manual and automated refactoring has enjoyed during the last decade, we know little about the practice of refactoring. Understanding the refactoring practice is important for developers, refactoring tool builders, and researchers. Many previous approaches to study refactorings are based on comparing code snapshots, which is imprecise, incomplete, and does not allow answering research questions that involve time or compare manual and automated refactoring.
We present the first extended empirical study that considers both manual and automated refactoring. This study is enabled by our algorithm, which infers refactorings from continuous changes. We implemented and applied this algorithm to the code evolution data collected from 23 developers working in their natural environment for 1,520 hours. Using a corpus of 5,371 refactorings, we reveal several new facts about manual and automated refactoring. For example, more than half of the refactorings were performed manually. The popularity of automated and manual refactorings differs. More than one third of the refactorings performed by developers are clustered. For some refactoring kinds, up to 64% of performed refactorings do not reach the Version Control System.
I. INTRODUCTION
Refactoring [1] is an important part of software development. Development processes like eXtreme Programming [2] treat refactoring as a key practice. Refactoring has revolutionized how programmers design software: it has enabled programmers to continuously explore the design space of large codebases, while preserving the existing behavior. Modern IDEs such as Eclipse [3], NetBeans [4], IntelliJ IDEA [5], or Visual Studio [6] incorporate refactoring in their top menu and often compete on the basis of refactoring support.
Several research projects [7]–[13] made strides into understanding the practice of refactoring. This is important for developers, refactoring tool builders, and researchers. Tool builders can improve the current generation of tools or design new tools to match the practice, which will help developers to perform their daily tasks more effectively. Understanding the practice also helps researchers by validating or refuting assumptions that were previously based on folklore. It can also focus the research attention on the refactorings that are popular in practice. Last, it can open new directions of research. For example, we recently discovered that more than one third of the refactorings performed in practice are applied in a group, thus motivating new research into refactoring composition.
The fundamental technical problem in understanding the practice is being able to identify the refactorings that were applied by developers. There are a few approaches. One is to bring developers in the lab and watch how they refactor [8]. This has the advantage of observing all code changes, so it is precise. But this approach studies the programmers in a confined environment, for a short period of time.
Another approach is to study the refactorings applied in the wild. The most common way is to analyze two Version Control System (VCS) snapshots of the code either manually [9], [14]–[16] or automatically [17]–[23]. However, the snapshot-based analysis has several disadvantages. First, it is imprecise. Many times refactorings overlap with editing sessions (e.g., a method is both renamed, and its method body is changed dramatically). Refactorings can also overlap with other refactorings (e.g., a method is both renamed and its arguments are reordered). The more overlap, the more noise. Our recent study [10] shows that 46% of refactored program entities are also edited or further refactored in the same commit. Second, it is incomplete. For example, if a method is renamed more than once, a snapshot-based analysis would only infer the last refactoring. Third, it is impossible to answer many empirical questions. For example, from snapshots we cannot determine how long it takes developers to refactor, and we cannot compare manual vs. automated refactorings.
A much better approach is to study the refactoring practice in the wild, while employing a continuous analysis. Refactoring tools like the ones in Eclipse record all automated refactorings applied by a developer [9], [24]. Recent empirical studies about the practice of refactoring [11], [12] have used these recorded logs as the source of their analysis. But this approach does not take into account the refactorings that are applied manually. Others [8], [11], [12] have shown that programmers sometimes perform a refactoring manually, even when the IDE provides an automated refactoring.
Our paper is the first empirical study that uses a continuous change analysis to study the practice of both manual and automated refactorings. We answer seven research questions:
**RQ1:** What is the proportion of manual vs. automated refactorings?
**RQ2:** What are the most popular automated and manual refactorings?
**RQ3:** Does a developer perform automated refactorings more often than manual ones?
**RQ4:** How much time do developers spend on manual vs. automated refactorings?
<table>
<thead>
<tr>
<th>Scope</th>
<th>Refactoring</th>
</tr>
</thead>
<tbody>
<tr>
<td>API-level</td>
<td>Encapsulate Field</td>
</tr>
<tr>
<td></td>
<td>Rename Class</td>
</tr>
<tr>
<td></td>
<td>Rename Field</td>
</tr>
<tr>
<td></td>
<td>Rename Method</td>
</tr>
<tr>
<td>Partially local</td>
<td>Convert Local Variable to Field</td>
</tr>
<tr>
<td></td>
<td>Extract Constant</td>
</tr>
<tr>
<td></td>
<td>Extract Method</td>
</tr>
<tr>
<td>Completely local</td>
<td>Extract Local Variable</td>
</tr>
<tr>
<td></td>
<td>Inline Local Variable</td>
</tr>
<tr>
<td></td>
<td>Rename Local Variable</td>
</tr>
</tbody>
</table>
Fig. 1. Inferred refactorings. API-level refactorings operate on the elements of a program’s API. Partially local refactorings operate on the elements of a method’s body, but also affect the program’s API. Completely local refactorings affect elements in the body of a single method only.
**RQ5:** What is the size of manual vs. automated refactorings?
**RQ6:** How many refactorings are clustered?
**RQ7:** How many refactorings do not reach VCS?
Answering these empirical questions requires us to infer refactorings from continuous code changes. Recent tools [25], [26] that were developed for such inference were neither designed for empirical studies nor made publicly available. Therefore, we designed and implemented our own refactoring inference algorithm that analyzes code changes continuously. Currently, our algorithm infers ten kinds of refactorings performed either manually or automatically, which were previously reported [12] as the most popular among automated refactorings. Fig. 1 shows the inferred refactorings, ranging from API-level refactorings (e.g., Rename Class), to partially local ones (e.g., Extract Method), to completely local ones (e.g., Extract Local Variable). The inferred refactorings cover a wide range of common refactorings, and we believe that our algorithm can be easily extended to handle other refactorings as well.
In our previous study [10], we continuously inferred Abstract Syntax Tree (AST) node operations, i.e., add, delete, and update operations on AST nodes, from fine-grained code edits (e.g., typing characters). In this study, we designed and implemented an algorithm that infers refactorings from these AST node operations. First, our algorithm infers high-level properties, e.g., the replacement of a variable reference with an expression. Then, from combinations of properties, it infers refactorings. For example, it infers that a local variable was inlined when it notices that a variable declaration is deleted and all references to the variable are replaced with its initialization expression.
We applied our inference algorithm to the real code evolution data of 23 developers, working in their natural environment for 1,520 hours. We found that more than half of the refactorings were performed manually; thus, the existing studies that focus only on automated refactorings might not generalize, since they consider less than half of the total picture. We also found that the popularity of automated and manual refactorings differs. Our results present a fuller picture of the popularity of refactorings in general, which should help both researchers and tool builders to prioritize their work. Our findings provide additional evidence that developers underuse automated refactoring tools, which raises concerns about usability problems in these tools. We discovered that more than one third of the refactorings performed by developers are clustered. This result emphasizes the importance of researching refactoring clusters in order to identify refactoring composition patterns. Finally, we found that up to 64% of the performed refactorings do not reach the VCS. Thus, using VCS snapshots alone to analyze refactorings might produce misleading results.
This paper makes the following contributions:
1) We designed seven questions to understand the practice of manual and automated refactoring.
2) We discovered new facts about the practice of refactoring (see above).
3) We designed, implemented, and evaluated an algorithm that employs continuous change analysis to infer refactorings. Our implementation is open source and available at http://snegara2.projects.cs.illinois.edu/CodingTracker.
II. RESEARCH METHODOLOGY
To answer our research questions, we employed the code evolution data that we collected as part of our previous user study [10] on 23 participants. We recruited 13 Computer Science graduate students and senior undergraduate summer interns who worked on a variety of research projects from six research labs at the University of Illinois at Urbana-Champaign. We also recruited 10 professional programmers who worked on different projects in domains such as marketing, banking, business process management, and database management. Fig. 2 shows the programming experience of our participants. In the course of our study, we collected code evolution data for 1,520 hours of code development with a mean distribution of 66 hours per programmer and a standard deviation of 52.
To collect code evolution data, we asked each participant to install the CodingTracker [10] plug-in in his/her Eclipse IDE. During the study, CodingTracker recorded a variety of evolution data at several levels ranging from individual code edits up to the high-level events like automated refactoring invocations and interactions with Version Control System (VCS). CodingTracker employed existing infrastructure [12] to regularly upload the collected data to our centralized repository.
At the time when CodingTracker recorded the data, we did not have a refactoring inference algorithm. However, CodingTracker can accurately replay all the recorded code editing events, thus recreating an exact replica of the evolution session that happened in reality. We replayed the coding sessions and, this time, applied our newly developed refactoring inference algorithm. (Note on Fig. 2: only 22 out of the 23 participants filled out the survey and specified their programming experience.)
We first applied our AST node operations inference algorithm [10] on the collected raw data to represent code changes as add, delete, and update operations on the underlying AST. These basic AST node operations serve as input to our refactoring inference algorithm. Section IV presents more details about our refactoring inference algorithm.
Next, we answer every research question by processing the output of the algorithm with the question-specific analyzer. Note that our analyzers for RQ1 – RQ5 ignore trivial refactorings. We consider a refactoring trivial if it affects a single line of code, e.g., renaming a variable with no uses.
III. RESEARCH QUESTIONS
RQ1: What is the proportion of manual vs. automated refactorings? Previous research on refactoring practice either predominantly focused on automated refactorings [7], [11], [12] or did not discriminate between manual and automated refactorings [9], [13]. Answering the question about the relative proportion of manual and automated refactorings will allow us to estimate how representative automated refactorings are of the total number of refactorings, and consequently, how general the conclusions based on studying only automated refactorings are. Additionally, we will gain better insight into the refactoring behavior of developers.
For each of the ten refactoring kinds inferred by our algorithm, we counted how many refactorings were applied using Eclipse automated refactoring tools and how many of the inferred refactorings were applied manually. Fig. 3 shows our results. The last column represents the combined result for all the ten refactoring kinds.
Overall, our participants performed 11% more manual than automated refactorings (2,820 vs. 2,551). Thus, research focusing on automated refactorings considers less than half of the total picture. Moreover, half of the refactoring kinds that we investigated (Convert Local Variable to Field, Extract Method, Rename Field, Rename Local Variable, and Rename Method) are predominantly performed manually. This observation undermines the generalizability of the existing studies based on the automated execution of these popular refactorings. It also raises concerns for tool builders about the underuse of automated refactoring tools, which could be a sign that these tools require considerable improvement.
RQ2: What are the most popular automated and manual refactorings? Murphy et al. [7] and Vakilian et al. [12] identified the most popular automated refactorings to better understand how developers refactor their code. We would like to get a more complete picture of the refactoring popularity by looking at both manual and automated refactorings. Additionally, we would like to contrast how similar or different are popularities of automated refactorings, manual refactorings, and refactorings in general.
To measure the popularity of refactorings, we employ the same refactoring counts that we used to answer the previous research question. Fig. 3 correspondingly shows the popularity of automated, manual, and all refactorings. The Y axis represents refactoring counts. The X axis shows refactorings ordered from the highest popularity rank at the left to the lowest rank at the right.
Our results on popularity of automated refactorings mostly corroborate previous findings [12]. The only exceptions are Inline Local Variable refactoring, whose popularity has increased from the seventh to the third position, and Encapsulate Field refactoring, whose popularity has declined from the fifth to the seventh position. Overall, our results show that the popularity of automated and manual refactorings is quite different: the top five most popular automated and manual refactorings have only three refactorings in common – Rename Local Variable, Rename Method, and Extract Local Variable, and even these refactorings have different ranks. The most important observation though is that the popularity of automated refactorings does not reflect well the popularity of refactorings in general. In particular, the top five most popular refactorings and automated refactorings share only three refactorings, out of which only one, Rename Method, has the same rank.
Having a fuller picture about the popularity of refactorings, researchers would be able to automate or infer the refactorings that are popular when considering both automated and manual refactorings. Similarly, tool builders should pay more attention to the support of the popular refactorings. Finally, novice developers might decide what refactorings to learn first depending on their relative popularity.
RQ3: Does a developer perform automated refactorings more often than manual ones? In our previous study [12], we argued that developers may underuse automated refactoring tools for a variety of reasons, one of the most important being that developers are simply unaware of these tools. Answering this question will help us to better understand whether developers who are aware of an automated refactoring tool use the tool rather than refactor manually. (Note that we cannot directly compare our results with the findings of Murphy et al. [7], since their data represents related refactoring kinds as a single category, e.g., Rename, Extract, Inline, etc.)
In the following, we denote the quantity of automated tool usage as $A$. We compute $A$ as a ratio of automated refactorings to the total number of refactorings of a particular kind performed by an individual participant. For each of the ten inferred refactoring kinds, we counted the number of participants who never use an automated refactoring tool ($A = 0$), the number of participants who predominantly refactor manually ($0 \% < A \leq 25\%$), the number of participants who use an automated tool quite often, but still refactor manually most of the time ($25\% < A \leq 50\%$), the number of participants who refactor using an automated tool most of the time, but still often refactor manually ($50\% < A \leq 75\%$), the number of participants who predominantly use an automated tool ($75\% < A \leq 100\%$), and the number of participants who always use the automated refactoring tool ($A = 100\%$).
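The six usage buckets above can be sketched as a small classification helper. This is a hypothetical illustration, not part of our tooling; the per-participant `automated` and `total` counts are assumed to be tallied per refactoring kind:

```python
def usage_category(automated: int, total: int) -> str:
    """Classify a participant's automated-tool usage A = automated / total
    for one refactoring kind into the six buckets used in RQ3."""
    assert total > 0 and 0 <= automated <= total
    a = automated / total
    if a == 0.0:
        return "A = 0"
    if a == 1.0:
        return "A = 100%"
    if a <= 0.25:
        return "0% < A <= 25%"
    if a <= 0.50:
        return "25% < A <= 50%"
    if a <= 0.75:
        return "50% < A <= 75%"
    return "75% < A <= 100%"
```

The boundary cases (exactly 0 and exactly 100%) are kept as their own buckets, matching the category definitions above.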
Fig. 7 shows our results. The Y axis represents the number of participants. Every bar shows the number of participants in each of the six automated tool usage categories, $A$, for a particular refactoring kind.
Our results show that only for two refactorings, Rename Class and Extract Constant, is the number of participants who always perform the refactoring automatically higher than the number of participants who always perform it manually. Also, the fraction of participants who always perform a refactoring manually is relatively high for all ten refactoring kinds. Overall, our results corroborate the previous findings [11], [12] that automated refactoring tools are underused.
Another important observation is that for two refactoring kinds, Extract Method and Rename Local Variable, the number of participants who are aware about the automated refactoring, but still apply it manually most of the time ($0\% < A \leq 50\%$)
is higher than the number of participants who apply this refactoring automatically most of the time ($50\% < A \leq 100\%$). This shows that some automated refactoring tools are underused even when developers are aware of them and apply them from time to time. Moreover, for each of the ten refactoring kinds, the number of participants who apply the automated refactoring only ($A = 100\%$) is significantly lower than the number of participants who both apply the automated refactoring and refactor manually ($0\% < A < 100\%$). In particular, there are no participants who apply Convert Local Variable to Field, Encapsulate Field, Extract Method, and Rename Field using the automated refactoring tools only. These results show that developers underuse automated refactoring tools, some more than others, which could indicate a varying degree of usability problems in these tools.
RQ4: How much time do developers spend on manual vs. automated refactorings? One of the major arguments in favor of performing a refactoring automatically is that it takes less time than performing this refactoring manually [27]. We would like to assess this time difference as well as compare the average durations of different kinds of refactorings performed manually.
To measure the duration of a manual refactoring, we consider all AST node operations that contribute to it. Our algorithm marks AST node operations that contribute to a particular inferred refactoring with a generated refactoring’s ID, which allows us to track each refactoring individually. Note that a developer might intersperse a refactoring with other code changes, e.g., another refactoring, small bug fixes, etc. Therefore, to compute the duration of a manual refactoring, we cannot subtract the timestamp of the first AST node operation that contributes to it from the timestamp of the last contributing AST node operation. Instead, we compute the duration of each contributing AST node operation separately by subtracting the timestamp of the preceding AST node operation (regardless of whether it contributes to the same refactoring or not) from the timestamp of the contributing AST node operation. If the obtained duration is greater than two minutes, we discard it, since it might indicate an interruption in code editing, e.g., a developer might get distracted by a phone call or take a break. Finally, we sum up all the durations of contributing AST node operations to obtain the duration of the corresponding refactoring.
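The duration rule above can be sketched as follows. This is a simplified model (an assumption for illustration): each recorded AST node operation is a `(timestamp_seconds, refactoring_id)` pair, with `None` for operations that contribute to no refactoring, and the two-minute cutoff is the interruption threshold from the text:

```python
def manual_refactoring_duration(operations, refactoring_id, cap_seconds=120):
    """Sum per-operation durations for operations tagged with refactoring_id.

    Each contributing operation's duration is its timestamp minus the
    timestamp of the immediately preceding operation (contributing or not).
    Durations above cap_seconds are discarded as likely interruptions.
    """
    total = 0.0
    prev_ts = None
    for ts, tag in operations:
        if prev_ts is not None and tag == refactoring_id:
            gap = ts - prev_ts
            if gap <= cap_seconds:  # otherwise: phone call, break, etc.
                total += gap
        prev_ts = ts
    return total
```

Note that this correctly avoids charging the refactoring for interleaved non-refactoring edits: only the gaps ending at contributing operations are summed.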
We get the durations of automated refactorings from CODINGSPECTATOR [12]. CODINGSPECTATOR measures the configuration time of a refactoring performed automatically, which is the time that a developer spends in the refactoring’s dialog box. Note that the measured configuration time does not include the time that it takes Eclipse to actually change the code, which could range from a couple of milliseconds to several seconds, depending on the performed refactoring kind and the underlying code.
Fig. 8 shows our results. The Y axis represents the duration time in seconds. Note that the configuration time bar for Encapsulate Field refactoring is missing since we do not have data for this refactoring.
On average, manual refactorings take longer than their automated counterparts with high statistical significance (p < 0.0001, using a two-sided unpaired t-test) only for Extract Local Variable, Extract Method, Inline Local Variable, and Rename Class, since for the other refactoring kinds our participants rarely used the configuration dialog boxes. The most time consuming refactoring, both manually and automatically, is Extract Method, which probably could be explained by its complexity and the high amount of code changes involved. All other refactorings, when performed manually, take on average under 15 – 25 seconds. Since some refactorings take longer than others, a developer could take this difference into account when deciding which automated refactoring tool to learn first.
Another observation is that Rename Field refactoring is on average the fastest manual refactoring. It takes less time than the arguably simpler Rename Local Variable refactoring. One of the possible explanations is that developers perform Rename Field refactoring manually when it does not require many changes, e.g., when there are few references to the renamed field, which is supported by our results for the following question.
RQ5: What is the size of manual vs. automated refactorings? In an earlier project [12], we noticed that developers tend to apply automated refactoring tools for small code changes. Therefore, we would like to compare the average size of manual and automated refactorings to better understand this behavior of developers.
To perform the comparison, we measured the size of manual and automated refactorings as the number of the affected AST nodes. For manual refactorings, we counted the number of AST node operations contributing to a particular refactoring. For automated refactorings, we counted all AST node operations that appear in between the start and the finish
refactoring operations recorded by CODINGTRACKER. Note that all operations in between the start and the finish refactoring operations represent the effects of the corresponding automated refactoring on the underlying code [10].
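A minimal sketch of the two size measures follows. The event tags (`"start"`/`"finish"` markers and per-operation refactoring IDs) are modeling assumptions, not CodingTracker's actual record format:

```python
from collections import Counter

def manual_sizes(ops):
    """ops: refactoring-ID tags (or None) of a chronological stream of AST
    node operations; the manual size of refactoring r is the number of
    operations tagged with r."""
    return Counter(tag for tag in ops if tag is not None)

def automated_size(stream):
    """stream: chronological events; AST node operations recorded between
    the 'start' and 'finish' refactoring operations are the effects of the
    automated refactoring on the underlying code."""
    size, inside = 0, False
    for event in stream:
        if event == "start":
            inside, size = True, 0
        elif event == "finish":
            inside = False
        elif inside:
            size += 1
    return size
```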
Fig. 9 shows our results. The logarithmic Y axis represents the number of the affected AST nodes. Our results show that automated refactorings on average affect more AST nodes than manual refactorings for four refactoring kinds, Convert Local Variable to Field, Extract Method, Rename Field, and Rename Local Variable, with a high statistical significance (p < 0.0001), and for three refactoring kinds, Extract Local Variable, Inline Local Variable, and Rename Method, with a sufficient statistical significance (p < 0.03). One of the reasons could be that developers tend to perform smaller refactorings manually since such refactorings have a smaller overhead.
Intuitively, one could think that developers perform small refactorings by hand and large refactorings with a tool. On the contrary, our findings show that developers perform manually even large refactorings. In particular, Extract Method is by far the largest refactoring performed both manually and automatically – it is more than two times larger than Encapsulate Field, which is the next largest refactoring. At the same time, according to Fig. 7, most of the developers predominantly perform Extract Method refactoring manually in spite of the significant amount of the required code changes. Thus, the size of a refactoring is not a decisive factor for choosing whether to perform it manually or with a tool. This also serves as an additional indication that the developers might not be happy with the existing automation of Extract Method refactoring [8].
RQ6: How many refactorings are clustered? To better understand and support refactoring activities of developers, Murphy-Hill et al. [11] identified different refactoring patterns, in particular, root canal and floss refactorings. A root canal refactoring represents a consecutive sequence of refactorings that are performed as a separate task. Floss refactorings, on the contrary, are interspersed with other coding activities of a developer. In general, grouping several refactorings in a single cluster might be a sign of a higher level refactoring pattern, and thus, it is important to know how many refactorings belong to such clusters.
To detect whether several refactorings belong to the same cluster, we compute the ratio of the number of AST node operations that are part of these refactorings to the number of AST node operations that happen in the same time window but do not belong to them (such operations could happen either in between refactorings or be interspersed with them). If this ratio is higher than a particular threshold, T, we consider that the refactorings belong to the same cluster. That is, rather than using a specific time window, we try to obtain clusters as large as possible, adding refactorings to a cluster as long as the ratio of refactoring to non-refactoring changes in the cluster does not fall below the threshold. The minimum size of a cluster is three. Note that for the clustering analysis we consider automated refactorings of all kinds and manual refactorings of the ten kinds inferred by our tool.
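A greedy one-pass sketch of this clustering rule, under the simplifying assumption that the change stream is a list of refactoring IDs (or `None` for non-refactoring AST node operations):

```python
def cluster_refactorings(ops, threshold, min_size=3):
    """Greedy sketch: grow a cluster while the ratio of refactoring to
    non-refactoring AST node operations stays at or above `threshold`."""
    clusters = []
    ids, ref_ops, other_ops, pending = [], 0, 0, 0

    def close():
        nonlocal ids, ref_ops, other_ops, pending
        unique = list(dict.fromkeys(ids))  # distinct refactoring IDs, in order
        if len(unique) >= min_size:        # minimum cluster size is three
            clusters.append(unique)
        ids, ref_ops, other_ops, pending = [], 0, 0, 0

    for tag in ops:
        if tag is None:                    # non-refactoring change
            if ids:
                pending += 1
            continue
        # Adding this operation would pull the `pending` gap changes into
        # the cluster; keep growing only while the ratio holds.
        if ids and ref_ops + 1 < threshold * (other_ops + pending):
            close()
        other_ops, pending = other_ops + pending, 0
        ids.append(tag)
        ref_ops += 1
    close()
    return clusters
```

The ratio test is written multiplicatively (`ref_ops + 1 < threshold * ...`) to avoid dividing by zero when a cluster has no non-refactoring changes yet.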
Fig. 10 shows the proportion of clustered and separate refactorings for different values of T, which we vary from 1 to 10. T = 1 means that the amount of non-refactoring changes does not exceed the amount of refactoring changes in the same cluster. Fig. 11 shows the average size of gaps between separate refactorings (i.e., refactorings that do not belong to any cluster) expressed as the number of AST node operations that happen in between two separate refactorings or a separate refactoring and a cluster.
Our results show that for T = 1, 45% of the refactorings are clustered. When the threshold grows, the number of the clustered refactorings goes down, but not much – even for T = 10, 28% of refactorings are clustered. The average gap between floss refactorings is not very sensitive to the value of the threshold as well. Overall, developers tend to perform a significant fraction of refactorings in batch mode. This observation emphasizes the importance of researching refactoring clusters in order to identify refactoring composition patterns.
RQ7: How many refactorings do not reach VCS? Software evolution researchers [17], [28]–[33] analyze code histories stored in file-based Version Control Systems (VCSs), e.g., Git [34], SVN [35]. In our previous study [10], we showed that VCS snapshots provide incomplete and imprecise evolution data; in particular, 37% of code changes do not reach VCS. Since refactorings play an important role in software development, in this study, we would like to assess the amount of refactorings that never make it to VCS, and thus, are missed by any analysis based on VCS snapshots.
We consider that a refactoring does not reach VCS if none of the AST node operations that are part of this refactoring reach VCS. An AST node operation does not reach VCS if there is another, later operation that affects the same node, up to the
moment the file containing this node is committed to VCS. These non-reaching AST node operations and refactorings are essentially shadowed by other changes.
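The shadowing rule can be sketched over a simplified model of one commit's change stream, where each AST node operation is a `(node_id, refactoring_id)` pair (`None` for non-refactoring changes); the record shape is an assumption for illustration:

```python
def reaching_indices(ops):
    """An operation reaches VCS iff no later operation (before the commit)
    affects the same AST node, i.e., it is the last operation on its node."""
    last = {}
    for i, (node, _) in enumerate(ops):
        last[node] = i
    return {i for i, (node, _) in enumerate(ops) if last[node] == i}

def shadowed_refactorings(ops):
    """A refactoring is shadowed iff none of its operations reach VCS."""
    reach = reaching_indices(ops)
    reaches = {}
    for i, (_, rid) in enumerate(ops):
        if rid is not None:
            reaches[rid] = reaches.get(rid, False) or i in reach
    return {rid for rid, ok in reaches.items() if not ok}
```

This also makes the partial-shadowing case visible: a refactoring counts as reaching as soon as a single one of its operations survives to the commit, even if the rest are overwritten.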
Fig. 12 shows the ratio of reaching and shadowed refactorings. Since even a reaching refactoring might be partially shadowed, we also compute the ratio of reaching and shadowed AST node operations that are part of reaching refactorings, which is shown in Fig. 13.
Our results show that for all refactoring kinds except Inline Local Variable, there is some fraction of refactorings that are shadowed. The highest shadowing ratio is for Rename refactorings. In particular, 64% of Rename Field refactorings do not reach VCS. Thus, using VCS snapshots to analyze these refactoring kinds might significantly skew the analysis results.
Although we did not expect to see any noticeable difference between manual and automated refactorings, our results show that there are significantly more shadowed manual than automated refactorings for each refactoring kind (except Inline Local Variable, which does not have any shadowed refactorings at all). Overall, 40% of manual and only 16% of automated refactorings are shadowed. This interesting fact requires further research to understand why developers underuse automated refactorings more in code editing scenarios whose changes are unlikely to reach VCS.
Another observation is that even refactorings that reach VCS might be hard to infer from VCS snapshots, since a noticeable fraction of AST node operations that are part of them do not reach VCS. This is particularly characteristic to Extract refactorings, which have the highest ratio of shadowed AST node operations.
IV. REFACTORING INFERENCE ALGORITHM
A. Inferring Migrated AST Nodes
Many kinds of refactorings that we would like to infer rearrange elements in the refactored program. To correctly infer such refactorings, we need to track how AST nodes migrate in the program’s AST. A node might migrate from a single site to another single site (i.e., this node is moved from one parent node to another parent node), for example, as a result of Inline Local Variable refactoring applied to a variable with a single usage. Such migration is one-to-one migration. Also, a node might migrate from a single site to multiple sites, e.g., as a result of Inline Local Variable refactoring applied to a variable with multiple usages in the code. Such migration is one-to-many migration. Finally, a node might migrate from multiple sites to a single site, e.g., as a result of Extract Local Variable refactoring applied to an expression that appears in multiple places in the code. Such migration is many-to-one migration.
Fig. 14 shows an example of the Extract Local Variable refactoring that results in many-to-one migration of the extracted AST node. Fig. 15 shows the effect of this refactoring on the underlying AST. Note that the extracted AST node, string literal "-", is deleted from two places in the old AST and inserted in a single place in the new AST – as the initialization of the newly created local variable.
Our refactoring inference algorithm takes as input a sequence of basic AST node operations: add, delete, and update. Note that an update operation deletes the old value (update_delete) and adds the new value (update_add). The algorithm infers migrate operations from the basic operations. A single migrate operation is composed either from one delete or update_delete operation and one or more add or update_add operations, or from one add or update_add operation and one or more delete or update_delete operations applied to the same AST node within a specific time window. We consider that two AST nodes represent the same node if they have the same AST node type and the same content. As a time window, we employ a five-minute time interval.
The algorithm assigns a unique ID to each inferred migrate operation. Note that a basic AST node operation can be part of at most one migrate operation. The algorithm marks each basic AST node operation that is part of a particular migrate operation with that operation's ID. This makes it easy to establish whether two basic AST node operations belong to the same migrate operation in the following stages of our refactoring inference algorithm.
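A minimal Python sketch of this matching, under simplifying assumptions (operations carry a node type, content, and timestamp; only the delete-then-add direction is shown, whereas the real algorithm also composes a migration from one add and multiple deletes):

```python
# Illustrative sketch: group a delete with all unclaimed adds of an
# identical node (same type and content) within a five-minute window,
# and mark each basic operation with at most one migrate ID.
WINDOW = 5 * 60  # seconds

def infer_migrations(operations):
    """operations: list of dicts with keys kind ('add'/'delete'),
    node_type, content, time; annotates them with 'migrate_id'."""
    next_id = 1
    for op in operations:
        op.setdefault("migrate_id", None)
    for op in operations:
        if op["kind"] != "delete" or op["migrate_id"] is not None:
            continue
        matches = [o for o in operations
                   if o["kind"] == "add" and o["migrate_id"] is None
                   and o["node_type"] == op["node_type"]
                   and o["content"] == op["content"]
                   and abs(o["time"] - op["time"]) <= WINDOW]
        if matches:
            op["migrate_id"] = next_id
            for m in matches:
                m["migrate_id"] = next_id
            next_id += 1
    return operations
```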
B. Refactoring Inference Algorithm Overview
Our algorithm infers the ten kinds of refactorings shown in Fig. 1. To infer a particular kind of refactoring, our algorithm looks for properties that are characteristic of it. A refactoring property is a high-level semantic code change, e.g., the addition or deletion of a variable declaration. Fig. 16 shows an example of the Inline Local Variable refactoring and its characteristic properties: deletion of a variable declaration, replacement of a reference to an entity with an expression, and migration of the variable’s initialization expression to the former usage of the variable.
Our algorithm identifies refactoring properties directly from the basic AST node operations that represent the actions of a developer. A developer may change the code in any order, e.g., first delete the variable declaration and then replace its references with the initialization expression, or first replace the references and then delete the variable declaration, etc. Consequently, the order in which the properties are identified does not matter.
A refactoring property is described with its attributes, whose values are derived from the corresponding AST node operation. Fig. 17 shows 15 attributes that our algorithm employs for a variety of refactoring properties. A property may contain one or more such attributes. Fig. 18 presents refactoring properties and their attributes. When the algorithm checks whether a property can be part of a particular refactoring, the property’s attributes are matched against the attributes of all other properties that are already part of this refactoring. As a basic rule, two attributes match if either they have different names or they have the same value. Additionally, the algorithm checks that the disjoint attributes have different values: destinationMethodID should be different from sourceMethodID and getterMethodID should be different from setterMethodID.
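The matching rule can be sketched as follows; the two disjoint pairs are the ones named in the text, while the dictionary representation of attribute sets is an assumption for illustration:

```python
# Illustrative sketch of the attribute-matching rule: attributes with
# different names always match; shared names must have equal values;
# the two disjoint attribute pairs must have different values.
DISJOINT = [("sourceMethodID", "destinationMethodID"),
            ("getterMethodID", "setterMethodID")]

def attributes_match(a, b):
    """a, b: dicts mapping attribute name -> value."""
    for name in a.keys() & b.keys():
        if a[name] != b[name]:
            return False
    merged = {**a, **b}
    for x, y in DISJOINT:
        if x in merged and y in merged and merged[x] == merged[y]:
            return False
    return True
```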
Our algorithm combines two or more closely related refactoring properties in a single refactoring fragment. Such fragments make it possible to express high-level properties that cannot be derived from a single AST node operation; e.g., replacing a reference to an entity with an expression involves two AST node operations: delete entity reference and add expression. Fig. 19 shows the inferred refactoring fragments and their component properties.
The algorithm considers that a refactoring is complete if all its required characteristic properties are identified within a specific time window, which in our study is five minutes. Some characteristic properties are optional, e.g., replacing field references with getters and setters in Encapsulate Field refactoring is optional. Also, a refactoring might include several instances of the same characteristic property. For example, an Inline Local Variable refactoring applied to a variable that is used in multiple places includes several properties of migration of the variable’s initialization expression to the former usage of the variable. Even though it is sufficient to have a single instance of each required characteristic property to infer a refactoring, our algorithm infers a refactoring as fully as possible, incorporating all properties that belong to it. If no more properties are added to a complete refactoring within two minutes, the algorithm considers that the inference of this refactoring is finished. Fig. 20 presents the characteristic properties of the ten refactorings inferred by our algorithm.
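A sketch of the completeness test, assuming each refactoring kind is described by the set of its required property names (the Inline Local Variable set below follows Fig. 20; the representation is hypothetical):

```python
# Illustrative sketch: a pending refactoring is complete once at least
# one instance of every required characteristic property has been seen;
# optional properties and duplicate instances do not affect the test.
def is_complete(required, collected):
    """required: set of required property names;
    collected: list of property names seen so far."""
    return required.issubset(set(collected))

# Required properties of Inline Local Variable, per Fig. 20:
INLINE_LOCAL_VARIABLE = {"Deleted Variable Declaration",
                         "Migrated From Variable Initialization",
                         "Replaced Entity With Expression"}
```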
Putting It All Together. Fig. 21 shows a high level overview of our refactoring inference algorithm. The algorithm takes as input the sequence of basic AST node operations.
public String wrap(int num) {
return "-" + num + "-";
}
public String wrap(int num) {
String dash = "-";
return dash + num + dash;
}
Fig. 14. An example of the Extract Local Variable refactoring that results in many-to-one migration of the extracted AST node.
[AST diagrams of the old and new method bodies]
Fig. 15. The effect of the Extract Local Variable refactoring presented in Fig. 14 on the underlying AST.
public int scale(int num) {
int factor = 5;
return factor * num;
}
public int scale(int num) {
return 5 * num;
}
Fig. 16. An example of the Inline Local Variable refactoring and its characteristic properties.
The input sequence, astNodeOperations, is marked with migrate IDs. The output of the algorithm is the sequence of inferred refactorings, inferredRefactorings. The algorithm assigns a unique ID to each inferred refactoring and marks all basic AST node operations that contribute to a refactoring with the refactoring’s ID.
The refactoring inference algorithm processes each basic AST node operation from astNodeOperations (lines 6 – 49). First, the algorithm removes old pending complete refactorings from pendingCompleteRefactorings (line 8) as well as timed out pending refactorings from pendingRefactoringFragments (line 9). An incomplete refactoring or a refactoring fragment times out if it was created more than five minutes earlier, i.e., the algorithm allocates a five-minute time window for a refactoring or a refactoring fragment to become complete.
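The pruning step can be sketched as follows (a hypothetical representation in which each pending item carries its creation time):

```python
# Illustrative sketch: discard pending refactorings/fragments created
# more than five minutes before the current operation's timestamp.
WINDOW = 5 * 60  # seconds

def remove_timed_out(pending, now):
    """pending: list of (creation_time, item) pairs; returns survivors."""
    return [(t, item) for (t, item) in pending if now - t <= WINDOW]
```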
Next, the algorithm generates refactoring properties specific to a particular AST node operation (line 10). The kind of the AST node operation (add, delete, or update), the type of
Refactoring | Properties/Fragments | Optional | Multiple instances
--- | --- | --- | ---
Convert Local Variable to Field | Added Field Declaration<br>Deleted Variable Declaration | no | no
Encapsulate Field | Added Getter Method Declaration<br>Added Setter Method Declaration<br>Added Field Assignment<br>Added Field Return<br>Made Field Private<br>Replaced Entity With Getter<br>Replaced Entity With Setter | no | no
Extract Constant | Added Field Declaration<br>Migrated To Field Initialization<br>Replaced Expression With Entity | no | yes
Extract Local Variable | Added Variable Declaration<br>Migrated To Variable Initialization<br>Replaced Expression With Entity | no | yes
Extract Method | Added Method Declaration<br>Added Method Invocation<br>Migrated Across Methods | no | yes
Inline Local Variable | Deleted Variable Declaration<br>Migrated From Variable Initialization<br>Replaced Entity With Expression | no | yes
Rename Class | Changed Global Entity Name In Usage<br>Changed Type Name In Constructor<br>Changed Type Name In Declaration | yes* | yes
Rename Field | Changed Global Entity Name In Usage<br>Changed Field Name In Declaration | no | yes
Rename Local Variable | Changed Local Entity Name In Usage<br>Changed Variable Name In Declaration | no | yes
Rename Method | Changed Method Name In Invocation<br>Changed Method Name In Declaration | no | yes
Fig. 20. Characteristic properties of the inferred refactorings. Note that at least one of the two optional properties of the Rename Class refactoring, Changed Global Entity Name In Usage and Changed Type Name In Constructor, is required for this refactoring to be considered complete.
the affected node (e.g., a variable declaration or reference, a method declaration, etc.), the context of the affected node (e.g., the containing method, the containing field or variable declaration, etc.), and whether this operation is part of a migrate operation – all these are factors that the algorithm accounts for in order to generate one or more of the properties shown in Fig. 18.
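As an illustration only, a toy dispatch from a basic operation to properties might look like this; the property names follow Fig. 18, but the operation fields and the tiny rule set are assumptions, not the paper's implementation:

```python
# Illustrative sketch: generate refactoring properties from a basic
# AST node operation based on its kind, node type, and migrate status.
def get_properties(op):
    props = []
    if op["kind"] == "delete" and op["node_type"] == "VariableDeclaration":
        props.append(("Deleted Variable Declaration",
                      {"entityName": op["name"]}))
    if op["kind"] == "add" and op["node_type"] == "SimpleName":
        props.append(("Added Entity Reference",
                      {"entityName": op["name"], "parentID": op["parent_id"]}))
    if op.get("migrate_id") is not None and op["kind"] == "delete":
        props.append(("Migrated From Usage",
                      {"migratedNode": op["content"],
                       "migrateID": op["migrate_id"]}))
    return props
```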
In the following step, the algorithm processes the generated properties one by one (lines 11 – 49). First, every new property is checked against each pending refactoring fragment (lines 12 – 21). If there is a refactoring fragment that accepts the new property and becomes complete, then this refactoring fragment itself turns into a new property to be considered by the algorithm (line 17). Note that a refactoring fragment or a pending refactoring accepts a property if the property’s attributes match the attributes of the properties that are already part of the fragment or the refactoring (more details on matching properties can be found in the previous subsection). If the new property can be part of a new refactoring fragment, the algorithm creates the fragment and adds it to pendingRefactoringFragments (lines 22 – 24).
Next, the algorithm tries to add the new property to pending complete refactorings (lines 25 – 30). If the new property is added to a complete refactoring, the algorithm proceeds to the next new property (line 28).
If there is no pending complete refactoring that accepts the new property, the algorithm checks whether this property can be added to pending incomplete refactorings (lines 31 – 42). If an incomplete refactoring accepts the property, the property is added to a copy of this incomplete refactoring (lines 33 – 34). This ensures that the initial incomplete refactoring remains unchanged in pendingIncompleteRefactorings and thus can be considered for future properties, if there are any. If adding the new property makes the new refactoring complete, the refactoring is added to pendingCompleteRefactorings (line 36) and the algorithm proceeds to the next new property (line 37). Otherwise, the new refactoring is added to pendingIncompleteRefactorings (line 39).
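The copy-on-accept step might be sketched as follows (a hypothetical dictionary representation of a pending refactoring):

```python
# Illustrative sketch: add an accepted property to a deep copy of the
# incomplete refactoring, leaving the original pending entry unchanged
# so it can still accept future properties.
import copy

def accept_into_copy(incomplete, prop):
    new_refactoring = copy.deepcopy(incomplete)
    new_refactoring["properties"].append(prop)
    return new_refactoring
```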
If the new property does not make any of the pending incomplete refactorings complete, the algorithm creates new refactorings of the kinds that the new property is character-
input: astNodeOperations // the sequence of basic AST node operations marked with migrate IDs
output: inferredRefactorings
inferredRefactorings = ∅;
inferredRefactoringKinds = getAllInferredRefactoringKinds();
pendingCompleteRefactorings = ∅;
pendingIncompleteRefactorings = ∅;
pendingRefactoringFragments = ∅;
foreach (astNodeOperation ∈ astNodeOperations) {
inferredRefactorings ∪= removeOldRefactorings(pendingCompleteRefactorings);
removeTimedOutRefactorings(pendingIncompleteRefactorings);
removeTimedOutRefactoringFragments(pendingRefactoringFragments);
newProperties = getProperties(astNodeOperation);
foreach (newProperty ∈ newProperties) {
foreach (pendingRefactoringFragment ∈ pendingRefactoringFragments) {
if (accepts(pendingRefactoringFragment, newProperty)) {
addProperty(pendingRefactoringFragment, newProperty);
if (isComplete(pendingRefactoringFragment)) {
remove(pendingRefactoringFragments, pendingRefactoringFragment);
newProperties ∪= pendingRefactoringFragment;
break;
}
}
}
if (canBePartOfRefactoringFragment(newProperty)) {
pendingRefactoringFragments ∪= createRefactoringFragment(newProperty);
}
foreach (pendingCompleteRefactoring ∈ pendingCompleteRefactorings) {
if (accepts(pendingCompleteRefactoring, newProperty)) {
addProperty(pendingCompleteRefactoring, newProperty);
continue foreach_line11; // the property is consumed
}
}
foreach (pendingIncompleteRefactoring ∈ pendingIncompleteRefactorings) {
if (accepts(pendingIncompleteRefactoring, newProperty)) {
newRefactoring = clone(pendingIncompleteRefactoring);
addProperty(newRefactoring, newProperty);
if (isComplete(newRefactoring)) {
pendingCompleteRefactorings ∪= newRefactoring;
continue foreach_line11; // the property is consumed
} else {
pendingIncompleteRefactorings ∪= newRefactoring;
}
}
}
foreach (inferredRefactoringKind ∈ inferredRefactoringKinds) {
if (isCharacteristicOf(inferredRefactoringKind, newProperty)) {
newRefactoring = createRefactoring(inferredRefactoringKind, newProperty);
pendingIncompleteRefactorings ∪= newRefactoring;
}
}
} // end foreach newProperty (line 11)
} // end foreach astNodeOperation
inferredRefactorings ∪= pendingCompleteRefactorings;
Fig. 21. Overview of our refactoring inference algorithm.
<table>
<thead>
<tr>
<th>Property name</th>
<th>Property attributes</th>
<th>Fragment name</th>
<th>Component properties</th>
</tr>
</thead>
<tbody>
<tr>
<td>Added Entity Reference</td>
<td>entityName, entityNameNodeID, parentID, enclosingClassNodeID</td>
<td>Migrated Across Methods</td>
<td>Migrated From Method, Migrated To Method</td>
</tr>
<tr>
<td>Added Field Assignment</td>
<td>entityName, entityNameNodeID, setterMethodID</td>
<td>Replaced Entity With Expression</td>
<td>Migrated To Usage, Deleted Entity Reference</td>
</tr>
<tr>
<td>Added Field Declaration</td>
<td>entityName, entityNameNodeID, enclosingClassNodeID</td>
<td>Replaced Entity With Getter</td>
<td>Added Getter Method Invocation, Deleted Entity Reference</td>
</tr>
<tr>
<td>Added Field Return</td>
<td>entityName, entityNameNodeID, getterMethodID</td>
<td>Replaced Entity With Setter</td>
<td>Added Setter Method Invocation, Deleted Entity Reference</td>
</tr>
<tr>
<td>Added Getter Method Declaration</td>
<td>getterMethodName, getterMethodID</td>
<td>Replaced Expression With Entity</td>
<td>Migrated From Usage, Added Entity Reference</td>
</tr>
<tr>
<td>Added Getter Method Invocation</td>
<td>getterMethodName, parentID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Added Method Declaration</td>
<td>entityName, entityNameNodeID, destinationMethodID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Added Method Invocation</td>
<td>entityName, entityNameNodeID, sourceMethodName, sourceMethodID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Added Setter Method Declaration</td>
<td>setterMethodName, setterMethodID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Added Setter Method Invocation</td>
<td>setterMethodName, parentID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Added Variable Declaration</td>
<td>entityName, entityNameNodeID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Changed Global Entity Name In Usage</td>
<td>oldEntityName, newEntityName, entityNameNodeID, sourceMethodID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Changed Local Entity Name In Usage</td>
<td>oldEntityName, newEntityName, entityNameNodeID, sourceMethodID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Changed Method Name In Invocation</td>
<td>oldEntityName, newEntityName</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Changed Field Name In Declaration</td>
<td>oldEntityName, newEntityName</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Changed Method Name In Declaration</td>
<td>oldEntityName, newEntityName</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Changed Type Name In Constructor</td>
<td>oldEntityName, newEntityName, entityNameNodeID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Changed Type Name In Declaration</td>
<td>oldEntityName, newEntityName</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Changed Variable Name In Declaration</td>
<td>oldEntityName, newEntityName, sourceMethodID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Deleted Entity Reference</td>
<td>entityName, entityNameNodeID, parentID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Deleted Variable Declaration</td>
<td>entityName, entityNameNodeID, enclosingClassNodeID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Made Field Private</td>
<td>entityName, entityNameNodeID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Migrated From Method</td>
<td>sourceMethodID, migrateID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Migrated From Usage</td>
<td>migratedNode, migrateID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Migrated From Variable Initialization</td>
<td>entityName, migratedNode, migrateID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Migrated To Field Initialization</td>
<td>entityName, migratedNode, migrateID, enclosingClassNodeID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Migrated To Method</td>
<td>entityName, entityNameNodeID, destinationMethodID, migrateID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Migrated To Usage</td>
<td>migratedNode, migrateID</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Migrated To Variable Initialization</td>
<td>entityName, migratedNode, migrateID</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Figs. 18 and 19. Refactoring properties and their attributes (first two columns) and refactoring fragments with their component properties (last two columns).
istic of and adds these new refactorings to pendingIncompleteRefactorings (lines 43 – 48).
Finally, after processing all AST node operations, the algorithm adds to inferredRefactorings any of the remaining pending complete refactorings (line 50).
C. Evaluation of Refactoring Inference Algorithm
We are the first to report the accuracy of a continuous refactoring inference algorithm on real world data. First, we evaluated our algorithm on the automated refactorings performed by our participants, which Eclipse records precisely. We considered 2,398 automated refactorings of nine of the ten kinds that our algorithm infers (we disabled the inference of the automated Encapsulate Field refactoring in our experiment because the inferencer did not scale for one participant, who performed many such refactorings one after another). A challenge for any inference tool is to establish the ground truth, and we are the first to use such a large ground truth. Our algorithm correctly inferred 99.3% of these 2,398 refactorings. The 16 uninferred refactorings represent unlikely code editing scenarios; e.g., ten of them are Extract Local Variable refactorings in which Eclipse re-writes huge chunks of code in a single shot.
Also, we randomly sampled 16.5 hours of code development from our corpus of 1,520 hours. Each sample is a 30-minute chunk of development activity, which includes writing code, refactoring code, running tests, committing files, etc. To establish the ground truth, the second author manually replayed each sample and recorded any refactorings (of the ten kinds that we infer) that he observed. He then compared this to the numbers reported by our inference algorithm. The first and the second authors discussed any observed discrepancies and classified them as either false positives or false negatives. Fig. 22 shows the sampling results for each kind of refactoring that our algorithm infers.
The confusion matrix [37] for our inference algorithm is presented below. The number of true negatives is represented as X. True negatives measure instances where a refactoring did not occur. Since a refactoring could occur at any time epoch (down to the last millisecond as recorded by our tool), there could be an enormous number of such true negatives. Our evaluation metrics do not depend on the number of true negatives.
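For reference, the two standard metrics derived from such a confusion matrix, which indeed do not involve the number of true negatives:

```python
# Precision and recall from confusion-matrix counts; the number of
# true negatives (X in the text) appears in neither formula.
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)
```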
We infer only ten kinds of refactorings, which is a subset of the total number of refactorings that a developer can apply. To address this limitation to some extent, we inferred those refactoring kinds that were previously reported as being the most popular among automated refactorings [12].
B. Refactoring Inference Algorithm
Our refactoring inference algorithm takes as input the basic AST node operations that are inferred by another algorithm [10]. Thus, any inaccuracies in the AST node operations inference algorithm could lead to imprecisions in the refactoring inference algorithm. However, we compute the precision and recall for both these algorithms applied together, and thus, account for any inaccuracies in the input of the refactoring inference algorithm.
Although the recall of our refactoring inference algorithm is very high, the precision is noticeably lower. As a result, some of our numbers might be skewed. Nevertheless, we believe that the precision is high enough not to undermine our general observations.
To measure the precision and recall of the refactoring inference algorithm, we sampled around 1% of the total amount of data. Although this is a relatively small fraction of the analyzed data, the sampling was random and involved 33 distinct 30-minute intervals of code development activities.
VI. RELATED WORK
To accurately answer questions about the practice of refactoring, we have to consider both manual and automated refactorings. Collecting information about automated refactoring is relatively simple and can be done through instrumenting the Eclipse refactoring infrastructure. Collecting information about manual refactorings, on the other hand, is more complex and relies on algorithms for inferring refactorings. This section summarizes state-of-the-art work in refactoring inference and empirical research of refactoring, and contrasts our work to them.
A. Empirical Studies of Refactoring Practice
Xing and Stroulia [13] report that 70% of all changes observed in the evolution of the Eclipse code base are expressible as refactorings. Our previous study [9] of four open source frameworks and one library concluded that more than 80% of component API evolution is expressible through refactorings. These studies indicate that the practice of refactoring plays a vital role in software evolution and is an important area of research.
Our paper focuses on studying software evolution through the lens of refactoring, juxtaposing both manual and automated refactorings. Work on empirical research on the usage of automated refactoring tools was stimulated by Murphy et al.’s study [7] of 41 developers using the Java tools in Eclipse. Their study provided the first empirical ranking of the relative popularities of different automated refactorings, demonstrating that some tools are used more frequently than others. Subsequently, Murphy-Hill et al.’s [38] study on the
A. Experimental Setup
We encountered difficulties in recruiting a larger group of experienced programmers due to issues such as privacy, confidentiality, and lack of trust in the reliability of research tools. However, we managed to recruit 23 participants, which we consider a sufficiently large group for our kind of study. Our dataset is not publicly available due to the non-disclosure agreement with our participants.
Section II shows that some participants used CodingTracker for longer periods of time than others. Also, some participants might be more prolific coders or apply refactorings more often. Consequently, such participants had a larger impact on our results. At the same time, we think that this non-uniformity is representative of the real world.
Our results are based on the code evolution data obtained from developers who use Eclipse for Java programming. Nevertheless, we expect our results to generalize to similar programming environments.
use of automated refactoring tools provided valuable insights into the use of automated refactorings in the wild by analyzing data from multiple sources.
Due to the non-intrusive nature of CodingTracker, we were able to deploy our tool to more developers for longer periods of time. As such, we were able to infer and record an order of magnitude more manual refactoring invocations compared to Murphy-Hill et al.’s sampling-based approach, providing a more complete picture of refactoring in the wild. To compare manual and automated refactorings, Murphy-Hill et al. sampled 80 commits from 12 developers for a total of 261 refactoring invocations, whereas our tool recorded 1,520 hours from 23 developers for a total of 5,371 refactoring invocations.
Murphy-Hill et al.’s [38] study found that (i) refactoring tools are underused and (ii) the kinds of refactorings performed manually are different from those performed using tools. Our data (see RQ3) corroborates both these claims. We found that some refactorings are performed manually more frequently, even when an automated tool exists and the developer is aware of it. Due to the large differences in the data sets (261 from Murphy-Hill et al. vs. 5,371 from ours), it is not possible to meaningfully compare the raw numbers of each refactoring kind. However, the general conclusion holds: different refactoring tools are underused to different degrees. Our work also builds upon their work by providing a more detailed breakdown of the manual and automated usage of each refactoring tool according to different participants’ behavior.
Vakilian et al. [27] observed that many advanced users tend to compose several refactorings together to achieve different purposes. Our results about clustered refactorings (see RQ6) provide additional empirical evidence of such practices. Analyzing the actual elements that are affected by each refactoring would help us better understand how these clusters are formed and what are the implications of these clustering behaviors on software evolution.
B. Automatic Inference of Refactorings
Early work by Demeyer et al. [18] inferred refactorings by comparing two different versions of source code using heuristics based only on low-level software metrics (method size, class size and inheritance levels). To improve accuracy, subsequent work by other researchers described changes between versions of code using higher-level characteristic properties. A refactoring is detected based on how well it matches a set of characteristic properties. Our previous tool, RefactoringCrawler [17], used references of program entities (instantiation, method calls, type imports) as its set of characteristic properties. Weißgerber and Diehl [22] used names, signature analysis, and clone detection as their set of characteristic properties. More recently, Prete et al. [39] devised a template-based approach that can infer up to 63 of the 72 refactorings cataloged by Fowler [1]. Their templates build upon characteristic properties such as accesses, calls, inherited fields, etc., that model code elements in Java. Their tool, Ref-Finder, infers the widest variety of refactorings to date.
All these approaches rely exclusively on snapshots from VCS to infer refactorings. Thus, the accuracy of detection depends on the closeness of the two snapshots being compared. We have shown in RQ7 that many refactorings are shadowed and do not ever reach a commit. This compromises the accuracy of inference algorithms that rely on snapshots. Moreover, snapshot-based approaches (with the exception of Ref-Finder) usually concentrate only on API-level changes leaving out many of the completely or partially local refactorings that we infer (see Fig. 1). This paints an incomplete picture of the evolution of the code.
To address such inadequacies, our inference algorithm leverages fine-grained edits. Similar to existing approaches, our algorithm (see Fig. 20) infers refactorings by matching a set of characteristic properties for each refactoring. Our properties consist of high-level semantic changes such as adding a field, deleting a variable, etc. In contrast to existing approaches, our properties are precise because they are constructed directly from the AST operations that are recorded on each code edit.
In parallel with our tool, Ge et al. [25] developed BeneFactor and Foster et al. [26] developed WitchDoctor. Both these tools continuously monitor code changes to detect and complete manual refactorings in real-time. Although conceptually similar, our tools have different goals – we infer complete refactorings, while BeneFactor and WitchDoctor try to infer and complete partial refactorings. Thus, their tools can afford to infer fewer kinds of refactorings and with much lower accuracy. While orthogonal to our work on studying code evolution, these projects highlight the potential of using refactoring inference algorithms based on fine-grained code changes to improve the IDE. In the following, we compare our tool with the most similar tool, WitchDoctor, in more detail.
Like our tool, WitchDoctor represents fine-grained code changes as AST node operations and uses these operations to infer refactorings. Although similar, the AST node operations and refactoring inference algorithms employed by WitchDoctor and our tool have a number of differences. In particular, our AST node operations inference algorithm [10] employs a range of heuristics for better precision, e.g., it handles Eclipse’s linked edits and jumps over the unparsable state of the underlying code. WitchDoctor specifies refactorings as requirements and constraints. Our refactoring inference algorithm defines refactorings as collections of properties without explicitly specifying any constraints on them. Instead, the properties’ attributes matching ensures compatibility of the properties that are part of the same refactoring (see Section V-B). Additionally, our algorithm infers migrated AST nodes and refactoring fragments, which represent a higher level of abstraction than properties that are constructed directly from AST node operations. The authors of WitchDoctor focused on real-time performance of their tool. Since we applied our tool off-line, we were not concerned with its real-time performance, but rather assessed both precision and recall of our tool on the real world data.
VII. CONCLUSIONS
There are many ways to learn about the practice of refactoring, such as observing and reflecting on one’s own practice, observing and interviewing other practitioners, and controlled experiments. But an important way is to analyze the changes made to a program, since programmers’ beliefs about what they do can be contradicted by the evidence. Thus, it is important to be able to analyze programs and determine the kind of changes that have been made. This is traditionally done by looking at the difference between snapshots. In this paper, we have shown that VCS snapshots lose information. A continuous analysis of change lets us see that refactorings tend to be clustered, that programmers often change the name of an item several times within a short period of time, and that they perform more manual than automated refactorings.
Our algorithm for inferring change continuously can be used for purposes other than understanding refactoring. We plan to use it as the base of a programming environment that treats changes intelligently. Continuous analysis is better at detecting refactorings than analysis of snapshots, and it ought to become the standard for detecting refactorings.
REFERENCES
CUCS-135-84
\textsuperscript{1}Computer Science Department
Columbia University
New York, New York, 10027
\textsuperscript{2}IBM T. J. Watson Research Center
Yorktown Heights, New York, 10598
revised version: October 1985
Table of Contents
1. Introduction
2. A Brief Description Of Attribute Grammars
2.1. Attribute Grammars
2.2. An Attribute Grammar Example
2.3. Attribute Grammars and Context Conditions
3. Inversion Of Attribute Grammars
3.1. Token Permuting Functions
3.2. Restricted Inverse Form
3.3. The Inversion Algorithm
3.4. Extending the Inversion Paradigm
3.5. Efficiency
4. Using Attribute Grammar Inversion To Build An Interface For SQL
4.1. Non-invertible function constructs
4.2. Ambiguity
5. Conclusion
### List of Figures
<table>
<thead>
<tr>
<th>Figure</th>
<th>Description</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td>1-1</td>
<td>Inverse attribute grammars used for two-way translations</td>
<td>2</td>
</tr>
<tr>
<td>2-1</td>
<td>An attribute grammar example</td>
<td>5</td>
</tr>
<tr>
<td>2-2</td>
<td>A typical semantic tree for the example AG</td>
<td>6</td>
</tr>
<tr>
<td>3-1</td>
<td>The inversion of $p_6$ splits into two productions</td>
<td>10</td>
</tr>
<tr>
<td>3-2</td>
<td>The inverse AG generated from the example AG</td>
<td>11</td>
</tr>
<tr>
<td>3-3</td>
<td>A typical semantic tree for the inverse AG</td>
<td>12</td>
</tr>
<tr>
<td>3-4</td>
<td>A semantic function using a non-\textit{token permuting function}</td>
<td>12</td>
</tr>
<tr>
<td>3-5</td>
<td>The inverse productions</td>
<td>13</td>
</tr>
<tr>
<td>4-1</td>
<td>A SQL query and its English paraphrase</td>
<td>14</td>
</tr>
<tr>
<td>4-2</td>
<td>A non-invertible function construct</td>
<td>15</td>
</tr>
<tr>
<td>4-3</td>
<td>Figure 4-2 changed to restricted inverse form</td>
<td>15</td>
</tr>
<tr>
<td>4-4</td>
<td>Another non-invertible function construct</td>
<td>16</td>
</tr>
<tr>
<td>4-5</td>
<td>Figure 4-4 changed to restricted inverse form</td>
<td>16</td>
</tr>
<tr>
<td>4-6</td>
<td>Two unique productions inverting to identical ones</td>
<td>17</td>
</tr>
<tr>
<td>4-7</td>
<td>Two productions collapsing into one</td>
<td>18</td>
</tr>
</tbody>
</table>
ABSTRACT
Over the last decade there has developed an acute awareness of the need to introduce abstraction and mathematical rigor into the programming process. This increased formality allows for the automatic manipulation of software, increasing productivity and, even more importantly, the manageability of complex systems. Along these lines, attribute grammars constitute a formal mechanism for specifying translations between languages; from a formal description of the translation a translator can be automatically constructed. In this paper we consider taking this process one step further: given an attribute grammar specifying the translation from language $L_1$ to the language $L_2$, we address the question of whether the inverse attribute grammar specifying the inverse translation from $L_2$ to $L_1$ can be automatically generated. We show how to solve this problem for a restricted subset of attribute grammars. This inversion process allows for compatible two-way translators to be generated from a single description. To show the practical feasibility of attribute grammar inversion, we relate our experience in inverting an attribute grammar used as an interface for a formal database accessing language, SQL. The attribute grammar is used to paraphrase SQL database queries in English.
1. Introduction
This paper discusses a method to invert attribute grammars. Given an attribute grammar (AG) defining a translation $T: L_1 \rightarrow L_2$, we show how to automatically synthesize the inverse attribute grammar specifying the inverse translation $T^{-1}: L_2 \rightarrow L_1$. To do so we impose restrictions on the attribute grammars we consider.
Our research has been motivated by both theoretical interests and practical applications. Theoretically, this paper adds to a theory of inversion. It demonstrates, for a particular framework based on attribute grammars, how inversion of subprocesses (context-free productions and semantic functions) leads to the inversion of the entire process (the AG). It also shows that a strong duality between syntax and semantics exists in attribute grammars and that this duality can be exploited for purposes of inversion. Along practical lines, attribute grammar inversion promises to be a powerful tool for software development. Because it can be accomplished automatically, it increases production efficiency and ensures the consistency of complex software.
Efficiency can be enhanced in systems where two-way translations are needed. In particular, if there is a need for an attribute grammar $T: L_1 \rightarrow L_2$ and its inverse $T^{-1}: L_2 \rightarrow L_1$, then by writing the attribute grammar $T$ and automatically generating the inverse attribute grammar $T^{-1}$ only half of the labor need be performed. More importantly, $T^{-1}$ is guaranteed to be the actual inverse of $T$; $T^{-1}(T(s)) = s$ for all $s$ in the domain of $T$. If $T^{-1}$ were to be written manually and independently of $T$, it would be difficult to prove that this property is preserved. Furthermore, if at some later date $T$ is changed or updated, $T^{-1}$ can be automatically generated from the updated attribute grammar $T$. Hence consistency between the two translators can be maintained.
Attribute grammar inversion can also be used to translate between high level programming languages. For example, suppose that $L_A$ and $L_B$ are programming languages and $T_A: L_A \rightarrow I$ and $T_B: L_B \rightarrow I$ are attribute grammars describing the translations from $L_A$ and $L_B$ into an intermediate language $I$. If we can generate the inverse attribute grammar $T^{-1}_B$ then we can create the translation $T_{AB}: L_A \rightarrow L_B$ by forming the composition $T_{AB} = T^{-1}_B \circ T_A$. (A method of composing AGs without using an intermediate representation is discussed by Ganzinger in [8]). These ideas can be extended to a distributed system with $n$ processors linked together, each using its own command language. If programs need to be shared between processors, we can define a canonical form and write invertible translators from this canonical form into each command language. By automatically generating the inverse translators we would be able to translate a program written for one processor into the command language of some other processor. Furthermore, using this method one can create $n^2$ translators (translating from any one of $n$ languages into any other one) from only $n$ specifications, instead of $n^2$. This is illustrated schematically in figure 1-1. Other applications of inverting translation specifications are discussed in [23].
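The $n$-specifications scheme above can be sketched concretely. The code below is an illustrative toy, not from the paper: the language names, the tuple canonical form, and all function names are my own. Each language supplies one invertible specification into a canonical form $I$, and every pairwise translator is obtained by composition.

```python
# Toy sketch: n invertible specifications into a canonical form I yield
# all pairwise translators by composition (hypothetical names throughout).

def make_pairwise(to_canonical, from_canonical):
    """Build a translator table T[a][b] = from_canonical[b] o to_canonical[a]."""
    return {
        a: {b: (lambda s, f=to_canonical[a], g=from_canonical[b]: g(f(s)))
            for b in from_canonical if b != a}
        for a in to_canonical
    }

# Toy "languages": postfix and prefix spellings of a binary expression,
# with an (lhs, op, rhs) tuple as the canonical form I.
to_canonical = {
    "postfix": lambda s: (s.split()[0], s.split()[2], s.split()[1]),
    "prefix":  lambda s: (s.split()[1], s.split()[0], s.split()[2]),
}
from_canonical = {
    "postfix": lambda t: f"{t[0]} {t[2]} {t[1]}",
    "prefix":  lambda t: f"{t[1]} {t[0]} {t[2]}",
}

T = make_pairwise(to_canonical, from_canonical)
print(T["postfix"]["prefix"]("5 8 *"))   # prints "* 5 8"
```

Two specifications per language (one in, one out of the canonical form) give all pairwise translators, which is the economy the paper describes; inversion would further halve the hand-written half.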
The organization of this paper is as follows: Section 2 contains a brief introduction to attribute grammars and presents an example grammar which will be used throughout the
Figure 1-1: Inverse attribute grammars used for two-way translations
paper. In section 3 we introduce a restricted form for attribute grammars and discuss the inversion algorithm. In section 4 we relate our experience in inverting an actual attribute grammar. Section 5 summarizes our results and suggests areas for future research.
2. A Brief Description Of Attribute Grammars
In this section we provide a brief introduction to attribute grammars, present an example attribute grammar used in the rest of the paper, and define a small extension to attribute grammars, namely, context conditions.
2.1. Attribute Grammars
Attribute grammars were first proposed by Knuth [15] as a way to specify the semantics of context-free languages. The basis of an attribute grammar is a context-free grammar. This describes the context-free language that is the domain of the translation, that is, those strings on which the translation is defined. This context-free grammar is augmented with attributes and semantic functions. Attributes are associated with the nonterminal symbols of the grammar. We write "X.A" to denote attribute A of symbol X, and \( \mathcal{A}(X) \) to denote the set of attributes associated with X. Semantic functions are associated with productions; they describe how the values of some attributes of the production are defined in terms of the values of other attributes of the production.
The underlying context-free grammar of an attribute grammar describes a language. Any string in this language has a parse tree associated with it by the grammar. The nodes of this parse tree can be labelled with symbols of the grammar. Each interior node of this tree, N, has two productions associated with it. The left-part production (LP) of N is the production that applies at N deriving N's children. The right-part production (RP) of the node N is the production that applies at the parent of N deriving N and its siblings. Leaves of the tree don't have LP productions; the root doesn't have an RP production.
A semantic tree is a parse tree in which each node contains fields that correspond to the attributes of its labelling grammar symbol. Each of these fields is an attribute-instance. The
values of attribute-instances are specified by the semantic functions. For example, if a production \([p: X_0 ::= X_1 \cdots X_{n_p}]\) has a semantic function \(X_0.A = f(X_2.B, X_4.C)\), then for any instance of \(p\) in any semantic tree, the attribute-instance corresponding to \(X_0.A\) will be defined by applying the function \(f\) to the attribute-instances corresponding to \(X_2.B\) and \(X_4.C\).
Since two different productions are associated with each attribute-instance, there could be two semantic functions that independently specify its value, one from the LP production and one from the RP production. If we assume that each attribute-instance is defined by only one semantic function, either from the LP production or from the RP production, then we must guard against an attribute-instance not being defined at all because the LP production assumed that the RP production would define it and vice versa. These difficulties are avoided in attribute grammars by adopting the convention that for every attribute, \(X.A\), either: (1) every instance of \(X.A\) is defined by a semantic function associated with its LP production, or (2) every instance of \(X.A\) is defined by a semantic function associated with its RP production. Attributes whose instances are all defined in their LP production are called synthesized attributes; attributes whose instances are all defined in their RP production are called inherited attributes. Every attribute is either inherited or synthesized. Inherited attributes propagate information down the tree, towards the leaves. Synthesized attributes propagate information up the tree, toward the root. The inherited attributes of a non-terminal \(X\) are denoted by \(I(X)\), the synthesized attributes by \(S(X)\); \(\mathcal{A}(X) = I(X) \cup S(X)\). The start symbol has no inherited attributes. From the point of view of an individual production the above conditions require that the semantic functions of a production MUST define EXACTLY all the inherited attributes of the right-part symbols and all the synthesized attributes of the left-part symbol. For a given production \([p: X_0 ::= X_1 \cdots X_{n_p}]\), we often refer to the attributes of \(p\), \(\mathcal{A}(p) = \mathcal{A}(X_0) \cup \cdots \cup \mathcal{A}(X_{n_p})\).
The result of the translation specified by an attribute grammar is realized as the values of one or more (necessarily synthesized) attribute-instances of the root of the semantic tree. In order to compute these values the other attribute-instances must be computed. In extreme cases an attribute-instance can depend on itself; such a situation is called a circularity and by definition such situations are forbidden from occurring in well-defined attribute grammars. In general, it is an exponentially hard problem [9] to determine that an attribute grammar is non-circular; i.e. that no semantic tree that can be generated by the attribute grammar contains a circularly defined attribute-instance. Fortunately there are several interesting and widely applicable sufficient conditions that can be checked in polynomial time [3, 10, 12, 14]; e.g., absolute noncircularity [14].
Many translator writing systems have been built using the attribute grammar formalism [16, 19, 13, 4, 7]. Such a system accepts an attribute grammar as input and generates a compiler for the attribute grammar. Part of this task calls for generating an evaluator of semantic trees; such an evaluator must evaluate each attribute-instance of the tree after all attribute-instances that it depends on have already been evaluated. Many strategies for efficient evaluation have been discussed in the literature [22] and include multi-pass evaluation [10], among others.
2.2. An Attribute Grammar Example
Figure 2-1 gives an attribute grammar which translates simple English descriptions of mathematical expressions into post-fix Polish notation. This grammar distinguishes between expressions involving only integer values (in which case operators of the form '$+_i$' and '$*_i$' are required) and those involving a decimal point value (in which case operators of the form '$+_r$' and '$*_r$' are required). So, for example, it will translate the English phrase 'multiply 5.7 by 8' into the post-fix Polish expression '(5.7,8,$*_r$)' and the phrase 'add 5 to 9' into '(5,9,$+_i$)'.
In this AG there are 8 productions and each production has associated semantic functions. In production \( p_1 \), \(<\text{Num}1>\) and \(<\text{Num}2>\) denote separate occurrences of the same symbol, \(<\text{Num}>\); the numeric suffixes distinguish these different occurrences. \( S.\text{trans} \) is the distinguished attribute of the root; at the end of attribute evaluation the translation resides in this attribute.
Figure 2-2 shows a semantic tree corresponding to the input string 'multiply 80 by 5.8'. Each node in this tree is labelled with its associated grammar symbol and has attribute-instances corresponding to the attributes of that grammar symbol.
2.3. Attribute Grammars and Context Conditions
In this paper we shall consider a small extension to attribute grammars. This extension allows for the attachment of semantic conditions to productions as illustrated in productions \( p_1 \) and \( p_2 \) of figure 2-1. In general we allow a production \( p \) to have a context condition of the form:
\(<\text{CONDITION: expr}_1 \ \text{AND} \ \text{expr}_2 \ \text{AND} \ ... \ \text{AND} \ \text{expr}_k>\)
where each \( \text{expr}_i \) is a boolean expression involving constants and attributes of \( p \). A condition of the above form attached to a production is to be interpreted as saying that the production-instance is valid if and only if the condition evaluates to true. If the condition evaluates to false then the production-instance is not valid and the input violates context sensitivities of the attribute grammar. An attribute grammar system allowing conditions on the productions would first parse the input, build a semantic tree, and evaluate the attribute-instances of the tree as in a regular attribute grammar system. It would then evaluate all conditions associated with production-instances of the tree. If all evaluate to true it would return the translation given in the distinguished attribute of the root. If any evaluate to false, however, the translation is defined to be 'error' as the input violates context sensitivities of the attribute grammar.\(^1\) So, for instance, the sentence 'multiply 80 to 5.8' of our example attribute grammar would be parsed and a semantic tree built for it. After evaluation of the attribute-instances in the tree it would be determined that a context-
\(^1\)If the underlying context-free grammar of the AG is ambiguous, then the translation of a string is 'error' only if every parse for this string contains violated context conditions.
**Context free symbols of the attribute grammar and their attributes:**
<table>
<thead>
<tr>
<th>Context-free symbols</th>
<th>synthesized attributes</th>
<th>inherited attributes</th>
</tr>
</thead>
<tbody>
<tr>
<td>S:</td>
<td>{ trans }</td>
<td>{ }</td>
</tr>
<tr>
<td>Op:</td>
<td>{ trans }</td>
<td>{ type }</td>
</tr>
<tr>
<td>Num:</td>
<td>{ trans, type }</td>
<td>{ }</td>
</tr>
<tr>
<td>Integer:</td>
<td>{ trans }</td>
<td>{ }</td>
</tr>
<tr>
<td>Decimal_num:</td>
<td>{ trans }</td>
<td>{ }</td>
</tr>
<tr>
<td>digits:</td>
<td>{ trans }</td>
<td>{ }</td>
</tr>
</tbody>
</table>
**Productions of the attribute grammar and their semantic functions:**
\[ p_1: S ::= \text{Op Num1 by Num2.} \quad <\text{Condition: (Op.trans = '*_r') or (Op.trans = '*_i')}> \]
\[ S.\text{trans} = \text{Concatenate('(', Num1.trans, ',', Num2.trans, ',', Op.trans, ')');} \]
\[ \text{Op.type} = \text{If (Num1.type = real) or (Num2.type = real)} \]
\[ \text{then real else int;} \]
\[ p_2: S ::= \text{Op Num1 to Num2.} \quad <\text{Condition: (Op.trans = '+_r') or (Op.trans = '+_i')}> \]
\[ S.\text{trans} = \text{Concatenate('(', Num1.trans, ',', Num2.trans, ',', Op.trans, ')');} \]
\[ \text{Op.type} = \text{If (Num1.type = real) or (Num2.type = real)} \]
\[ \text{then real else int;} \]
\[ p_3: \text{Num ::= Integer.} \]
\[ \text{Num.trans} = \text{Integer.trans;} \]
\[ \text{Num.type} = \text{int;} \]
\[ p_4: \text{Num ::= Decimal\_num.} \]
\[ \text{Num.trans} = \text{Decimal\_num.trans;} \]
\[ \text{Num.type} = \text{real;} \]
\[ p_5: \text{Op ::= add.} \]
\[ \text{Op.trans} = \text{If (Op.type = real) then '+_r' else '+_i';} \]
\[ p_6: \text{Op ::= multiply.} \]
\[ \text{Op.trans} = \text{If (Op.type = real) then '*_r' else '*_i';} \]
\[ p_7: \text{Integer ::= digits.} \]
\[ \text{Integer.trans} = \text{digits.trans;} \]
\[ p_8: \text{Decimal\_num ::= digits1 '.' digits2.} \]
\[ \text{Decimal\_num.trans} = \text{Concatenate(digits1.trans, '.', digits2.trans);} \]
**Figure 2-1**: An attribute grammar example
sensitive condition of \( p_2 \) is violated; an instance of that production is valid only if \( \text{Op.trans} \) equals an additive operator (i.e., '$+_r$' or '$+_i$'), and in this case \( \text{Op.trans} \) equals '$*_r$'. The idea
of context conditions for attribute grammars was first suggested in [20]. By putting further restrictions on the allowable form of conditions, we can make them useful in parsing the input ([11, 21]). In [5] it is shown how context conditions can be incorporated into the regular semantic functions of the productions.
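The translation of figure 2-1, including the context-condition check just described, can be mimicked in a few lines of ordinary code. This is a hand-written sketch of my reading of the grammar (the operator spellings '+_i', '+_r', '*_i', '*_r' and the parenthesized comma format are my transcription of the figure); attribute evaluation and condition checking are folded into one pass.

```python
# Sketch of the figure 2-1 translation (my transcription, not the paper's code).
def translate(sentence: str) -> str:
    words = sentence.rstrip('.').split()
    op_word, n1, conn, n2 = words          # e.g. ['multiply', '80', 'by', '5.8']
    # Num.type (p3/p4): real if the literal contains a decimal point
    typ = 'real' if ('.' in n1 or '.' in n2) else 'int'
    # Op.trans (p5/p6), driven by the inherited Op.type attribute
    suffix = 'r' if typ == 'real' else 'i'
    op = {'add': '+_', 'multiply': '*_'}[op_word] + suffix
    # Context conditions of p1/p2: 'by' requires a multiplicative operator,
    # 'to' requires an additive one; otherwise the translation is 'error'.
    if conn == 'by' and not op.startswith('*'):
        return 'error'
    if conn == 'to' and not op.startswith('+'):
        return 'error'
    return f'({n1},{n2},{op})'             # S.trans (p1/p2)

print(translate('multiply 80 by 5.8'))     # (80,5.8,*_r)
print(translate('add 5 to 9'))             # (5,9,+_i)
print(translate('multiply 80 to 5.8'))     # error
```

The third call shows the violated context condition of \( p_2 \): the 'to' production is only valid when Op.trans is additive.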
3. Inversion Of Attribute Grammars
In this section we give an algorithm to invert AGs. For example, given the AG above describing the translation from English descriptions of mathematical expressions into post-fix Polish notation, the inversion algorithm will produce a new inverse AG describing the translation from post-fix Polish notation into English descriptions of mathematical expressions. In order to perform the inversion, the AG must be in a restricted inverse form. A formal definition of this restricted form is given in section 3.2. In essence, it restricts the AG so that each nonterminal of the grammar has a special trans attribute, which must be defined by a restricted functional form. For each interior node in a semantic tree, the trans attribute at that node will compute the translation of the subtree beneath it. Although other attributes of the AG influence the translation by passing context sensitive information around the semantic tree, it is the trans attribute which ultimately computes the translation. In the next section we introduce the concept of token permuting functions, which will subsequently be used in our definition of restricted inverse form.
3.1. Token Permuting Functions
A function $f$ is a token permuting function over an alphabet $\Delta$ if and only if it is of the form $f(Y_1, \ldots, Y_n) = \text{concatenate}(\beta_0, Y_{i_1}, \beta_1, Y_{i_2}, \ldots, Y_{i_n}, \beta_n)$, where each $Y_k$ ($1 \leq k \leq n$) is a variable taking on values in $\Delta^*$, each $\beta_k$ ($0 \leq k \leq n$) is a constant in $\Delta^*$, and each $Y_k$ of the left hand side appears once and only once as some $Y_{i_t}$ ($1 \leq t \leq n$) of the right hand side.
The function $f$ is called a token permuting function as it permutes the order of its
arguments and inserts constant tokens of $\Delta$ in between them. It is important to emphasize that a token permuting function cannot delete any of its arguments; each $Y_k$ must appear as some $Y_{ik}$ and it cannot appear twice. For example, $f(Y_1, Y_2) = \text{concatenate('Hello', Y1, 'and', Y2)}$ is a token permuting function. If $Y_1 = \text{'Bob'}$ and $Y_2 = \text{'Shirley'}$ then this function would yield the string 'Hello Bob and Shirley'. However, the function $g(Y) = \text{concatenate(Y, Y, 'where are you', Y)}$ is not a token permuting function as it duplicates the value of the string $Y$ several times in the output string.
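A token permuting function can be represented concretely as a list mixing constant strings (the $\beta$'s) with integer argument positions (the $i_t$'s); the definition's "each argument exactly once" requirement then becomes a simple check. A minimal sketch, with a representation and names of my own (empty $\beta$ constants are simply omitted from the list):

```python
# Spec format: [β0, i1, β1, i2, ..., in, βn] where strings are constants
# and ints pick arguments; empty β's are omitted.

def is_token_permuting(spec, n):
    """Each of Y_1..Y_n must be used exactly once (no deletion, no duplication)."""
    picks = [x for x in spec if isinstance(x, int)]
    return sorted(picks) == list(range(1, n + 1))

def apply_tpf(spec, args):
    assert is_token_permuting(spec, len(args))
    return ' '.join(args[x - 1] if isinstance(x, int) else x for x in spec)

f = ['Hello', 1, 'and', 2]           # f(Y1, Y2) = 'Hello' Y1 'and' Y2
print(apply_tpf(f, ['Bob', 'Shirley']))   # Hello Bob and Shirley

g = [1, 1, 'where are you', 1]       # duplicates Y1: not token permuting
print(is_token_permuting(g, 1))           # False
```

The same list representation is convenient later, because inverting a token permuting function amounts to reading the constants as terminals and the argument positions as (permuted) nonterminal occurrences.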
3.2. Restricted Inverse Form
An attribute grammar, without any restrictions on its semantic functions, is computationally equivalent to a Turing machine. As such, it is almost impossible to formally manipulate, let alone invert. In this section we introduce restricted inverse form attribute grammars, in which some semantic functions are required to be token permuting ones. By definition, an attribute grammar $T: \Sigma^* \rightarrow \Delta^*$ is in restricted inverse form if it obeys the following constraints:
1. Each nonterminal $X$ has a distinguished synthesized attribute $X\text{.trans}$ taking on values in $\Delta^*$. $X\text{.trans}$ represents the translation of the substring which $X$ derives.
2. For each production $[p: X_0 ::= \alpha_0 X_1 \alpha_1 X_2 \cdots X_{n_p} \alpha_{n_p}]$, the semantic function defining $X_0.\text{trans}$ is of the form
$X_0.\text{trans} = \begin{cases}
\text{if } g_1(\text{atts}_1) \text{ then } f_1(X_1.\text{trans}, X_2.\text{trans}, \ldots, X_{n_p}.\text{trans}) \\
\text{elseif } g_2(\text{atts}_2) \text{ then } f_2(X_1.\text{trans}, X_2.\text{trans}, \ldots, X_{n_p}.\text{trans}) \\
\quad \vdots \\
\text{elseif } g_{s-1}(\text{atts}_{s-1}) \text{ then } f_{s-1}(X_1.\text{trans}, X_2.\text{trans}, \ldots, X_{n_p}.\text{trans}) \\
\text{else } f_s(X_1.\text{trans}, X_2.\text{trans}, \ldots, X_{n_p}.\text{trans})
\end{cases}$
where each $\text{atts}_j \subseteq \mathcal{A}(p)$ ($1 \leq j \leq s-1$), each $g_j$ ($1 \leq j \leq s-1$) is a boolean function, and each $f_j$ ($1 \leq j \leq s$) is a token permuting function as described above. Note that the arguments to each token permuting function $f_j$ are exactly the trans attributes of the production's right-part nonterminals.
3. The value of the translation is specified to be the value of the trans attribute of the root ($S\text{.trans}$).
In this definition there are no restrictions on the number of inherited or synthesized attributes a nonterminal can have, nor any restrictions on how attributes other than trans are computed. Constraint 2, however, requires that each $f_j$ ($1 \leq j \leq s$) used to compute the trans attribute is a token permuting function.
Restricted inverse form attribute grammars (RIF grammars) can be viewed as restricted AGs or as a generalized version of syntax-directed translation schema [8]. Like syntax-directed translation schema, RIF grammars associate a special synthesized attribute (the trans attribute) to each nonterminal. This attribute stores the translation of its subtree
and is defined by a token permuting function. However, a RIF grammar surpasses a syntax-directed translation scheme in expressive power not only in that it associates context conditions to productions, but in that it allows other attributes to be associated with nonterminals. These "other" attributes influence the translation by determining which token permuting function is chosen to evaluate the trans attribute (they serve as arguments to the $g_j$ boolean expressions). This allows RIF grammars to express context sensitive translations, something syntax-directed translation schema cannot do. For example, it is easy to construct a RIF grammar which accepts strings of the form $a^i b^j c^k$, and translates them to 'OK $a^i b^j c^k$' if $i = j = k$, and to 'NOT OK $a^i b^j c^k$' otherwise. This language cannot be expressed by any syntax-directed translation schema, since the target language is not context-free. In general, the translations describable by syntax-directed translation schema are fairly restricted (see [1, 2]), whereas RIF grammars can, at least theoretically, describe any translation describable by an attribute grammar. The theoretical power of RIF grammars is discussed in [5].
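The $a^i b^j c^k$ example is easy to render as a sketch: a counting attribute plays the role of the "other" attributes of a RIF grammar, and the boolean guard selects which branch defines trans. Both branches are token permuting (they pass the whole input string through unchanged). Illustrative code of my own, not from the paper:

```python
# Sketch of the RIF example: a context-sensitive translation that no
# syntax-directed translation schema can express.

def rif_translate(s: str) -> str:
    i, j, k = s.count('a'), s.count('b'), s.count('c')
    # underlying context-free language: a^i b^j c^k
    assert s == 'a' * i + 'b' * j + 'c' * k, "input must match a^i b^j c^k"
    # guard g(atts): the boolean that chooses between the two token
    # permuting functions f1 = 'OK ' + s and f2 = 'NOT OK ' + s
    prefix = 'OK' if i == j == k else 'NOT OK'
    return f'{prefix} {s}'

print(rif_translate('aabbcc'))   # OK aabbcc
print(rif_translate('aabbc'))    # NOT OK aabbc
```

The target language here is not context-free (the 'OK'-prefixed strings require $i = j = k$), which is exactly why the counting attributes are essential.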
3.3. The Inversion Algorithm
An attribute grammar in restricted inverse form displays a duality between syntax and semantics, as can be seen by considering a semantic tree of such an AG. On one hand, each node of the tree has an associated context-free label. On the other hand, each node can be considered labeled by its trans attribute. Inversion of the attribute grammar consists of switching these labels. To make sure that this is possible, we had to restrict the nature of the trans label; in restricted inverse form the trans attribute can only be defined by a token permuting function. The inversion process then consists of switching the labels and undoing the permutation specified by this function. This section formally defines the inversion algorithm.
Let $T: \Sigma^* \rightarrow \Delta^*$ be an attribute grammar in restricted inverse form. The inverse AG is created modularly from $T$, production by production. Each production in $T$ will give rise to one or more productions of the inverse attribute grammar. As $T$ translates strings of $\Sigma^*$ into strings of $\Delta^*$, the inverse AG will translate strings of $\Delta^*$ into strings of $\Sigma^*$. However, it will only translate those strings of $\Delta^*$ that are in the range of $T$.
Formally, let $\Delta_T^*$ be the range of $T$; i.e., $\Delta_T^* = \{ \beta \mid \beta \in \Delta^* \text{ and there exists a semantic tree translating some } \alpha \in \Sigma^* \text{ to } \beta \}$. Then the attribute grammar $T^{-1}: \Delta_T^* \rightarrow \Sigma^*$ is generated from $T$ as follows:
1. For each token $\delta$ of $\Delta$, create a terminal $\delta$ in $T^{-1}$.
2. For each nonterminal $X$ in $T$, create a nonterminal $XI$ in $T^{-1}$ (we call it XI and not $X$ to avoid confusion. We will not be very strict about this usage, however, when our meaning is clear. For example, when we refer to a semantic function $f$ of $T$ as also being a semantic function of $T^{-1}$, we mean the semantic function $f'$ which is obtained from $f$ by substituting every occurrence of $X.A$ in $f$ by $XI.A$).
3. Let each nonterminal $XI$ in $T^{-1}$ have the same set of attributes as $X$ in $T$, with one additional attribute: $XI.\text{transinv}$. The attribute transinv will play the same role in $T^{-1}$ as the attribute trans did in $T$; i.e., the transinv attribute will take on values in $\Sigma^*$ and represents the translation of the substring that $XI$ derives.
4. For each production \( [p: X_0 ::= \alpha_0 X_1 \alpha_1 X_2 \cdots X_{n_p} \alpha_{n_p}] \) in \( T \) with the distinguished semantic function
\[
X_0 \text{.trans} = \begin{cases} \text{if } g_1(\text{atts}_1) \text{ then } f_1(X_1 \text{.trans}, X_2 \text{.trans}, \ldots, X_{n_p} \text{.trans}) \\
\text{elseif } g_2(\text{atts}_2) \text{ then } f_2(X_1 \text{.trans}, X_2 \text{.trans}, \ldots, X_{n_p} \text{.trans}) \\
\quad \vdots \\
\text{elseif } g_{s-1}(\text{atts}_{s-1}) \text{ then } f_{s-1}(X_1 \text{.trans}, X_2 \text{.trans}, \ldots, X_{n_p} \text{.trans}) \\
\text{else } f_s(X_1 \text{.trans}, X_2 \text{.trans}, \ldots, X_{n_p} \text{.trans})
\end{cases}
\]
create \( s \) productions in \( T^{-1} \), one corresponding to each of the token permuting functions \( f_j \). In particular, for each \( f_j \), \( 1 \leq j \leq s \), where \( f_j(X_1 \text{.trans}, \ldots, X_{n_p} \text{.trans}) = \text{concatenate}(\beta_0, X_{i_1} \text{.trans}, \beta_1, X_{i_2} \text{.trans}, \ldots, X_{i_{n_p}} \text{.trans}, \beta_{n_p}) \), create an inverse production \( [p_{f_j}: XI_0 ::= \beta_0\, XI_{i_1}\, \beta_1\, XI_{i_2} \cdots XI_{i_{n_p}}\, \beta_{n_p}] \) with an attached context condition
\[
<\text{COND: (NOT } g_1(\text{atts}_1)) \text{ AND (NOT } g_2(\text{atts}_2)) \text{ AND} \ldots \text{AND (NOT } g_{j-1}(\text{atts}_{j-1})) \text{ AND } g_j(\text{atts}_j)>.
\]
Let this production have all the semantic functions that \( p \) has, except that in place of the semantic function defining \( X_0 \text{.trans} \) as given above, it has the semantic function
\[
XI_0 \text{.trans} = f_j(XI_1 \text{.trans}, \ldots, XI_{n_p} \text{.trans}).
\]
It also has one additional semantic function defining \( XI_0 \text{.transinv} \), given by
\[
XI_0 \text{.transinv} = \text{concatenate}(\alpha_0, XI_1 \text{.transinv}, \alpha_1, XI_2 \text{.transinv}, \ldots, XI_{n_p} \text{.transinv}, \alpha_{n_p}).
\]
5. The value of the translation is specified to be the value of the transinv attribute of the root (\( SI \text{.transinv} \)).
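The core of step 4 can be sketched as ordinary code. Below, a production's right part is given as its list of $\alpha$ constants (the nonterminals $X_1 \ldots X_n$ implied between them), and the trans-defining token permuting function as a list mixing $\beta$ constants with argument indices; both representations and all names are my own. The function emits the inverse production's right part (in permuted $\beta$ order) and the transinv concatenation (in original $\alpha$ order), shown on my transcription of $p_1$ of figure 2-1.

```python
def invert_production(alphas, trans_spec):
    """alphas = [a0, a1, ..., an] terminal constants of the forward production;
    trans_spec mixes constant strings (the betas) with ints naming X_k.trans
    in permuted order."""
    # Inverse right part: beta constants interleaved with XI_{i_t}, permuted order.
    inverse_rhs = [x if isinstance(x, str) else f'XI{x}' for x in trans_spec]
    # transinv: the original alpha constants with XI_k.transinv in ORIGINAL
    # order, so evaluating it recovers the source-language string.
    n = len(alphas) - 1
    transinv = []
    for k in range(n):
        transinv += [alphas[k], f'XI{k + 1}.transinv']
    transinv.append(alphas[n])
    return inverse_rhs, transinv

# p1 of figure 2-1 (my transcription): S ::= Op Num1 'by' Num2 '.'
# so X1 = Op, X2 = Num1, X3 = Num2, and
# S.trans = concatenate('(', Num1.trans, ',', Num2.trans, ',', Op.trans, ')').
alphas = ['', '', 'by', '.']                  # a0, a1, a2, a3
trans_spec = ['(', 2, ',', 3, ',', 1, ')']    # betas and argument indices
rhs, tinv = invert_production(alphas, trans_spec)
print(rhs)                     # ['(', 'XI2', ',', 'XI3', ',', 'XI1', ')']
print([t for t in tinv if t])  # ['XI1.transinv', 'XI2.transinv', 'by', 'XI3.transinv', '.']
```

The first print is the context-free portion of the inverse production (SI derives '(' NumI ',' NumI ',' OpI ')'); the second is the semantic function that rebuilds the English phrase 'Op Num1 by Num2 .'.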
The essence of the inversion algorithm lies in point 4. To make this point more concrete,
figure 3-1 shows the inversion of production \( p_6 \) of our example attribute grammar of figure
2-1. This production is split into two productions in the inverse attribute grammar.
Whereas production \( p_6 \) of T specified that Op derived 'multiply' and had a translation
of either '*_r' or '*_i', the inverse productions \( p_{f6a} \) and \( p_{f6b} \) specify that OpI derives either
'*_r' or '*_i' and, in either case, has a translation of 'multiply'. OpI's derivation of '*_r' or '*_i'
is specified to be valid only if certain context conditions are satisfied.
Figure 3-2 presents the inverse of the remaining productions of the attribute grammar of
figure 2-1. This specification would be produced automatically by the inversion algorithm.
Due to space considerations, the inverses of productions \( p_7 \) and \( p_8 \) are not presented. Note
that productions \( p_{1a} \) and \( p_{1b} \), while having different semantics, have the same context-free
portion; the underlying context-free grammar of the inverse AG is therefore ambiguous. In section
4.2 we show how to remove this ambiguity from the inverse specification.
p_{f6a}: OpI ::= *_r .  <Condition: OpI.type = real>
    OpI.trans = '*_r';
    OpI.transinv = 'multiply';

p_{f6b}: OpI ::= *_i .  <Condition: NOT(OpI.type = real)>
    OpI.trans = '*_i';
    OpI.transinv = 'multiply';

**Figure 3-1:** The inversion of \( p_6 \) splits into two productions
If an attribute grammar is in restricted inverse form, then there exists a duality between the context-free portion of the production (the syntax of the production) and the semantic function defining the \( X_0 \text{.trans} \) attribute (the semantics of the production). While the context-free portion defines the strings \( X_0 \) can legally derive, the semantic function computing \( X_0 \text{.trans} \) defines the translation of such strings. The inversion process exploits this duality by switching the role of syntax and semantics.
All the attributes of a nonterminal in the original attribute grammar remain in the corresponding nonterminal of the inverse AG. They will be defined properly as all the semantic functions of a production remain in the inverse production as well. Even the trans attribute remains in the inverse attribute grammar because it is no worse than any other attribute; it may be directly or indirectly used in some condition \( g_j(\text{atts}_j) \) thereby influencing the translation.
The inverse grammar will have context conditions attached to the productions (see section 2.3) even if the original attribute grammar did not have any attached conditions. These conditions enforce context-sensitivities in the input. For example, according to the grammar \( T \), the inverse grammar \( T^{-1} \) should not accept '(80,5.8,*_i)' as well-formed input; \( T \) would not translate any input string to '(80,5.8,*_i)'. The context conditions placed on \( T^{-1} \) will accomplish this. Without the conditions, '(80,5.8,*_i)' would be accepted and translated by \( T^{-1} \) to either 'Multiply 80 by 5.8' or 'Multiply 80 to 5.8'. The attached context conditions can also be useful in parsing the input using the techniques of attributed parsing [21, 11].
Using the inversion method outlined in this section, it can be shown that if there exists a semantic tree in \( T \) translating \( s \) to \( m \) then there will exist a semantic tree in \( T^{-1} \) translating \( m \) to \( s \). However, if \( T \) is many-to-one (it translates two unique strings \( s_1 \) and \( s_2 \) into the same output \( m \)), then \( T^{-1} \) will specify two ways to parse \( m \), one parse tree producing the output \( s_1 \) and the other producing the output \( s_2 \). Hence if \( T \) is many-to-one, \( T^{-1} \) will not only be ambiguous, it will not be a function. We will return to the problem of ambiguity in section 4.2. To demonstrate the relationship between trees in the original attribute grammar and trees in the generated inverse attribute grammar, figure 3-3 gives a semantic tree for the string \( '(80,5.8,\ast_r)' \), based on the inverse attribute grammar of figure 3-2. Compare this semantic tree to the semantic tree of figure 2-2.
p_{1a}: SI ::= ( NumI1 , NumI2 , OpI ) .  <Condition: (OpI.trans = '*_r') or (OpI.trans = '*_i')>
    SI.trans = Concatenate('(', NumI1.trans, ',', NumI2.trans, ',', OpI.trans, ')');
    OpI.type = If (NumI1.type = real) or (NumI2.type = real) then real else int;
    SI.transinv = Concatenate(OpI.transinv, NumI1.transinv, 'by', NumI2.transinv);

p_{1b}: SI ::= ( NumI1 , NumI2 , OpI ) .  <Condition: (OpI.trans = '+_r') or (OpI.trans = '+_i')>
    SI.trans = Concatenate('(', NumI1.trans, ',', NumI2.trans, ',', OpI.trans, ')');
    OpI.type = If (NumI1.type = real) or (NumI2.type = real) then real else int;
    SI.transinv = Concatenate(OpI.transinv, NumI1.transinv, 'to', NumI2.transinv);

p_{2a}: NumI ::= IntegerI .
    NumI.trans = IntegerI.trans;
    NumI.type = int;
    NumI.transinv = IntegerI.transinv;

p_{2b}: NumI ::= Decimal_numI .
    NumI.trans = Decimal_numI.trans;
    NumI.type = real;
    NumI.transinv = Decimal_numI.transinv;

p_{3a}: OpI ::= +_r .  <Condition: OpI.type = real>
    OpI.trans = '+_r';
    OpI.transinv = 'add';

p_{3b}: OpI ::= +_i .  <Condition: NOT(OpI.type = real)>
    OpI.trans = '+_i';
    OpI.transinv = 'add';
**Figure 3-2:** The inverse AG generated from the example AG
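To make the inverse specification concrete, its behaviour can be hand-coded as a small translator. The sketch below is ours, not the paper's; the op-token spellings ('*_r', '*_i', '+_r', '+_i') and the decimal-point test for the real type are assumptions about the example grammar.

```python
import re

# A hand-coded rendering of the inverse translation T^-1 (illustrative
# only).  The op-token spellings and the decimal-point test for the
# 'real' type are assumptions about the example grammar.
def inverse_translate(s):
    m = re.fullmatch(r"\((.+),(.+),([*+]_[ri])\)", s)
    if m is None:
        raise ValueError("not a well-formed tuple")
    num1, num2, op = m.groups()
    # OpI.type = If (NumI1.type = real) or (NumI2.type = real) ...
    real = "." in num1 or "." in num2
    # Context condition of the OpI productions: the op token's suffix
    # must agree with the computed type.
    if op.endswith("_r") != real:
        raise ValueError("context condition violated")
    verb, prep = ("Multiply", "by") if op[0] == "*" else ("Add", "to")
    return f"{verb} {num1} {prep} {num2}"

assert inverse_translate("(80,5.8,*_r)") == "Multiply 80 by 5.8"
```

Without the context-condition check, '(80,5.8,*_i)' would also be accepted, which is precisely the kind of input the attached conditions rule out.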
3.4. Extending the Inversion Paradigm
In the last section we showed how any AG in restricted inverse form can be inverted. However, it is not always apparent how to express translations in this restricted form; many attribute grammars make use of constructs which violate these constraints. In section 4.1 we show how we were able to transform an attribute grammar which was not in restricted inverse form into one which was. However, this may not always be possible. In this section we suggest another alternative: extending RIF grammars to express a wider variety of translations yet still retain invertibility.

**Figure 3-3:** A typical semantic tree for the inverse AG
In our current work, using RIF grammars to express translations between programming languages, we have found that it often requires more than a simple token permuting function to define the translation of a subtree. For example, consider the production of figure 3-4. In this case the attribute if_stmt.trans is not defined by a token permuting function, and hence the production is not invertible by the inversion algorithm of the last section. In this example, genLab is a function from the domain of integers to the domain of labels, and genLab(i) = 'Li', where i is an integer and 'Li' is a string.
p: if_stmt ::= IF expression THEN stmt.
if_stmt.trans = Concat[expression.trans,
'FJP', genLab(if_stmt.labnum),
stmt.trans,
'LAB', genLab(if_stmt.labnum)];
Figure 3-4: A semantic function using a non-token permuting function
The problem then is how the inversion algorithm can be expanded to deal with such constructs. Intuitively, the syntax of the inverse production should have the following form: [pI: if_stmtI ::= expressionI FJP X stmtI LAB X] where X represents a label. Assuming that we provide the inversion algorithm with knowledge about the primitive types (domains) employed by the semantic functions of the RIF grammar, there is no reason why it cannot also deduce this syntax for the inverse production. In particular, to invert this production the inversion algorithm would need to know
1. the syntax of a label and that
2. genLab is a function from integers to labels.
Using this information, it could invert the production \( p \), producing the inverse productions \( pI \) and \( pI' \) given in figure 3-5. In this figure label is a nonterminal deriving a label. This nonterminal has the distinguished attribute, label.value, which gives the string derived by this nonterminal (e.g., if label derives 'Li', then the value of label.value is 'Li'). The condition attached to production \( pI \) enforces the relationship that the label derived from this nonterminal (given in label.value) must equal genLab(if_stmtI.labnum), as required by the original production \( p \).
pI: if_stmtI ::= expressionI FJP label1 stmtI LAB label2 .
    <Condition: (label1.value = genLab(if_stmtI.labnum)) AND
                (label2.value = genLab(if_stmtI.labnum))>
    if_stmtI.transinv = Concat['IF', expressionI.transinv, 'THEN',
                               stmtI.transinv];

pI': label ::= L1 .
    label.value = Concat['L', '1'];
Figure 3-5: The inverse productions
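The attached label condition can be enforced mechanically at parse time. The fragment below is our illustration, not the paper's; the flattened token list is a simplification of a real parse.

```python
# Illustrative check of the attached label condition (not from the
# paper; the flattened token list is a simplification of a real parse).
def genLab(i):
    return f"L{i}"

def parse_if_stmtI(tokens, labnum):
    """tokens: [expr, 'FJP', label1, stmt, 'LAB', label2].
    Returns if_stmtI.transinv, or raises if the condition fails."""
    expr, fjp, label1, stmt, lab, label2 = tokens
    assert fjp == "FJP" and lab == "LAB"
    # <Condition: label1.value = genLab(labnum) AND label2.value = ...>
    if label1 != genLab(labnum) or label2 != genLab(labnum):
        raise ValueError("label condition violated")
    return f"IF {expr} THEN {stmt}"

assert parse_if_stmtI(["x>0", "FJP", "L1", "y:=1", "LAB", "L1"], 1) \
    == "IF x>0 THEN y:=1"
```

An input whose labels do not both equal genLab(labnum) is rejected, mirroring how the condition constrains which strings the inverse production may derive.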
The technique illustrated by this example can be formalized and generalized, allowing RIF grammars to express a greater variety of constructs that arise naturally in AGs. Yet, this is only one out of several techniques that can be used to extend RIF grammars and the inversion algorithm. Part of our current work is aimed at finding a general version of RIF grammars and the inversion algorithm that will enable RIF grammars to express, without too much difficulty, most translations that arise in practice.
3.5. Efficiency
Although the inverse attribute grammar \( T^{-1} \) generated by the inversion algorithm is guaranteed to be the inverse of the original attribute grammar \( T \), it may be a very inefficient version of it. We can 'clean up' the attribute grammar \( T^{-1} \) by removing all useless attributes: those which cannot possibly contribute to the translation. A prime suspect as a useless attribute is the trans attribute; although it is essential in the original attribute grammar \( T \), it probably (but not necessarily) contains unneeded information in the inverse attribute grammar \( T^{-1} \). If we look at figure 3-2, we see that the attributes SI.trans, NumI.trans, Decimal_numI.trans and IntegerI.trans are useless and can be removed, but that OpI.trans does contribute to the translation and cannot be removed. This 'cleaning up' of the attribute grammar can also be done automatically.
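This clean-up pass can be phrased as a reachability computation over attribute dependencies. The sketch below is ours; the dependency map is a hypothetical summary of an inverse AG, not a data structure from the paper.

```python
# Dead-attribute elimination as a backwards closure (illustrative; the
# dependency map below is a hypothetical summary, aggregated over all
# productions of an inverse AG).
def useful_attributes(deps, root):
    """deps maps each attribute to the attributes its semantic function
    or attached condition reads.  Everything not reachable from the
    root's transinv attribute is useless and can be removed."""
    useful, work = set(), [root]
    while work:
        a = work.pop()
        if a not in useful:
            useful.add(a)
            work.extend(deps.get(a, []))
    return useful

deps = {
    "SI.transinv":  ["OpI.transinv", "NumI.transinv"],
    "OpI.transinv": ["OpI.type"],          # via an attached condition
    "OpI.type":     ["NumI.type"],
    "SI.trans":     ["NumI.trans", "OpI.trans"],  # unreachable
}
u = useful_attributes(deps, "SI.transinv")
assert "SI.trans" not in u and "OpI.type" in u
```

Attributes outside the returned set, here SI.trans and NumI.trans, can be deleted together with their semantic functions.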
4. Using Attribute Grammar Inversion To Build An Interface For SQL
Attribute grammar technology is used in the PERFORM (Paraphrase and Error message for Formal languages) system, developed at the IBM Thomas J. Watson Research Center [17]. The PERFORM system is currently implemented to generate paraphrases and error
messages for a relational database querying language (SQL). It serves SQL users as a feedback device to make sure their queries are semantically correct from their point of view and from the system's point of view. It is an aid for the novice user in learning SQL and serves the occasional user as a documentation device for SQL queries. The paraphrases are designed in one-to-one correspondence to SQL expressions, preserving the SQL structure yet obeying natural language rules. The number of different natural language constructions employed is relatively small (essentially the same number as there are SQL constructions), and so is the basic vocabulary. Figure 4-1 gives an example of a SQL query and the English paraphrase generated by the PERFORM system.
```
SELECT DIVISION, ID, LOCATION, NAME FROM STAFF
WHERE DIVISION = "EASTERN" AND JOB = "CLERK";
```
What is the division, id number, city and last name for employees in division "EASTERN", and with the job description "CLERK".
**Figure 4-1:** A SQL query and its English paraphrase
With PERFORM, users are still expected to construct their queries in SQL. To make the query construction itself easier for users, a guided natural language interface has been designed. It displays template queries in natural language on the screen with windows for the selection of specific items. The natural language constructs are based on the same language as PERFORM, consistent with the lexicon and syntax. The interface frees users from formal language requirements such as variable binding, or in the case of SQL, joining of tables. To assure the correct translation of the natural language input back into SQL, an "inverse" attribute grammar is needed [18].
To examine the feasibility of attribute grammar inversion, we decided to take a subset of the PERFORM attribute grammar (translating a subset of all SQL queries into an English paraphrase) and to apply the techniques given above to invert this subset attribute grammar. We performed this process by hand, but were faithful to the principles given above. The inverted attribute grammar translates simple English queries (paraphrases) into SQL queries and will become part of a larger system built around the PERFORM attribute grammar.
The original PERFORM attribute grammar was written without any thought of inversion and without any consideration to the principles of sections 3.2 and 3.3. For this reason we encountered several difficulties when we attempted the inversion process. Some of these difficulties were overcome by making small changes to the original attribute grammar. Other problems proved more stubborn and forced us to develop richer techniques of inversion to deal with specialized cases.
4.1. Non-Invertible Function Constructs
Our first job in inverting the PERFORM AG was to put it into restricted inverse form. For most productions of the AG this was quite easy, requiring only small syntactic changes to the function computing the trans attribute. Sometimes, however, the function computing the trans attribute was semantically very different than a token permuting function and stronger techniques were required. An example of this sort of production is given in figure 4-2.
p: EXPR ::= FIELD_NAME.
EXPR.trans = if (EXPR.plural = true)
then make_plural(FIELD_NAME.trans) else FIELD_NAME.trans;
q: FIELD_NAME ::= location.
FIELD_NAME.trans = 'city';
Figure 4-2: A non-invertible function construct
In this example EXPR derives the nonterminal FIELD_NAME. FIELD_NAME in turn can derive several terminal strings (SQL field names). EXPR.trans is set to the value of FIELD_NAME.trans with one qualification: if it has been determined elsewhere that this value, a noun which is the English equivalent of the SQL field name, is to be made plural, then first a function make_plural is called which finds the plural form of the noun. This function is not a token permuting function and cannot be inverted according to the paradigm of section 3.3. Conceptually, production p and productions of type q should invert to a set of productions \( \{ p_1, p_1', p_2, p_2', \ldots \} \) where \( p_i \) is of the form \( [p_i: \text{EXPR} ::= \text{fname}_\text{singular}] \) and \( [p_i': \text{EXPR} ::= \text{fname}_\text{plural}] \), where \( \text{fname}_\text{singular} \) and \( \text{fname}_\text{plural} \) are singular and plural terminal strings representing English field names. Besides many technical difficulties in deriving such an inverse set of productions, to do so would require an amount of semantic knowledge concerning the function make_plural which is beyond the scope of our paradigm. Instead we chose to rewrite the attribute grammar as in figure 4-3.
p: EXPR ::= FIELD_NAME.
FIELD_NAME.plural = EXPR.plural;
EXPR.trans = FIELD_NAME.trans;
q: FIELD_NAME ::= location.
FIELD_NAME.trans = if FIELD_NAME.plural then 'cities' else 'city';
Figure 4-3: Figure 4-2 changed to restricted inverse form
By adding the attribute FIELD_NAME.plural we transmit the information as to whether the noun should be plural or singular further down the tree to the point where the translation for the field name is generated. We then explicitly choose either the plural or singular form based upon this information. The rewritten attribute grammar is equivalent
to the initial one and it is in restricted inverse form. It is less efficient since we had to
make explicit the generation of different noun forms instead of performing this act in an
efficient semantic function. Yet perhaps for this very reason the attribute grammar also
becomes easier to read and understand.
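The effect of the rewrite can be seen by coding both versions side by side. This is our illustration; make_plural here is a trivial stand-in, not PERFORM's semantic function.

```python
# Both versions of the EXPR/FIELD_NAME translation, in Python for
# illustration; make_plural is a trivial stand-in, not PERFORM's code.
def make_plural(noun):
    return noun[:-1] + "ies" if noun.endswith("y") else noun + "s"

def trans_original(plural):        # figure 4-2: pluralise at EXPR
    field_name_trans = "city"
    return make_plural(field_name_trans) if plural else field_name_trans

def trans_rif(plural):             # figure 4-3: choose at FIELD_NAME
    return "cities" if plural else "city"

# The two grammars compute the same translation ...
assert trans_original(True) == trans_rif(True) == "cities"
assert trans_original(False) == trans_rif(False) == "city"
# ... but only the second exposes 'city'/'cities' as explicit token
# strings that inverse productions can derive.
```

The rewritten form trades a shared helper function for explicit alternatives, which is exactly what makes it invertible by the paradigm of section 3.3.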
In a similar fashion we rewrote the attribute grammar to accommodate another non-invertible function construct, given in figure 4-4.
r: PRED ::= EXPR1 COMP_OP EXPR2.
    PRED.trans = if g(...)
        then concatenate(EXPR1.trans, head(COMP_OP.trans), EXPR2.trans)
        else concatenate(EXPR1.trans, head(tail(COMP_OP.trans)), EXPR2.trans);

s: COMP_OP ::= < .
    COMP_OP.trans = {'less than', 'is less than'};

**Figure 4-4:** Another non-invertible function construct
r: PRED ::= EXPR1 COMP_OP EXPR2.
    COMP_OP.value1 = g(...);
    PRED.trans = concatenate(EXPR1.trans, COMP_OP.trans, EXPR2.trans);

s: COMP_OP ::= < .
    COMP_OP.trans = if COMP_OP.value1 then 'less than'
                    else 'is less than';

**Figure 4-5:** Figure 4-4 changed to restricted inverse form
In production s of this figure COMP\_OP.trans was set equal to two possible values. The
correct one was chosen higher up in the tree (at production r) depending on information
available there. Once again the function defining PRED.trans is not in restricted inverse
form due to the functions “head” (first element of list) and “tail” (all but the first element
of list). We got around this problem by introducing a new attribute COMP\_OP.value1 as
given in figure 4-5. With these changes the productions were in restricted inverse form and
the attribute grammar computed the same translation. Once again a little extra expense
was entailed (the introduction of the additional attribute COMP_OP.value1), but the attribute grammar also became cleaner: instead of assigning two possible translations to a single node and passing them up the tree until there is enough information to choose between them, we pass enough information down the tree to choose the proper value initially.
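The same pattern can be coded for the COMP_OP example; the inverse direction then maps either spelling back to '<'. This is our illustration, not code from the paper.

```python
# The restricted-inverse-form rewrite in executable form (our
# illustration).  The choice between the two spellings of '<' is
# pushed down through COMP_OP.value1; inversion then maps either
# spelling back to '<'.
def comp_op_trans(value1):
    return "less than" if value1 else "is less than"

# Inverse productions: each explicit spelling derives '<'.
COMP_OP_INVERSE = {"less than": "<", "is less than": "<"}

assert COMP_OP_INVERSE[comp_op_trans(True)] == "<"
assert COMP_OP_INVERSE[comp_op_trans(False)] == "<"
```

Because each spelling now appears as an explicit token string, each one can become the right-hand side of its own inverse production.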
Although several other problems were encountered, the examples presented above should
suffice to give a flavour of the method of resolving these difficulties. In general we found
that with a little effort most non-invertible constructs could be rewritten into an invertible
format. Some of our solutions could be stated in more general terms and brought into the
paradigm of automatic inversion (such as the solution to the "head" and "tail" functions). A practical system might also employ special techniques to invert non-invertible function constructs which occur frequently in attribute grammars (such as the make_plural semantic function). To do so, more data needs to be collected concerning typical attribute grammars and the type of semantic functions they use.
4.2. Ambiguity
One other problem which we encountered in our inversion of the PERFORM subset deserves mention. In figure 4-8, although \( p_a \) and \( p_b \) are unique context-free productions, \( p_{Ia} \) and \( p_{Ib} \) are the same context-free production but with different semantics. This is due to the fact that the original grammar allows two pseudonyms (prodno and prodnum) to express the same meaning ('product number'). It results in an ambiguous grammar, since we do not know which production applies to the input 'product number'. Fortunately this can be resolved by collapsing the two productions into a single production \( p_{Iab} \). In this production, FIELD_NAMEI derives the terminal string 'product number' and is assigned the translation {'prodno', 'prodnum'}, meaning that either translation is acceptable.
$p_a$: FIELD_NAME ::= prodno.
FIELD_NAME.trans = 'product number';
$p_b$: FIELD_NAME ::= prodnum.
FIELD_NAME.trans = 'product number';
$p_{Ia}$: FIELD_NAMEI ::= product number.
    FIELD_NAMEI.transinv = 'prodno';

$p_{Ib}$: FIELD_NAMEI ::= product number.
    FIELD_NAMEI.transinv = 'prodnum';

$p_{Iab}$: FIELD_NAMEI ::= product number.
    FIELD_NAMEI.transinv = {'prodno', 'prodnum'};

**Figure 4-8:** Two unique productions inverting to identical ones
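Collapsing productions that share a context-free portion into one set-valued production can itself be mechanised. The sketch below is ours; productions are reduced to (rhs, transinv) pairs for brevity.

```python
# Collapsing inverse productions with identical context-free portions
# (illustrative sketch; productions are reduced to (rhs, transinv)
# pairs).  Productions sharing an rhs are merged and their transinv
# values pooled, so either output is acceptable.
def collapse(productions):
    merged = {}
    for rhs, transinv in productions:
        merged.setdefault(rhs, set()).add(transinv)
    return merged

inv = collapse([("product number", "prodno"),
                ("product number", "prodnum")])
assert inv == {"product number": {"prodno", "prodnum"}}
```

Productions with distinct right-hand sides pass through unchanged, so the pass is safe to run over a whole inverse grammar.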
This technique of collapsing multiple productions into a single one can be more involved than demonstrated above if the semantic functions are more complicated or if there are context conditions on the productions. For example, consider productions \( p_{1a} \) and \( p_{1b} \) of figure 3-2. Here, once again, the context-free portions of the productions are the same but the semantics are different. In this case, the productions also have different conditions attached. Once again we can collapse these productions into a unique production, \( p_{1ab} \), given in figure 4-7. Notice how the conditions attached to the productions get introduced into the semantic function defining SI.transinv. Using this single production instead of the two productions \( p_{1a} \) and \( p_{1b} \), the inverse RIF grammar no longer has an ambiguous underlying context-free grammar.
In the cases given above we were able to solve the ambiguity of the inverse attribute grammar by collapsing several productions into one. Unfortunately, often the ambiguity is spread out over several productions and can be hard to detect and remove. In general, if
p_{1ab}: SI ::= ( NumI1 , NumI2 , OpI ) .
    <Condition: (OpI.trans = '*_r') or (OpI.trans = '*_i') or
                (OpI.trans = '+_r') or (OpI.trans = '+_i')>
    SI.trans = Concatenate('(', NumI1.trans, ',', NumI2.trans, ',', OpI.trans, ')');
    OpI.type = If (NumI1.type = real) or (NumI2.type = real) then real else int;
    SI.transinv = if (OpI.trans = '*_r') or (OpI.trans = '*_i')
        then Concatenate(OpI.transinv, NumI1.transinv, 'by', NumI2.transinv)
        else Concatenate(OpI.transinv, NumI1.transinv, 'to', NumI2.transinv);
Figure 4-7: Two productions collapsing into one
the original translation is many-to-one, the inverse grammar will be one-to-many. This
means that, if in the original attribute grammar two unique inputs produce the same output
m, then in the inverse attribute grammar the input m will have two unique parse trees each
producing a different output. The problem is which one should be selected? We have not
yet been able to solve this problem to our satisfaction. One solution is to choose during
run-time one of the parse trees. This choice could be based on some notion of a “best”
translation or could be made arbitrarily. A better but much more difficult solution is to
statically detect and remove the ambiguity from the inverse grammar.
5. Conclusion
This paper has introduced the technique of attribute grammar inversion. Given an
attribute grammar in restricted inverse form, describing a translation T: L1 $\rightarrow$ L2, the
inversion algorithm presented in this paper will automatically synthesize the inverse attribute
grammar $T^{-1}$: L2 $\rightarrow$ L1.
The inversion process is highly modular; each production of the original attribute grammar
gives rise to one or more productions in the inverse attribute grammar. Even if one
production is not in restricted inverse form and is not invertible, the rest of the productions
of the attribute grammar may still be invertible. And even within a non-invertible
production, the construct causing the problem can be easily identified. An interactive
inversion system could take advantage of this fact by automatically inverting as much of the
attribute grammar as it can and then prompting the user for help where it encounters non-
invertible constructs.
In this paper we also related our experience in inverting a subset of the PERFORM
attribute grammar. This experiment was very successful. It proved that automatic
inversion of attribute grammars is feasible and useful. It required surprisingly little effort;
we believe that manual generation of the inverse attribute grammar \( T^{-1} \) from scratch would have required significantly more effort, and the result would probably not have been the true inverse of PERFORM. Our experience with PERFORM also indicates that even without a completely automated system for inversion, the principles of section 3.3 provide useful guidelines on how to generate an inverse attribute grammar. In the worst case, they provide users with a rough draft of the inverse attribute grammar which can then be further refined.
Our future research is aimed at building an automated system for translating between programming languages, based upon the idea of AG inversion, as outlined in section 1. The concepts introduced in this paper and the experience gained from our inversion of the PERFORM AG make us optimistic about the success of this task.
ACKNOWLEDGEMENT
We would like to thank Rodney Farrow for his untiring support in discussing all aspects of attribute grammars with us. While his contributions are many, all errors are ours.
References
Syntax Directed Translations and the Pushdown Assembler.
Properties of Syntax Directed Translations.
Semantic evaluation from left to right.
*Communications of the ACM* 19, 1976.
pp. 55-62.
LINGUIST-86 Yet another translator writing system based on attribute grammars.
*Generating Bi-Directional Translators from RIF Grammars*.
Attribute Coupled Grammars.
Published as Volume 19, Number 6, of *SIGPLAN Notices*.
A Truly Generative Semantics-Directed Compiler Generator.
A Syntax Directed Compiler for ALGOL-60.
The intrinsically exponential complexity of the circularity problem for attribute grammars.
*Communications of the ACM* 18, 1975.
Alternating semantic evaluator.
Attribute-Influenced LR Parsing.
Ordered attribute grammars.
GAG: A Practical Compiler Generator.
Automatic generation of efficient evaluators for attribute grammars.
Semantics of context-free languages.
correction in volume 5, number 1.
Semantic attribute processing in the system DELTA.
Q-TRANS: Query Translation Into English.
[18] Eva-Maria M. Mueckstein.
Controlled Natural Language Interfaces: The Best of Three Worlds.
The Compiler Writing System HLP (Helsinki Language Processor).
Extended Attribute Grammars.
Rule splitting and attribute-directed parsing.
In Lecture Notes in Computer Science 94, pages 383 - 392. Springer-Verlag, Berlin-
[22] Daniel M. Yellin.
Technical Report, Department of Computer Science, Columbia University, New York,
[23] Daniel M. Yellin.
Thesis Proposal: Restricted Inverse Form Grammars and Bi-Directional Translators.
Technical Report, Department of Computer Science, Columbia University, New York,
Validation of Mixed SIGNAL-ALPHA Real-Time Systems through Affine Calculus on Clock Synchronisation Constraints
Irina M. Smarandache\textsuperscript{1}, Thierry Gautier\textsuperscript{2}, and Paul Le Guernic\textsuperscript{2}
\textsuperscript{1} The University of Reading, Department of Computer Science
Whiteknights, PO Box 225, Reading RG6 6AY, United Kingdom
Tel.: (44) 118 931 8611 (7626), Fax: (44) 118 975 1994
I.M.Smarandache@reading.ac.uk
\textsuperscript{2} IRISA-INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France
Thierry.Gautier@irisa.fr, Paul.LeGuernic@irisa.fr
Abstract. In this paper we present the affine clock calculus as an extension of the formal verification techniques provided by the SIGNAL language. A SIGNAL program describes a system of clock synchronisation constraints the consistency of which is verified by compilation (clock calculus). Well-adapted in control-based system design, the clock calculus has to be extended in order to enable the validation of SIGNAL-ALPHA applications which usually contain important numerical calculations. The new affine clock calculus is based on the properties of affine relations induced between clocks by the refinement of SIGNAL-ALPHA specifications in a co-design context. Affine relations enable the derivation of a new set of synchronisability rules which represent conditions against which synchronisation constraints on clocks can be assessed. Properties of affine relations and synchronisability rules are derived in the semantical model of traces of SIGNAL. A prototype implementing a subset of the synchronisability rules has been integrated in the SIGNAL compiler and used for the validation of a video image coding application specified using SIGNAL and ALPHA.
1 Introduction
Real-time systems, and more generally reactive systems [4], are in continuous interaction with their environment. Therefore, they must respond in time to external stimuli. Moreover, real-time systems must be safe, thus one would wish to prove their correctness. Time constraints and safety are two important aspects to be considered in the design of a real-time application.
Real-time systems may be constrained by very tight real-time deadlines. Moreover, a hardware implementation of parts of these systems is sometimes required, to meet specific constraints for instance. An example is an application consisting of numerical calculations performed iteratively on large structures of regular multidimensional data. In this case, a hardware/software implementation may be envisaged, in which the numerical calculations are conveyed to hardware
for efficiency reasons, while the control relating these parts is implemented in software.
In general, designing a mixed hardware/software real-time system requires a rigorous methodology that comprises methods and tools addressing, among others, system specification and validation, optimal code generation and hardware synthesis. These aspects are dealt with in codesign which denotes the specification, validation and implementation of an application which consists both of a hardware part, in the form of a set of specialised integrated circuits, and a software part implemented on general programmable processors. The idea is to explore various possible implementations of hardware/software systems in order to improve their performance and to ensure the respect of cost constraints.
1.1 Real-Time System Codesign
System codesign is a complex process which can be decomposed into three main activities: 1. The cospecification of an application at various levels of abstraction; 2. The validation of a specification by formal verification or simulation, also known as cosimulation; 3. The hardware/software partitioning of an application, the evaluation of a partitioning from the point of view of the time constraints and cost, the generation of executable code, the synthesis of hardware, and the production of the interface between hardware and software, i.e., cosynthesis. A lot of work has been done, the purpose of which was to define a well-structured methodology for codesign. An important point was generally the description of both hardware and software using the same language, like for instance VHDL enhanced with mechanisms for calling C functions, or high-level languages like C, C++ or FORTRAN extended with facilities for the description of hardware systems. These approaches enable the programming of both the hardware and software parts of a system in a unique framework and their validation by simulation. However, they cannot guarantee system correctness. This aspect can be much improved by using formal languages for system specification, refinement of specifications towards lower levels of abstraction (implementation) and validation of the various specifications by formal verification.
Defining a complete methodology of codesign requires addressing other relevant problems, most of them concerning cosynthesis. Among these problems there are the automatic partitioning into hardware and software, the synthesis of hardware and the generation of optimal code for software implementation.
The work presented in this paper is part of a more general effort for building a hybrid framework in which the SIGNAL and ALPHA languages can be used for real-time system codesign.
1.2 Cospecification and Cosimulation of SIGNAL-ALPHA Systems
SIGNAL is a synchronous language developed for the specification, validation and implementation of real-time systems. SIGNAL variables represent finite or infinite sequences of values (data) which can be filtered or merged before being submitted to classical boolean or mathematical operations. A clock is implicitly
associated with each Signal variable: it represents a set of temporal indices which denote the logical instants where the variable is present and has a value. The semantics of a Signal program can be described by a system of constraints (relations) on clocks and values, which is constructed and verified for consistency during compilation. The verification of the clock constraints is called clock calculus. The Signal environment is enhanced with tools for C [5] and VHDL [3] code generation and formal verification of dynamic properties [2].
In its present form, Signal is well-adapted for the design of control-based real-time systems. Firstly, this is due to its limitations concerning the treatment of computations on multidimensional data such as matrices. Only simple algorithms can be expressed in Signal and no significant optimisation is performed at the level of the generation of executable C or VHDL code concerning vectors. In contrast with Signal, the Alpha language has been developed primarily for the specification and implementation of algorithms on multidimensional data. Such algorithms can be described in Alpha using affine recurrence equations over convex polyhedral domains [20] and be further transformed for optimal hardware or software implementation on parallel or sequential architectures [21].
Given their complementary properties, the Signal and Alpha languages can be used jointly for the design of real-time systems containing important numerical calculations on multidimensional data and control: numerical computations are expressed in Alpha and the control is conveyed to Signal. When the real-time requirements of the system are very tight, a mixed hardware/software implementation may be envisaged. In [9] we propose a hybrid framework for the combined use of Signal and Alpha in real-time system codesign. In order for this framework to be operational, it is necessary to interface Signal and Alpha programs both at the functional and architectural level. The former corresponds to a high-level mathematical representation of an algorithm in Alpha, while the latter contains a set of new temporal indices corresponding to the execution of the algorithm on a parallel or sequential architecture.
In Signal-Alpha systems, the refinement of an Alpha program from a functional level to an architectural level oriented toward a particular implementation also induces a refinement of the temporal indices in Signal. The new time indices are obtained through affine transformations on the instants of time of the initial Signal specification. Consider clocks \( c \) and \( c_1 \) in Signal which are identical at the functional level (they are also denoted as synchronous). After refinement, their relative position is such that clock \( c_1 \) can be obtained by an affine transformation applied to clock \( c \): the instants of time of \( c \) and \( c_1 \), denoted respectively \( T \) and \( T_1 \), can be described by a pair of affine functions \( T = \{ nt + \varphi_1 \mid t \in \mathcal{T} \} \), \( T_1 = \{ dt + \varphi_2 \mid t \in \mathcal{T} \} \), on the same set of instants \( \mathcal{T} \). With \( \varphi = \varphi_2 - \varphi_1 \), we will say that clock \( c_1 \) is obtained by an \( (n, \varphi, d) \)-affine transformation applied to clock \( c \), where \( n, d \in \mathbb{N}^* \) the set of strictly positive integers and \( \varphi \in \mathbb{Z} \) the set of integers. Clocks \( c \) and \( c_1 \) are also said to be in an \( (n, \varphi, d) \)-affine relation.
Clocks obtained by affine transformation may be re-synchronised at the architectural level. As an example, consider clocks \( c, c_1 \) and \( c_2 \) which are identical
in the Signal functional specification. At the architectural level, clocks $c_1$ and $c_2$ have been transformed such that $c$, $c_1$ and $c$, $c_2$ are respectively in affine relations of parameters $(n_1, \varphi_1, d_1)$ and $(n_2, \varphi_2, d_2)$. Whether clocks $c_1$ and $c_2$ can be re-synchronised depends on the properties of the affine relations which are induced from the values of $(n_1, \varphi_1, d_1)$ and $(n_2, \varphi_2, d_2)$. Moreover, the relations between $c$, $c_1$ and respectively, $c$, $c_2$ may be expressions on $(n, \varphi, d)$-affine relations constructed using operations like composition, union, etc. In this case, the re-synchronisation of clocks $c_1$ and $c_2$ depends on the properties of these operations.
The Signal clock calculus performs the verification of clock synchronisation constraints using a set of synchronisability rules, i.e. conditions against which these constraints can be assessed. The current clock calculus depends on boolean equation resolution methods [5] [1] which have been successfully used for the validation of numerous control-based real-time applications. However, in order to validate mixed Signal-Apha systems as presented above, it is necessary to extend the current clock calculus with a set of synchronisability rules deduced from the properties of $(n, \varphi, d)$-affine relations. The new set of rules defines the affine clock calculus, which constitutes the main topic of this paper. We explore the space of $(n, \varphi, d)$-affine relations and study to which extent it is closed under the main operations that can be performed on affine relations. Following this study, we define a set of synchronisability rules which, although incomplete, enables the validation of the principles underlying the cospecification and cosimulation using Signal and Alpha. The semantical model of traces of Signal [12] [16] constitutes the support for the study of the properties of affine relations and for the definition of the new synchronisability rules.
1.3 Organisation of the Paper
In Section 2 we present the integration of Signal and Alpha for system codesign. Section 3 is the central core of this paper and is dedicated to the definition and implementation of the affine clock calculus. The main concepts useful for this purpose are progressively introduced: these are the model of traces of the Signal language, the properties of affine relations on clocks, the set of synchronisability rules induced by the latter, and finally the necessary elements for the integration of the affine clock calculus in the compiler. The affine clock calculus has been applied to the cospecification and cosimulation of a video image coding application; this is briefly illustrated in Section 4. In the same section we discuss in which way the Signal and Alpha environments may further contribute to the development of a complete codesign methodology based on both languages. Finally, in Section 5 we present conclusions and perspectives of our work.
2 Signal and Alpha in Real-Time System Codesign
Figure 1 summarizes the main elements of the environments around Signal and Alpha that make both languages well-adapted for real-time system codesign.
**Signal** and **Alpha** programs represent mathematical notations for the properties of the processes they define. The system of constraints on clocks and values associated with a **Signal** program is transformed by compilation into a *synchronised data flow graph* (SDFG). This data structure constitutes the support for executable code generation (C or VHDL) or verification of dynamic properties using the formal tool **Sigali** [2].
The **Alpha** compiler includes a powerful type checking mechanism based on the structure of an **Alpha** variable as a function over convex polyhedra. The syntax tree obtained after compilation can be directly translated into C code for functional simulation, or it can be transformed into a subset of **Alpha** called **Alpha0** which exhibits the details of a parallel or sequential implementation. The syntax tree in **Alpha0** form can be further translated in C or VHDL executable code or directly mapped on a netlist [21].
The interface between **Signal** and **Alpha** is based on the fact that both languages can be translated in C and executed for functional simulation. Furthermore, **Signal** offers the possibility to call *external processes*; such a process can be the specification of an algorithm in a language other than **Signal**. A particular type of an external process is a *function*, the execution of which is considered instantaneous from the point of view of **Signal**. A **Signal** function can be a predefined or a user-defined C function.

**Fig. 1.** **Signal** and **Alpha** in system co-design.
### 2.1 Functional Cospecification and Cosimulation
Being a synchronous language, **Signal** is based on the following hypotheses [4]:
1. All actions (communications and calculations) in a system have zero *logical*
duration (the elapsed time is represented by the precedence of successive values on a same data flow); 2. Two or more actions can take place at the same logical instant, such actions being termed “simultaneous”. From the point of view of the logical temporal properties of a system, only succession and simultaneity of instants are of interest. Although their exact time values are not considered, note however that they will be considered for a given implementation. The process associated with a SIGNAL program represents thus a succession of logical instants, with each instant being associated one or more actions considered of zero logical duration and involving process variables present at that instant.
Consider for example a coding system for sequences of video images at 34 Mbits/s [8]. A system of this type consists of a set of numerical treatments applied iteratively on images of the same dimension. Images are divided into luminance and chrominance blocks and treatments are applied to each block. Numerical treatments consist mainly of algorithms for inter and intra image coding which require operations like a discrete cosine transformation (DCT). In order to illustrate the interfacing between SIGNAL and ALPHA, we have isolated from the coding application a simple SIGNAL program and have illustrated the associated process in Fig. 2. It consists of a DCT operation applied in sequence to different values $A_i$ of the matrix of pixels $A$ present at each logical instant of time $t_i$. The matrix $A$ corresponds to a block of luminance or chrominance of an image. The DCT can be expressed in SIGNAL as $B := Dct(A)$, where DCT is actually an external process. The DCT is a time consuming algorithm, particularly for large matrices or when applied to images containing a large number of blocks. In order to improve the overall performance of the coding application, one would wish to execute each instance $B_i := Dct(A_i)$ on a parallel integrated architecture as derived by the ALPHA environment.
The DCT can be easily described in ALPHA. The SIGNAL-ALPHA cospecification and cosimulation of the new system is made possible at the functional level as follows (see Fig. 2): 1. The ALPHA system is translated in executable C code; 2. The C function $ALPHA_C$ obtained at step 1 represents the external process implementing the DCT in SIGNAL. The function $ALPHA_C$ is considered instantaneous in SIGNAL; the clocks of the matrices $A$ and $B$, denoted respectively by $c$ and $c_1$, are therefore synchronous. The overall system is thus represented as a SIGNAL specification executing instantaneously the functional description of the ALPHA specification. The system can be validated in the SIGNAL environment by formal verification (compilation, model checking with Sigali) and/or simulation.
### 2.2 Implementation-oriented Cospecification and Cosimulation
A mixed SIGNAL-ALPHA specification at the functional level may be refined in order to take into consideration the details of a particular implementation. The ALPHA program of Section 2.1 describing a DCT may be submitted to a sequence of transformations for a parallel or sequential implementation. These transformations guarantee the equivalence of the final specification, noted $ALPHA'$ in Fig. 3, with the initial $ALPHA$ system of Fig. 2. The system $ALPHA'$ contains
the time indices corresponding to a particular scheduling of the DCT operation. In Fig. 3 these time indices are represented as the diagonal sets of micro-instants \( \mu i^j \) associated with each macro-instant \( t_i \).
The Signal specification has to be refined accordingly in order to enable the validation of the overall system. Therefore, the micro-instants of time of \( \text{ALPHA}' \) are taken into consideration in the new process \( \text{SIGNAL}' \) and described as the sets of instants \( \mu St^i_0, \mu St^i_1, \) etc. (see Fig. 3). The C function \( \text{ALPHA}'\_C \) has been derived from \( \text{ALPHA}' \) and transformed in order to describe the sequence of operations performed at each micro-instant of time.
The regularity of \( \text{ALPHA} \) values manifests itself in \( \text{SIGNAL} \) in several ways. First, the sets of micro-instants \( \mu St^i_0, \mu St^i_1, \) etc. have the same cardinality. Also, successive values for \( B \) are provided at specific micro-instants between any two successive macro-instants \( t_i \) and \( t_{i+1} \) in a regular manner. This situation is illustrated in Fig. 4, where the clocks of matrices \( A \) and \( B \), denoted respectively by \( c \) and \( c_1 \), are defined by the following instants of time: \( c = \{0, 9, 18, \ldots\} \) and \( c_1 = \{6, 15, \ldots\} \) (after providing the values \( B_i \) at the instants of time defined by \( c_1 \), the architecture implementing the operation \( B_i := \text{Dct}(A_i) \) may execute further computations, like initialisations for the next operation \( B_{i+1} := \text{Dct}(A_{i+1}) \)).
In Fig. 4, clock \( c' \) is defined by the set of instants \( \{0, 1, 2, 3, 4, 5, \ldots\} \). It can be noticed that clocks \( c \) and \( c_1 \) are placed in a regular manner on the support clock \( c' \); their relative position is such that \( c_1 \) has been obtained through a \( (9, 6, 9) \)-affine transformation applied to \( c \). By definition, clock \( c_1 \) is the result of an \( (n, \varphi, d) \)-affine transformation applied to clock \( c \) if it can be obtained from \( c \) through steps 1 and 2 as follows: 1. Constructing a new clock \( c' \) as the union of \( c \) with the set of instants obtained by introducing \( n - 1 \) fictive instants between any two successive instants of \( c \) (and \( -\varphi \) fictive instants before the first instant of \( c \) when \( \varphi \) is negative). 2. Defining the clock \( c_1 \) as the set of instants \( \{dt + \varphi \mid t \in c'\} \), with \( c' = \{t \mid t \in \mathbb{N}\} \) (in other words, counting every \( d \)-th instant, starting with the \( \varphi \)-th instant of \( c' \), or with the first instant of \( c' \) when \( \varphi \) is negative). Clocks \( c \) and \( c_1 \) are then said to be in an \( (n, \varphi, d) \)-affine relation. The above definition can be expressed in an equivalent form as follows: clocks \( c \) and \( c_1 \) are in \( (n, \varphi, d) \)-affine relation if there exists a clock \( c' \) such that \( c \) and \( c_1 \) can be respectively expressed using the affine functions \( \lambda t.(nt + \varphi_1) \) and \( \lambda t.(dt + \varphi_2) \), with \( \varphi_2 - \varphi_1 = \varphi \), with respect to the time indices of \( c' \): \( c' = \{t \mid t \in \mathbb{N}\} \), \( c = \{nt + \varphi_1 \mid t \in c'\} \), \( c_1 = \{dt + \varphi_2 \mid t \in c'\} \).
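To make the closed-form version of the definition concrete, the instants of \( c \) and \( c_1 \) on the support clock \( c' = \mathbb{N} \) can be enumerated with a small Python sketch (an illustration, not part of the paper; the function name and the `count` parameter are ours):

```python
def affine_clocks(n, phi, d, count):
    """Instants of clocks c and c1 on the support clock c' = {0, 1, 2, ...},
    where c1 is the (n, phi, d)-affine transform of c.
    We take phi1 = max(0, -phi), so that phi2 - phi1 = phi with phi2 >= 0
    (the -phi fictive instants before the first instant of c)."""
    phi1 = max(0, -phi)
    c = [n * t + phi1 for t in range(count)]         # c  = {n*t + phi1}
    c1 = [d * t + phi1 + phi for t in range(count)]  # c1 = {d*t + phi2}
    return c, c1

# The (9, 6, 9)-affine transformation of Fig. 4:
c, c1 = affine_clocks(9, 6, 9, 3)
print(c)   # [0, 9, 18]
print(c1)  # [6, 15, 24]
```

With \( n = d = 9 \) and \( \varphi = 6 \) this reproduces exactly the instants \( c = \{0, 9, 18, \ldots\} \) and \( c_1 = \{6, 15, \ldots\} \) read off Fig. 4.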
Properties on affine relations can be exploited in order to verify that clocks are synchronisable, that is, their sets of instants can be identified (re-synchronised). Consider (Fig. 2) a Signal program which executes two successive DCT operations at each macro-instant \( t_i \), one on a luminance block of an image, noted \( B := \text{Dct}(A) \), and the second one on the next block of red chrominance of the same image, described by \( D := \text{Dct}(C) \).
Each DCT function is expressed in Alpha at the functional level and further refined according to a particular implementation. The Signal specification is refined accordingly and we obtain the timing diagrams of Fig. 5: the clocks of \( A \) and \( C \) are synchronous and equal to \( c \), the clocks of \( B \) and \( D \) are respectively \( c_1 \) and \( c_2 \), and the clocks \( c' \) and \( c'' \) describe the instants of the execution of the DCT functions on a potential architecture derived in the Alpha environment.
In the functional Signal-Alpha specification, clocks \( c, c_1 \) and \( c_2 \) were synchronous (see Section 2.1 for details). After refinement of the time indices in the Signal-Alpha specification, the clocks \( c_1 \) and \( c_2 \) should be re-synchronised in order to preserve the temporal properties of the whole application. Whether the re-synchronisation of \( c_1 \) and \( c_2 \) is possible given their relative position as illustrated in Fig. 5, or after further adjustments of their time indices, can be decided based on the properties of the affine relations existing between \( c, c_1 \) and \( c, c_2 \) respectively.

Fig. 5. Synchronisable clocks in the context of codesign with SIGNAL and Alpha.

Clocks $c$, $c_1$ and $c$, $c_2$ are respectively in $(9, 6, 9)$ and $(7, 3, 7)$-affine relation in the process SIGNAL'. The relation existing between the triplets $(9, 6, 9)$ and $(7, 3, 7)$ guarantees the equivalence of the corresponding affine relations. This will be detailed in Section 3. Informally, the equivalence of the above affine relations expresses the fact that the relative positions of clocks $c$ and $c_1$, respectively $c$ and $c_2$, are identical. Based on this observation, clocks $c_1$ and $c_2$ can be identified without contradicting the temporal behaviour of the other clocks in the SIGNAL program. The instants of time of clocks $c'$ and $c''$ situated between two successive instants of $c$ and $c_1$ (or $c_2$) are independent and can be positioned with respect to each other in various manners; in Fig. 5 we have illustrated one possibility. Therefore, $c_1$ and $c_2$ can be re-synchronised; we say that $c_1$ and $c_2$ are synchronisable.
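The informal claim that $(9, 6, 9)$ and $(7, 3, 7)$ induce identical relative positions can be checked mechanically. The sketch below (our illustration, not the paper's decision procedure, which Section 3 derives symbolically) lays each pair of clocks on its support clock and compares the order in which ticks of $c$ and $c_1$ alternate:

```python
def tick_order(n, phi, d, periods):
    """Sequence of labels recording which clock ticks at each instant of the
    support clock, truncated at the smaller of the two last enumerated ticks."""
    phi1 = max(0, -phi)
    c = {n * t + phi1 for t in range(periods)}
    c1 = {d * t + phi1 + phi for t in range(periods)}
    limit = min(max(c), max(c1))  # compare on a common prefix
    order = []
    for t in range(limit + 1):
        if t in c and t in c1:
            order.append('both')
        elif t in c:
            order.append('c')
        elif t in c1:
            order.append('c1')
    return order

# Both triplets yield the same alternation c, c1, c, c1, ...:
assert tick_order(9, 6, 9, 8) == tick_order(7, 3, 7, 8)
```

Since fictive instants carry no action, only this alternation is observable, which is why the two relations can be identified without contradiction.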
The aim of the affine clock calculus discussed in Section 3 is to define necessary and sufficient conditions for clock synchronisability based on the properties of affine relations on clocks. These conditions are expressed as a set of synchronisability rules and are derived in the semantical model of traces of SIGNAL. Section 3 begins with an introduction to these concepts.
### 3 Affine Calculus on Clocks in SIGNAL
Figure 6 introduces the reader to the semantics of traces [12] [16] of SIGNAL. The most important concepts in SIGNAL are: 1. the signal, which denotes a variable of the language and represents a finite or infinite sequence of values; 2. the clock, a variable associated with each signal which represents the set of logical instants where the values of the signal are present. SIGNAL operators manipulate signals by imposing implicit or explicit constraints on their values and clocks. Constraints on clocks are usually expressed as identities between
clock expressions constructed using the operators of intersection (\(\land\)), union (\(\lor\)) or difference (\(\setminus\)). Clocks can be also subsets of other clocks defined as samplings by boolean conditions. When no condition is explicitly or implicitly stated on a pair of clocks, they are independent.
A **Signal** program describes a real-time system, which is in continuous interaction with its environment. Input values are transformed corresponding to the actions of a given specification and the results are provided to the environment. This situation is illustrated in Fig. 6 in the case of a program manipulating inputs \(x\) and \(y\) and providing output \(z\) depending on the values of \(x\) and \(y\). In case \(z\) is the addition of \(x\) and \(y\), signals \(x\), \(y\) and \(z\) are implicitly constrained by the + operator in **Signal** to have the same clocks \(c_x = c_y = c_z\).
The configurations \(F\) and \(F'\) illustrated in Fig. 6 correspond to two different executions of the **Signal** program, involving sequences \(x_i, y_i\) and \(z_i\) and respectively \(x'_i, y'_i\) and \(z'_i\). The set of all possible configurations, called *traces*, which can be exhibited during the execution of a **Signal** program, defines completely the process \(P\) associated with the program. Consider \(A\) a subset of the set \(B\) of signals manipulated by a program. A trace may contain instants with no action involving signals from \(A\). However, each instant of this type contains actions which involve other signals from the set \(B \setminus A\). Given a subset \(A\) of signals, a *flow* on \(A\) is a trace with at least one action involving signals from \(A\) for each logical instant. In the particular case of Fig. 6, if we consider the subset of signals to be \(\{x, y, z\}\), the traces illustrated are actually flows.
More generally, the process \(P\) associated with a **Signal** program is a set of flows on the variables of the program. Each flow \(F\) in \(P\) is constrained by a system of equations on the clocks and values of signals manipulated by \(P\). Equations on values can be further expressed in the abstract form of a data dependency graph (an example of a data dependency graph is illustrated in Fig. 6 for the +

operator). Besides the clock calculus, the compiler verifies data consistency by checking the absence of cycles in the data dependency graph. In the next section however, we will concentrate mainly on the clock calculus.
### 3.1 Clock Calculus and Synchronisability
The clock calculus is equivalent to the resolution of a system of clock equations. For example:
\[
\begin{aligned}
c &= c_1 \\
c' &= (c_1 \land c_2) \lor c_1 \\
c &= c'
\end{aligned}
\tag{1}
\]
can be a system derived from a SIGNAL program which manipulates clocks \(c, c', c_1\) and \(c_2\). In this simple system, \(c_1\) and \((c_1 \land c_2) \lor c_1\) clearly have to be proved equivalent, which is an immediate consequence of the axioms of the boolean lattice. The space of clocks associated with a SIGNAL program is a boolean lattice \([6]\), the properties of which are extensively used for the proof of equivalences. The resolution of the system is performed by triangularisation of the system \([5]\) \([1]\).
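The equivalence invoked here is the absorption law of the boolean lattice. Treating clocks as finite sets of instants (an illustrative model of ours, not the compiler's symbolic triangularisation), with \(\land\) as intersection and \(\lor\) as union, the law can be checked directly:

```python
# Clocks modeled as finite sets of instants; the lattice operations
# meet (intersection) and join (union) are plain set operations.
def meet(a, b):
    return a & b

def join(a, b):
    return a | b

c1 = {0, 2, 4, 6}
c2 = {1, 2, 3, 6}

# Absorption law: (c1 ∧ c2) ∨ c1 = c1, so the system (1) is consistent.
assert join(meet(c1, c2), c1) == c1
```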
Given a boolean signal \(Cd\), its clock, denoted \(\hat{C}d\), can be partitioned into the clock \([Cd]\) where the signal \(Cd\) is present and true and the clock \([\neg Cd]\) where \(Cd\) is present and false (the clocks \([Cd]\) and \([\neg Cd]\) represent samplings by boolean conditions). The relations between clocks \(\hat{C}d\), \([Cd]\) and \([\neg Cd]\) are expressed by the partition equations below:
\[
\begin{aligned}
[Cd] \lor [\neg Cd] &= \hat{C}d \\
[Cd] \land [\neg Cd] &= \emptyset
\end{aligned}
\tag{2}
\]
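As an illustration (the signal and its values are chosen arbitrarily by us), the partition equations can be checked on a concrete boolean signal, modelling \( Cd \) as a map from the instants of its clock to the boolean value carried there:

```python
# A boolean signal Cd: each instant of its clock ^Cd maps to a value.
Cd = {0: True, 3: False, 5: True, 9: False, 12: True}

clock_Cd = set(Cd)                               # ^Cd
when_true = {t for t, v in Cd.items() if v}      # [Cd]
when_false = {t for t, v in Cd.items() if not v} # [¬Cd]

# Partition equations (2): the two samplings cover ^Cd and are disjoint.
assert when_true | when_false == clock_Cd
assert when_true & when_false == set()
```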
The axioms of the boolean lattice together with the partition equations induce on the space of clocks a lattice of an order \(\preceq\) “coarser” than the order \(\prec\) of the boolean lattice \([5]\). Clocks can be boolean formulas constructed either with samplings by boolean conditions \([Cd]\), \([\neg Cd]\) or with free variables of the boolean lattice. The properties of the lattice of order \(\preceq\) are actually used during the triangularisation of any system of clock equations.
The axioms of the lattice \(\preceq\) represent a system of synchronisability rules in the sense described below. Clocks \(c\) and \(c'\) are synchronisable in the process \(P\), which is denoted by \(c \overset{P}{\bowtie} c'\), if there exists a flow \(F\) in \(P\) in which \(c\) and \(c'\) are synchronous:
\[
c \overset{P}{\bowtie} c' \iff \exists F \in P, \; c \overset{F}{=} c'
\tag{3}
\]
(we write \( c \overset{F}{=} c' \) to denote that \( c \) and \( c' \) are synchronous in \( F \)).
Whenever clocks \( c \) and \( c' \) are synchronous in each flow \( F \) in \( P \), they are said to be synchronous in \( P \), which is denoted by \( c \overset{P}{=} c' \). This definition can be expressed as follows:
\[
c \overset{P}{=} c' \iff \forall F \in P, \; c \overset{F}{=} c'
\tag{4}
\]
Unless explicitly constrained through the Signal program, clocks \( c \) and \( c' \) are completely independent in the associated process \( P \). Therefore, their relative position can be such that in some flows \( F \) in \( P \) they are identical, while in some other flows \( F' \) in \( P \) their instants interleave in an arbitrary manner: obviously, if \( c \) and \( c' \) are independent in \( P \), they are synchronisable. When the relative position of clocks \( c \) and \( c' \) is implicitly or explicitly constrained by the Signal operators, flows \( F \) in \( P \) are subsequently constrained and the synchronisability of \( c \) and \( c' \) depends on these constraints.
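The existential/universal contrast between definitions (3) and (4) can be sketched with a toy process, modelled (for illustration only) as a list of flows, each flow assigning a set of instants to every clock variable:

```python
# A toy process: each flow assigns every clock variable its set of instants.
process = [
    {'c': {0, 1, 2}, "c'": {0, 1, 2}},   # a flow where c and c' coincide
    {'c': {0, 2, 4}, "c'": {1, 3, 5}},   # a flow where their instants interleave
]

def synchronisable(P, a, b):
    """Definition (3): there exists a flow of P in which a and b are synchronous."""
    return any(F[a] == F[b] for F in P)

def synchronous(P, a, b):
    """Definition (4): a and b are synchronous in every flow of P."""
    return all(F[a] == F[b] for F in P)

assert synchronisable(process, 'c', "c'")   # holds, via the first flow
assert not synchronous(process, 'c', "c'")  # fails, because of the second flow
```

Independent clocks are thus always synchronisable, since some flow lets them coincide, yet they are synchronous only if every flow forces them to.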
In order to better understand the use of the synchronisability rules, consider for example a process \( P \) derived from a Signal program \( Prg \) in which clocks \( c \) and \( c' \) are defined by the first two equations of the system (1):
\[
\begin{aligned}
c &= c_1 \\
c' &= (c_1 \land c_2) \lor c_1
\end{aligned}
\tag{5}
\]
Program \( Prg \) may be transformed into \( Prg' \) in which an additional constraint has been expressed on clocks \( c \) and \( c' \): \( c = c' \) (in the Signal-Alpha context, \( Prg \) could be part of a transformed Signal-Alpha specification, as seen above, and \( Prg' \) the same specification in which clocks are re-synchronised). Consider the process \( P' \) corresponding to the program \( Prg' \). The system of clock equations associated with \( Prg' \) is (1). Given the set of flows \( \mathcal{F}' \subseteq P \) such that \( c \overset{F}{=} c' \), \( \forall F \in \mathcal{F}' \), it results that \( P' = \mathcal{F}' \). Therefore, verifying the consistency of (1), which is equivalent to testing that clocks \( c \) and \( c' \) are equivalent in \( P' \), is further equivalent to testing the synchronisability of \( c \) and \( c' \) in \( P \). The rule \( (c_1 \land c_2) \lor c_1 = c_1 \) from the boolean lattice is indeed a synchronism rule: \( (c_1 \land c_2) \lor c_1 \overset{P}{=} c_1 \) for every process \( P \). The same axiom holds for the process \( P \) associated with \( Prg \), and thus \( (c_1 \land c_2) \lor c_1 \overset{P}{\bowtie} c_1 \), since synchronism implies synchronisability. Therefore in the example, \( \mathcal{F}' \) is not empty and it can be concluded that \( P' \) is consistent from the point of view of the constraints expressed on its clocks.
The rules of the lattice \( \preceq \) represent synchronisability rules: each identity \( f_1 = f_2 \), with \( f_1 \) and \( f_2 \) boolean formulas on clocks, is equivalent to \( f_1 \overset{P}{=} f_2 \), which implies \( f_1 \overset{P}{\bowtie} f_2 \) for every process \( P \). These rules can be further extended using the properties of the affine relations between clocks. Figure 5 illustrates this idea: if \( P \) is the process associated with the program SIGNAL', the configuration in which clocks \( c_1 \) and \( c_2 \) coincide represents a flow \( F \in P \) such that \( c_1 \overset{F}{=} c_2 \). Thus, \( c_1 \) and \( c_2 \) are synchronisable in \( P \). The reason here is that the \( (9,6,9) \) and \( (7,3,7) \)-affine relations existing respectively between \( c, c_1 \) and \( c, c_2 \) are equivalent. In the next section, we define the affine relation associated with a flow and a process and further make explicit the concept of equivalence of affine relations.
### 3.2 Affine Relations in SIGNAL
Given \( n,d \in \mathbb{N}^* \) and \( \varphi \in \mathbb{Z} \) fixed, clocks \( c \) and \( c_1 \) are in \((n,\varphi,d)\)-affine relation in the flow \( F \)—which is denoted \( c \mathcal{R}^F_{(n,\varphi,d)} c_1 \) or \((c,c_1) \in \mathcal{R}^F_{(n,\varphi,d)} \)—if the relative
position of \( c \) and \( c_1 \) in \( F \) can be induced by an \((n, \varphi, d)\)-affine transformation as
defined in Section 2.2.
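The affine transformation of Section 2.2 is not reproduced in this excerpt; assuming its usual reading—the instants of \( c \) sit at multiples of \( n \) on a fictional support clock, and those of \( c_1 \) at positions \( dt + \varphi \)—the relative position of the two clocks can be sketched as follows (the function name is ours):

```python
def affine_positions(n, phi, d, count):
    """Positions of clocks c and c1 on a common fictional support clock,
    assuming an (n, phi, d)-affine transformation: c ticks at multiples
    of n, c1 ticks at d*t + phi (only non-negative positions kept)."""
    c = [n * t for t in range(count)]
    c1 = [d * t + phi for t in range(count) if d * t + phi >= 0]
    return c, c1

# Special case n = 1: the support clock coincides with c, and c1 is a
# plain affine sampling of phase phi and period d on c (indices
# phi, phi + d, phi + 2d, ... of c).
c, c1 = affine_positions(1, 2, 3, 10)
print(c1[:4])  # [2, 5, 8, 11]
```

With \( n = 1 \) every instant of \( c_1 \) coincides with an instant of \( c \); this sampling case is the one exploited by the synchronisability rules of Section 3.3.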
Clocks \( c \) and \( c_1 \) are in \((n, \varphi, d)\)-affine relation in process \( P \), denoted
\( c \mathcal{R}^P_{(n, \varphi, d)} c_1 \) or \( (c, c_1) \in \mathcal{R}^P_{(n, \varphi, d)} \), if they are in \((n, \varphi, d)\)-affine relation in each
flow \( F \) of \( P \), i.e. \( c \mathcal{R}^F_{(n, \varphi, d)} c_1, \forall F \in P \). Flows and processes are defined over the
set of variables they manipulate. For a given set \( A \), a flow \( F \) on \( A \) is a member of the
set of flows \( \mathcal{F}_A \) that can be constructed with the variables of \( A \). In a similar
manner, a process \( P \) on \( A \) belongs to the set of processes on \( A \), i.e. \( P \in \mathcal{P}_A \).
Because of the finite nature of the sets of variables associated with flows and
processes, affine relations can be defined as finite sets as follows:
\[
\forall F \in \mathcal{F}_A, \mathcal{R}^F_{(n, \varphi, d)} = \{(c, c_1) \in A \times A \mid c \mathcal{R}^F_{(n, \varphi, d)} c_1\} \quad (6)
\]
\[
\forall P \in \mathcal{P}_A, \mathcal{R}^P_{(n, \varphi, d)} = \{(c, c_1) \in A \times A \mid c \mathcal{R}^P_{(n, \varphi, d)} c_1\} \quad (7)
\]
Consider the process \( P \in \mathcal{P}_{\{c, c_1, c_2\}} \) defined as follows:
\[
P = \{ F \in \mathcal{F}_{\{c, c_1, c_2\}} \mid c \mathcal{R}^F_{(n_1, \varphi_1, d_1)} c_1, c \mathcal{R}^F_{(n_2, \varphi_2, d_2)} c_2 \} \quad (8)
\]
(induced by a SIGNAL program that manipulates only the clocks \( c, c_1 \) and \( c_2 \)).
From the definition of an affine relation associated with a process it results
\( c \mathcal{R}^P_{(n_1, \varphi_1, d_1)} c_1 \) and \( c \mathcal{R}^P_{(n_2, \varphi_2, d_2)} c_2 \). Clocks \( c_1 \) and \( c_2 \) are synchronisable in \( P \)
if there exists \( F \in P \) satisfying \( c_1 \triangleq c_2 \). Consider \( F \in P \) satisfying \( c_1 \triangleq c_2 \).
Obviously \( c \mathcal{R}^F_{(n_1, \varphi_1, d_1)} c_1 \) and \( c \mathcal{R}^F_{(n_2, \varphi_2, d_2)} c_2 \). Being identical in \( F \), clocks \( c_1 \)
and \( c_2 \) can be replaced with each other and therefore \( c \mathcal{R}^F_{(n_1, \varphi_1, d_1)} c_1 \) implies
\( c \mathcal{R}^F_{(n_1, \varphi_1, d_1)} c_2 \) and \( c \mathcal{R}^F_{(n_2, \varphi_2, d_2)} c_2 \) implies \( c \mathcal{R}^F_{(n_2, \varphi_2, d_2)} c_1 \). It results therefore
that \( \mathcal{R}^F_{(n_1, \varphi_1, d_1)} = \mathcal{R}^F_{(n_2, \varphi_2, d_2)} = \{(c, c_1), (c, c_2)\} \). In conclusion, a necessary condition
for clocks \( c_1 \) and \( c_2 \) to be synchronisable in \( P \) is that \( \mathcal{R}^F_{(n_1, \varphi_1, d_1)} \) and
\( \mathcal{R}^F_{(n_2, \varphi_2, d_2)} \) be equivalent. In the case of the process \( P \) defined by (8), it can be
proved that this condition is also sufficient.
The equivalence of affine relations depends on the closure properties of the
space of affine relations with respect to the main operations that can be applied
to it. These are either union, intersection or difference induced by the homonym
operations on clocks, or general operations on relations like inverse and com-
position [15]. In the next section we propose a study of these properties in the
semantical model of traces of SIGNAL.
### 3.3 Properties on Affine Relations & Synchronisability Rules
**The semantics of traces.** Consider a finite set of signals \( A \). The set of all
possible flows defined on \( A \) is denoted \( \mathcal{F}_A \). Subsets of flows from \( \mathcal{F}_A \) can be
grouped in processes which are members of the set \( \mathcal{P}_A \) of all processes that can
be defined on \( A \). A SIGNAL program on \( A \) defines a process \( P \in \mathcal{P}_A \); each flow
$F \in P$ satisfies some constraints imposed by the Signal operators on the clocks and values of the signals from $A$.
Signal has four basic (kernel) operators, which are sufficient for the construction of any program regardless of its complexity. Kernel operators are combined through composition and restriction in order to build programs. The composition and restriction of programs naturally induce the corresponding operations on processes and flows. Intuitively, the restriction of a flow $F$ to a set of variables $A' \subseteq A$ is the flow $\Pi_{A'}(F)$ which contains only those instants of $F$ with actions involving signals from $A'$.
Concerning processes, the main operations are defined as follows. Given a set of variables $A' \subseteq A$, the restriction of $P \in \mathcal{P}_A$ to $A'$ (the projection of $P$ on $A'$) contains the restrictions to $A'$ of the flows $F \in P$:
$$\Pi_{A'}(P) = \{ F' \in \mathcal{F}_{A'} \mid \exists F \in P, F' = \Pi_{A'}(F) \} \quad (9)$$
The composition of processes $P_1 \in \mathcal{P}_{A_1}$ and $P_2 \in \mathcal{P}_{A_2}$, with $A_1, A_2$ arbitrary sets of variables, is defined by:
$$P_1 \mid P_2 = \{ F \in \mathcal{F}_{A_1 \cup A_2} \mid \Pi_{A_1}(F) \in P_1, \Pi_{A_2}(F) \in P_2 \} \quad (10)$$
The following lemma describes the necessary and sufficient condition—stated as $\Pi_{A_2}(P) \subseteq Q$—for a property valid in the process $Q$ to be valid also in $P$:
**Lemma 1.** $\forall P \in \mathcal{P}_{A_1}, \forall Q \in \mathcal{P}_{A_2}, A_2 \subseteq A_1$,
$$\Pi_{A_2}(P) \subseteq Q \Leftrightarrow P \mid Q = P \quad (11)$$
In other words, given the hypothesis described by the left hand side of (11), $Q$ expresses a property valid also in $P$.
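Under a toy finite-trace model—an assumption made only for illustration, since real Signal flows are infinite tagged traces—the projection, the composition and the equivalence stated by Lemma 1 can be sketched as:

```python
from itertools import product

# Toy model (illustration only): a flow is a frozenset of
# (variable, value-tuple) pairs; a process is a set of such flows.

def restrict(flow, subset):
    """Projection of a flow: keep only the variables of the subset."""
    return frozenset((v, vals) for v, vals in flow if v in subset)

def project(process, subset):
    """Projection of a process on a variable subset."""
    return {restrict(f, subset) for f in process}

def compose(p1, p2):
    """Composition P1 | P2: flows whose restrictions belong to P1 and
    P2, built by merging pairs of flows that agree on shared variables."""
    out = set()
    for f1, f2 in product(p1, p2):
        d1, d2 = dict(f1), dict(f2)
        if all(d1[v] == d2[v] for v in d1.keys() & d2.keys()):
            out.add(frozenset({**d1, **d2}.items()))
    return out

# Lemma 1 on a small example with A1 = {x, y} and A2 = {x}:
P = {frozenset({("x", (1, 2)), ("y", (3, 4))}),
     frozenset({("x", (1, 2)), ("y", (5, 6))})}
Q = {frozenset({("x", (1, 2))})}
print(project(P, {"x"}) <= Q, compose(P, Q) == P)  # True True
```

Here the projection of \( P \) on \( \{x\} \) is contained in \( Q \), and composing with \( Q \) indeed leaves \( P \) unchanged, as the lemma predicts.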
**Properties on affine relations.** Operations specific to relations in general, like inverse $(\cdot)^{-1}$ and composition $\ast$, can be applied to affine relations [15]. As an example, consider a process $P \in \mathcal{P}_{\{c, c_1, c_2, c_3\}}$ with clocks $c, c_1, c_2$ and $c_3$ satisfying $c \mathcal{R}^P_{(n_1, \varphi_1, d_1)} c_1$, $c_1 \mathcal{R}^P_{(n_2, \varphi_2, d_2)} c_2$ and $c \mathcal{R}^P_{(n_3, \varphi_3, d_3)} c_3$. Obviously, it results that $c \, (\mathcal{R}^P_{(n_1, \varphi_1, d_1)} \ast \mathcal{R}^P_{(n_2, \varphi_2, d_2)}) \, c_2$ and the synchronisability of $c_2$ and $c_3$ depends on properties of the composition. When the space of affine relations is closed under composition, the test of the synchronisability of $c_2$ and $c_3$ reduces itself to the verification of the equivalence of affine relations.
Affine relations can be further combined through union $\cup_r$, intersection $\cap_r$ and difference $\setminus_r$ induced by the homonym operations on clocks ($\vee, \wedge, \setminus$). A similar argument as before leads to the necessity of studying the closure properties of these operators with respect to the space of affine relations.
Here is a brief presentation of the main steps and results obtained in the study of affine relations.
**Equivalence of Affine Relations.** An equivalence relation, noted \( \sim \), can be defined between triplets \((n, \varphi, d)\), with \( G = \gcd(n, d) \) the greatest common divisor of \( n \) and \( d \), \( G' = \gcd(n', d') \), and \( \lfloor x \rfloor \) the integer part of \( x \): \((n, \varphi, d) \sim (n', \varphi', d')\) iff either \( n d' = n' d \) and \( n \varphi' = n' \varphi \), for \( G \mid \varphi \) (i.e., \( G \) is a divisor of \( \varphi \)) and \( G' \mid \varphi' \); or \( n d' = n' d \) and \[ \left\lfloor \frac{d t + \varphi}{n} \right\rfloor = \left\lfloor \frac{d' t + \varphi'}{n'} \right\rfloor, \quad \forall t \in \mathbb{N}, \; d t + \varphi \geq 0, \] for \( G \nmid \varphi \) and \( G' \nmid \varphi' \). The equivalence of affine relations depends exclusively on the values of the associated triplets \((n, \varphi, d)\) [17]:
**Proposition 1.**
\[
R^F_{(n, \varphi, d)} = R^F_{(n', \varphi', d')}, \quad \forall F \in \mathcal{F}_A \Leftrightarrow (n, \varphi, d) \sim (n', \varphi', d') \tag{12}
\]
**Canonical Form.** In order to reduce the complexity of the test of the equivalence \(\sim\), we have then defined a canonical form \((n_{CF}, \varphi_{CF}, d_{CF})\) for a triplet \((n, \varphi, d)\) [18] as follows:
**Proposition 2.**
\[
a) \quad G \mid \varphi \Rightarrow (n_{CF}, \varphi_{CF}, d_{CF}) = \left( \frac{n}{G}, \frac{\varphi}{G}, \frac{d}{G} \right) \\
b) \quad G \nmid \varphi \Rightarrow (n_{CF}, \varphi_{CF}, d_{CF}) = \left( \frac{2n}{G}, 2\left\lfloor \frac{\varphi}{G} \right\rfloor + 1, \frac{2d}{G} \right) \tag{13}
\]
Consequently, the canonical form of \(R^F_{(n, \varphi, d)}\) is \(R^F_{(n_{CF}, \varphi_{CF}, d_{CF})}\) and the verification of the identity of two affine relations is thus reduced to the verification that two triplets of integers are identical:
**Proposition 3.**
\[
R^F_{(n, \varphi, d)} = R^F_{(n', \varphi', d')} \Leftrightarrow (n_{CF}, \varphi_{CF}, d_{CF}) = (n'_{CF}, \varphi'_{CF}, d'_{CF}) \tag{14}
\]
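The canonical-form computation and the equivalence test of Proposition 3 can be sketched computationally; where the extraction of (13) is ambiguous, case b) is taken as \( (2n/G,\, 2\lfloor \varphi/G \rfloor + 1,\, 2d/G) \), an assumption on our part:

```python
from math import gcd, floor

def canonical(n, phi, d):
    """Canonical form (n_CF, phi_CF, d_CF) of a triplet (n, phi, d),
    with G = gcd(n, d); the case-b formula is an assumed reconstruction."""
    G = gcd(n, d)
    if phi % G == 0:                       # case a): G divides phi
        return (n // G, phi // G, d // G)
    return (2 * n // G, 2 * floor(phi / G) + 1, 2 * d // G)

def equivalent(t1, t2):
    """Proposition 3: affine relations coincide iff the canonical
    forms of their triplets are identical."""
    return canonical(*t1) == canonical(*t2)

# The (9,6,9)- and (7,3,7)-affine relations of the Fig. 5 discussion
# both reduce to the same canonical triplet:
print(canonical(9, 6, 9), canonical(7, 3, 7), equivalent((9, 6, 9), (7, 3, 7)))
# (2, 1, 2) (2, 1, 2) True
```

The test of (14) thus costs one gcd and a handful of integer operations per triplet.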
**Operations on affine relations.** If any expression on affine relations could be rewritten as an affine relation, the verification of clock synchronisability would consist only in a test of equivalence on affine relations as above. But it has been observed that this was not the case in general. The closure property is true for the inverse of an affine relation. Also, the affine relation \(R^F_{(1, 0, 1)}\) is neutral with respect to composition. However, the closure property is lost when dealing with composition. The composition of two general affine relations \(R^F_{(n, \varphi, d)}\) and \(R^F_{(n', \varphi', d')}\) does not generally produce an affine relation. Nevertheless, it has been possible to identify in the space of the affine relations \(R^F_{(n, \varphi, d)}\) a subspace consisting of relations of the form \(R^F_{(1, \varphi, d)}\), with \(\varphi \geq 0\), in which the closure property is true. Following this observation, we have distinguished two cases, as detailed in the sequel.
**Properties of affine relations** \(\mathcal{R}^F_{(1, \varphi, d)}\), with \( \varphi \geq 0 \). It has been demonstrated [16] that the space of affine relations \(\mathcal{R}^F_{(1, \varphi, d)}\), although closed under composition \( \ast \) and intersection \( \cap_r \), is not closed under union \( \cup_r \) and difference \( \setminus_r \). It is therefore necessary to define necessary and sufficient conditions for the equivalence of arbitrary expressions constructed with affine relations of the form \( \mathcal{R}^F_{(1, \varphi, d)} \) using composition, union, intersection and difference. Given the complexity of the space of expressions on affine relations \( \mathcal{R}^F_{(1, \varphi, d)} \) and the necessity of efficient algorithms for testing their equivalence, the question of the existence of a canonical form appears. Our attempt to provide a canonical form using exclusively the \( \cup_r \) operator—based on the observation that any expression in this space can be rewritten as a union of affine relations \( \mathcal{R}^F_{(1, \varphi, d)} \)—has failed because of the infinite number of possibilities in which a relation \( \mathcal{R}^F_{(1, \varphi, d)} \) can be rewritten as a union of affine relations of the same type. However, in [16] we propose a relative normal form which reduces partially the complexity of the equivalence calculus.
**Properties of general affine relations** \( \mathcal{R}^F_{(n, \varphi, d)} \). Deciding that two arbitrary expressions on general affine relations are equivalent is a difficult problem. An initial step may be to isolate subsets of triplets \( (n, \varphi, d) \) and \( (n', \varphi', d') \) which respect the condition that the result of the operation \( \mathcal{R}^F_{(n, \varphi, d)} \odot_p \mathcal{R}^F_{(n', \varphi', d')} \), with \( \odot_p \in \{ \ast, \cup_r, \cap_r, \setminus_r \} \), is an affine relation. In [16] we propose a subset of such triplets, for which the above property is true, for the composition. Computing this subset is an NP-complete problem. Future work may consider the applicability of heuristic search methods for this computation. Another open problem is the study of the properties of the union \( \cup_r \), intersection \( \cap_r \) and difference \( \setminus_r \) of general affine relations.
**Synchronisability rules.** The main results concerning the particular affine relations \( \mathcal{R}^F_{(1, \varphi, d)} \), with \( \varphi \geq 0 \), and the general ones \( \mathcal{R}^F_{(n, \varphi, d)} \) have respectively permitted the induction of a set of synchronism rules and a set of synchronisability rules. These rules actually represent a set of conditions which are necessary and sufficient for the synchronism and respectively the synchronisability of two clocks.
An example of synchronism rule is given below. Consider the process \( P \in \mathcal{P}_{\{c, c_1, c_2, c_3\}} \) defined by:
\[
P = \{ F \in \mathcal{F}_{\{c, c_1, c_2, c_3\}} \ | \ c \, \mathcal{R}^F_{(1, \varphi_1, d_1)} c_1, \; c_1 \mathcal{R}^F_{(1, \varphi_2, d_2)} c_2, \; c \, \mathcal{R}^F_{(1, \varphi_3, d_3)} c_3 \} \tag{15}
\]
Obviously \( c \mathcal{R}^P_{(1, \varphi_1, d_1)} c_1 \), \( c_1 \mathcal{R}^P_{(1, \varphi_2, d_2)} c_2 \) and \( c \mathcal{R}^P_{(1, \varphi_3, d_3)} c_3 \). The calculus on affine relations \( \mathcal{R}^F_{(1, \varphi, d)} \) induces \( \mathcal{R}^F_{(1, \varphi_1, d_1)} \ast \mathcal{R}^F_{(1, \varphi_2, d_2)} = \mathcal{R}^F_{(1, \varphi_1 + d_1 \varphi_2, d_1 d_2)} \), which is valid also for processes: \( \mathcal{R}^P_{(1, \varphi_1, d_1)} \ast \mathcal{R}^P_{(1, \varphi_2, d_2)} = \mathcal{R}^P_{(1, \varphi_1 + d_1 \varphi_2, d_1 d_2)} \). Therefore \( c \mathcal{R}^P_{(1, \varphi_1 + d_1 \varphi_2, d_1 d_2)} c_2 \), and \( c_2 \) and \( c_3 \) are synchronisable if and only if \( \mathcal{R}^P_{(1, \varphi_1 + d_1 \varphi_2, d_1 d_2)} = \mathcal{R}^P_{(1, \varphi_3, d_3)} \). With Propositions 2 and 3, \( \mathcal{R}^P_{(1, \varphi_1 + d_1 \varphi_2, d_1 d_2)} \) and \( \mathcal{R}^P_{(1, \varphi_3, d_3)} \) are equivalent if and only if \( \varphi_1 + d_1 \varphi_2 = \varphi_3 \) and \( d_1 d_2 = d_3 \). This result is expressed in the following synchronism rule:
**Proposition 4.** \( \forall P \in \mathcal{P}_{\{c, c_1, c_2, c_3\}} \) with \( c \), \( c_1 \), \( c_2 \) and \( c_3 \) satisfying \( c \mathcal{R}^P_{(1, \varphi_1, d_1)} c_1 \), \( c_1 \mathcal{R}^P_{(1, \varphi_2, d_2)} c_2 \) and \( c \mathcal{R}^P_{(1, \varphi_3, d_3)} c_3 \), the following equivalences are
verified:
\[ c_2 \triangleq c_3 \Leftrightarrow \left\{ \begin{array}{l}
\varphi_1 + d_1 \varphi_2 = \varphi_3 \\
d_1 d_2 = d_3
\end{array} \right\} \Leftrightarrow c_2 \sim c_3 \quad (16) \]
In Fig. 7 the particular case \( \varphi_1 = 6, d_1 = 2, \varphi_2 = 1, d_2 = 2, \) and \( \varphi_3 = 8, d_3 = 4 \) is illustrated. It can be observed that clock \( c_1 \) is an affine sampling of phase \( \varphi_1 \) and period \( d_1 \) on clock \( c \). Clock \( c_2 \) is defined similarly by an affine sampling of parameters \( \varphi_2 \) and \( d_2 \) on \( c_1 \). The same clock \( c_2 \) can be obtained by an affine sampling of parameters \( \varphi_3 \) and \( d_3 \) on \( c \); the clock \( c_3 \) constructed in this manner is synchronous, and therefore synchronisable, with \( c_2 \).
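The rule of Proposition 4 can be checked by enumeration on the Fig. 7 parameters, under the plain affine-sampling reading (keep indices \( \varphi, \varphi + d, \varphi + 2d, \ldots \) of the base clock):

```python
def sample(clock, phi, d):
    """Affine sampling of phase phi and period d on a clock, the clock
    being given as a list of instant indices."""
    return clock[phi::d]

c = list(range(40))      # base clock instants 0..39
c1 = sample(c, 6, 2)     # (1, phi1=6, d1=2) on c
c2 = sample(c1, 1, 2)    # (1, phi2=1, d2=2) on c1
c3 = sample(c, 8, 4)     # phi3 = phi1 + d1*phi2 = 8, d3 = d1*d2 = 4 on c

print(c2 == c3, c2[:4])  # True [8, 12, 16, 20]
```

Choosing any other \( (\varphi_3, d_3) \) breaks the equality, which matches the necessity direction of (16).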
Following a sequence of steps similar to that for Proposition 4, we have derived a system of synchronisability rules which is minimal; it enables the verification of the synchronisability of two arbitrary clocks related by an expression on affine relations \( \mathcal{R}^P_{(1,\varphi,d)} \), with \( \varphi \geq 0 \). The results concerning the equivalence of general affine relations \( \mathcal{R}^P_{(n,\varphi,d)} \), summarized by Propositions 1, 2 and 3, and the partial result on composition of general affine relations, have allowed the derivation of a set of synchronisability rules which are sufficient for the validation of SIGNAL programs for which the single operation performed on affine relations is composition. Further work should be dedicated to the study of the union \( \cup_r \), intersection \( \cap_r \) and difference \( \setminus_r \) of general affine relations.
### 3.4 Implementation of the Affine Clock Calculus
A prototype implementing the synchronisability rules introduced in Section 3.3 has been integrated with the existing clock calculus and used for the validation of the SIGNAL-ALPHA interface on the video image coding application introduced in Section 2. In Section 3.1 we have explained that the existing (boolean)
clock calculus relies on the properties of the lattice $\preceq$ existing on the space of clocks, and that it is equivalent to a system of synchronisability rules. The implementation of the affine clock calculus is briefly described now. By choosing an appropriate implementation of a general affine relation $\mathcal{R}^P_{(n,\varphi,d)}$ as detailed in [16], the considered clock expressions contain formulas constructed only with affine clocks, that is, affine samplings of specified phase and period on a given basis clock. Thus, the order $\preceq_{aff}$ defined by
\[
\preceq_{aff} = \{(c_1,c_2) \mid \exists \varphi_i \geq 0, d_i > 1, \mathcal{R}^P = \mathcal{EX}^P(\ldots, \mathcal{R}^P_{(1,\varphi_i,d_i)}, \ldots), c_1 \mathcal{R}^P c_2 \} \tag{17}
\]
with $\mathcal{EX}^P$ a general expression on affine relations, induces on the space of affine clocks a lattice structure. The system of equations on affine clocks associated with a Signal program is solved by triangularisation. When the equivalence of two clock expressions has to be demonstrated, synchronisability rules such as those deduced in Section 3.3 are applied. Finally, for the integration of the affine and boolean clock calculus, each synchronisability rule which has been deduced in a process $Q \in \mathcal{P}_{A_2}$ is used in a larger context $P \in \mathcal{P}_{A_1}$, with $A_2 \subseteq A_1$, satisfying $\Pi_{A_2}(P) \subseteq Q$. Following Lemma 1, the synchronisability rule is also valid in $P$.
## 4 Application
The affine clock calculus has been used for the validation of the video image coding application described in Section 2. This application contains an important control part, which has been programmed in Signal, and operations like the DCT, which have been expressed in Alpha. The application has been specified and simulated at both functional and architectural levels as described in Section 2. In the coding system described in [8], each image is decomposed into a fixed number of macro-blocks, each macro-block consisting of one block of luminance and two blocks of chrominance (red and blue). At the architectural level, we have refined the Alpha specifications of the DCTs corresponding to the blocks of luminance and red chrominance of a macro-block. These temporal refinements have been expressed in Signal by means of two general affine relations between clocks $c, c_1$ and $c, c_2$ as illustrated in Fig. 5. The synchronisability of $c_1$ and $c_2$ has been verified by compilation and the entire Signal-Alpha system has been simulated in C.
Most of the operations involved in image coding applications are critical from the point of view of execution time or resources. Therefore, a codesign approach can be considered. The affine clock calculus represents an important element in defining a complete codesign methodology based on the Signal and Alpha languages. Besides the cospecification and cosimulation of an application, using Signal and Alpha in a codesign framework is interesting since it offers solutions to other codesign problems such as the automatic synthesis of specialised circuits for regular algorithms, or the generation of optimal code for the software implementation of both calculations and control. Concerning the latter, one might consider the hardware/software partitioning of an application corresponding to the partitioning into Signal and Alpha subsystems.
## 5 Conclusion
The joint use of the Signal and Alpha languages in hardware/software codesign has introduced the problem of the validation of mixed Signal-Alpha specifications both at the functional and architectural levels. The refinement of Signal-Alpha specifications towards the architectural level and their subsequent validation necessitates the extension of the formal clock calculus implemented in the Signal compiler. This paper presents the new affine clock calculus based on the properties of affine relations induced between clocks by the refinement of Signal-Alpha specifications. The properties of affine relations are studied in the semantical model of traces of the Signal language, but can be extended to any general model with similar characteristics. Based on this study, a new set of synchronisability rules is defined and integrated with the set already implemented by the existing formal clock calculus.
The affine clock calculus is relevant for the definition and implementation of a codesign methodology using the Signal and Alpha languages. Techniques for real-time system validation (formal verification, simulation) available in the Signal and Alpha environments can be used for cospecification and cosimulation. Both environments also have tools for automatic generation of optimal implementations which can be used in a complementary manner for hardware synthesis and/or implementation on general architectures. Further work should be devoted to the complete integration of the Signal and Alpha languages thus making possible the use of the most adapted formalism and environment for a given application.
## References
# A SURVEY ON IMPLICIT REQUIREMENTS MANAGEMENT PRACTICES IN SMALL AND MEDIUM-SIZED ENTERPRISES
Onyeka Emebo, Olawande Daramola, Charles Ayo
Effective requirements management that embraces both explicit and implicit aspects is a prerequisite for successful software development. Although different researchers and practitioners have identified the importance of implicit requirements (IMR) for overall successful outcome of software development, there is a need to correlate these theoretical assumptions about implicit requirements with the state of the practice. This paper empirically investigates the perception and handling of implicit requirements in small and medium-sized software organisations. The survey was undertaken through a web-based questionnaire to which 56 participants from 23 countries responded. The study found that critical organisational factors such as number of years in business of an organisation, the years of experience of an organisation in requirements engineering, and size of software development team have positive correlation with the perception and handling of implicit requirements within an organisation. It also recommends that a comparative evaluation of the existing support tools for implicit requirements is necessary in order to validate the potential of these tools to solve existing challenges, and determine gaps that still exist.
Keywords: empirical survey; implicit requirements; requirements engineering; requirements management; software organisations
## 1 Introduction
The requirements of a software system are essential for effective performance of its functions and therefore effective requirements engineering is crucial for the success of software development projects [1].
From the perspective of requirements elicitation, requirements can be classified into explicit requirements (clearly stated requirements) and implicit or tacit requirements, which are assumed or unspoken requirements that are not stated or documented [2, 3]. Implicit requirements (IMR) have also been defined as non-verbalized customer expectations [2]. IMR can occur due to a number of reasons, which include:
- when implicit shared understanding of the quality of requirements is lacking among stakeholders in a project [4, 5];
- when the advent of tacit knowledge causes a knowledge gap between developers and stakeholders in a project [6];
- when a software organisation is developing a product in a new domain, or a project has been subcontracted to an external organisation that has a different operational background [7]; and
- when various forms of ambiguity exist in requirements that could lead to different incompatible interpretations of same set of requirements by different stakeholder groups [8, 9].
However, the hidden nature of IMR makes them challenging to capture; in most cases, developers and testers rely on their own experience to manage them [1, 3]. IMR are as essential to the successful implementation and acceptance of the system by the user as explicit requirements. In [10], it was stated that the quality of software is dependent on the measure of its conformance to both explicit and implicit requirements. Authors in [11] also indicated that the quality of software cannot be adjudged good, or guaranteed to meet customer satisfaction, if only explicit requirements are satisfied while implicit requirements are ignored. Because of their relevance, different researchers have proposed different approaches, methods and tools to efficiently identify and manage IMR from different sources. These include [3, 12, 13, 14], which considered how to identify and handle IMR. The works in [8, 15, 16, 17, 18] focussed on dealing with tacit/implicit knowledge in requirements; while [9, 19, 20, 21, 22] dealt with handling ambiguity in requirements.
Although different researchers and practitioners have acknowledged the importance of IMR to the overall success of software development, there is yet the need to empirically investigate the way practitioners perceive them in terms of their real impact on the success or otherwise of software development and how they are managed by software organisations. Currently, not many empirical studies on the perception of IMR among practitioners and the way they are being handled in practice have been reported in the literature. This is the motivation for this study. The aim of this paper is to assess practitioner’s perspective of IMR and identify the relationship between specific characteristics of small and
medium-sized software organisations and implicit requirements management practices. Therefore, the research question investigated in this work is: What are the factors that determine how IMR are handled in system development practice in small and medium-sized software organisations?
The rest of this paper is as follows: Section 2 reviews related work; Section 3 describes the framework for developing the hypotheses in the study and the research methodology while Section 4 presents the analysis and results. In Section 5, we present a discussion of results, while Section 6 discusses the validity threats. The paper is concluded in Section 7 with a brief note.
## 2 Related work
Generally, the issue of management of IMR has gained some attention in the literature with researchers focusing on aspects that deal with identification and handling of implicit/tacit requirements. Efforts such as [3, 13] engaged analogy reasoning and ontology-based approaches for identification of implicit requirements. Some other researchers have focused on handling implicit requirements by dealing with tacit/implicit knowledge. Examples of these include [8, 15, 16, 17, 18]. Additionally, some researchers have attempted to tackle implicit requirements by resolving ambiguity in requirements specifications. Instances of these include [19, 20, 21, 22]. However, so far in the literature, there are not many empirical studies that focussed specifically on the issue of implicit requirements within software organisations. The ongoing work reported in [23] was done to identify the impact of tacit and explicit knowledge transferred during software development projects. An inductive, exploratory, qualitative methodology was applied in order to validate the tacit knowledge spectrum in software development projects. The work aims to create a conceptual model that supports future software development projects in their tacit to explicit knowledge transfers. No concrete findings of the study were reported.
There are many studies that have addressed issues of requirements engineering within software organisations as a whole. For example in [24], the results of a diagnostic study of requirements engineering (RE) practices in very small software companies in Chile were presented. The study identified the state of the practice in these companies and the potential limitations that can hinder adoption of appropriate requirements engineering practices in the Chilean very small software enterprises. In [25] the report of an explorative study of software engineering practices in five small and medium-sized organisations was presented. Although the work did not focus particularly on RE practices, the study reveals interesting issues about software development practices in small organisations. In [26], a report of RE practices in seven very small scale enterprises (VSSE) in Canada was presented. The exploratory study found that RE practices in VSSE were diverse and were being successfully applied; the organisations engaged experienced personnel in charge of their RE processes; requirements errors were rarely severe; and the organisations had strong cultural orientations. In [27] authors identified critical factors that affect organisation-wide implementation of RE processes. The work was based on a broad literature review and three longitudinal case studies that were carried out using action research. In [28], a study of the current RE practices, development needs and preferred ways of technology transfer of twelve small to medium-sized companies in Finland was reported. The study gave attention to the level of adoption for several RE practices and degree of adherence to general guidelines for RE practices.
Other surveys or field studies that focused on requirements engineering practices in software organisations include [29] – requirements modelling; [30, 31, 32, 33] – adoption of standard RE practices; [34, 35] – intelligent assistance; and [36, 37] – variability management. What is of note is that none of these previous empirical studies has focused specifically on the management and handling of implicit requirements, as we have done in this study.
### 3 Research methodology and hypotheses development
In this section, we discuss the framework that is used to investigate the factors influencing implicit requirements management during the software development process, as well as the research methodology used. The six factors that were used as the basis for hypotheses development are shown in Fig. 1. A description of the framework is given in Tab. 1.

**Table 1 Factors influencing IMR management**
<table>
<thead>
<tr>
<th>S/no</th>
<th>Factor</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Number of years in business</td>
<td>Number of years that company has been in business</td>
</tr>
<tr>
<td>2</td>
<td>Software development team size</td>
<td>The number of persons in the development team</td>
</tr>
<tr>
<td>3</td>
<td>Scope of market operations</td>
<td>If the company's operational market can be classified as local, international or both</td>
</tr>
<tr>
<td>4</td>
<td>Professional status</td>
<td>The status of a respondent within an organisation, be it junior level, middle level or management level.</td>
</tr>
<tr>
<td>5</td>
<td>Experience of respondent in RE</td>
<td>Expertise of the personnel in RE</td>
</tr>
<tr>
<td>6</td>
<td>Experience of organisation in RE</td>
<td>Expertise of the organisation in RE</td>
</tr>
</tbody>
</table>
### 3.1 Research methodology
For the purpose of this study, a web-based questionnaire containing closed-ended questions was designed. The sampled population for the empirical study comprised software developers from small and medium-sized companies in different countries. The questionnaire was designed as a tool to investigate the perception of implicit requirements, and how they are managed by small and medium-sized software organisations. We focused more on small and medium-sized software organisations because they are more numerous and more easily accessible than large-sized organisations. The objective of the survey was to understand the extent of consideration given to implicit requirements in practice in the course of software development. An overview of the adopted research process is presented in Fig. 2.
### 3.2 Structure of survey
The questionnaire spans two pages and contains two sections. The first section contained introductory questions on the name of the country (where the company is based), the name of the respondent’s company, background information on the organisation, and the professional background of the respondent. This section was used to gain information about the respondent’s experience in RE, and also the organisation’s experience in RE in terms of number of years. The second section contained closed-ended questions to elicit information on the perception of implicit requirements within the respondent’s organisation and how they are managed. The questions in this section sought to establish the relevance attached to implicit requirements in a respondent’s organisation during the process of developing software.
### 3.3 Data collection method
A web-based questionnaire was used to draw participation of diverse respondents from different parts of the world. We made an open call through survey invites in relevant online requirements engineering and software engineering communities such as the Yahoo Requirements Engineering Group, the LinkedIn Requirements Engineering Specialist Group (RESG), the Requirements Engineering Conference mailing list, AISWorld, and SEWORLD. This was to ensure that interested and qualified persons from these communities, which have diversified global memberships, were notified of the survey. We also enlisted contacts based in Europe and the US to help disseminate information about the survey; many of them did this by sending email invites to their colleagues within the software engineering community. The survey was online for a period of 6 months. At the end, 56 respondents participated, from countries such as Australia (2), Austria (3), Brazil (2), Chile (1), Germany (4), India (5), Ireland (1), Israel (2), Italy (2), Macedonia (1), New Zealand (2), Norway (2), Poland (1), Serbia (1), Sweden (1), the United States of America (9), the United Kingdom (4), Yugoslavia (1), Afghanistan (1), Spain (3), the Netherlands (3), Canada (3) and Nigeria (2). The data collected from the online survey formed the basis of our analysis. The survey questions and data are available at [38]. All of the respondents claimed to be software developers, with the majority specialising in the development of business and enterprise software solutions.
### 3.4 Test method
The major test carried out in this study is a correlation analysis using Spearman’s rank correlation coefficient (Spearman’s rho). We used it to test the six hypotheses that we formulated. For each hypothesis, we used the correlation analysis technique to determine the relationship between certain factors/characteristics of the respondents and their responses to the closed-ended questions in the questionnaire. We investigated whether the six factors/characteristics have any significant impact on the perception and handling of implicit requirements and, if so, how strong the relationship is. The formulated hypotheses are:
- **H1**: Number of years in business has significant relationship with the knowledge and views of an organisation on implicit requirements.
- **H2**: Size of software development team of an organisation has significant impact on the knowledge and handling of implicit requirements.
- **H3**: The organisation’s scope of market operation has significant impact on its knowledge and views on implicit requirements.
- **H4**: Professional status of an employee in an organisation has significant impact on his/her knowledge and views of implicit requirements.
- **H5**: Years of personal experience of an individual in RE has significant impact on the knowledge and views of implicit requirements.
- **H6**: Experience of an organisation in RE has significant impact on its knowledge and handling of implicit requirements.
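As an illustrative aside (not part of the study itself; the example data below are hypothetical), Spearman’s rho can be computed by rank-transforming both variables, averaging ranks for ties, and then taking the Pearson correlation of the rank vectors:

```python
def ranks(xs):
    """Assign 1-based ranks to xs, averaging ranks for tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        # Extend j over the run of values tied with xs[order[i]].
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical ordinal data: years-in-business band vs. a Likert response.
years_band = [1, 2, 2, 3, 4, 4, 5]
likert = [2, 3, 2, 4, 4, 5, 5]
rho = spearman_rho(years_band, likert)  # a value in [-1, +1]
```

Averaging tied ranks matters for data such as these, because ordinal survey answers (e.g., Likert-scale responses) frequently repeat values.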
### 4 Analysis and results
#### 4.1 Background of respondents
There were 56 respondents (n = 56) from different parts of the world. The data on the background of the respondents as it pertains to the six factors is presented in Tab. 2, which also shows that a larger number of the respondents work for companies with over 20 years’ experience (46.4%) in the software development business.
A survey on implicit requirements management practices in small and medium-sized enterprises
O. Emebo et al.
Also, 89.3% of the sampled population have more than 5 years of experience in software development.
**Table 2 Background of respondents**
<table>
<thead>
<tr>
<th>S/No</th>
<th>Factor</th>
<th>Analysis</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Years of business (years)</td>
<td>> 20 yrs = 26 (46.4%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>11-15 yrs = 9 (16.1%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>6-10 yrs = 9 (16.1%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0-5 yrs = 6 (10.7%)</td>
</tr>
<tr>
<td>2</td>
<td>Software development team size (persons)</td>
<td>> 50 = 10 (17.9%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>16-20 = 6 (10.7%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>11-15 = 9 (16.1%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>6-10 = 8 (14.3%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0-5 = 18 (32.1%)</td>
</tr>
<tr>
<td>3</td>
<td>Scope of market operation</td>
<td>Local = 24 (42.9%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>International = 11 (19.6%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Both = 21 (37.5%)</td>
</tr>
<tr>
<td>4</td>
<td>Professional status of respondent’s within their organisation</td>
<td>Management level = 19 (33.9%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Middle level = 35 (62.5%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Lower level = 2 (3.6%)</td>
</tr>
<tr>
<td>5</td>
<td>Respondent’s years of experience in RE</td>
<td>> 20 yrs = 15 %</td>
</tr>
<tr>
<td></td>
<td></td>
<td>16-20 yrs = 2 %</td>
</tr>
<tr>
<td></td>
<td></td>
<td>11-15 yrs = 21 %</td>
</tr>
<tr>
<td></td>
<td></td>
<td>6-10 yrs = 34 %</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0-5 yrs = 28 %</td>
</tr>
<tr>
<td>6</td>
<td>Experience of the organisation in RE</td>
<td>> 20 yrs = 18 (32.1%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>16-20 yrs = 5 (8.9%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>11-15 yrs = 9 (16.1%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>6-10 yrs = 14 (25%)</td>
</tr>
<tr>
<td></td>
<td></td>
<td>0-5 yrs = 10 (17.9%)</td>
</tr>
</tbody>
</table>
Of the respondents, 19.6% came from companies that have international scope of operation, 42.9% from companies with local scope of operation, while 37.5% described the operational scope of their company as both local and international (see Tab. 2). In terms of professional status of respondents, 33.9% belong to the managerial level, 62.5% to middle career level, while 3.6% belong to the lower level. This shows that there is a greater population of middle level personnel amongst the respondents compared to management and junior level employees. In terms of experience in RE, 41% of respondents’ organisations have at least 15 years of experience in RE, while 38% of respondents claimed to have more than 10 years experience in RE practice.
#### 4.2 Reliability test
We conducted a reliability test in order to measure the consistency and stability of the data used for the analysis. We used the Cronbach’s Alpha test to determine the reliability of the data used in this study. According to [39], Cronbach’s alpha is a reliability measure involving only one test administration to provide a given test with a unique evaluation; it is represented by the symbol α. During the process of establishing the content validity of the questionnaire, we conducted a pilot survey using a few experts, who acted as respondents in order to review the questions and offer suggestions for improvement. The revised questionnaire and additional suggested questions were used in the survey instrument. The data collected is considered reliable under the Cronbach’s Alpha test when α is at least 0.7. For this study, the Cronbach’s Alpha value is 0.783. This indicates that the data collected from the questionnaire is suitable for carrying out further tests and analysis.
<table>
<thead>
<tr>
<th>N of items</th>
<th>Cronbach’s Alpha</th>
</tr>
</thead>
<tbody>
<tr>
<td>23</td>
<td>0.783</td>
</tr>
</tbody>
</table>
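As an illustration of how such a coefficient is computed (with hypothetical data, not the study’s actual responses), Cronbach’s alpha is α = (k/(k−1))·(1 − Σs²ᵢ/s²ₜ), where k is the number of items, s²ᵢ the variance of item i across respondents, and s²ₜ the variance of respondents’ total scores:

```python
def sample_var(xs):
    """Unbiased sample variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores holds one list per questionnaire item,
    each with one score per respondent (all the same length)."""
    k = len(item_scores)
    n = len(item_scores[0])
    # Total score of each respondent across all items.
    totals = [sum(item[r] for item in item_scores) for r in range(n)]
    item_var_sum = sum(sample_var(item) for item in item_scores)
    return (k / (k - 1)) * (1 - item_var_sum / sample_var(totals))

# Hypothetical example: two items answered by three respondents.
alpha = cronbach_alpha([[1, 2, 3], [2, 2, 4]])
```

With the study’s 23 items, the reported α of 0.783 exceeds the conventional 0.7 threshold, which is the basis for the reliability claim above.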
#### 4.3 Hypothesis testing
4.3.1 Correlation analysis
For this study, Spearman correlation analysis was adopted to determine the impact of the six selected factors on the knowledge and perception of implicit requirements by software developers. The aim was to determine whether certain factors have a significant influence on, or relationship with, the knowledge and perception of implicit requirements. The Spearman’s correlation coefficient is a statistical measure of the strength of a monotonic relationship between two variables; it is represented by Spearman’s rho (rs). In this research, the selected factors were tested against the questions. However, the tables below only include the questions with a significant relationship with the respective factor; all non-significant responses were excluded.
a) Number of years in business
H1: Number of years in business has significant relationship with the knowledge and view of implicit requirements
H1o: Number of years in business has no significant impact on the knowledge and view of implicit requirements
The number of years in business represents the number of years a company has been in the practice of software development. From the results extracted, as shown in Tab. 4, the questions with a significant relationship are listed below. Although there were a few significant relationships, they were weak, with coefficients not exceeding 0.4. This means that a significant influence exists, although it is not very strong.
Where:
Q 2.7.1. A specialised approach, possibly with some automation support will be useful for managing implicit requirements (0.296)
Q 2.14. Established RE management methods are adequate to handle implicit requirements for now (0.295)
Q 2.6. Using experience plus tool support will be perfect for managing implicit requirements (0.379)
Q 2.4. Implicit requirements do not have any impact on correctness of system architecture (0.295)
Q2.3. Implicit requirements do not have any effect on the acceptability of software product (0.344)
Tab. 4 shows that although the number of years in the business of software engineering has some effect on the knowledge and view of implicit requirements, there are other factors that affect the knowledge and perception of how implicit requirements should be handled in an organisation. The results of the analysis show that the greater the number of years in business, the better the knowledge and perception of implicit requirements. The importance that respondents with longer years in business attach to the functionality of the developed system also shows that they recognise the need for improvement. Hence, $H_1$ is accepted.
b) Software development team size
H2: Size of software development team of an organisation has significant impact on the knowledge and handling of implicit requirements
H2o: Size of software development team of an organisation has no significant impact on the knowledge and handling of implicit requirements
The result of the analysis showed that the size of the software development team had a significant impact on the perception and knowledge of implicit requirements. The size of the software development team shows a positive correlation with questions $Q_{2.14}$, $Q_{2.3}$, $Q_{2.4}$ and $Q_{2.13}$, with the exception of $Q_{2.8}$, which had a negative value of $(-0.288)$. This connotes that with an increase in the size of the software development team, the negative impact of implicit requirements on the correctness of system architecture and the acceptability of the software product will be reduced, and established RE methods will become more adequate to handle implicit requirements, while reducing the size of the software development team will increase improper handling of implicit requirements. From this analysis, it can be inferred that although the size of the software development team has a significant impact on the perception and handling of implicit requirements, there are other factors that also play a role, since the values are closer to zero than to +1, which is a perfect positive correlation. Hence, $H_2$ is selected.
c) Scope of market operation
H3: The organisation’s scope of market operation has significant impact on its knowledge and views on implicit requirements
H3o: The organisation’s scope of market operation has no significant impact on the knowledge and view of implicit requirements
In the analysis conducted, the level of operation was classified based on the type of target market: local, global, or both local and global. A larger percentage of the respondents operate at either the local level or at both local and global levels. The analysis showed that the target market of the company, or level of operation of the organisation, has no significant impact on the views and knowledge of implicit requirements. Hence, there is no table showing a significant relationship for any of the questions; therefore $H_3$ is rejected and $H_{3o}$ is selected.
d) Professional status in organisation
H4: Professional status of an employee in an organisation has significant impact on his/her knowledge and views of implicit requirements
H4o: Professional status of an employee in an organisation has no significant impact on his/her knowledge and views of implicit requirements
The professional status of an employee within an organisation has been categorised into three levels: the Junior Level, the Middle Level and the Managerial Level. The analysis result in Tab. 6 showed that there was only one significant relationship between the questions and professional status.
Where:
Q8. Your professional status in your organisation.
Q2.5. Relying principally on experience is sufficient for the discovery of implicit requirements during requirements elicitation (0.347).
The result of the analysis showed that the higher the professional status, the greater the disagreement with the statement or close ended question. This means that those that are higher up in the career hierarchy do not believe that experience alone is sufficient for the discovery of implicit requirements. Although they agree that experience plays an important role, other approaches are required. Therefore, H4 is selected.
e) Years of personal experience in RE
H5: Years of personal experience in RE has significant impact on the knowledge and view of implicit requirements
H5o: Years of personal experience in RE has no significant impact on the knowledge and view of implicit requirements.
The result of the analysis showed that years of personal experience in RE had significant impact on some of the responses to the close-ended questions. These questions include the following:
Q 8. Your experience in Requirements Engineering (RE) practice in terms of years
Q 2.5. Relying principally on experience is sufficient for the discovery of implicit requirements during requirements elicitation (0.290)
Q 2.6. Using experience plus tool support will be perfect for managing implicit requirements (0.365)
Q 2.14. Established RE management methods are adequate to handle implicit requirements for now (0.263)
Q 2.3. Implicit requirements do not have any effect on the acceptability of software product (0.291).
Table 7 Result of correlation testing for H5
<table>
<thead>
<tr>
<th>Spearman’s rho (vs Q 8.)</th>
<th>Correlation Coefficient</th>
<th>Sig. (2-tailed)</th>
<th>N</th>
</tr>
</thead>
<tbody>
<tr>
<td>Q 2.5.</td>
<td>0.290*</td>
<td>0.06</td>
<td>56</td>
</tr>
<tr>
<td>Q 2.6.</td>
<td>0.365**</td>
<td></td>
<td>56</td>
</tr>
</tbody>
</table>
* Correlation is significant at the 0.05 level (2-tailed).
** Correlation is significant at the 0.01 level (2-tailed).
This analysis showed that although there is a significant relationship, it is not strong, as the coefficients are closer to 0 than to +1, which is an indicator of a perfect positive correlation. The analysis in Tab. 7 shows that developers with longer years of experience have more regard for, and a better understanding of, implicit requirements. This could be due to the many practical cases of implicit requirements that they have handled in the course of their careers. Therefore, H5 is selected.
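For reference, two-tailed significance values such as those reported in Tab. 7 are conventionally obtained by converting rho into a t statistic with n − 2 degrees of freedom. The conversion formula is standard, but the worked values below simply reuse the coefficients reported in the table; the quoted critical values are approximate:

```python
import math

def spearman_t(rho, n):
    """Approximate t statistic (df = n - 2) for testing H0: rho = 0."""
    return rho * math.sqrt((n - 2) / (1 - rho * rho))

# Coefficients reported in Tab. 7 (n = 56, df = 54):
t_q25 = spearman_t(0.290, 56)  # exceeds ~2.00, the approx. 0.05 two-tailed critical value
t_q26 = spearman_t(0.365, 56)  # exceeds ~2.67, the approx. 0.01 two-tailed critical value
```

This is consistent with the starring convention in the table footnotes: a single star marks significance at the 0.05 level and a double star at the 0.01 level.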
f) Experience of the organisation in RE
H6: Experience of an Organisation in RE has significant impact on the knowledge and view of implicit requirements
H6o: Experience of an organisation in RE has no significant impact on the knowledge and view of implicit requirements.
Q 9. Experience of your organisation in RE.
Table 8 Result of correlation testing for H6
<table>
<thead>
<tr>
<th>Spearman’s rho (vs Q 9.)</th>
<th>Correlation Coefficient</th>
<th>Sig. (2-tailed)</th>
<th>N</th>
</tr>
</thead>
<tbody>
<tr>
<td>Q 2.13.</td>
<td>0.297*</td>
<td>0.026</td>
<td>56</td>
</tr>
<tr>
<td>Q 2.14.</td>
<td>0.397</td>
<td>0.002</td>
<td>56</td>
</tr>
<tr>
<td>Q 2.15.</td>
<td>0.387</td>
<td>0.003</td>
<td>56</td>
</tr>
<tr>
<td>Q 2.3.</td>
<td>0.301</td>
<td>0.024</td>
<td>56</td>
</tr>
<tr>
<td>Q 2.4.</td>
<td>0.314</td>
<td>0.018</td>
<td>56</td>
</tr>
<tr>
<td>Q 2.5.</td>
<td>0.293</td>
<td>0.028</td>
<td>56</td>
</tr>
<tr>
<td>Q 2.6.</td>
<td>0.373</td>
<td>0.005</td>
<td>56</td>
</tr>
</tbody>
</table>
* Correlation is significant at the 0.05 level (2-tailed).
** Correlation is significant at the 0.01 level (2-tailed).
The analysis showed that the level/years of experience of an organisation in RE has an impact on the knowledge and perception of implicit requirements. The results in Tab. 8 show that the years of experience of the organisation had a significant influence on 7 out of the 17 questions. They include the following:
Q 2.5. Relying principally on experience is sufficient for the discovery of implicit requirements during requirements elicitation (0.293)
Q 2.6. Using experience plus tool support will be perfect for managing implicit requirements (0.373)
Q 2.14. Established RE management methods are adequate to handle implicit requirements for now (0.397)
Q 2.3. Implicit requirements do not have any effect on the acceptability of software product (0.301)
Q 2.4. Implicit requirements do not have any effect on correctness of system architecture (0.314)
Q 2.15. During requirements elicitation, stakeholders deliberately withhold certain information, which creates implicit requirements scenarios (0.387)
Q 2.13. There is no need to evolve new methods to specially handle implicit requirements (0.297).
The results of the analysis show that companies with longer years of experience in RE acknowledge the importance of implicit requirements, regard them as crucial to the functionality of a system, and recognise that they affect consumer satisfaction. Although there is a significant relationship, it is not a strong one, as it is below 0.5. With the correlation coefficients closer to zero, this indicates a weak relationship, which implies that there are other factors that play a major role in the knowledge, understanding and view of implicit requirements. Hence, $H_6$ is selected.
### 5 Discussion
Based on the outcome of the analysis of the survey results, we can identify four salient issues, which we now discuss. First, we observed that there are critical organisational factors, such as the number of years an organisation has been in business, the years of experience of an organisation in dealing with RE, and the size of the software development team, that have a positive correlation with the views and handling of implicit requirements within an organisation. From this, we can safely argue that the level of maturity of the software process in an organisation will affect the way implicit requirements are managed, although high maturity of the software process may not automatically translate to handling implicit requirements the right way, because of the existence of other factors. In contrast, the scope of operation of an organisation, whether local or global, is not a key determinant of how well an organisation handles implicit requirements. Second, there are critical human factors, such as the general professional experience of employees and their level of experience in RE, that determine the way implicit requirements are perceived and managed within an organisation. Therefore, we can speculate that organisations that have persons with significant professional experience in software development and RE in managerial positions, and also a significant number of such personnel in mid-level positions, are more likely to perform better in terms of handling implicit requirements than those where this is not the case.
The results of this survey also point to the fact that although the use of experience has so far played a significant role in handling implicit requirements, a significant number of practitioners believe that additional means that can complement the use of experience, such as tool support, are necessary. Interestingly, but contrariwise, there also exists a significant number of practitioners who believe that existing requirements management tools, if maximised, are sufficient to handle implicit requirements for now, and that there is no need for new tools. In addition, there is a consensus that implicit requirements are real, and that many deliberate situations caused by users that lead to the emergence of implicit requirements exist.
The findings from our survey, which we have presented above, reveal a number of issues and claims by respondents that need empirical verification by the requirements engineering community. For example, it will be interesting to ascertain the strength of specific RE tools to manage implicit requirements in terms of addressing specific concerns across the RE lifecycle such as discovery of hidden requirements, analysing implicitness, traceability, prioritization and change impact analysis of implicit requirements. Also, a comparative evaluation of the existing support tools for implicit requirements is necessary in order to validate the potential of these tools to solve existing challenges and ascertain gaps that still exist.
### 6 Validity threats
The results obtained in this empirical study need to be understood within the strengths and limitations of the selected research methodology. Hence, in this section we explain how this research addressed specific validity threats.
Conclusion Validity: this refers to whether we can draw the right conclusions about the relationship between the treatment and the results obtained from the survey. Some of the concerns addressed in this aspect of validity are:
Low statistical power: in a highly technical domain such as requirements management, having a large number of respondents is not so much of a strength as identifying persons that are truly knowledgeable on the issue of managing implicit requirements. The 56 respondents, located in 38 distinct organisations across 23 countries, are sufficient for a small-scale empirical study that seeks to give a first empirically based opinion on the handling of implicit requirements in small and medium-sized software organisations. The open call made to members of relevant online communities also allowed persons who are knowledgeable and interested in issues of IMR to participate.
Reliability of measure: the Spearman’s correlation coefficient that was used to investigate the relationship between the variables in the stated hypotheses (H1–H6) is a standard statistical measure that is suitable for the task it was used to perform. Also, in order to enhance the reliability of the measuring instrument, a pilot survey was conducted initially, which improved the quality of the questions.
Reliability of treatment: all respondents had the same kind of information. The questions were in English, which happens to be the main language for business in the respondents’ organisations despite their different cultural contexts.
Construct validity: this refers to the extent to which the operational measures that are studied truly represent the theoretical constructs on which those operational measures were based [40]. To achieve this, all respondents had the same instructions as guide for completing the questionnaire. The task was the same for all respondents who completed the online questionnaire. Hence, the results obtained from the survey depend on only one variable, which eliminates any mono-method bias effect.
Internal Validity: this refers to whether factors other than the treatment influenced the outcome of the survey. For the survey, all respondents were software practitioners who claimed to have ample experience in requirements engineering. The bulk of participants were recruited from professional online communities such as the LinkedIn Requirements Engineering Specialist Group (RESG), the Yahoo Requirements Engineering Group, SEWORLD and AISWorld. Generally, the respondents have significant experience in RE, with 38% having more than 10 years of experience and 72% above 5 years of experience in RE. Also, they were given a sufficient background introduction, which they had to read before the questions were presented to them. Considering the level of expertise in RE claimed by the respondents, we can conclude that issues such as differences in cultural contexts, gender and other social factors did not have a significant influence on the findings of this study.
External Validity: the key interest of this aspect of validity is whether we can generalise the outcome of the survey to a larger context. The respondents were mostly experienced software engineers, who have practical experience on issues that deal with implicit requirements, and are located in different parts of the world. A concern could be that the result might have been different if a larger pool of qualified respondents had been used for the survey. However, since we waited six months to obtain the 56 responses, it could not be ascertained whether the number would have been significantly higher had we waited longer, given that the call was made to everyone on the mailing lists of these online communities. Although we do not consider this a major threat to the reliability of the outcome of this survey, an interesting point for future study is to have a wider group of requirements engineers participate in the survey.
In summary, we do not see any serious threats to the validity of our conclusions from the survey. Also, the fact that no other empirical study so far has looked specifically at the issue of implicit requirements in small and medium-sized software organisations makes the outcome of this study potentially valuable to practitioners.
### 7 Conclusion
In this paper, we have reported findings from a survey of implicit requirements management practices in small and medium-sized organisations. As a contribution, this paper presents a pioneering effort aimed at providing an understanding of implicit requirements management practices in small and medium-sized organisations based on empirical investigation. The survey results revealed that organisational experience in terms of age in business, experience in RE, experience of personnel in RE, and software team size have a positive correlation with effective management of implicit requirements within an organisation. The study also revealed that although the use of experience has played a significant role so far, tool support is also desirable for better handling of implicit requirements. However, a significant number of practitioners believe that existing RE tools, if maximised, are equally sufficient for managing implicit requirements, and that there is no need for new tools. We can deduce from the study that there is a need to promote a general understanding of implicit requirements and to stimulate more significant interest in issues of implicit requirements, compared to explicit requirements, which have received the most attention in the literature.
For future work, this study identified a number of issues that could stimulate future empirical investigations. These include the need to evaluate the capability of existing RE management tools for managing implicit requirements, and the potential of the automated tools so far proposed in the literature to support the management of implicit requirements throughout the RE lifecycle.
Acknowledgement
The research was supported by the Covenant University Centre for Research, Innovation and Development (CUCRID).
A survey of implicit requirements management practices in small and medium-sized enterprises / O. Emebo et al. / Tehnički vjesnik
Authors’ addresses
Onyeka Emebo
Covenant University
PMB 1023 Ota, Nigeria
onye.emebo@covenantuniversity.edu.ng
Olawande Daramola, Associate Professor (Corresponding Author)
Covenant University
PMB 1023 Ota, Nigeria
olawande.daramola@covenantuniversity.edu.ng
Charles Ayo, Professor
Covenant University
PMB 1023 Ota, Nigeria
charles.ayo@covenantuniversity.edu.ng
Linearly Bounded Reformulations
of Unary Databases
Rada Chirkova and Michael R. Genesereth,
Stanford University, Stanford CA 94305, USA
Abstract. Database reformulation is the process of rewriting the data and rules of a deductive database
in a functionally equivalent manner. We focus on the problem of automatically reformulating a database
in a way that reduces query processing time while satisfying strong storage space constraints.
In this paper we consider one class of deductive databases — those where all stored relations are
unary. For this class of so-called unary databases, we show that the database reformulation problem is
decidable if all rules can be expressed in nonrecursive datalog with negation; moreover, we show that
for such databases there always exists an “optimal” reformulation. We also suggest how this solution
for unary databases might be extended to the general case, i.e., to that of reformulating databases with
stored relations of arbitrary arity.
1 Introduction
Abstraction and reformulation techniques have been used successfully in a number of domains to reduce
the complexity of the problems to solve. We present an application of abstraction and reformulation in the
database domain, to the problem of reducing query processing time. While this problem is formulated in
the database context, it is easy to generalize, since broad classes of problems can be viewed and solved as
database problems.
A database system undergoes a number of transformations during its lifetime. Database schema and/or
rule transformations are central to database design, data model translation, schema (de)composition, view
materialization, and multidatabase integration. Interestingly, nearly all these tasks can be regarded as aspects
of the same problem in a theoretical framework that we proceed to describe.
Consider an abstract database transformation problem. Suppose the input to the problem comprises the
schema and rules of a deductive database and a set of elementary queries which, together with some algebra,
form a query language on the database. Suppose the objective of database transformation is to build an
“optimal” structure of the database with respect to the requirements and constraints that are also provided
in the input.
Generally, the transformations of the database schema and rules need to be performed in such a way that
the resulting database satisfies three conditions. First, it should be possible to extract from the transformed
database, by means of the input query language, exactly the same information as from the original database.
Second, the result should satisfy the input requirements, such as minimizing query processing costs. Finally,
the result should satisfy the input constraints; one common constraint is a guarantee of a (low) upper bound
on the disk space for storing the transformed database. Notice that all three conditions must hold for all
instances of the input database.
We call this problem database reformulation and consider logic-based approaches to its solution. Database
reformulation is the process of rewriting the data and rules of a deductive database in a functionally equivalent
manner. By specifying various input requirements and constraints, the database reformulation problem
translates into any of the database schema/query transformation problems mentioned above.
We focus on database reformulations whose input requirement is to minimize the computational costs
of processing the given queries, under strong storage space constraints that guarantee no more than linear
increase in database size. In this formulation, the database reformulation framework is most suitable for
dealing with the problems of view materialization and multidatabase integration.
In this paper we give a definition and a formal specification of the database reformulation problem. We
then present the main contribution of this paper, a complete solution of the database reformulation problem
for one class of databases. In this class of so-called unary databases, all stored relations are unary, i.e., have one attribute each; in addition, all rules can be expressed without recursion or built-in predicates.
There are a number of important applications where unary databases occur naturally. Unary databases come to mind whenever there is a need to single out and process features of objects. One example is indexing in libraries; books and articles are routinely classified by subject, and it is common for one item to belong to more than one class. Possible classes can be represented as unary relations with relevant books represented by tuples in the relations. For example, an article on statistical profile estimation in database systems can belong to classes “physical design”, “languages”, and “systems” at the same time.
Unary databases are also useful for taxonomic search in e-commerce; there, some of the more frequent queries are unions and intersections of classes in several taxonomies. For example, one might want to find all products which satisfy at least one of the stipulated properties (union of classes), or those products each of which satisfies all of the stipulated properties (intersection of classes).
After describing our solution to the database reformulation problem for unary deductive databases, we suggest how this solution might be extended to the general case, i.e., to the problem of reformulating databases with stored relations of arbitrary arity.
Proofs of the results presented in the text can be found in the appendix.
2 Preliminaries and Terminology
Our representation of the domain includes a set of relations; the set of attribute names for a relation is called a relation schema. A relation is called unary if it has exactly one attribute.
A relation is referred to as stored if it is physically recorded, as a table (a set of tuples, each tuple having a value for each attribute of the relation), on some storage media; a collection of stored relations is called a (regular) database. A database schema, for a given database $D$, is a collection of relation schemas for all stored relations in $D$. See [28] for more details.
A nonrecursive datalog rule is an expression of the form
$$ p(\bar{X}) : - \; l_1(\bar{Y}), \ldots, l_n(\bar{Z}), $$
where $p$ is a relation name, $\bar{X}$, $\bar{Y}$, $\ldots$, $\bar{Z}$ are tuples of variables and constants, and each $l_i$ is a literal, i.e., an expression of the form $p_i$ or $\neg p_i$ (by $\neg$ we denote negation), where $p_i$ is a relation name. $p(\bar{X})$ is called the head of the rule, and its body is a conjunction of subgoals $l_1(\bar{Y}), \ldots, l_n(\bar{Z})$. A rule is called safe if each variable in the rule occurs in a non-negated subgoal in the rule’s body.
A query (view) is a set of rules (in $nr$-datalog, for our purposes) with one distinguished relation name in the head of some rule(s). A query relation is the distinguished relation of the query, computed from the query using bottom-up logic evaluation, formalized, for example, in Algorithm 3.6 in [28]; a view relation is defined analogously. A query (view) is materialized if the query (view) relation is precomputed and stored in the database.
Two queries (views) are called equivalent if their relations are the same in any database. Given a query $q$, a query $q'$ is called a rewriting of $q$ in terms of a set $\mathcal{V}$ of relations if $q$ and $q'$ are equivalent and $q'$ contains only literals of $\mathcal{V}$.
A deductive database (see, for example, [22]) is a (regular) database as defined above, together with a set of queries and views defined on (the stored relations of) the database. A deductive database is called unary if all its stored relations are unary. In this paper we consider unary deductive databases where all queries and views are defined in safe $nr$-datalog. Since, as shown in [19], any recursive program with safe negation and unary stored relations is nonrecursive, all our results also apply to this more general case.
3 Example of a Unary Database
Let us consider an abstract example that involves a unary database. Suppose that an application queries a database with three unary stored relations, $r$, $s$, and $t$; see Table 1 for a concrete example. Suppose there are three important queries in that application, defined as follows:
\( q_1(X) : - r(X), s(X), \neg t(X); \) \hspace{1cm} (2)
\( q_2(X) : - s(X), \neg t(X); \) \hspace{1cm} (3)
\( q_3(X) : - t(X), \neg r(X); \) \hspace{1cm} (4)
see Table 2 for the resulting relations.
**Table 1.** Stored relations \( r, s, \) and \( t. \)
<table>
<thead>
<tr>
<th>( r )</th>
<th>( s )</th>
<th>( t )</th>
</tr>
</thead>
<tbody>
<tr>
<td>( a )</td>
<td>( a )</td>
<td>( c )</td>
</tr>
<tr>
<td>( b )</td>
<td>( b )</td>
<td>( d )</td>
</tr>
<tr>
<td>( c )</td>
<td>( c )</td>
<td>( f )</td>
</tr>
<tr>
<td>( d )</td>
<td>( f )</td>
<td>( g )</td>
</tr>
</tbody>
</table>
Also suppose that in this application, all queries of interest can be expressed in terms of the three queries above. For example, one might pose to the database the following query \( q_4: \)
\( q_4(X, Y) : - r(X), s(X), \neg t(X), t(Y), \neg r(Y). \) \hspace{1cm} (5)
Notice that \( q_4 \) is simply a cross-product of queries \( q_1 \) and \( q_3, \) i.e., a set of combinations of each answer to query \( q_1 \) with each answer to \( q_3. \)
**Table 2.** Query relations \( q_1, q_2, \) and \( q_3. \)
<table>
<thead>
<tr>
<th>( q_1 )</th>
<th>( q_2 )</th>
<th>( q_3 )</th>
</tr>
</thead>
<tbody>
<tr>
<td>( a )</td>
<td>( a )</td>
<td>( f )</td>
</tr>
<tr>
<td>( b )</td>
<td>( b )</td>
<td>( g )</td>
</tr>
</tbody>
</table>
A straightforward solution to the database reformulation problem in this case would be to materialize queries \( q_1 \) through \( q_3. \) This solution would certainly reduce the query processing times for these queries, and consequently for all queries in the application. However, it would also materialize in the database duplicate copies of the same objects — those that belong to both \( r \) and \( s \) but not to \( t \) (objects \( a \) and \( b \) in our example), since answers to both \( q_1 \) and \( q_2 \) include such objects. If the number of such duplicate objects in the database is considerable, the resulting storage space overhead is a cause of concern. Our solution to the database reformulation problem for unary applications like this one guarantees good query execution time while avoiding the overhead suggested in the example.
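Under the set semantics used throughout the paper, the elementary queries of this example reduce to plain set operations on the stored relations. The following illustrative Python sketch (ours, not part of the paper) reproduces Tables 1 and 2:

```python
# Stored unary relations of Table 1, represented as Python sets.
r = {"a", "b", "c", "d"}
s = {"a", "b", "c", "f"}
t = {"c", "d", "f", "g"}

# Elementary queries: conjunction is intersection, negation is set difference.
q1 = (r & s) - t        # q1(X) :- r(X), s(X), not t(X)
q2 = s - t              # q2(X) :- s(X), not t(X)
q3 = t - r              # q3(X) :- t(X), not r(X)

# q4 is the cross-product of q1 and q3 (equation 5).
q4 = {(x, y) for x in q1 for y in q3}
```

The duplicate-storage concern raised above is visible here: materializing all three queries stores the objects a and b twice, once in q1 and once in q2.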
4 Database Reformulation
We study a class of database applications where all queries of interest can be expressed in terms of some predefined set of elementary queries; this elementary set can be viewed as an alphabet which defines a query language. We would like to make “good” decisions on which views to materialize, in order to minimize query processing costs for this elementary set of queries (and, consequently, for all expected queries) and to satisfy some (for example, storage space) constraints on the resulting database.
*Database reformulation* is the process of rewriting the data and rules of a deductive database in a functionally equivalent manner. Our cost model for query execution is the classical bottom-up logic evaluation model; see Algorithm 3.6 in [28].
We describe the input and the output of the database reformulation process. Consider a set \( \mathcal{P} \) of relation names. Let \( \mathcal{S} \) be a database schema that consists of relation schemas for some relation names in \( \mathcal{P}; \) \( \mathcal{S} \) is the set of schemas for all *stored* relations in the input. Let \( \mathcal{R}_\mathcal{S} \) be a set of definitions, in terms of \( \mathcal{S}, \) for some relations whose names are in \( \mathcal{P}; \) \( \mathcal{R}_\mathcal{S} \) is the set of *views* in the input. Let \( \mathcal{Q} \) be a set of names of all elementary *query* relations of interest, such that \( \mathcal{Q} \subseteq \mathcal{P} \) and that \( \mathcal{R}_\mathcal{S} \) contains definitions of all relations in \( \mathcal{Q}. \)
Now let $\mathcal{V}$ be a database schema which consists of schemas for some relation names in $\mathcal{P}$; $\mathcal{V}$ describes new stored relations which are materialized in the process of database reformulation. Finally, let $\mathcal{R}_\mathcal{V}$ be a set of views defined in terms of $\mathcal{V}$.
**Definition 1.** For a given triple $(\mathcal{S}, \mathcal{R}_\mathcal{S}, \mathcal{Q})$, a triple $(\mathcal{V}, \mathcal{R}_\mathcal{V}, \mathcal{Q})$ is a reformulation of $(\mathcal{S}, \mathcal{R}_\mathcal{S}, \mathcal{Q})$ if for each query relation in $\mathcal{Q}$ with a definition $q_s$ in $\mathcal{R}_\mathcal{S}$, $\mathcal{R}_\mathcal{V}$ contains a rewriting of $q_s$.
As has already been mentioned, we focus on the problem of database reformulation under strong storage space constraints. Other constraints may be included as well; all constraints relevant to the application in question are considered part of the reformulation input. Let us describe the storage space constraints we focus on in this paper. Suppose $D$ is an arbitrary database with the schema $\mathcal{S}$; let $D'$ be a database that consists of the tables for all and only those (materialized, starting from $D$) view relations in $\mathcal{V}$ that are used in defining the query relations in $\mathcal{Q}$. For a fixed database schema $\mathcal{S}$ and a fixed set of views that define relations in $\mathcal{V}$ in terms of $\mathcal{S}$, consider all possible databases $D$ and all corresponding databases $D'$, with sizes (in bytes) $|D|$ and $|D'|$ respectively.
**Definition 2.** A reformulation $(\mathcal{V}, \mathcal{R}_\mathcal{V}, \mathcal{Q})$ of an input $(\mathcal{S}, \mathcal{R}_\mathcal{S}, \mathcal{Q})$ satisfies the no-growth storage space constraint if for all pairs $(D, D')$, the storage space $|D'|$ taken up by $D'$ does not exceed $|D|$.
$$|D'| \leq |D|. \quad (6)$$
A reformulation $(\mathcal{V}, \mathcal{R}_\mathcal{V}, \mathcal{Q})$ of a given input $(\mathcal{S}, \mathcal{R}_\mathcal{S}, \mathcal{Q})$ is called a *candidate reformulation* if it satisfies the constraints specified in its input. A reformulation output is called *worthwhile* if, in that reformulation, at least one elementary query in $\mathcal{Q}$ is executed faster than in the input formulation, for all database instances. In this paper we focus on candidate worthwhile reformulations of unary databases under the no-growth storage space constraint.
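For any single database instance, checking the no-growth constraint of Definition 2 is a simple tuple count. The sketch below (helper names are ours) checks one instance of the running example of Section 3; note that the definition itself quantifies over all instances with schema $\mathcal{S}$, so a per-instance check like this is necessary but not sufficient:

```python
def tuple_count(relations):
    """Total number of stored tuples across a collection of unary relations."""
    return sum(len(rel) for rel in relations.values())

# Original database D and a naive reformulation D' that materializes
# the three query relations q1-q3 of the running example.
D = {"r": {"a", "b", "c", "d"}, "s": {"a", "b", "c", "f"}, "t": {"c", "d", "f", "g"}}
D_prime = {"q1": {"a", "b"}, "q2": {"a", "b"}, "q3": {"f", "g"}}

def satisfies_no_growth(d, d_prime):
    # |D'| <= |D| for this one instance only; Definition 2 requires it for all D.
    return tuple_count(d_prime) <= tuple_count(d)
```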
5 The Orthogonal Basis of a Unary Database Schema
Our ultimate objective in solving the database reformulation problem is to automate the reformulation process in as general a setting as possible; in other words, we would like to come up with some reformulation algorithm. We try to answer the question of whether the potentially infinite, for each input, search space of reformulations can be transformed in such a way that it would become finite but would still contain valuable reformulations.
One way of making the search space of reformulations more tractable is to restrict the number of view relations that are used to rewrite the input queries. Suppose we could show that, for unary databases, the set of view relations that can define any “good” reformulation, is finite, and that all and only these view relations can be defined in a particular format. Then the problem of finding “good” reformulations of arbitrary unary databases would be reduced to the clearly feasible problem of enumerating and combining all views defined in this particular format, thereby giving us a nice enumeration algorithm.
In this section we substantiate this hypothesis by showing that for an arbitrary unary input there exists a “good” reformulation with certain desirable properties and such that its materialized views are defined in a particular format.
Let us analyze the definition of query $q_1$ given in equation 2 in Section 3. The body of the definition is a conjunction of subgoals with the same variable; notice that each of the stored relations $r$, $s$, $t$ yields exactly one subgoal in the definition. Let us build a pattern based on this observation. For a unary database with $n$ stored relations $s_1$, $s_2$, ..., $s_n$, the pattern looks as follows:
$$l_1(X), l_2(X), \ldots, l_n(X); \quad (7)$$
here, $l_i(X)$ is either $s_i(X)$ or $\neg s_i(X)$.
In our example, the body of query $q_1$ is an instance of the pattern. We will show below that arbitrary unary queries, when defined on unary databases, can be rewritten as unions of such patterns. For instance, $q_3$ in our running example (equation 4 in Section 3) can be rewritten as a union of two patterns:
$$q_3(X) : - \; t(X), \lnot r(X), \lnot s(X) \;\cup\; t(X), \lnot r(X), s(X). \quad (8)$$
For an arbitrary unary database schema one can define a set of relations as (nearly) all possible instances of the pattern described in equation 7. The only exception is the instance where all subgoals are negated, since we only consider safe rules.
It is easy to show that a set \( B \) of relations defined in such a manner on a unary database schema \( S \) always exists and is unique, up to reorderings of subgoals in rules and to variable renamings. Notice that, if \( S \) has \( n \) elements, then there are \( 2^n - 1 \) relations in the set \( B \) for \( S \). Another property of the set \( B \) is that, for any instance \( D \) of a database with schema \( S \), each object in the universe of discourse of \( D \) belongs to exactly one relation in \( B \); for this reason, we call the set \( B \) the orthogonal basis of the unary database schema \( S \).
**Definition 3.** The orthogonal basis of a unary database schema \( S = \{ s_1, s_2, \ldots, s_n \} \) is the set \( B \) of (nearly) all possible relations defined as
\[
b_i(X) : - \; l_1(X), \ l_2(X), \ldots, \ l_n(X), \tag{9}
\]
where each \( l_j(X) \) is either \( s_j(X) \) or \( \neg s_j(X) \); the only such combination which is not in \( B \) is that where all subgoals are negated.
Notice that this definition effectively provides an algorithm to construct the orthogonal basis of a unary database schema.
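As an illustration, Definition 3 translates directly into code. The following Python sketch (ours, with an arbitrary encoding of sign patterns as tuples of booleans) builds the \( 2^n - 1 \) basis relations for the running example and checks the partition property that motivates the name "orthogonal":

```python
from itertools import product

def orthogonal_basis(stored):
    """Build the orthogonal basis of a unary schema: one relation per sign
    pattern over the stored relations, excluding the all-negated pattern."""
    names = sorted(stored)
    uod = set().union(*stored.values())   # universe of discourse
    basis = {}
    for signs in product((True, False), repeat=len(names)):
        if not any(signs):
            continue                      # all-negated pattern is unsafe, excluded
        rel = set(uod)
        for name, positive in zip(names, signs):
            rel &= stored[name] if positive else uod - stored[name]
        basis[signs] = rel
    return basis

stored = {"r": {"a", "b", "c", "d"}, "s": {"a", "b", "c", "f"}, "t": {"c", "d", "f", "g"}}
basis = orthogonal_basis(stored)

assert len(basis) == 2 ** 3 - 1          # seven basis relations
# Partition property: every object lies in exactly one basis relation.
assert sum(len(b) for b in basis.values()) == len(set().union(*stored.values()))
```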
We observe the following property of unary relations.
**Theorem 1.** Any unary relation that can be defined in \( nr\text{-datalog}^- \) on a unary schema \( S \) can be rewritten as a union of relations in the orthogonal basis \( B \) of the schema \( S \).
An important result is an immediate corollary of Theorem 1. Let \( r \) be a rule in \( nr\text{-datalog}^- \) which defines an arbitrary (not necessarily unary) query relation on a unary database schema \( S \). Then:
**Corollary 1.** There exists a unique, up to reordering of subgoals and variable renamings, rewriting of \( r \) in terms of the orthogonal basis \( B \) of \( S \).
Let us build the orthogonal basis and rewrite all the queries in our running example from Section 3.
**Example 1.** The unary database schema is \( S = \{ r, s, t \} \). The three query relations \( q_1 \) through \( q_3 \) constitute the set \( Q \); their definitions in equations 2 - 4 constitute the set \( R_S \).
The orthogonal basis \( B \) of the schema \( S \) consists of seven \((2^3 - 1)\) relations with the following definitions:
\[
b_1(X) : - \; \neg r(X), \ s(X), \ t(X); \tag{10}
\]
\[
b_2(X) : - \; \neg r(X), \ s(X), \ \neg t(X); \tag{11}
\]
\[
\ldots
\]
\[
b_7(X) : - \; r(X), \ s(X), \ t(X); \tag{12}
\]
and queries \( q_1 \) through \( q_3 \) can be rewritten in terms of the elements of \( B \) as:
\[
q_1(X) : - \; b_6(X); \tag{13}
\]
\[
q_2(X) : - \; b_2(X) \cup b_6(X); \tag{14}
\]
\[
q_3(X) : - \; b_1(X) \cup b_3(X). \tag{15}
\]
Now the query \( q_4 \), which is a cross-product of queries \( q_1 \) and \( q_3 \), can be rewritten as the following disjunction of two rules:
\[
q_4(X, Y) : - \; b_6(X), \ b_1(Y); \tag{16}
\]
\[
q_4(X, Y) : - \; b_6(X), \ b_3(Y). \tag{17}
\]
\( \square \)
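The rewritings of Example 1 can be computed mechanically: a unary query equals the union of exactly those basis relations whose sign pattern satisfies the query body. A sketch, under the assumption (ours) that queries are supplied as Boolean predicates over membership vectors:

```python
from itertools import product

def basis_union_rewriting(query_pred, names):
    """Return the sign patterns of the orthogonal-basis relations whose
    union equals the given unary query (cf. Theorem 1)."""
    patterns = []
    for signs in product((True, False), repeat=len(names)):
        if not any(signs):
            continue                      # all-negated pattern excluded
        membership = dict(zip(names, signs))
        if query_pred(membership):
            patterns.append(signs)
    return patterns

names = ["r", "s", "t"]
# q3(X) :- t(X), not r(X) rewrites as a union of two basis relations,
# matching the two-pattern rewriting of equation 8.
q3_patterns = basis_union_rewriting(lambda m: m["t"] and not m["r"], names)
# q1(X) :- r(X), s(X), not t(X) is a single basis relation.
q1_patterns = basis_union_rewriting(lambda m: m["r"] and m["s"] and not m["t"], names)
```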
Let \( B \) be the orthogonal basis of a unary database schema \( \mathcal{S} \), and let \( \mathcal{R}_B \) be the set of rewritings of all rules in \( \mathcal{R}_S \) in terms of the elements of the set \( B \).
**Definition 4.** The triple \( (B, \mathcal{R}_B, Q) \) is called the orthogonal basis reformulation of the triple \( (\mathcal{S}, \mathcal{R}_S, Q) \).
Notice that Definition 3 and the proofs of Theorem 1 and of Corollary 1 effectively provide an algorithm for constructing the orthogonal basis reformulation of an arbitrary unary input.
It is easy to show that for any unary database schema, its orthogonal basis reformulation exists and is unique. To formulate another property of the orthogonal basis reformulation, we will need this definition.
**Definition 5.** A database satisfies the minimal-space constraint if each object in the universe of discourse (UOD) of the database is only stored once.
In other words, the minimal-space constraint requires a database to “fit into” the minimal space needed to store all the information about the database. Notice that if a database satisfies the minimal-space constraint then it also satisfies the no-growth storage space constraint.
**Theorem 2 (Properties of the Orthogonal Basis).** For the orthogonal basis reformulation \( (B, \mathcal{R}_B, Q) \) of a triple \( (\mathcal{S}, \mathcal{R}_S, Q) \), where \( \mathcal{S} \) is unary, the following properties hold:
1. The only operations in all rules in \( \mathcal{R}_B \) are union and cross-product: there are no intersections or negations.
2. \( (B, \mathcal{R}_B, Q) \) satisfies the minimal-space constraint.
3. Maintenance costs in the reformulated database, provided certain simple index structures are in place, are linear in the size of the schema \( \mathcal{S} \), i.e., in the number of the original stored relations.
Notice the low cost of updates in the reformulated database.
Not surprisingly, these nice properties come at a price; since the number of relations in the orthogonal basis is exponential in the size of the original database schema \( \mathcal{S} \), according to our cost model the time to answer the queries in \( Q \) will probably increase in the orthogonal basis reformulation, relative to that in database instances with the schema \( \mathcal{S} \). However, the increase is not too high because, even though the number of stored relations in the reformulated database is exponential in the number of the original stored relations, the size of the actual data (stored tuples) does not change after the reformulation. Thus, queries and updates on the reformulated database can be made faster by using certain simple index structures.
6 Enumerating Candidate Relations
From the previous section we know how to obtain one interesting reformulation of the given input. Is it possible, in the unary case, to generate all interesting reformulations, i.e., those that have the same nice properties as the orthogonal basis reformulation? It turns out that the answer is yes: in this section, we show how to finitely enumerate all worthwhile candidate (see definitions in the last paragraph of Section 4) reformulations of an arbitrary unary reformulation input.
Consider a unary database schema \( \mathcal{S} \). Let \( r \) be an arbitrary relation defined in \( \text{nr-datalog}^- \) on \( \mathcal{S} \), and let \( D \) be an arbitrary database instance with schema \( \mathcal{S} \). Consider the space \( |D| \) required to store \( D \) and the space \( |r| \) required to store \( r \) when it is materialized; both \( |D| \) and \( |r| \) are in bytes.
**Theorem 3.** In all databases \( D \) with schema \( \mathcal{S} \), \( |r| \) does not exceed \( |D| \):
\[
\forall D : \ |r| \leq |D|,
\]
if and only if \( r \) is a unary relation.
This result has one important consequence: it means that if we want to obtain candidate reformulations, i.e., those that satisfy a strong storage space constraint (see Definition 2 in Section 4), the only relations we can choose as stored (materialized) in reformulated databases are unary relations.
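Combined with Theorem 1, this restricts the candidate materialized views to unions of orthogonal-basis relations, which can be enumerated finitely. An illustrative sketch (names and encoding are ours), representing each candidate view by the set of basis patterns it unions:

```python
from itertools import combinations

def enumerate_candidate_views(basis_patterns):
    """Enumerate every non-empty union of orthogonal-basis elements,
    each view represented by the frozenset of patterns it unions."""
    views = []
    for k in range(1, len(basis_patterns) + 1):
        for subset in combinations(basis_patterns, k):
            views.append(frozenset(subset))
    return views

# With the seven basis relations of a three-relation unary schema,
# there are 2^7 - 1 non-empty unions, hence 127 candidate unary views.
patterns = list(range(7))      # stand-ins for the seven sign patterns
views = enumerate_candidate_views(patterns)
```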
**Algorithm 1 (Unary Enumeration).** Using Theorem 1, we have designed a unary enumeration algorithm whose input is a unary database schema $\mathcal{S}$: first, build the orthogonal basis $\mathcal{B}$ of $\mathcal{S}$, then output all unions of the elements of $\mathcal{B}$. By Theorem 1, this algorithm generates the definitions of all and only those unary relations that can be defined in terms of the schema $\mathcal{S}$.

**Theorem 4.** For a reformulation input $(\mathcal{S}, \mathcal{R}_\mathcal{S}, \mathcal{Q})$, where $\mathcal{S}$ is unary, the unary enumeration algorithm generates all views that could possibly be materialized in a candidate reformulation of the input.

**Algorithm 2 (Enumeration of Candidate Reformulations).** Let $\mathcal{W}$ be the set of relations output by Algorithm 1 on $\mathcal{S}$. Output all triples $(\mathcal{V}, \mathcal{R}_\mathcal{V}, \mathcal{Q})$ where $\mathcal{V}$ is a subset of $\mathcal{W}$ and $\mathcal{R}_\mathcal{V}$ is a set of rewritings of the rules in $\mathcal{R}_\mathcal{S}$ in terms of $\mathcal{V}$, provided such rewritings exist for all relations defined in $\mathcal{R}_\mathcal{S}$.

**Theorem 5.** For an arbitrary reformulation input $(\mathcal{S}, \mathcal{R}_\mathcal{S}, \mathcal{Q})$, where $\mathcal{S}$ is unary, Algorithm 2 generates all possible candidate reformulations.

7 The Minimal Non-Forking Reformulation

In the previous section we described an algorithm, Algorithm 2, that generates all candidate reformulations of a given unary input. The problem with the algorithm is that it may generate many non-candidate reformulations as well, and in general the search space for finding candidate reformulations is too large. Fortunately, it turns out that one does not even need to generate and compare all potentially good reformulations. Instead of applying the storage space constraint to each output of Algorithm 2, one can construct directly a single reformulation that answers the queries at least as fast as any candidate reformulation, for all database instances with schema $\mathcal{S}$.
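A minimal sketch of the unary enumeration (encoding ours): since every materializable relation is a union of basis relations, Algorithm 1 reduces to listing the nonempty subsets of the basis.

```python
# Sketch of the unary enumeration step: every relation definable on a
# unary schema is a union of orthogonal-basis relations, so it suffices
# to output all nonempty subsets of the basis. Names are ours.
from itertools import combinations

def unary_enumeration(basis_names):
    """Yield every union of basis relations, as a frozenset of names."""
    for k in range(1, len(basis_names) + 1):
        for subset in combinations(sorted(basis_names), k):
            yield frozenset(subset)

views = list(unary_enumeration({"b1", "b2", "b3"}))
assert len(views) == 2 ** 3 - 1   # all nonempty unions
```

Note the exponential blow-up in the number of views is exactly why the next section narrows the search down to a single distinguished reformulation.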
1. The set $\mathcal{U}$ of the graph consists of three vertices, one for the unary subquery of each of the queries $q_1$ through $q_3$.
2. The set $\mathcal{B}$ consists of one vertex for each relation in the orthogonal basis of the schema $\mathcal{S}$.
3. The set $E$ contains the edges $(q_1, b_6)$, $(q_2, b_6)$, $(q_2, b_2)$, $(q_3, b_1)$, and $(q_3, b_3)$.
The resulting graph is shown in Figure 1; here we see a depiction of the three unary subqueries of queries $q_1$ through $q_3$, redefined as unions of basis relations; for example, the only unary subquery of $q_2$ is a union of two basis relations $b_2$ and $b_6$, and so on. □
Reformulation graphs, built as illustrated in Example 2, suggest a method for building “good” reformulations of unary databases: the idea is to materialize separately each basis relation that is used to define more than one unary subquery, and to materialize all maximal unions of the remaining basis relations whose elements are used to define the same single unary subquery. For instance, in Example 2 we would materialize three relations: $b_6$, $b_2$, and the union of $b_1$ and $b_3$. Materializing such relations would optimize query processing costs by minimizing the time required to compute the unary subqueries, under the constraint that none of the objects in the UOD of the database is stored twice. This idea is embodied in Algorithm 3, which takes as input a triple $(\mathcal{S}, \mathcal{R}_S, \mathcal{Q})$, where $\mathcal{S}$ is unary, and outputs a reformulation $(\mathcal{M}, \mathcal{R}_M, \mathcal{Q})$ of $(\mathcal{S}, \mathcal{R}_S, \mathcal{Q})$.
**Algorithm 3 (Minimal Non-Forking Reformulation).**
1. Construct the bipartite graph $\mathcal{G}$ of $(\mathcal{S}, \mathcal{R}_S, \mathcal{Q})$; $\mathcal{G} = (\mathcal{U}, \mathcal{B}, E)$.
2. Classification of the vertices in $\mathcal{B}$: for each vertex $b \in \mathcal{B}$, place $b$ into the set $N$ (nonforking) if exactly one edge in $\mathcal{G}$ is incident on $b$, and place $b$ into the set $F$ (forking) if more than one edge in $\mathcal{G}$ is incident on $b$.
3. Transform $\mathcal{G}$ into $\mathcal{G}'$ by removing from $\mathcal{B}$ all vertices which are neither in $N$ nor in $F$, i.e., those that are not incident on any edge in $\mathcal{G}$.
4. View materialization I: materialize separately each relation $b$ in $F$.
5. View materialization II: transform the graph $\mathcal{G}'$ into $\mathcal{G}''$ by removing all vertices in $F$ and all edges incident on these vertices, then materialize all unions of relations $b$ such that the corresponding vertices in $\mathcal{B}$ belong to a connected subgraph of $\mathcal{G}''$.
6. Construct a set of rules $\mathcal{R}_M$ by rewriting all queries in $\mathcal{Q}$ in terms of the relations $\mathcal{M}$ materialized in steps 4 and 5.
In Example 2, $N = \{ b_1, b_2, b_3 \}$, $F = \{ b_6 \}$, the vertices discarded in step 3 are $b_4$, $b_5$, $b_7$; view materialization I materializes $b_6$, and view materialization II materializes relations $b_2$ and $b_1 \cup b_3$. Notice that since the stored relations in $(\mathcal{M}, \mathcal{R}_M, \mathcal{Q})$ are parts of unary subqueries of relations in $\mathcal{Q}$, step 6 of the algorithm, i.e., rewriting the query relations in terms of $\mathcal{M}$, is straightforward.
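The graph manipulation in Algorithm 3 can be sketched as follows (representation ours; the edge data matches the worked example above, where $b_6$ forks and the algorithm materializes $b_6$, $b_2$, and $b_1 \cup b_3$):

```python
# Sketch of Algorithm 3: classify basis vertices by how many unary
# subqueries use them, store forking relations alone, and store maximal
# unions of the non-forking relations grouped by their single subquery.
from collections import defaultdict

def minimal_non_forking(edges):
    """edges: iterable of (subquery, basis_relation) pairs."""
    incident = defaultdict(set)            # basis vertex -> subqueries
    for q, b in edges:
        incident[b].add(q)
    # Step 2: forking vertices have more than one incident edge.
    forking = {b for b, qs in incident.items() if len(qs) > 1}
    # Step 4 (materialization I): store each forking relation separately.
    materialized = [frozenset({b}) for b in sorted(forking)]
    # Step 5 (materialization II): group the remaining basis relations
    # by the single subquery they serve; each group is one stored union.
    groups = defaultdict(set)
    for b, qs in incident.items():
        if b not in forking:
            groups[next(iter(qs))].add(b)
    materialized += [frozenset(bs) for bs in groups.values()]
    return materialized

m = minimal_non_forking([("q1", "b6"), ("q2", "b6"), ("q2", "b2"),
                         ("q3", "b1"), ("q3", "b3")])
assert frozenset({"b6"}) in m          # forking relation, stored alone
assert frozenset({"b1", "b3"}) in m    # maximal non-forking union
assert frozenset({"b2"}) in m
```

Vertices with no incident edges (step 3) never appear in the edge list, so they are discarded implicitly.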
**Definition 6.** The output $(\mathcal{M}, \mathcal{R}_M, \mathcal{Q})$ of Algorithm 3 is called a minimal non-forking reformulation of $(\mathcal{S}, \mathcal{R}_S, \mathcal{Q})$.
The name non-forking comes from the method of building the materialized relations: in the bipartite graph $\mathcal{G}$ for our running example, in Figure 1 we can see a fork (more than one edge) at the basis relation $b_6$, which means that $b_6$ is used in the definition of more than one unary subquery and, for this reason, needs to be materialized as a separate relation.
It is easy to show that for any unary reformulation input, the minimal non-forking reformulation exists and is unique; moreover, by construction it is always a candidate reformulation of the input.
Now let us recall that the objective of database reformulation is to minimize query processing costs by materializing views. The most important result of this paper is that any input query is answered in the minimal non-forking reformulation at least as fast as in any candidate reformulation:
**Theorem 6.** In the minimal non-forking reformulation \((\mathcal{M}, \mathcal{R}_\mathcal{M}, \mathcal{Q})\) of a reformulation input \((\mathcal{S}, \mathcal{R}_\mathcal{S}, \mathcal{Q})\) where \(\mathcal{S}\) is unary, any query is answered at least as fast (for all database instances) as in any candidate reformulation of \((\mathcal{S}, \mathcal{R}_\mathcal{S}, \mathcal{Q})\).
Notice that, depending on whether the input database itself satisfies the minimal-space constraint, the minimal non-forking reformulation may or may not process the queries faster than the input database. In any case, Theorem 6 reduces the search space of reformulations to just two formulations: the input formulation \((\mathcal{S}, \mathcal{R}_\mathcal{S}, \mathcal{Q})\) and the minimal non-forking formulation \((\mathcal{M}, \mathcal{R}_\mathcal{M}, \mathcal{Q})\).
8 Going Beyond the Unary Case
Now that we have the complete solution to the unary database reformulation problem, we would like to extend the obtained results to the general case of reformulating databases with stored relations of arbitrary arity. We don’t have a solution yet, but the results we have obtained for the unary case give us insight into the directions to move in the general \((n\text{-ary})\) case. The example below shows one possible scenario.
**Example 3.** Suppose we have a database with five binary stored relations \(s_1, s_2, s_3, s_4,\) and \(s_5\). Suppose we have only three elementary queries of interest, \(p, q,\) and \(r\), with the following definitions:
\[
p(X, Y) \;\text{:-}\; s_1(X, Z),\, s_2(Y, Z),\, \neg s_3(X, Y),\, s_4(X, W); \tag{19}
\]
\[
q(X, T) \;\text{:-}\; s_1(X, Z),\, s_2(Y, Z),\, s_3(X, Y),\, s_5(X, T); \tag{20}
\]
\[
r(X, W) \;\text{:-}\; s_1(X, Z),\, s_2(Y, Z),\, s_4(X, W). \tag{21}
\]
We could notice a common subexpression \(s_1(X, Z), s_2(Y, Z)\) in these three definitions, and could materialize a new relation \(t\) defined as:
\[
t(X, Y) \;\text{:-}\; s_1(X, Z),\, s_2(Y, Z); \tag{22}
\]
this materialization might be done in traditional query optimization.
However, we can do better than that. Consider relations
\[
b_1(X, Y) \;\text{:-}\; s_1(X, Z),\, s_2(Y, Z),\, \neg s_3(X, Y); \tag{23}
\]
\[
b_2(X, Y) \;\text{:-}\; s_1(X, Z),\, s_2(Y, Z),\, s_3(X, Y); \tag{24}
\]
they are reminiscent of the orthogonal basis relations in the unary case.
Notice that the union of \(b_1\) and \(b_2\) gives us exactly the relation \(t\). Now, if we dematerialize \(s_1, s_2, s_3\) and materialize \(b_1\) and \(b_2\), we can rewrite our queries as
\[
p(X, Y) \;\text{:-}\; b_1(X, Y),\, s_4(X, W); \tag{25}
\]
\[
q(X, T) \;\text{:-}\; b_2(X, Y),\, s_5(X, T); \tag{26}
\]
\[
r(X, W) \;\text{:-}\; b_1(X, Y),\, s_4(X, W); \tag{27}
\]
\[
r(X, W) \;\text{:-}\; b_2(X, Y),\, s_4(X, W). \tag{28}
\]
\(\square\)
The resulting database still consists of binary relations only, so the required storage space cannot increase dramatically (assuming the absence of any functional dependencies in the original stored relations), but now the query definitions look much simpler and can be computed faster.
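Example 3 can be replayed on a toy instance (the data values are ours) to confirm that \(b_1 \cup b_2\) recovers \(t\) and that the rewritten rules for \(r\) return the same answers as the original one:

```python
# Concrete check of Example 3: b1 and b2 partition t on the s3 test,
# and rewriting r over b1 and b2 preserves its answers.
s1 = {(1, 10), (2, 20)}
s2 = {(5, 10), (6, 20)}
s3 = {(1, 5)}
s4 = {(1, 7), (2, 8)}

# t(X, Y) :- s1(X, Z), s2(Y, Z)
t  = {(x, y) for (x, z) in s1 for (y, z2) in s2 if z == z2}
b2 = {(x, y) for (x, y) in t if (x, y) in s3}   # s3 holds
b1 = t - b2                                     # s3 negated
assert b1 | b2 == t

# r(X, W) via t versus via b1, b2:
r_orig = {(x, w) for (x, y) in t for (x2, w) in s4 if x == x2}
r_new  = ({(x, w) for (x, y) in b1 for (x2, w) in s4 if x == x2} |
          {(x, w) for (x, y) in b2 for (x2, w) in s4 if x == x2})
assert r_new == r_orig
```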
9 Related Work
Database schema evolution is an integral part of database design, data model translation, schema (de)composition, and multidatabase integration; fundamental to these problems is the notion of equivalence between database schemata.
Database schema equivalence was first studied in [4, 7, 24]. Later, relative information capacity was introduced in [16] as a fundamental theoretical concept which encompasses schema equivalence and dominance. Tutorial [15] surveys a number of frameworks, including relative information capacity, for dealing with the issue of semantic heterogeneity arising in database integration.
In practical database systems, database design frequently uses normalization, first introduced in [8] and described in detail in [28]. [6, 17] survey methods and issues in multidatabase integration.
Query transformation is another aspect of database transformation tasks. Query rewriting is important for query optimization (see [5, 27]), especially in deductive databases [22] where queries can be complex and the amount of data accessed can be overwhelming. [23] is a survey on implementation techniques and implemented projects in deductive databases.
There is an extensive body of work on theoretical aspects of query rewriting. The paper [1] discusses the complexity of answering queries using materialized views and contains references to major results in the areas of query containment and view materialization. [13, 14, 18, 25] describe various approaches to view materialization. [3, 9, 10, 21] treat the problem of using available materialized views for query evaluation.
Transformations of database schemas and queries can be considered together as reformulations of logical theories. [26] provides a theoretical foundation for theory reformulations, and [12, 20] contain work on general transformations of logical theories.
Descriptions of basic methods used in this paper can be found, e.g., in [11].
10 Conclusions and Future Work
We have defined and formally specified database reformulation as the process of rewriting the data and rules of a deductive database in a functionally equivalent manner. We have focused on the problem of automatically reformulating a database in a way that reduces the processing time for a prespecified set of queries while satisfying strong storage space constraints.
In this paper, we have described a complete solution of the database reformulation problem for one class of deductive databases, those where all stored relations are unary and all queries and views are expressed in nonrecursive datalog with negation. We have shown that the reformulation problem for these unary databases is decidable. Furthermore, we have shown that for any such unary database, there is a special reformulation which satisfies strong storage space constraints and where query processing costs for all input queries are as low or lower than in any reformulation that satisfies the same constraints. We have described how to build such a reformulation.
We have also suggested a possible extension of our solution for unary databases to the general case of deductive databases with stored relations of arbitrary arity, under strong storage space constraints.
This paper describes just the first step in the formidable task of taming database reformulation. Our long-term research objective is to explore how database reformulation can be automated for databases of arbitrary arity, with rules expressed in successively more complex standard query languages, i.e., various extensions of datalog. (We have already solved the problem for databases whose rules can be expressed as conjunctive queries.) We also plan to study reformulation of databases with various forms of integrity constraints.
Acknowledgments
The authors would like to thank the anonymous reviewers for their valuable comments.
References
A Theorem Proofs and Additional Examples
A.1 Proofs for Section 5
We start this section with a simple observation which we will be using in the proofs below.
Observation A.1 Any query in \( \textit{nr-datalog}^- \) on a database schema \( \mathcal{S} \) has an (equivalent) safe rewriting where the set of relation schemas for all the subgoals is a subset of \( \mathcal{S} \).
We will call the rewriting of a query \( q \) where all predicates in rule bodies correspond to stored relations in \( S \), the \textit{schema rewriting of} \( q \).
Proof (Theorem 1). Let \( S = \{ s_1, s_2, \ldots, s_n \} \). Consider a fixed pair \( (S, q) \), where \( q \) is a unary query defined on \( S \); let \( \mathcal{B} \) be the orthogonal basis of \( S \).
It is easy to show that the schema rewriting \( \tilde{q} \) (see Observation A.1) of \( q \) on \( S \) is a set of rules where the body of each rule is a unary subquery.
Let us show that the body of each rule in \( \tilde{q} \) can be converted into a union of relations in the orthogonal basis \( \mathcal{B} \) of \( S \). Consider an arbitrary rule \( r \) in \( \tilde{q} \); let the only variable in \( r \) be \( X \). The body of \( r \) is a unary subquery; let us call it \( C(X) \).
By definition of the schema rewriting \( \tilde{q} \), each subgoal in \( r \) corresponds to a relation name in \( S \), and thus \( C(X) \) consists of literals which are (possibly negated) relation names in \( S \); notice that because all rules are safe, at least one conjunct in \( C(X) \) is not negated. We can assume without loss of generality that each relation name in \( S \) occurs in \( C(X) \) no more than once. Then \( C(X) \) looks as follows:
\[
l_{i_1}(X), l_{i_2}(X), \ldots, l_{i_k}(X);
\]
here, \( l_j(X) \) is either \( s_j(X) \) or \( \neg s_j(X) \), where \( j \) is between 1 and \( n \); since each relation name in \( S \) occurs in \( C(X) \) at most once, the total number \( m \) of conjuncts in \( C(X) \) does not exceed the size \( n \) of \( S \); \( m \leq n \).
Now let us show, by induction on the difference \( k \) between \( n \) and \( m \), that \( C(X) \) has an equivalent rewriting as a union of relations in the orthogonal basis \( \mathcal{B} \) of \( S \).
1. Basis: \( k = n - m = 0 \). Here each relation \( s_j \in S \) is represented in \( C(X) \) exactly once, and at least one of the subgoals of \( C(X) \) is not negated. Thus, \( C(X) \) is the body of the definition of one of the orthogonal basis relations \( b_i \in \mathcal{B} \), and we can rewrite \( C(X) \) as \( b_i \).
2. Induction: \( k = n - m > 0 \). Consider \( C(X) \) with \( m \) literals. Since \( m < n \), there is at least one relation \( s_i \) in \( S \) which is not represented in \( C(X) \). Then \( C(X) \) can obviously be rewritten as a disjunction:
\[
C(X) \equiv (C(X), s_i(X)) \cup (C(X), \neg s_i(X)).
\]
Now each disjunct in the RHS of the equation has \( m + 1 \) literals and thus, by the inductive hypothesis, can be represented as a union of relations in \( \mathcal{B} \).
3. By repeatedly rewriting \( C(X) \) as an increasingly long union of components, as shown in 1 and 2 above, we obtain a disjunction of relations in \( \mathcal{B} \) which is an equivalent rewriting of \( C(X) \). The process terminates when the number of conjuncts in each disjunct reaches \( n \).
The case when \( X \) is not a variable but a constant is treated analogously to the case with variables.
Now we replace each such \( C(X) \), for each variable or constant, in each rule in \( \tilde{q} \) by its rewriting as a union of orthogonal basis relations in \( \mathcal{B} \). The resulting set of rules \( q_B \) is equivalent to \( \tilde{q} \). Finally, by transitivity of equivalence via \( \tilde{q} \), we can conclude that \( q_B \) is a rewriting of \( q \). \( \square \)
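The inductive rewriting at the heart of this proof can be sketched directly (encoding ours): a conjunction mentioning only some of the relation names in \( \mathcal{S} \) expands into the union of all basis relations whose sign vectors agree with it on the mentioned names.

```python
# Sketch of the inductive step in the proof of Theorem 1: split C(X)
# on each relation name missing from it, yielding the set of full sign
# vectors (orthogonal-basis relations) whose union is equivalent to C(X).
from itertools import product

def expand_to_basis(conjunct, schema):
    """conjunct: dict name -> bool (True = positive literal).
    Returns the set of full sign vectors covering the conjunction."""
    free = [n for n in schema if n not in conjunct]
    result = set()
    for signs in product([True, False], repeat=len(free)):
        full = dict(conjunct, **dict(zip(free, signs)))
        vector = tuple(full[n] for n in schema)
        if any(vector):               # safety: skip the all-negative vector
            result.add(vector)
    return result

schema = ["s1", "s2", "s3"]
# C(X) = s1(X), ¬s2(X) expands over the missing relation s3:
assert expand_to_basis({"s1": True, "s2": False}, schema) == {
    (True, False, True), (True, False, False)}
```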
Proof (Corollary 1). Let \( S = \{ s_1, s_2, \ldots, s_n \} \). Consider a fixed pair \( (S, q) \), where \( q \) is an arbitrary query defined in \textit{nr-datalog} on \( S \); let \( \mathcal{B} \) be the orthogonal basis of \( S \).
(1) Existence of a rewriting: since any rule in \( q \) is a cross-product of unary subqueries, any such rule can be (equivalently) rewritten completely as a cross-product of unions of relations in the orthogonal basis of the schema \( S \); see Theorem 1. To turn the resulting query into the \textit{nr-datalog} format, one may need to convert cross-products of unions, in bodies of rules, into a set of conjunctions, using a standard procedure.
(2) **Uniqueness of the rewriting.** Suppose there are two rewritings of \( q \) in terms of the set \( \mathcal{B} \): \( q_B^{(1)} \) and \( q_B^{(2)} \). It is easy to show that any rule in these rewritings must be in the following format:
\[
r_i^{(j)}(X_1, X_2, \ldots, X_m) \;\text{:-}\; b_{k_1}(X_1), b_{k_2}(X_2), \ldots, b_{k_m}(X_m);
\]
where \( j \) is either 1 or 2, all \( m \) variable names in the head of the rule are different, and each \( b_{k_l} \) in the rule’s body is in \( \mathcal{B} \). Notice that since all variable names are different, there are no intersections of subgoals in the bodies of the rules; also, since all rules are safe, there can be no negated subgoals in the rules.
Now, since \( q_B^{(1)} \) and \( q_B^{(2)} \) are equivalent, by the containment mapping theorem for positive datalog with disjunctions, the relation for each rule in \( q_B^{(1)} \) is contained in the relation for some single rule in \( q_B^{(2)} \), and vice versa. Consider an arbitrary rule \( r^{(1)} \) in \( q_B^{(1)} \), and consider the rule \( r^{(2)} \) in \( q_B^{(2)} \) such that \( r^{(1)} \) is contained in \( r^{(2)} \). It is not possible that the containment is proper in any database instance with schema \( B \), since the sets of objects in the tables for basis relations are pairwise disjoint. Thus the definitions of \( r^{(1)} \) and \( r^{(2)} \) are the same, up to variable renamings.
From this observation it is clear that there is a one-to-one correspondence between the rules in \( q_B^{(1)} \) and \( q_B^{(2)} \). Thus, the rewriting of \( q \) in terms of the orthogonal basis \( \mathcal{B} \) of \( \mathcal{S} \) is unique up to reorderings of subgoals. □
**Proof (Theorem 2).**
1. Follows from the proof of Corollary 1.
2. Follows from the property that for any database instance \( D \) with schema \( \mathcal{S} \), each object in the universe of discourse (UOD) of \( D \) belongs to exactly one relation in \( \mathcal{B} \).
3. We consider three elementary types of database updates: (A) insertion, (B) deletion, and (C) proper update which we model as a deletion followed by an insertion. Let us consider a fixed database instance \( D \) with schema \( \mathcal{S} \); let \( D' \) be the database instance with the schema \( \mathcal{B} \), obtained from \( D \) by the orthogonal basis reformulation. In what follows, we assume the presence of certain indexes and metadata that will be described as needed.
Now let us consider, in turn, the three elementary update operations in \( D \) that we have isolated, and study the complexity of the corresponding operations in \( D' \).
(A) For an insertion of an object \( \alpha \) into the table for a relation \( s_i \) in \( D \), there are two cases:
- if \( \alpha \) is not already in the UOD of \( D \) then, in \( D' \), it needs to be placed into a relation \( b_j \) which contains objects belonging to \( s_i \) only and not to any other relation; this relation \( b_j \) can be mapped to \( s_i \) once before \( D' \) is populated; therefore, the time required to insert \( \alpha \) into \( D' \) is constant;
- if, however, \( \alpha \) is already in the UOD of \( D \), the first action in \( D' \) is to find, from \( \alpha \), the table to which it belongs; this takes constant time with the use of an index. The next action is to examine, in the metadata for \( D' \), the definition of the basis relation for that table; this takes time linear in the length of the definition of the relation, i.e., in the number of elements in \( \mathcal{S} \). If the subgoal for \( s_i \) is not negated in this definition, then the object is already in the correct table and no further action is required. If, on the other hand, the subgoal for \( s_i \) is negated in the definition, then, after deleting \( \alpha \) from that table, the next and final action is to find the basis relation which has exactly the same definition except that \( s_i \) is not negated there (constant time with an index), and to place \( \alpha \) into the corresponding table. In both cases the total complexity of the insertion operation in \( D' \) is dominated by the simple index accesses described above and is thus linear in the number of elements of \( \mathcal{S} \).
(B) For a deletion of an object \( \alpha \) from the table for a relation \( s_i \) in \( D \), there are also two cases, and the analysis is similar to that for the insertion case.
(C) A proper update is a deletion followed by an insertion; therefore, its complexity is the maximum of the complexities of its components, i.e., is also linear in the size of the schema \( S \). □
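Case (A) of this analysis can be sketched as follows (the object-to-table index and the sign-tuple table keys are our assumptions): representing each basis table by its sign vector makes "move the object to the table whose definition flips one sign" a constant number of dictionary operations.

```python
# Sketch of insertion into the reformulated database: an object-to-table
# index (table_of) lets us move an object between basis tables instead of
# storing it twice; cost is dominated by simple index accesses.
def insert(obj, rel_index, table_of, tables, n):
    """Insert obj into stored relation number rel_index of n relations."""
    old = table_of.get(obj)
    if old is None:                        # new object: singleton pattern
        new = tuple(i == rel_index for i in range(n))
    elif old[rel_index]:                   # already in the right table
        return
    else:                                  # flip one sign in the pattern
        tables[old].discard(obj)
        new = old[:rel_index] + (True,) + old[rel_index + 1:]
    tables.setdefault(new, set()).add(obj)
    table_of[obj] = new

tables, table_of = {}, {}
insert("a", 0, table_of, tables, 2)   # a joins s1      -> pattern (T, F)
insert("a", 1, table_of, tables, 2)   # a also joins s2 -> pattern (T, T)
assert table_of["a"] == (True, True)
assert tables[(True, True)] == {"a"}
assert tables[(True, False)] == set()  # a was moved, not duplicated
```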
**A.2 Proofs for Section 6**
**Proof (Theorem 3).** In this proof, we consider a relation \( r \), defined in \( nr\text{-datalog}^- \) on a unary database schema \( \mathcal{S} \), and a database instance \( D \) with schema \( S \).
(1) The “if” part: let \( r \) be a unary relation. Consider an arbitrary database \( D \) with schema \( S \); the set of answers to \( r \) in \( D \) is effectively a set of some objects that are already stored in \( D \). In the worst case, the set of answers to \( r \) includes all the objects stored in \( D \); even in this case, the space required to store the set of answers to \( r \) cannot exceed the space required to store \( D \). We conclude the proof by noting that this result does not depend on the choice of the database instance \( D \).
(2) The “only if” part: suppose some relation \( r \) is such that for any database instance \( D \) with schema \( S \), the set of answers to \( r \) in that database does not require more storage space than \( D \) itself.
Assume \( r \) is not unary; suppose \( r \) is a binary relation. We will show that in this case, there exists a database \( D \) with schema \( S \), such that the set of answers to \( r \) on that database cannot “fit into” the storage space required to store \( D \).
Consider a schema rewriting of the rules for \( r \) (see Observation A.1). For \( r \) to be binary, there must be at least one rule in the schema rewriting with two different variables in the head, since relations like \( r(X, X) \) are essentially unary; let us call these variables \( X \) and \( Y \). For this rule to be safe, the body of the rule must have at least two nonnegated subgoals, one with argument \( X \) and the other with argument \( Y \); let these subgoals be \( s_i(X) \) and \( s_j(Y) \), \( X \neq Y \), \( s_i \in S \) and \( s_j \in S \). Notice that for the set of answers to the rule not to be empty in all databases with schema \( S \), no negated subgoal with argument \( X \) in the body of the rule can have relation name \( s_i \); similarly for \( Y \) and \( s_j \). Let \( S' \) be the set of all relation names in \( S \) such that this rule for \( r \) has a nonnegated subgoal with that relation name (notice that subgoals with variables other than \( X \) or \( Y \) are redundant in the body of the rule); let \( k \) be the number of relations in \( S' \).
Now consider a database instance \( D \) with schema \( S \), such that the only nonempty tables in \( D \) are those for the relation names in \( S' \). Let the size of the UOD of \( D \) be any \( m > k/2 \); let each of the \( k \) nonempty tables in \( D \) contain all the \( m \) objects in the UOD of \( D \). Then the number of objects stored in \( D \) is \( k \times m \).
Now, when we compute this particular rule for \( r \), we see that the set of answers to this rule is the set of two-element tuples, where there is a tuple for each combination of two objects in the UOD of \( D \). Thus, the number of answers to this particular rule in \( D \) is \( m^2 \), and the number of objects that need to be stored for these answers is \( 2 \times m^2 \) (we count as a unit the space needed to store an argument value). Since \( m > k/2 \), we have \( 2 \times m^2 > k \times m \). Since the set of answers to \( r \) includes all answers to the rule, the space needed to store the set of answers to \( r \) is at least the space needed for this rule. Therefore, the set of answers to \( r \) in this database \( D \) requires more storage space than \( D \) itself.
We have shown that our premise does not hold when \( r \) is binary; thus we have proved the claim by contradiction for all binary relations that can be defined on a unary database schema \( S \). A similar counterexample can be built for a relation \( r \) of arbitrary arity greater than 2. We can conclude that to “fit into” the storage space of an arbitrary database with schema \( S \), the relation \( r \) needs to be unary. □
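The counting argument behind this counterexample can be checked numerically (unit-cost storage, as in the proof): with \(k\) full unary tables over an \(m\)-object UOD, the database stores \(k \cdot m\) cells, while the \(m^2\) binary answers need \(2m^2\) cells, which overflows as soon as \(m > k/2\).

```python
# Numeric check of the counterexample in the proof of Theorem 3.
def overflow(k, m):
    stored = k * m          # k full unary tables over an m-object UOD
    answers = 2 * m * m     # m^2 pairs, two cells per pair
    return answers > stored

assert overflow(k=4, m=3)        # m > k/2: 18 cells vs. 12
assert not overflow(k=4, m=2)    # m = k/2: 8 cells vs. 8
```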
**Proof (Theorem 4).** After we notice that a union of orthogonal basis relations, when materialized, satisfies the minimal-space constraint, the claim of the theorem follows immediately from Theorems 1 and 3. □
**Proof (Theorem 5).** Consider an arbitrary candidate reformulation \( (V, R, Q) \) of a triple \( (S, R_S, Q) \) where \( S \) is unary. By definition, for any database instance \( D \) with schema \( S \) and its reformulated counterpart \( D' \) with schema \( V \), none of the stored (materialized) relations in \( D' \) takes up more storage space than \( D \). Thus in all candidate reformulations of \( (S, R_S, Q) \), all stored relations are unary relations. Observing that Algorithm 2 outputs all reformulations in which all stored (materialized) relations are unary concludes the proof. □
A.3 Proofs for Section 7
**Proof (Theorem 6).** Observe that in the minimal non-forking reformulation, the only operations are unions and cross-products (since any candidate reformulation has the same properties as the orthogonal basis reformulation, and from Theorem 2). We assume the standard bottom-up query evaluation cost model; in this model, all unary subqueries of each rule are computed before any Cartesian product is processed. The stored (materialized) relations in the minimal non-forking reformulation are maximal unions of basis relations such that each union is used within a single unary subquery. Assuming that it is at least as fast to scan a union once and then perform a Cartesian product as it is to retrieve the elements of the union one by one, combine each with the Cartesian product, and then take the union of all the results, we obtain the result of the theorem. □
Intelligent Instantiation and Supersafe Rules*
Vladimir Lifschitz
1 Department of Computer Science, University of Texas at Austin, 2317 Speedway, Stop D9500, Austin, TX 78712, USA v1@cs.utexas.edu
Abstract
In the input languages of most answer set solvers, a rule with variables has, conceptually, infinitely many instances. The primary role of the process of intelligent instantiation is to identify a finite set of ground instances of rules of the given program that are “essential” for generating its stable models. This process can be launched only when all rules of the program are safe. If a program contains arithmetic operations or comparisons then its rules are expected to satisfy conditions that are even stronger than safety. This paper is an attempt to make the idea of an essential instance and the need for “supersafety” in the process of intelligent instantiation mathematically precise.
1998 ACM Subject Classification D.3.1 Formal Definitions and Theory
Keywords and phrases answer set programming
Digital Object Identifier 10.4230/OASIcs.ICLP.2016.first-page-number
1 Introduction
The input languages of most answer set solvers are not typed. When a program in such a language is grounded, the variables occurring in it can be replaced by arbitrary ground terms not containing arithmetic operations, and that includes arbitrary integers. Thus the set of ground instances of any non-ground rule is infinite. The primary role of the process of intelligent instantiation is to identify a finite set of ground instances of the rules of the program that are “essential” for generating its stable models.
The possibility of intelligent instantiation is predicated on the assumption that every rule of the given program is safe—that each variable occurring in the rule appears also nonnegated in its body. If an unsafe rule is found in the program then the solver produces an error message and stops. For example, generating the stable models of a program will not be attempted if it contains the rule
\[ p(X, Y) \leftarrow q(X), \]
because the variable \( Y \) occurs in its head but not in the body.
The safety assumption does not guarantee that the set of essential instances is finite. For example, all rules of the program
\[
\begin{align*}
& p(a), \\
& p(b), \\
& p(f(f(X))) \leftarrow p(X)
\end{align*}
\]
(1)
* This work was supported in part by the National Science Foundation under Grant IIS-1422455.
1 The language SPARC [1] is a notable exception. In a SPARC program, finite sorts are assigned to arguments of all predicates, and the range of integers allowed in the process of grounding is finite.
are safe, but its third rule has infinitely many instances essential for constructing the stable model:
\[
\[
p(f(f(a))) \leftarrow p(a), \quad p(f(f(b))) \leftarrow p(b),
\]
\[
p(f(f(f(f(a))))) \leftarrow p(f(f(a))), \quad p(f(f(f(f(b))))) \leftarrow p(f(f(b))),
\]
\[\ldots\]
The rules in the first line are essential because their bodies \(p(a)\) and \(p(b)\) are facts from (1). The rules in the second line are essential because their bodies are identical to the heads of the rules in the first line, and so on. An attempt to form all essential instances will add a finite set of rules at each step, but it will not terminate. In the terminology of Calimeri et al. [4], program (1) is not finitely ground. The safety of all rules does guarantee, however—for programs containing neither arithmetic operations nor comparisons—that all essential instances can be found in a stepwise manner, as in the example above, with finitely many instances added at every step.
In the presence of arithmetic operations and comparisons, on the other hand, the possibility of launching the process of accumulating essential instances is not ensured by the safety of all rules. For example, each of the rules
\[
p(X,Y) \leftarrow q(X + Y), \quad (2)
\]
\[
p(X,Y) \leftarrow X < Y, \quad (3)
\]
\[
p(X,Y) \leftarrow X = Y \quad (4)
\]
is safe in the sense that both variables occurring in it appear nonnegated in the body. But the presence of any of these rules in a program causes the grounder GRINGO to stop execution with the same error message as in the presence of an unsafe rule. On the other hand, GRINGO does not object against the rules
\[
p(X,Y) \leftarrow X = Y, \ q(X) \quad (5)
\]
and
\[
p(X) \leftarrow X + 3 = 4. \quad (6)
\]
The discussion of safety in Version 2.0 of the Potassco User Guide (http://sourceforge.net/projects/potassco/files/guide/) shows that the conditions under which GRINGO treats a rule as safe are quite complicated. Such conditions have to be imposed because safety in the traditional sense does not guarantee that essential instances can be calculated in the step-by-step manner described above; the rules of the program must be “supersafe.”
This paper shows how our informal discussion of essential instances, of the role of intelligent instantiation, and of the need for supersafety can be made mathematically precise. In the next section we define which elements of a set \(\Gamma\) of ground rules are essential and
---
2 According to that document, occurrences of variables in the scope of arithmetic functions can only justify safety for “simple arithmetic terms”—terms containing a single occurrence of a variable and no arithmetic operations other than addition, subtraction, and multiplication. This explains why GRINGO does not accept rule (2) as safe: the term \(X + Y\) is not simple. Moreover, if multiplication is used, then the constant factor must not evaluate to 0 for the variable occurrence to justify safety. Furthermore, according to the User Guide, safety is not justified by occurrences of variables in inequalities; hence (3) is not accepted. This restriction does not apply to equalities. “However, this only works when unification can be made directionally, i.e., it must be possible to instantiate one side without knowing the values of variables on the other side.” This explains why GRINGO considers rule (4) unsafe but accepts (5) and (6).
prove that the set $E(\Gamma)$ of essential rules has the same stable models as the whole $\Gamma$. The set $E(\Gamma)$ is defined as the union of a monotone sequence of subsets $E_k(\Gamma)$, representing the stepwise process of accumulating essential rules. After describing a class of logic programs with variables and arithmetic in Section 3, we define and study the concept of a supersafe rule (Section 4). The main result of this paper, proved in Section 5, shows that if $\Gamma$ is the propositional image of a program consisting of supersafe rules then each of the sets $E_k(\Gamma)$ is finite. This theorem clarifies the role of the additional conditions that GRINGO imposes on safe rules.
## 2 Essential Rules
### 2.1 Propositional Programs
We start by describing the class of programs without variables for which the concept of an essential rule will be defined.\(^3\) Consider a fixed propositional signature—a set of symbols called atoms. (In applications to the study of logic programs with variables and arithmetic, the signature will consist of the ground atoms not containing arithmetic operations.) A (propositional) rule is an expression of the form $H \leftarrow B$, where the head $H$ and the body $B$ are propositional formulas formed from atoms and the symbols $\top$(true) and $\bot$(false) using the connectives $\land$, $\lor$, $\neg$.
A (propositional) program is a set of rules.
A set $M$ of atoms will be identified with the truth assignment that maps all elements of $M$ to true and all other atoms to false. The reduct $R^M$ of a rule $R$ relative to $M$ is the rule obtained by replacing, in the head and in the body of $R$, each subformula $F$ that begins with negation and is not in the scope of negation with $\top$ if $M$ satisfies $F$, and with $\bot$ otherwise. The reduct $\Gamma^M$ of a program $\Gamma$ is defined as the set of the reducts $R^M$ of all rules $R$ of $\Gamma$. We say that $M$ is a stable model of a program $\Gamma$ if $M$ is minimal among the sets satisfying $\Gamma^M$.
### 2.2 Essential Part of a Propositional Program
Consider a propositional program $\Gamma$ such that the body of every rule of $\Gamma$ is a conjunction of formulas of three types: (a) symbols $\top$ and $\bot$; (b) atoms; (c) formulas beginning with negation. In the definition of the essential part of $\Gamma$ below, the following terminology is used. A nonnegated atom of a propositional formula $F$ is an atom $A$ such that at least one occurrence of $A$ in $F$ is not in the scope of negation. A rule from $\Gamma$ is trivial if at least one of the conjunctive terms of its body is $\bot$.
The subsets $E_0(\Gamma)$, $E_1(\Gamma), \ldots$ of $\Gamma$ are defined as follows:
- $E_0(\Gamma) = \emptyset$.
- $E_{k+1}(\Gamma)$ is the set of all nontrivial rules $R$ of $\Gamma$ such that every nonnegated atom of the body of $R$ is also a nonnegated atom of the head of some rule from $E_k(\Gamma)$.
It is clear that every member of the sequence $E_0(\Gamma), E_1(\Gamma), \ldots$ is a subset of the one that follows (by induction). It is clear also that if $E_{k+1}(\Gamma) = E_k(\Gamma)$ then $E_l(\Gamma) = E_k(\Gamma)$ for all $l$ that are greater than $k$.
The set of essential rules of $\Gamma$, denoted by $E(\Gamma)$, is defined as the union $\bigcup_{k\geq 0} E_k(\Gamma)$. The degree of an essential rule $R$ is the smallest $k$ such that $R \in E_k(\Gamma)$.
---
\(^3\) Programs considered here are programs with nested expressions [7] without classical negation, with the usual symbols for propositional connectives used instead of the comma, the semicolon, and "not" in the original publication.
Theorem 1. Programs $\Gamma$ and $E(\Gamma)$ have the same stable models.
Example 2. If the rules of $\Gamma$ are
\[
\begin{align*}
a_1 \lor a_2 & \leftarrow \neg a_0, \\
b_n & \leftarrow a_n \land a_{n+1} \quad (n \geq 0)
\end{align*}
\]
then
\[
\begin{align*}
E_0(\Gamma) &= \emptyset, \\
E_1(\Gamma) &= \{a_1 \lor a_2 \leftarrow \neg a_0\}, \\
E_2(\Gamma) &= \{a_1 \lor a_2 \leftarrow \neg a_0, b_1 \leftarrow a_1 \land a_2\}, \\
E_3(\Gamma) &= \{a_1 \lor a_2 \leftarrow \neg a_0, b_1 \leftarrow a_1 \land a_2\}
\end{align*}
\]
and $E_3(\Gamma) = E_2(\Gamma)$. It follows that $\Gamma$ has two essential rules: rule $a_1 \lor a_2 \leftarrow \neg a_0$ of degree 1 and rule $b_1 \leftarrow a_1 \land a_2$ of degree 2. The program consisting of these two rules has the same stable models as $\Gamma$: $\{a_1\}$ and $\{a_2\}$.
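For a finite ground program, the stepwise construction of the sets $E_k(\Gamma)$ can be carried out mechanically. Below is a minimal Python sketch, under the simplifying assumption that a rule is recorded only by the nonnegated atoms of its head, the nonnegated atoms of its body, and a flag for trivial rules; the names `Rule` and `essential_part` are illustrative, not from the paper.

```python
# Sketch of the construction of E(Γ) from Section 2.2 for a *finite*
# propositional program. Negated parts of a rule never affect
# essentiality, so they are not represented here.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    head_atoms: frozenset   # nonnegated atoms of the head
    body_atoms: frozenset   # nonnegated atoms of the body
    trivial: bool = False   # True if some conjunctive term of the body is ⊥

def essential_part(rules):
    """Map each essential rule to its degree (the smallest k with R in E_k)."""
    degree = {}
    heads = set()           # nonnegated head atoms of essential rules found so far
    k = 0
    while True:
        k += 1
        new = [r for r in rules
               if r not in degree and not r.trivial
               and r.body_atoms <= heads]
        if not new:
            return degree
        for r in new:
            degree[r] = k
            heads |= r.head_atoms

# Example 2 truncated to n <= 3: a1 ∨ a2 ← ¬a0 and b_n ← a_n ∧ a_{n+1}.
rules = [Rule(frozenset({"a1", "a2"}), frozenset())]
rules += [Rule(frozenset({f"b{n}"}), frozenset({f"a{n}", f"a{n+1}"}))
          for n in range(4)]
deg = essential_part(rules)
# deg contains exactly the disjunctive rule (degree 1) and b1 ← a1 ∧ a2 (degree 2)
```

On this truncation the computation yields exactly two essential rules, of degrees 1 and 2, matching the example above.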
Example 3. If the rules of $\Gamma$ are
\[
\begin{align*}
a_1 & \leftarrow \top, \\
a_{2n} & \leftarrow a_n \quad (n \geq 0)
\end{align*}
\]
then
\[
\begin{align*}
E_0(\Gamma) &= \emptyset, \\
E_1(\Gamma) &= \{a_1 \leftarrow \top\}, \\
E_2(\Gamma) &= \{a_1 \leftarrow \top, a_2 \leftarrow a_1\}, \\
E_3(\Gamma) &= \{a_1 \leftarrow \top, a_2 \leftarrow a_1, a_4 \leftarrow a_2\}, \\
\ldots,
\end{align*}
\]
so that
\[
E(\Gamma) = \{a_1 \leftarrow \top\} \cup \{a_{2^{k+1}} \leftarrow a_{2^k} : k \geq 0\}.
\]
The set of essential rules in this case is infinite, but for every positive $k$ the program has only one essential rule of degree $k$. The set $E(\Gamma)$ has the same stable model as $\Gamma$: $\{a_1, a_2, a_4, \ldots\}$.
Example 4. If the rules of $\Gamma$ are
\[
\begin{align*}
a_0 & \leftarrow \top, \\
b_{m,n} & \leftarrow a_{n-m} \quad (n \geq m \geq 0)
\end{align*}
\]
then
\[
\begin{align*}
E_0(\Gamma) &= \emptyset, \\
E_1(\Gamma) &= \{a_0 \leftarrow \top\}, \\
E_2(\Gamma) &= \{a_0 \leftarrow \top\} \cup \{b_{n,n} \leftarrow a_0 : n \geq 0\},
\end{align*}
\]
and $E_3(\Gamma) = E_2(\Gamma)$. It follows that $\Gamma$ has infinitely many essential rules: rule $a_0 \leftarrow \top$ of degree 1 and rules $b_{n,n} \leftarrow a_0$, for all $n$, of degree 2. The program consisting of these rules has the same stable model as $\Gamma$: $\{a_0, b_{0,0}, b_{1,1}, \ldots\}$.
In Section 5 we will apply the concept of an essential rule to “propositional images” of programs with variables and arithmetic operations, and we will see that the behaviors observed in the examples above correspond to programs of three types: (1) programs for which the process of intelligent instantiation terminates; (2) programs for which this process can be launched but does not terminate; and (3) programs for which this process cannot even be launched. We will see also that if every rule of a program is supersafe then the program cannot belong to the third group.
2.3 Proof of Theorem 1
Lemma 5. Let $\Delta$ be a subset of a propositional program $\Gamma$, and let $H$ be the set of all nonnegated atoms of the heads of the rules of $\Delta$. If the body of every nontrivial rule of $\Gamma \setminus \Delta$ contains a nonnegated atom that does not belong to $H$ then $\Gamma$ and $\Delta$ have the same stable models.
Proof. Consider first the case when the rules of $\Gamma$ do not contain negation. We need to show that $\Gamma$ and $\Delta$ have the same minimal models. Assume that $M$ is a minimal model of $\Delta$. Then $M \subseteq H$, so that the body of every nontrivial rule of $\Gamma \setminus \Delta$ contains a nonnegated atom that does not belong to $M$. It follows that $M$ satisfies all rules of $\Gamma \setminus \Delta$, so that $M$ is a model of $\Gamma$, and consequently a minimal model of $\Gamma$. In the other direction, assume that $M$ is a minimal model of $\Gamma$. To show that $M$ is minimal even among the models of $\Delta$, consider a subset $M'$ of $M$ that satisfies all rules of $\Delta$. Then $M' \cap H$ satisfies all rules of $\Delta$ as well, so that every nontrivial rule of $\Gamma \setminus \Delta$ contains a nonnegated atom that does not belong to $M' \cap H$. It follows that this set satisfies all rules of $\Gamma \setminus \Delta$, so that it is a model of $\Gamma$. Since it is a subset of a minimal model $M$ of $\Gamma$, we can conclude that $M' \cap H = M$. Since $M'$ is a subset of $M$, it follows that $M' = M$.
If some rules of $\Gamma$ contain negation then consider the reducts $\Gamma^M$ of $\Gamma$ and $\Delta^M$ of $\Delta$ with respect to the same set $M$ of atoms. It is clear that $\Delta^M$ is a subset of $\Gamma^M$, that $H$ is the set of all nonnegated atoms of the heads of the rules of $\Delta^M$, and that the body of every nontrivial rule of $\Gamma^M \setminus \Delta^M$ contains a nonnegated atom that does not belong to $H$. Furthermore, the rules of $\Gamma^M$ do not contain negation. It follows, by the special case of the lemma proved earlier, that $\Gamma^M$ and $\Delta^M$ have the same minimal models. In particular, $M$ is a minimal model of $\Gamma^M$ iff $M$ is a minimal model of $\Delta^M$. In other words, $M$ is a stable model of $\Gamma$ iff $M$ is a stable model of $\Delta$.
To prove the theorem, consider the set $H$ of all nonnegated atoms of the heads of the rules of $E(\Gamma)$. We will show that the body of every nontrivial rule of $\Gamma \setminus E(\Gamma)$ contains a nonnegated atom that does not belong to $H$; then the assertion of the theorem will follow from the lemma. Assume that $R$ is a nontrivial rule of $\Gamma$ such that all nonnegated atoms in the body of $R$ belong to $H$. Then each of these atoms $A$ is a nonnegated atom of the head of a rule that belongs to $E_k(\Gamma)$ for some $k$. This $k$ can be chosen uniformly for all these atoms $A$: take the largest of the values of $k$ corresponding to all nonnegated atoms in the body of $R$. Then $R$ belongs to $E_{k+1}(\Gamma)$, and consequently to $E(\Gamma)$.
3 Programs with Variables and Arithmetic
The programming language defined in this section is a subset of the “Abstract Gringo” language AG [5]. The meaning of a program in this language is characterized by means of a transformation, denoted by $\tau$, that turns rules and programs into their propositional images—propositional programs in the sense of Section 2.1. The stable models of a program are defined as the stable models of its propositional image.\(^4\)
\(^4\) Gebser et al. [5] write rules $H \leftarrow B$ of $\tau \Pi$ as implications $B \rightarrow H$. More importantly, $H$ and $B$ are allowed in that paper to contain implications and infinitely long conjunctions and disjunctions. This additional generality is not needed here because the programs that we study contain neither conditional literals nor aggregates.
3.1 Syntax
We assume that three disjoint sets of symbols are selected—numerals, symbolic constants, and variables—which do not contain the symbols
\[ + - \times / .. \] (7)
\[ = \neq < > \leq \geq \] (8)
\[ not \wedge \vee , ( ) \] (9)
(The symbol .. is used to represent intervals.) We assume that a 1–1 correspondence between the set of numerals and the set \(\mathbb{Z}\) of integers is chosen. The numeral corresponding to an integer \(n\) will be denoted by \(\overline{n}\).
Terms are defined recursively, as follows:
\begin{itemize}
\item all numerals, symbolic constants, and variables are terms;
\item if \(f\) is a symbolic constant and \(t\) is a tuple of terms separated by commas then \(f(t)\) is a term;
\item if \(t_1\) and \(t_2\) are terms and \(\ast\) is one of the symbols (7) then \((t_1 \ast t_2)\) is a term.
\end{itemize}
An atom is an expression of the form \(p(t)\), where \(p\) is a symbolic constant and \(t\) is a tuple of terms separated by commas.
A term or an atom is precomputed if it contains neither variables nor symbols (7). We assume a total order on precomputed terms such that for any integers \(m\) and \(n\), \(\overline{m} \leq \overline{n}\) iff \(m \leq n\).
For any atom \(A\), the expressions \(A\), \(not A\), \(not not A\) are literals. A comparison is an expression of the form \(t_1 \prec t_2\) where \(t_1\) and \(t_2\) are terms and \(\prec\) is one of the symbols (8).
A rule is an expression of the form
\[ H_1 \lor \cdots \lor H_k \leftarrow B_1 \land \cdots \land B_m \] (10)
or a “choice rule” of the form
\[ \{A\} \leftarrow B_1 \land \cdots \land B_m \] (11)
\((k, m \geq 0)\), where each \(H_i\) and each \(B_j\) is a literal or a comparison, and \(A\) is an atom. A program is a set of rules.
A rule or another syntactic expression is ground if it does not contain variables. A rule (10) or (11) is safe if each variable occurring in it appears also in one of the expressions \(B_j\) which is an atom or a comparison.
3.2 Propositional Image of a Program
The signature of the propositional program \(\tau\Pi\), defined below, is the set of precomputed atoms.
3.2.1 Semantics of Ground Terms
Every ground term \(t\) represents a finite set \([t]\) of precomputed terms, which is defined recursively:
\begin{itemize}
\item if \(t\) is a numeral or a symbolic constant then \([t]\) is \(\{t\}\);
\item if \(t = f(t_1, \ldots, t_n)\) then \([t]\) is the set of terms \(f(r_1, \ldots, r_n)\) for all \(r_1 \in [t_1], \ldots, r_n \in [t_n]\);
\item if \(t = t_1 + t_2\) then \([t]\) is the set of numerals \(\overline{n_1 + n_2}\) for all integers \(n_1, n_2\) such that \(\overline{n_1} \in [t_1]\) and \(\overline{n_2} \in [t_2]\); similarly when \(t = t_1 - t_2\) or \(t = t_1 \times t_2\);
\item if \( t = t_1 / t_2 \) then \([t]\) is the set of numerals \(\overline{\lfloor n_1 / n_2 \rfloor}\) for all integers \(n_1, n_2\) such that \(\overline{n_1} \in [t_1]\), \(\overline{n_2} \in [t_2]\), and \(n_2 \neq 0\);
\item if \( t = t_1 .. t_2 \) then \([t]\) is the set of numerals \(\overline{m}\) for all integers \(m\) such that, for some integers \(n_1, n_2\),
\[
\overline{n_1} \in [t_1], \quad \overline{n_2} \in [t_2], \quad n_1 \leq m \leq n_2.
\]
\end{itemize}
For example, \([\overline{1} .. (\overline{7} - \overline{5})] = \{\overline{1}, \overline{2}\}\); if \( t \) contains a symbolic constant in the scope of an arithmetic operation then \([t] = \emptyset\).
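The recursive clauses above can be read as a small evaluator. The following is a minimal sketch in Python, where ints model numerals, strings model symbolic constants, and the rounding convention for division (toward negative infinity) is an assumption of the sketch; the names `values`, `num`, `sym`, `fun` are illustrative, not from the paper.

```python
# Sketch: computing the set [t] of values of a ground term, following the
# recursive definition of Section 3.2.1. Terms are nested tuples.
from itertools import product

def values(t):
    """Return [t] as a set: ints stand for numerals, tuples for other
    precomputed terms."""
    op = t[0]
    if op == "num":                   # a numeral
        return {t[1]}
    if op == "sym":                   # a symbolic constant
        return {t}
    if op == "fun":                   # f(t1, ..., tn)
        _, f, args = t
        return {("fun", f, rs) for rs in product(*map(values, args))}
    # arithmetic and interval cases are defined only on numerals
    _, t1, t2 = t
    v1 = {r for r in values(t1) if isinstance(r, int)}
    v2 = {r for r in values(t2) if isinstance(r, int)}
    if op == "+":
        return {n1 + n2 for n1 in v1 for n2 in v2}
    if op == "-":
        return {n1 - n2 for n1 in v1 for n2 in v2}
    if op == "*":
        return {n1 * n2 for n1 in v1 for n2 in v2}
    if op == "/":                     # assumed rounding convention
        return {n1 // n2 for n1 in v1 for n2 in v2 if n2 != 0}
    if op == "..":                    # interval
        return {m for n1 in v1 for n2 in v2 for m in range(n1, n2 + 1)}
    raise ValueError(f"unknown term: {t!r}")

# [1 .. (7 - 5)] = {1, 2}; a symbolic constant in the scope of an
# arithmetic operation yields the empty set.
assert values(("..", ("num", 1), ("-", ("num", 7), ("num", 5)))) == {1, 2}
assert values(("*", ("sym", "a"), ("num", 2))) == set()
```

Note how emptiness propagates: once a subterm evaluates to the empty set, every arithmetic term containing it does too.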
### 3.2.2 Propositional Images of Ground Literals and Comparisons
If \( A \) is a ground atom \( p(t_1, \ldots, t_n) \) then
- \( \tau_\wedge A \) stands for the conjunction of the atoms \( p(r_1, \ldots, r_n) \) for all \( r_1 \in [t_1], \ldots, r_n \in [t_n] \), and \( \tau_\vee A \) is the disjunction of these atoms;
- \( \tau_\wedge (\text{not } A) \) is \( \neg \tau_\vee A \), and \( \tau_\vee (\text{not } A) \) is \( \neg \tau_\wedge A \);
- \( \tau_\wedge (\text{not not } A) \) is \( \neg \neg \tau_\wedge A \), and \( \tau_\vee (\text{not not } A) \) is \( \neg \neg \tau_\vee A \).
For any ground terms \( t_1, t_2 \),
- \( \tau_\wedge (t_1 \prec t_2) \) is \( \top \) if the relation \( \prec \) holds between the terms \( r_1 \) and \( r_2 \) for all \( r_1 \in [t_1] \) and \( r_2 \in [t_2] \), and \( \bot \) otherwise;
- \( \tau_\vee (t_1 \prec t_2) \) is \( \top \) if the relation \( \prec \) holds between the terms \( r_1 \) and \( r_2 \) for some \( r_1, r_2 \) such that \( r_1 \in [t_1] \) and \( r_2 \in [t_2] \), and \( \bot \) otherwise.
For example, \( \tau_\vee (\overline{3} = \overline{1}..\overline{3}) \) is \( \top \).
### 3.2.3 Propositional Images of Rules and Programs
For any ground rule \( R \) of form (10), \( \tau R \) stands for the propositional rule
\[
\tau_\wedge H_1 \lor \cdots \lor \tau_\wedge H_k \leftarrow \tau_\vee B_1 \land \cdots \land \tau_\vee B_m.
\]
For any ground rule \( R \) of form (11), where \( A = p(t_1, \ldots, t_n) \), \( \tau R \) stands for the propositional rule
\[
\bigwedge_{r_1 \in [t_1], \ldots, r_n \in [t_n]} (p(r_1, \ldots, r_n) \lor \neg p(r_1, \ldots, r_n)) \leftarrow \tau_\vee B_1 \land \cdots \land \tau_\vee B_m.
\]
A ground instance of a rule \( R \) is a ground rule obtained from \( R \) by substituting precomputed terms for variables. The propositional image \( \tau R \) of a rule \( R \) with variables is the set of the propositional images of the instances of \( R \). For any program \( \Pi \), \( \tau \Pi \) is the union of the sets \( \tau R \) for all rules \( R \) of \( \Pi \).
### 3.2.4 Examples
Example 6. The propositional image of the ground rule
\[
a(\overline{1} .. \overline{3}) \leftarrow b(\overline{4} .. \overline{6})
\]
is the propositional rule
\[
a(\overline{1}) \land a(\overline{2}) \land a(\overline{3}) \leftarrow b(\overline{4}) \lor b(\overline{5}) \lor b(\overline{6}).
\]
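The two translations behave dually on interval arguments, as the rule above shows: the head is translated conjunctively and the body disjunctively. A minimal sketch, restricted to atoms with a single integer-interval argument; the names `tau_and` and `tau_or` are illustrative, not from the paper.

```python
# Sketch: the conjunctive and disjunctive translations of an atom whose
# argument is an integer interval lo..hi, rendered as strings.

def arg_values(lo, hi):
    return range(lo, hi + 1)            # the values of the interval lo..hi

def tau_and(pred, lo, hi):              # conjunction over the instances
    return " ∧ ".join(f"{pred}({n})" for n in arg_values(lo, hi))

def tau_or(pred, lo, hi):               # disjunction over the instances
    return " ∨ ".join(f"{pred}({n})" for n in arg_values(lo, hi))

# a(1..3) ← b(4..6) becomes a(1) ∧ a(2) ∧ a(3) ← b(4) ∨ b(5) ∨ b(6)
rule = f"{tau_and('a', 1, 3)} ← {tau_or('b', 4, 6)}"
```

Heads of choice rules and comparisons follow the same ∧/∨ duality, as described in Sections 3.2.2 and 3.2.3.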
Example 7. The propositional image of the rule
\[ \{a(X)\} \leftarrow X = \overline{1}..\overline{3} \]
consists of the propositional rules
\[
\begin{align*}
a(\overline{n}) \lor \neg a(\overline{n}) & \leftarrow \top && \text{for } n \in \{1, 2, 3\}, \\
a(r) \lor \neg a(r) & \leftarrow \bot && \text{for all precomputed terms } r \text{ other than } \overline{1}, \overline{2}, \overline{3}.
\end{align*}
\]
Example 8. The propositional image of the program
\[
a(\overline{1}) \lor a(\overline{2}) \leftarrow \neg a(\overline{0}), \\
b(X) \leftarrow a(X) \land a(X + \overline{1})
\]
is
\[
\begin{align*}
a(\overline{1}) \lor a(\overline{2}) & \leftarrow \neg a(\overline{0}), \\
b(\overline{n}) & \leftarrow a(\overline{n}) \land a(\overline{n+1}) && \text{for all } n \in \mathbb{Z}, \\
b(r) & \leftarrow a(r) \land \bot && \text{for all precomputed terms } r \text{ other than numerals.}
\end{align*}
\]
The first two lines are similar to the propositional program from Example 2 (Section 2.2); the rules in the last line are trivial, as defined in Section 2.2.
Example 9. The propositional image of the program
\[
a(\overline{1}), \\
a(\overline{2} \times X) \leftarrow a(X)
\]
is
\[
\begin{align*}
a(\overline{1}) & \leftarrow \top, \\
a(\overline{2n}) & \leftarrow a(\overline{n}) && \text{for all } n \in \mathbb{Z}, \\
\top & \leftarrow a(r) && \text{for all precomputed terms } r \text{ other than numerals.}
\end{align*}
\]
The first two lines are similar to the propositional program from Example 3.
Example 10. The propositional image of the program
\[
a(\overline{0}), \\
b(X, Y) \leftarrow a(Y - X)
\]
is
\[
\begin{align*}
a(\overline{0}) & \leftarrow \top, \\
b(\overline{m}, \overline{n}) & \leftarrow a(\overline{n-m}) && \text{for all } m, n \in \mathbb{Z}, \\
b(r, s) & \leftarrow \bot && \text{for all precomputed terms } r, s \text{ such that at least one of them is not a numeral.}
\end{align*}
\]
The first two lines are similar to the propositional program from Example 4.
4 Supersafe Rules
The idea of the definition below can be explained in terms of a “guessing game.” Imagine that you and I are looking at a safe rule \( R \), and I form a ground instance of \( R \) by substituting precomputed terms for its variables in such a way that all comparisons in the body of the rule become true. I do not show you that instance, but for every term that occurs as an argument of a nonnegated atom in its body I tell you what the value of that term is. If \( R \) is supersafe then on the basis of this information you will be able to find out which terms I substituted for the variables of \( R \); or, at the very least, you will be able to restrict the possible choices to a finite set. If, on the other hand, \( R \) is not supersafe then the information that I give you will be compatible with infinitely many substitutions.
Consider, for example, the rule
\[
p(X, Y, Z) \leftarrow X = \overline{5}..\overline{7} \land q(\overline{2} \times Y) \land \mathit{not}\ q(\overline{3} \times Y) \land Y = Z + \overline{1}.
\]
(12)
Imagine that I chose an instance of this rule such that both comparisons in its body are true, and told you that the value of \( \overline{2} \times Y \) in that instance is, for example, \( \overline{10} \). You will be able to conclude that the value chosen for \( Y \) is \( \overline{5} \), and that consequently the value of \( Z \) is \( \overline{4} \). About the value that I chose for \( X \) you will be able to say that it is one of the numerals \( \overline{5} \), \( \overline{6} \), and \( \overline{7} \).
We see that rule (12) has only three ground instances compatible with the information about the value of \( \overline{2} \times Y \) that I gave you; the rule is supersafe.
On the other hand, if we replace \( \overline{2} \times Y \) in rule (12) by \( \overline{0} \times Y \) then the situation will be different: I will tell you that the value of \( \overline{0} \times Y \) is \( \overline{0} \), and this information will not allow you to restrict the possible substitutions to a finite set. The modified rule is not supersafe.
4.1 Definition of Supersafety
A term or an atom is interval-free if it does not contain the interval symbol (..). We will define when a safe rule \( R \) is supersafe assuming that \( R \) satisfies the following additional condition:
all nonnegated atoms in the body of \( R \) are interval-free. (IF)
This simplifying assumption eliminates rules like the one in Example 6. It is useful because, for a term \( t \) containing intervals, the set \([t]\) may have more than one element; in the description of the guessing game we glossed over this complication when we talked above about the value of a term as if it were a uniquely defined object. On the other hand, if a rule \( R \) satisfies condition (IF) then for every term \( t \) occurring in a nonnegated atom in the body of a ground instance of \( R \) the set \([t]\) has at most one element. (An atom violating condition (IF) can be eliminated using a new variable; for instance, we can replace \( b(\overline{4}..\overline{6}) \) in Example 6 by \( b(X) \land X = \overline{4}..\overline{6} \).)
The positive body arguments of \( R \) are the members of the tuples \( t \) for all nonnegated atoms \( p(t) \) in the body of \( R \). For example, the only positive body argument of (12) is \( \overline{2} \times Y \).
The values of positive body arguments constitute the information about an instance of the rule that is available to you in the guessing game.
The instances of a rule that are allowed in the guessing game can be characterized by “acceptable tuples of terms,” defined as follows. Let \( x \) be the list of all variables occurring in \( R \), and let \( r \) be a tuple of precomputed terms of the same length as \( x \). We say that \( r \) is acceptable (for \( R \)) if
(i) for each comparison $C$ in the body of $R$, $\tau_\vee(C^x_r) = \top$;
(ii) for each positive body argument $t$ of $R$, the set $[t^x_r]$ is non-empty (and consequently is a singleton).
For instance, a tuple $r_1, r_2, r_3$ is acceptable for rule (12) if
- $r_1$ is one of the numerals $\overline{5}, \overline{6}, \overline{7}$, so that $\tau_\vee(r_1 = \overline{5}..\overline{7}) = \top$;
- $r_2$ is a numeral $\overline{n}$ (rather than a symbolic constant), so that the set $[\overline{2} \times r_2]$ is non-empty;
- $r_3$ is the numeral $\overline{n-1}$, so that $\tau_\vee(r_2 = r_3 + \overline{1}) = \top$.
(See Figure 1.)
The information about the values of positive arguments that I give you in the guessing game can be described in terms of equivalence classes of acceptable tuples. About acceptable tuples $r, s$ we say that they are equivalent if for each positive body argument $t$ of rule $R$, $[t^x_r] = [t^x_s]$. In the case of rule (12), for example, acceptable tuples $r_1, r_2, r_3$ and $s_1, s_2, s_3$ are equivalent iff $r_2$ equals $s_2$ (so that $[\overline{2} \times r_2] = [\overline{2} \times s_2]$). In Figure 1, each column is an equivalence class of this relation.
We say that $R$ is supersafe if all equivalence classes of acceptable tuples for it are finite.
For example, rule (12) is supersafe because each equivalence class of acceptable tuples for it has 3 elements. Consider now the rule obtained from (12) by replacing $\overline{2} \times Y$ with $\overline{0} \times Y$:
$$p(X, Y, Z) \leftarrow X = \overline{5}..\overline{7} \land q(\overline{0} \times Y) \land \mathit{not}\ q(\overline{3} \times Y) \land Y = Z + \overline{1}. \quad (13)$$
The set of acceptable tuples does not change, but now all of them are equivalent: for any $r$ and $s$,
$$[\overline{0} \times r_2] = [\overline{0} \times s_2] = \{\overline{0}\}.$$
The only equivalence class is the set of all acceptable tuples, so that the rule is not supersafe.
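The two grouping behaviors can be observed by brute force over a finite window of integers. This is only an illustration of the definition, not a decision procedure for supersafety, since acceptable tuples really range over all numerals; the names below are illustrative, not from the paper.

```python
# Sketch: equivalence classes of acceptable tuples for rules (12) and (13),
# restricted to a finite window of integers. The negated atom not q(3 x Y)
# places no constraint on acceptability, so it does not appear here.
from collections import defaultdict

def classes(coeff, window):
    """Group the acceptable tuples (r1, r2, r3) by the value of the single
    positive body argument coeff * Y."""
    groups = defaultdict(set)
    for r1 in (5, 6, 7):          # the comparison X = 5..7 must hold
        for r2 in window:         # Y may be any numeral in the window
            r3 = r2 - 1           # the comparison Y = Z + 1 must hold
            groups[coeff * r2].add((r1, r2, r3))
    return groups

window = range(-50, 51)
g12 = classes(2, window)          # rule (12): positive body argument 2 x Y
g13 = classes(0, window)          # rule (13): positive body argument 0 x Y

# Every class for (12) has exactly 3 tuples (one per choice of X), while
# for (13) all acceptable tuples fall into the single class keyed by 0.
assert all(len(c) == 3 for c in g12.values())
assert list(g13) == [0]
```

Enlarging the window grows the number of classes for (12) but keeps each class at 3 tuples, whereas the single class for (13) grows without bound; this is the finite/infinite distinction in the definition of supersafety.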
It is easy to check that rules (1), (5), (6) and all rules in Examples 7–9 are supersafe, and that rules (2)–(4) and the second rule in Example 10 are not.
The concept of supersafety can be applied also to individual variables occurring in a rule. As before, let $R$ be a safe rule satisfying condition (IF). Let $x$ be the list of all variables $X_1, \ldots, X_n$ occurring in $R$, and let $r$ be an acceptable tuple of precomputed terms $r_1, \ldots, r_n$. We say that a variable $X_i$ is supersafe in $R$ if, for every equivalence class $E$ of acceptable tuples, the $i$-th projection of $E$ (that is, the set of the terms $r_i$ over all tuples $r_1, \ldots, r_n$ from $E$) is finite. It is easy to see that $R$ is supersafe iff all variables occurring in $R$ are supersafe. Indeed, a subset $E$ of the Cartesian product of finitely many sets is finite iff all projections of $E$ are finite.
As an example, consider the only equivalence class of acceptable tuples for rule (13), shown in Figure 1. The first projection of that set is $\{\overline{5}, \overline{6}, \overline{7}\}$; the second projection is the set of all numerals, and the third projection is the set of all numerals as well. Consequently the variable $X$ is supersafe, and the variables $Y$ and $Z$ are not.
---
5 By $C^x_r$ we denote the result of substituting the terms $r$ for the variables $x$ in $C$.
6 In the syntax of Section 3.1, rules (5) and (6) would be written as $p(X, Y) \leftarrow X = Y \land q(X)$ and $p(X) \leftarrow X + \overline{3} = \overline{4}$.
4.2 Supersafety in the Absence of Arithmetic Operations
As could be expected, the difference between safety and supersafety disappears in the absence of arithmetic operations. In the following theorem, $R$ is a safe rule satisfying condition (IF).
$\blacktriangleright$ **Theorem 11.** If a positive body argument $t$ of $R$ does not contain the arithmetic operations $+$, $-$, $\times$, $/$ then all variables occurring in $t$ are supersafe.
Proof. We will show that if $X_i$ occurs in a positive body argument $t$ of $R$ that does not contain arithmetic operations then the $i$-th projection of any equivalence class of acceptable tuples is a singleton. We will prove, in other words, that for any pair of equivalent acceptable tuples $r$ and $s$, $r_i = s_i$. Assume that $r$ is equivalent to $s$. Then $[t^x_r] = [t^x_s]$. Since $t$ is interval-free and does not contain arithmetic operations, and $r$, $s$ are precomputed, both $t^x_r$ and $t^x_s$ are precomputed as well, so that $[t^x_r]$ is the singleton $\{t^x_r\}$, and $[t^x_s]$ is the singleton $\{t^x_s\}$. It follows that the term $t^x_r$ is the same as $t^x_s$. Since $X_i$ is a member of the tuple $x$ and occurs in $t$, we can conclude that $r_i = s_i$. $\blacktriangleleft$
$\blacktriangleright$ **Corollary 12.** If the body of $R$ contains neither arithmetic operations nor comparisons then $R$ is supersafe.
Indeed, since $R$ is safe and its body does not contain comparisons, every variable occurring in $R$ occurs also in one of its positive body arguments.
4.3 Supersafety is Undecidable
The conditions imposed on variables by GRINGO (see Footnote 2) ensure their supersafety, but they can be relaxed without losing this property. There is no need, for example, to reject the rule
\[ p(X) \leftarrow q(X/\overline{2}) \]
—it is supersafe. The use of unnecessarily strong restrictions on variables in the design of GRINGO can be explained by the desire to make the grounding algorithm less complicated.
There is, however, a more fundamental reason why the class of rules accepted by GRINGO for grounding does not exactly match the class of supersafe rules:
$\blacktriangleright$ **Theorem 13.** Membership in the class of supersafe rules is undecidable.
Proof. The undecidable problem of determining whether a Diophantine equation has a solution [8] can be reduced to deciding whether a rule is supersafe as follows. The safe rule
\[ p(Y) \leftarrow f(x) = \overline{0} \times Y, \]
where $f(x)$ is a polynomial with integer coefficients and $Y$ is a variable different from the members of $x$, is supersafe iff the equation $f(x) = 0$ has no solutions. Indeed, if the equation has no solutions then the set of acceptable tuples is empty and the rule is trivially supersafe. If it has a solution $r$ then the set of acceptable tuples is infinite, because an acceptable tuple can be formed from $r$ by appending any numeral $\overline{n}$. All acceptable tuples form one equivalence class, because the rule in question has no positive body arguments. $\blacktriangleleft$
5 Intelligent Instantiation
5.1 Intelligent Instantiation as Selecting Essential Instances
Consider a program $\Pi$ such that its rules satisfy condition (IF). As observed in Section 4.1, for every term $t$ occurring in a nonnegated atom in the body of a ground instance of a rule of $\Pi$, the set $[t]$ has at most one element. It follows that for every nonnegated atom $A$ in the body of a ground instance of a rule of $\Pi$, the formula $\tau A$ is either an atom or the symbol $\bot$. Consequently, the body of every rule of the propositional image $\tau \Pi$ of $\Pi$ is a conjunction of formulas of three types: symbols $\top$ and $\bot$, atoms, and formulas beginning with negation. In other words, the definition of an essential rule in Section 2.2 is applicable to the propositional program $\tau \Pi$, and we can talk about its essential rules.
For instance, if $\Pi$ is the program from Example 8 then the propositional program $\tau \Pi$ has two essential rules:
$a(1) \lor a(2) \leftarrow \neg a(0)$ (14)
of degree 1, and
$b(1) \leftarrow a(1) \land a(2)$ (15)
of degree 2.
If, for every $k$, $\tau \Pi$ has only finitely many essential rules of degree $k$, as in this example, then the stepwise process of generating the essential rules of $\tau \Pi$ of higher and higher degrees can be thought of as a primitive, but useful, theoretical model of the process of intelligent instantiation. It is primitive in the sense that the actual process of grounding involves not only identifying essential instances but also simplifying them. In application to the program from Example 8, GRINGO will not only find the essential instances (14) and (15); it will also simplify rule (14) by dropping its body.
The supersafety of all rules of a program guarantees the possibility of launching the process of intelligent instantiation, although without guarantee of termination:
$\blacktriangleright$ **Theorem 14.** If $\Pi$ is a finite program, and all rules of $\Pi$ are supersafe, then each of the sets $E_k(\tau \Pi)$ is finite.
The program from Example 10 shows that the assertion of the theorem would be incorrect without the supersafety assumption. The propositional image of that program has infinitely many essential rules of degree 2—rules $b(\overline{n}, \overline{n}) \leftarrow a(\overline{0})$ for all integers $n$.
5.2 Proof of Theorem 14
5.2.1 Plan of the Proof
The assertion of the theorem will be derived from the two lemmas stated below.
Consider a finite program $\Pi$ such that all rules of $\Pi$ are supersafe. For any rule $R$ of $\Pi$ and any set $S$ of atoms from $\tau \Pi$, by $\rho(R, S)$ we denote the set of all tuples $r$ of precomputed terms that are acceptable for $R$ such that all nonnegated atoms of the body of $\tau(R^x_r)$ (where $x$ is the list of variables of $R$) belong to $S$.
$\blacktriangleright$ **Lemma 15.** If $R$ is safe and $S$ is finite then $\rho(R, S)$ is finite.
By $S_k$ we denote the set of the nonnegated atoms of the heads of the rules of $E_k(\tau \Pi)$.
$\blacktriangleright$ **Lemma 16.** Every rule of \( E_{k+1}(\tau \Pi) \) has the form \( \tau(R^x_r) \), where \( R \) is a rule of \( \Pi \), \( x \) is the list of its variables, and \( r \) belongs to \( \rho(R,S_k) \).
Given these lemmas, Theorem 14 can be proved by induction on \( k \) as follows. If \( E_k(\tau \Pi) \) is finite then \( S_k \) is finite also. By Lemma 15, we can further conclude that for every rule \( R \) of \( \Pi \), \( \rho(R,S_k) \) is finite. Hence, by Lemma 16, \( E_{k+1}(\tau \Pi) \) is finite as well.
5.2.2 Proof of Lemma 15
Let \( B \) be the set of positive body arguments of \( R \), and let \( T \) be the set of the members of the tuples \( t \) for all atoms \( p(t) \) in \( S \). For every function \( \phi \) from \( B \) to \( T \), by \( \rho_\phi(R,S) \) we denote the subset of \( \rho(R,S) \) consisting of the tuples \( r \) such that \( \phi(t) \in [t^x_r] \) for all \( t \) from \( B \). We will prove the following two assertions:
Claim 1: The subsets \( \rho_\phi(R,S) \) cover the whole set \( \rho(R,S) \).
Claim 2: Each subset \( \rho_\phi(R,S) \) is finite.
It will follow then that \( \rho(R,S) \) is finite, because there are only finitely many functions from \( B \) to \( T \).
To prove Claim 1, consider an arbitrary tuple \( r \) from \( \rho(R,S) \). We want to find a function \( \phi \) from \( B \) to \( T \) such that \( r \) belongs to \( \rho_\phi(R,S) \). For every term \( t \) from \( B \), the set \( [t^x_r] \), where \( x \) is the list of variables of \( R \), is non-empty, in view of the fact that \( r \), like all tuples in \( \rho(R,S) \), is acceptable for \( R \). Since \( t \) is interval-free, we can further conclude that \( [t^x_r] \) is a singleton. Choose the only element of this set as \( \phi(t) \). Let us check that \( \phi(t) \) belongs to \( T \); it will be clear then that \( r \) belongs to \( \rho_\phi(R,S) \). Since \( t \) is a positive body argument of \( R \), it is a member of the tuple \( u \) for some atom \( p(u) \) of the body of \( R \). Then \( \tau(p(u^x_r)) \) is a nonnegated atom in the body of \( \tau(R^x_r) \). It has the form \( p(v) \), where \( v \) is a tuple of terms containing \( \phi(t) \). Since \( r \) belongs to \( \rho(R,S) \), the atom \( p(v) \) belongs to \( S \), so that \( \phi(t) \) belongs to \( T \).
To prove Claim 2, note that all tuples from \( \rho_\phi(R,S) \) are equivalent to each other. Indeed, if \( r_1 \) and \( r_2 \) belong to \( \rho_\phi(R,S) \), then, for every \( t \) from \( B \), \( \phi(t) \) belongs both to \( [t^x_{r_1}] \) and to \( [t^x_{r_2}] \); since both sets are singletons, it follows that they are equal to each other. We showed, in other words, that \( \rho_\phi(R,S) \) is a subset of a class of equivalent tuples. Since \( R \) is supersafe, this equivalence class is finite.
5.2.3 Proof of Lemma 16
Every rule of \( \tau \Pi \) is obtained by applying \( \tau \) to an instance \( R^x_r \) of some rule \( R \) of \( \Pi \). Assuming that a rule \( \tau(R^x_r) \) belongs to \( E_{k+1}(\tau \Pi) \), we need to show that \( r \) belongs to \( \rho(R,S_k) \). In other words, we need to check, first, that \( r \) is acceptable for \( R \), and second, that all nonnegated atoms of the body of \( \tau(R^x_r) \) belong to \( S_k \). The first property follows from the fact that all rules of \( E_{k+1}(\tau \Pi) \) are nontrivial, because if \( r \) is not acceptable for \( R \) then the body of \( \tau(R^x_r) \) includes the conjunctive term \( \bot \). According to the definition of \( S_k \), the second property can be expressed as follows: every nonnegated atom of the body of \( \tau(R^x_r) \) is a nonnegated atom of the head of some rule of \( E_k(\tau \Pi) \). This is immediate from the assumption that rule \( \tau(R^x_r) \) belongs to \( E_{k+1}(\tau \Pi) \).
6 Conclusion
Supersafety is a property of rules with variables and arithmetic operations. If all rules of a program are supersafe then the process of accumulating the ground instances of its rules that are essential for finding its stable models will produce only a finite set of rules at every step.
This paper extends earlier work on the mathematics of the input language of GRINGO [5]. Unlike other publications on the theory of safe rules and intelligent instantiation in answer set programming [2, 3, 4, 6, 9], it concentrates on the difficulties related to the use of arithmetic operations. It is limited, however, to programs without GRINGO constructs that involve local variables—conditional literals and aggregates. Extending the theory of supersafety to local variables is a topic for future work.
Acknowledgements Thanks to Amelia Harrison, Roland Kaminski, Dhananjay Raju, and the anonymous referees for useful comments.
References
Package ‘biscuiteer’
May 3, 2024
Type Package
Title Convenience Functions for Biscuit
Description A test harness for bsseq loading of Biscuit output, summarization
of WGBS data over defined regions and in mappable samples, with or
without imputation, dropping of mostly-NA rows, age estimates, etc.
Version 1.18.0
Date 2024-03-01
URL https://github.com/trichelab/biscuiteer
BugReports https://github.com/trichelab/biscuiteer/issues
License GPL-3
Depends R (>= 4.1.0), biscuiteerData, bsseq
Imports readr, qualV, Matrix, impute, HDF5Array, S4Vectors, Rsamtools,
data.table, Biobase, GenomicRanges, IRanges, BiocGenerics,
VariantAnnotation, DelayedMatrixStats, SummarizedExperiment,
GenomeInfoDb, Mus.musculus, Homo.sapiens, matrixStats,
rtracklayer, QDNAseq, dmrseq, methods, utils, R.utils, gtools,
BiocParallel
Suggests DSS, covr, knitr, rmarkdown, markdown, rlang, scmeth,
pkgdown, roxygen2, testthat, QDNAseq.hg19, QDNAseq.mm10,
BiocStyle
biocViews DataImport, MethylSeq, DNAMethylation
Encoding UTF-8
RoxygenNote 7.2.3
Roxygen list(markdown = TRUE)
VignetteBuilder knitr
git_url https://git.bioconductor.org/packages/biscuiteer
git_branch RELEASE_3_19
git_last_commit b94d958
git_last_commit_date 2024-04-30
Repository Bioconductor 3.19
Date/Publication 2024-05-03
Author Tim Triche [aut],
Wanding Zhou [aut],
Benjamin Johnson [aut],
Jacob Morrison [aut, cre],
Lyong Heo [aut],
James Eapen [aut]
Maintainer Jacob Morrison <jacob.morrison@vai.org>
Contents
biscuiteer-package
atRegions
binCoverage
biscuiteer-methods
biscuitMetadata
byChromArm
byExtremality
checkBiscuitBED
clocks
condenseSampleNames
CpGindex
ENSR_subset.hg19
ENSR_subset.hg38
extremality
fexpit
filterLoci
fixAge
fixNAs
flogit
getClock
getLogitFracMeth
GRCh37.chromArm
GRCh38.chromArm
grToSeg
H9state23unmeth.hg19
H9state23unmeth.hg38
hg19.chromArm
hg38.chromArm
HMM_CpG_islands.hg19
HMM_CpG_islands.hg38
makeBSseq
readBiscuit
readEpibed
RRBSeq
segToGr
biscuiteer-package
Description
A test harness for bsseq loading of Biscuit output, summarization of WGBS data over defined regions and in mappable samples (with or without imputation, dropping mostly-NA rows, age estimates, etc.)
Author(s)
Timothy J Triche Jr <Tim.Triche@vai.org>, Wanding Zhou <Wanding.Zhou@vai.org>, Ben Johnson <Ben.Johnson@vai.org>, Jacob Morrison <Jacob.Morrison@vai.org>, Lyong Heo <Lyong.Heo@vai.org>
See Also
Useful links:
- https://github.com/trichelab/biscuiteer
- Report bugs at https://github.com/trichelab/biscuiteer/issues
Examples
```r
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
```
atRegions
**Summarize a bsseq dataset over defined regions**
**Description**
Calls `summarizeBsSeqOver` to summarize a bsseq object over provided DNA regions. Useful for exploring genomic data using cBioPortal.
**Usage**
```r
atRegions(bsseq, regions, mappings = NULL, nm = "POETICname", ...)
```
**Arguments**
- `bsseq`: A bsseq object
- `regions`: A GRanges or GRangesList of regions
- `mappings`: A mapping table with rownames(mappings) == colnames(bsseq) (DEFAULT: NULL)
- `nm`: Column of the mapping table to map to (DEFAULT: "POETICname")
- `...`: Other arguments to pass to `summarizeBsSeqOver`
**Value**
GRanges with summarized information about the bsseq object for the given DNA regions
**Examples**
```r
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
reg <- GRanges(seqnames = rep("chr11",5),
strand = rep("*",5),
ranges = IRanges(start = c(0, 2.8e6, 1.17e7, 1.38e7, 1.69e7),
                 end = c(2.8e6, 1.17e7, 1.38e7, 1.69e7, 2.2e7))
)
regions <- atRegions(bsseq = bisc, regions = reg)
```
**binCoverage**
**Bin CpG or CpH coverage to simplify and improve CNA "sketching"**
---
**Description**
Example usage for E-M
**Usage**
```r
binCoverage(
bsseq,
bins,
which = NULL,
QDNAseq = TRUE,
readLen = 100,
paired = TRUE
)
```
**Arguments**
- `bsseq`: A bsseq object - supplied to getCoverage()
- `bins`: Bins to summarize over - from tileGenome or QDNAseq.xxYY
- `which`: Limit to specific regions? - functions as an import() (DEFAULT: NULL)
- `QDNAseq`: Return a QDNAseqReadCounts? - if FALSE, returns a GRanges (DEFAULT: TRUE)
- `readLen`: Correction factor for coverage - read length in bp (DEFAULT: 100)
- `paired`: Whether the data are from paired-end sequencing (DEFAULT: TRUE)
**Details**
NOTE: As of early Sept 2019, QDNAseq did not have hg38 capabilities. If you desire to use the hg38 genome, biscuiteer suggests you use a GRanges object to define your bins.
NOTE: As of late July 2020, biscuiteer has started implementing support for hg38, hg19, mm10, and mm9 for bisulfite-specific features, including adaptive GC-content computation and SV integration for adjusting CNV ends.
**Value**
Binned read counts
Examples
```r
bins <- GRanges(seqnames = rep("chr11", 10),
strand = rep("*", 10),
ranges = IRanges(start=100000*0:9, width=100000)
)
reg <- GRanges(seqnames = rep("chr11", 5),
strand = rep("*", 5),
ranges = IRanges(start = c(0, 2.8e6, 1.17e7, 1.38e7, 1.69e7),
end = c(2.8e6, 1.17e7, 1.38e7, 1.69e7, 2.2e7))
)
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz",
package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz",
package="biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf,
merged = FALSE)
bc <- binCoverage(bsseq = bisc, bins = bins, which = reg, QDNAseq = FALSE)
```
biscuiteer-methods
bsseq class methods (VCF-centric) added by biscuiteer
Description
See biscuiteer manpage for package description
Usage
```r
## S4 method for signature 'BSseq'
samples(object)

## S4 method for signature 'BSseq'
header(x)

## S4 method for signature 'BSseq'
meta(x)

## S4 method for signature 'BSseq'
fixed(x)

## S4 method for signature 'BSseq'
info(x)

## S4 method for signature 'BSseq,ANY'
geno(x)
```
Arguments
object A bsseq object, preferably with !is.null(metadata(x)$vcfHeader)
x A bsseq object, preferably with !is.null(metadata(x)$vcfHeader)
Details
biscuiteer adds VariantAnnotation methods to BSseq objects with VCF headers: samples, header, meta, fixed, info, geno.
Due to inherited method signatures, the argument (singular) to the method may be named x or it may be named object. Either way, it is a BSseq object.
These add to the existing methods defined in package bsseq for class BSseq: [, length, sampleNames, sampleNames<-, pData, pData<-
Those add to the methods BSseq inherits from SummarizedExperiment, such as: colData, rowRanges, metadata, subset, subsetByOverlaps, isDisjoint, &c.
Most of the biscuiteer methods operate on the VCF header, which readBiscuit likes to stuff into the metadata slot of BSseq objects it produces. Some may be handy for populating a BSseq object with QC stats, or querying those.
Value
Depends on the method - usually a List-like object of some sort
See Also
RangedSummarizedExperiment
VCFHeader-class
BSseq-class
BSseq
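This man page has no Examples section of its own; a minimal sketch (ours, reusing the example files shipped with biscuiteer and assuming the package is installed, as in the other pages of this manual) might exercise the added methods like so:

```r
# Illustrative sketch, not from the manual: try the VCF-header methods
# that biscuiteer adds to BSseq objects produced by readBiscuit().
library(biscuiteer)

orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz",
                        package = "biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz",
                        package = "biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)

samples(bisc)  # sample names recorded in the VCF header
header(bisc)   # the VCFHeader stashed in metadata(bisc)$vcfHeader
meta(bisc)     # its meta-information lines
```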
biscuitMetadata
Description
Returns metadata from a Biscuit run using either a supplied VCF file or the vcfHeader metadata element from the bsseq object
Usage
biscuitMetadata(bsseq = NULL, VCF = NULL)
getBiscuitMetadata(bsseq = NULL, VCF = NULL)
Arguments
bsseq A bsseq object with a vcfHeader element (DEFAULT: NULL)
VCF A tabix’ed VCF file (can just be the header information) from which the bsseq vcfHeader element is derived (DEFAULT: NULL)
Value
Information regarding the Biscuit run
Functions
• getBiscuitMetadata(): Alias for biscuitMetadata
Examples
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
meta <- biscuitMetadata(bisc)
byChromArm
A simple parallelization step
Description
This function splits an object by chromosome arm, which tends to make parallelization much easier, as cross-arm dependencies are unusual. Therefore, the larger chromosomes can be split across processes or machines without worrying much about data starvation for processes on smaller chromosomes.
Usage
byChromArm(x, arms = NULL)
Arguments
x Any object with a GRanges in it: bsseq, SummarizedExperiment...
arms Another GRanges, but specifying chromosome arms (DEFAULT: NULL)
Value
A list, List, or *list, with pieces of x by chromosome arm
Examples
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
reg <- GRanges(seqnames = rep("chr11",5),
strand = rep("*",5),
ranges = IRanges(start = c(0, 2.8e6, 1.17e7, 1.38e7, 1.69e7),
                 end = c(2.8e6, 1.17e7, 1.38e7, 1.69e7, 2.2e7))
)
names(reg) <- as.character(reg)
arms <- byChromArm(bisc, arms = reg)
byExtremality
Choose loci or features by extremality
Description
This function finds the k most extremal features (features above a certain fraction of the Bernoulli variance) in 'bsseq' and returns their values.
Usage
byExtremality(bsseq, r = NULL, k = 500)
Arguments
bsseq A bsseq object
r Regions to consider - NULL covers all loci (DEFAULT: NULL)
k How many rows/regions to return (DEFAULT: 500)
Details
For DNA methylation, particularly when summarized across regions, we can do better (a lot better) than MAD. Since we know: max(SD(X_j)) if X_j ~ Beta(a, b) < max(SD(X_j)) if X_j ~ Bernoulli(a/(a+b)) for X with a known mean and standard deviation (SD), then we can solve for (a+b) by MoM. We can then define the extremality by: extremality = sd(X_j) / bernoulliSD(mean(X_j))
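The formula in the Details above can be sketched in a few lines of base R (an illustration of the ratio itself, with hypothetical data; `extremality_sketch` is our name, not a biscuiteer function):

```r
# Illustrative sketch of the extremality statistic from the Details:
#   extremality = sd(x) / bernoulliSD(mean(x)),
# where bernoulliSD(p) = sqrt(p * (1 - p)) is the maximal SD attainable
# by a Bernoulli variable with the same mean.
extremality_sketch <- function(x) {
  p <- mean(x)
  sd(x) / sqrt(p * (1 - p))
}

# Methylation fractions piled up near 0 and 1 are highly extremal;
# intermediate fractions are not.  (sd() uses the n-1 denominator,
# so the ratio can slightly exceed 1 for small samples.)
bimodal  <- c(0.01, 0.02, 0.98, 0.99)
unimodal <- c(0.45, 0.50, 0.50, 0.55)
extremality_sketch(bimodal)   # ~1.12
extremality_sketch(unimodal)  # ~0.08
```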
Value
A GRanges object with methylation values sorted by extremality
Examples
```r
shuf_bed <- system.file("extdata", "MCF7_Cunha_chr11p15_shuffled.bed.gz", package="biscuiteer")
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
shuf_vcf <- system.file("extdata", "MCF7_Cunha_shuffled_header_only.vcf.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc1 <- readBiscuit(BEDfile = shuf_bed, VCFfile = shuf_vcf, merged = FALSE)
bisc2 <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
reg <- GRanges(seqnames = rep("chr11",5),
strand = rep("*",5),
ranges = IRanges(start = c(0, 2.8e6, 1.17e7, 1.38e7, 1.69e7),
                 end = c(2.8e6, 1.17e7, 1.38e7, 1.69e7, 2.2e7))
)
comb <- unionize(bisc1, bisc2)
ext <- byExtremality(comb, r = reg)
```
---
**checkBiscuitBED**
**Inspect Biscuit VCF and BED files**
**Description**
A BED checker for Biscuit CpG/CpH output (BED-like format with 2 or 3 columns per sample). By default, files with more than 50 million loci will be processed iteratively, since data.table tends to run into problems with gzipped joint CpH files.
**Usage**
```r
checkBiscuitBED(
BEDfile,
VCFfile,
merged,
sampleNames = NULL,
chunkSize = 5e+07,
hdf5 = FALSE,
sparse = TRUE,
how = c("data.table", "readr"),
chr = NULL
)
```
Arguments
BEDfile A BED-like file - must be compressed and tabix’ed
VCFfile A VCF file - must be compressed and tabix’ed. Only the header information is needed.
merged Is this merged CpG data?
sampleNames Names of samples - NULL: create names, vector: assign names, data.frame: make pData (DEFAULT: NULL)
chunkSize For files longer than yieldSize number of lines long, chunk the file (DEFAULT: 5e7)
hdf5 Use HDF5 arrays for backing the data? Using HDF5-backed arrays stores the data in a HDF5 file on disk, rather than loading entire object into memory. This allows for analyses to be done on memory-limited systems at the small cost of slightly reduced return times. (DEFAULT: FALSE)
sparse Use sparse Matrix objects for the data? If TRUE, use a Matrix object for sparse matrices (matrices with many zeroes in them) (DEFAULT: TRUE)
how How to load the data - "data.table" or "readr"? (DEFAULT: data.table)
chr Load a specific chromosome to rbind() later? (DEFAULT: NULL)
Details
Input BED and VCF files must be tabix’ed. No exceptions!
Value
Parameters to be supplied to makeBSseq
See Also
readBiscuit
Examples
```r
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
params <- checkBiscuitBED(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
```
clocks
Description
Epigenetic clock data
Usage
data(clocks, package="biscuiteer")
Details
Source: See inst/scripts/clocks.R for how the clocks data object was generated. For more information about sources, see the descriptions in ?getClock and ?WGBSage. Return type: data.frame
condenseSampleNames
Simplify sample names for a bsseq object
Description
Utility function for extracting sample names from tabix’ed sample columns, assuming a VCF-naming scheme (such as Sample_1.foo, Sample_1.bar or Sample1_foo, Sample1_bar).
Usage
condenseSampleNames(tbx, stride, trailing = "\\.$")
Arguments
tbx A TabixFile instance to parse
stride How many columns per sample
trailing Trailing character to trim (DEFAULT: "\\.$")
Value
A character vector of sample names (longest common substrings)
Examples
library(Rsamtools)
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz",
package="biscuiteer")
if (length(headerTabix(orig_bed)$header) > 0) {
condenseSampleNames(orig_bed, 2)
}
CpGindex
Description
WARNING: This function will be deprecated in the next Bioconductor release
Usage
CpGindex(bsseq, CGIs = NULL, PRCs = NULL, WCGW = NULL, PMDs = NULL)
Arguments
bsseq A BSseq object
CGIs A GRanges of CpG island regions - HMM CGIs if NULL (DEFAULT: NULL)
PRCs A GRanges of Polycomb targets - H9 state 23 low-meth if NULL (DEFAULT: NULL)
WCGW A GRanges of solo-WCGW sites - PMD WCGWs if NULL (DEFAULT: NULL)
PMDs A GRanges of hypomethylating regions - PMDs if NULL (DEFAULT: NULL)
Details
Measures hypermethylation at PRCs in CGIs or hypomethylation at WCGWs in PMDs
At some point in some conference call somewhere, a collaborator suggested that a simple index of Polycomb repressor complex (PRC) binding site hyper-methylation and CpG-poor "partially methylated domain" (PMD) hypomethylation would be a handy yardstick for both deterministic and stochastic changes associated with proliferation, aging, and cancer. This function provides such an index by compiling measures of aberrant hyper- and hypo-methylation along with the ratio of hyper- to hypo-methylation. (The logic for this is that while the phenomena tend to occur together, there are many exceptions.) The resulting measures can provide a high-level summary of proliferation-, aging-, and/or disease-associated changes in DNA methylation across samples.
The choice of defaults is fairly straightforward: in 2006, three independent groups reported recurrent hypermethylation in cancer at sites marked by both H3K4me3 (activating) and H3K27me3 (repressive) histone marks in embryonic stem cells; these became known as "bivalent" sites. The Roadmap Epigenome project performed ChIP-seq on hundreds of normal primary tissues and cell line results from the ENCODE project to generate a systematic catalog of "chromatin states" alongside dozens of whole-genome bisulfite sequencing experiments in the same tissues. We used both to generate a default atlas of bivalent (Polycomb-associated and transcriptionally-poised) sites from H9 human embryonic stem cells which retain low DNA methylation across normal (non-placental) REMC tissues. In 2018, Zhou and Dinh (Nature Genetics) found isolated (AT)CG(AT) sites, or "solo-WCGW" motifs, in common PMDs as the most universal barometer of proliferation- and aging-associated methylation loss in mammalian cells, so we use their solo-WCGW sites in common PMDs as the default measure for hypomethylation. The resulting CpGindex is a vector of length 3 for each sample: hypermethylation, hypomethylation, and their ratio.
We suggest fitting a model for the composition of bulk samples (tumor/normal, tissue1/tissue2, or whatever is most appropriate) prior to drawing any firm conclusions from the results of this function. For example, a mixture of two-thirds normal tissue and one-third tumor tissue may produce the same or lower degree of hyper/hypomethylation than high-tumor-content cell-free DNA samples from the blood plasma of the same patient. Intuition is simply not a reliable guide in such situations, which occur with some regularity. If orthogonal estimates of purity/composition are available (flow cytometry, ploidy, yield of filtered cfDNA), it is a Very Good Idea to include them.
The default for this function is to use the HMM-defined CpG islands from Hao Wu’s paper (Wu, Caffo, Jaffee, Irizarry & Feinberg, Biostatistics 2010) as generic “hypermethylation” targets inside of “bivalent” (H3K27me3+H3K4me3) sites (identified in H9 embryonic stem cells & unmethylated across normals), and the solo-WCGW sites within common partially methylated domains from Wanding Zhou and Huy Dinh’s paper (Zhou, Dinh, et al, Nat Genetics 2018) as genetic “hypomethylation” targets (as above, obvious caveats about tissue specificity and user-supplied possibilities exist, but the defaults are sane for many purposes, and can be exchanged for whatever targets a user wishes).
The function returns all three components of the “CpG index”, comprised of hyperCGI and hypoPMD (i.e. hyper, hypo, and their ratio). The PMD "score" is a base-coverage-weighted average of losses to solo-WCGW bases within PMDs; the PRC score is similarly base-coverage-weighted but across HMM CGI CpGs, within polycomb repressor complex sites (by default, the subset of state 23 segments in the 25-state, 12-mark ChromImpute model for H9 which have less than 10 percent CpG methylation across the CpG-island-overlapping segment in all normal primary tissues and cells from the Reference Epigenome project). By providing different targets and/or regions, users can customize as needed.
The return value is a CpGindex object, which is really just a DataFrame that knows about the regions at which it was summarized, and reminds the user of this when they implicitly call the show method on it.
Value
A CpGindex (DataFrame w/cols `hyper`, `hypo`, `ratio` + 2 GRs)
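A minimal usage sketch, following the pattern of the other examples in this manual (all targets are left at their defaults; whether the bundled chr11p15 example data overlap those default hg19 targets enough to be informative is an assumption):

```r
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz",
                        package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz",
                        package="biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
idx <- CpGindex(bisc)  # DataFrame with hyper, hypo, and ratio columns
```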
---
**ENSR_subset.hg19**
ENSR_subset data from hg19 genome
**Description**
Subset of ENSEMBL regulatory build regions for hg19 genome
**Usage**
data(ENSR_subset.hg19, package="biscuiteer")
**Details**
Source URL: homo_sapiens.GRCh37.Regulatory_Build.regulatory_features.20161117.gff.gz (regions that overlap Infinium annotation manifests - described at http://zwdzwd.github.io/InfiniumAnnotation - are selected for final GRanges) Source type: GFF Return type: GRanges
---
**ENSR_subset.hg38**
ENSR_subset data from hg38 genome
**Description**
Subset of ENSEMBL regulatory build regions for hg38 genome
**Usage**
```r
data(ENSR_subset.hg38, package="biscuiteer")
```
**Details**
Source URL: homo_sapiens.GRCh38.Regulatory_Build.regulatory_features.20161111.gff.gz (regions that overlap Infinium annotation manifests - described at http://zwdzwd.github.io/InfiniumAnnotation - are selected for final GRanges) Source type: GFF Return type: GRanges
---
**extremality**
*Compute fraction of a Bernoulli variance*
**Description**
Works efficiently on matrices and DelayedMatrix objects. Note that it is possible for "raw" extremality to be greater than 1, so this function does a second pass to correct for this.
**Usage**
```r
extremality(x, raw = FALSE)
```
**Arguments**
- `x`: A rectangular object with proportions in it
- `raw`: Skip the correction pass? (DEFAULT: FALSE)
**Value**
The extremality of each row (if more than one) of the object
**Examples**
```r
x <- rnorm(100, mean=0.5, sd=0.15)
x <- matrix(x, nrow=50, ncol=2)
ext <- extremality(x, raw=TRUE)
```
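The "raw" computation described above can be sketched as the observed row variance divided by the Bernoulli variance p(1 - p) implied by the row mean (an illustrative base-R sketch of the definition, not the package's exact implementation; `rawExtremality` is a hypothetical helper name):

```r
# Sketch: ratio of observed row variance to the maximal Bernoulli
# variance p * (1 - p) at the row mean. Values > 1 are possible here,
# which is why extremality() does a second, correcting pass.
rawExtremality <- function(x) {
  p <- rowMeans(x, na.rm = TRUE)
  v <- apply(x, 1, var, na.rm = TRUE)
  v / (p * (1 - p))
}
```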
fexpit
*Helper function: expanded expit*
**Description**
Helper function: expanded expit
**Usage**
fexpit(x, sqz = 1e-06)
**Arguments**
- **x**: a vector of values between -Inf and +Inf
- **sqz**: the amount by which we 'squoze', default is .000001
**Value**
a vector of values between 0 and 1 inclusive
**Examples**
```r
set.seed(1234)
x <- rnorm(n=1000)
sqz <- 1 / (10**6)
p <- fexpit(x, sqz=sqz)
all( (abs(x - flogit(p)) / x) < sqz )
all( abs(x - flogit(fexpit(x))) < sqz )
```
---
**filterLoci**
*Filter loci with zero coverage*
**Description**
Function potentially used to be a part of dmrseq. Included here to avoid dmrseq failing due to any number of reasons related to lack of coverage.
**Usage**
filterLoci(bsseq, testCovariate)
**Arguments**
bsseq A bsseq object for filtering
testCovariate The name of the pData column dmrseq will test on
**Details**
The code is adapted from the precheck loop of dmrseq::dmrseq
**Value**
A bsseq object ready for dmrseq to use
**See Also**
dmrseq
WGBSeq
RRBSeq
**Examples**
```r
shuf_bed <- system.file("extdata", "MCF7_Cunha_chr11p15_shuffled.bed.gz", package="biscuiteer")
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
shuf_vcf <- system.file("extdata", "MCF7_Cunha_shuffled_header_only.vcf.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc1 <- readBiscuit(BEDfile = shuf_bed, VCFfile = shuf_vcf, merged = FALSE)
bisc2 <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
comb <- unionize(bisc1, bisc2)
filt <- filterLoci(comb, "sampleNames")
```
---
**fixAge**
**Description**
Uses Horvath-type 'epigenetic clock' raw output to project into actual ages
Usage
fixAge(x, adult = 21)
Arguments
x Untransformed or raw prediction(s)
adult Age of adulthood (DEFAULT: 21)
Details
The 'Epigenetic Clock' (Horvath 2012) and similar schemes use a number of CpG loci (or regions, or perhaps CpH loci – it doesn’t really matter what) to estimate the chronological/biological age of samples from DNA methylation with pre-trained feature weights (coefficients) for each region/locus.
All of these types of clocks use a nonlinear output transformation which switches from an exponential growth model for children into a linear model for adults, where adult is an arbitrary number (by default and custom, that number is 21; elsewhere it can sometimes be seen as 20, but all known epi-age transformation functions quietly add 1 to the constant internally).
This function implements the above standard output transformation step.
Value
Transformed prediction(s)
Examples
clock <- getClock(genome="hg38")
score <- clock$gr$score
age <- fixAge(score)
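The two-branch transformation described above can be sketched as follows (an assumed conventional Horvath-style form; `antiTrafo` is a hypothetical name, and the exact constants and placement of the internal "+ 1" inside fixAge are assumptions):

```r
# Sketch of a Horvath-style output transformation: raw predictions below
# zero map through an exponential (child) branch, the rest through a
# linear (adult) branch. 'adult' here already includes the quiet "+ 1"
# noted above, so antiTrafo(0) lands at adult - 1.
antiTrafo <- function(x, adult = 21) {
  ifelse(x < 0, adult * exp(x) - 1, adult * x + adult - 1)
}
```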
fixNAs
Description
Replace NAs with another value
Useful for coercing matrices into the form that bsseq expects for the M matrix.
Usage
fixNAs(x, y = 0, sparseMatrix = FALSE)
Arguments
x The matrix-like object containing NAs to fix
y The value to replace the NAs with (DEFAULT: 0)
sparseMatrix Make the result a Matrix object? (DEFAULT: FALSE)
Value
x with no NAs (possibly a sparse Matrix)
Examples
nom <- c(rep(c(1,4,NA,9,NA,NA,7,NA), 5))
no_nas <- fixNAs(nom)
---
flogit
*Helper function: squeezed logit*
Description
Helper function: squeezed logit
Usage
flogit(p, sqz = 1e-06)
Arguments
p
a vector of values between 0 and 1 inclusive
sqz
the amount by which to 'squeeze', default is .000001
Value
a vector of values between -Inf and +Inf
Examples
set.seed(1234)
p <- runif(n=1000)
summary(p)
sqz <- 1 / (10**6)
x <- flogit(p, sqz=sqz)
summary(x)
all( abs(p - fexpit(x, sqz=sqz)) < sqz )
all( abs(p - fexpit(flogit(p, sqz=sqz), sqz=sqz)) < sqz )
getClock
Retrieves 'epigenetic clock' models
Description
Biscuiteer supports several 'epigenetic clock' models. This function retrieves the various models.
Usage
getClock(
model = c("horvath", "horvathshrunk", "hannum", "skinandblood"),
padding = 15,
genome = c("hg19", "hg38", "GRCh37", "GRCh38"),
useENSR = FALSE,
useHMMI = FALSE
)
Arguments
- **model**: One of "horvath", "horvathshrunk", "hannum", or "skinandblood"
- **padding**: How many base pairs (+/-) to expand a feature's footprint (DEFAULT: 15)
- **genome**: One of "hg19", "GRCh37", "hg38", or "GRCh38" (DEFAULT: "hg19")
- **useENSR**: Substitute ENSEMBL regulatory feature boundaries? (DEFAULT: FALSE)
- **useHMMI**: Substitute HMM-based CpG island boundaries? (DEFAULT: FALSE)
Details
The remapped coordinates for the Horvath (2012) and Hannum (2013) clocks, along with shrunken Horvath (2012) and improved Horvath (2018) models, are provided as part of biscuiteer (visit inst/scripts/clocks.R to find out how) along with some functionality to make them more usable in RRBS/WGBS data of varying coverage along varying genomes. For example, the HMM-based CpG island model introduced by Wu (2010) can be used to assign to within-island features the methylation rate of their associated island, and ENSEMBL regulatory build features (ENSR features, for short) such as CTCF binding sites can have their coordinates substituted for the default padded boundaries of a feature.
The net result of this process is that, while the default settings simply swap in a 30-bp stretch centered on the selected clock's CpG (and/or CpH) loci, add the intercept, and ship out the model, much more flexibility is available to the user. This function provides a single point for tuning of such options in the event that defaults don’t work well for a user.
The precedence of options is as follows:
1. If a feature has neither ENSR nor HMMI IDs, it is padded (only) +/- `padding` bp.
2. If it has an HMMI but not ENSR ID or ENSR==FALSE, the HMM island is used.
3. If a feature has an ENSR ID, and ENSR==TRUE, the ENSR feature is used.
If a feature has both an ENSR ID and an HMMI ID, and both options are TRUE, then the ENSR start and end coordinates will take precedence over its HMMI.
The above shenanigans produce the GRanges object returned as `gr` in a List. The intercept value returned with the model is its fixed (B0) coefficient. The cleanup function returned with the model transforms its raw output.
Value
- a List with elements `model`, `gr`, `intercept`, and `cleanup`
Examples
```r
clock <- getClock(model="horvathshrunk", genome="hg38")
```
---
**getLogitFracMeth**
*Helper function for compartment inference*
---
**Description**
Want an object with nominally Gaussian error for compartment inference, so this function uses 'suitable' (defaults to 3 or more reads in 2 or more samples) measurements. Using Dirichlet smoothing (adding 'k' reads to M and U), these measurements are then turned into lightly moderated, logit-transformed methylated-fraction estimates for compartment calling.
**Usage**
```r
getLogitFracMeth(x, minCov = 3, minSamp = 2, k = 0.1, r = NULL)
getMvals(x, minCov = 3, minSamp = 2, k = 0.1, r = NULL)
```
**Arguments**
- `x` A bsseq object with methylated and total reads
- `minCov` Minimum read coverage for landmarking samples (DEFAULT: 3)
- `minSamp` Minimum landmark samples with at least minCov (DEFAULT: 2)
- `k` Pseudoreads for smoothing (DEFAULT: 0.1)
- `r` Regions to collapse over - if NULL, do it by CpG (DEFAULT: NULL)
**Value**
Smoothed logit(M / Cov) GRanges with coordinates as row names
**Functions**
- `getMvals()`: Alias for `getLogitFracMeth`
**Examples**
```r
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
reg <- GRanges(seqnames = rep("chr11",5),
strand = rep("*",5),
ranges = IRanges(start = c(0,2.8e6,1.17e7,1.38e7,1.69e7),
end= c(2.8e6,1.17e7,1.38e7,1.69e7,2.2e7))
)
frac <- getLogitFracMeth(bisc, minSamp = 1, r = reg)
```
---
**GRCh37.chromArm**
**Description**
Chromosome arm locations for GRCh37 genome
**Usage**
```r
data(GRCh37.chromArm, package="biscuiteer")
```
**Details**
Source URL: https://genome.ucsc.edu/cgi-bin/hgTables (Cytogenic bands were retrieved using the UCSC Table Browser. The output was then exported to a TXT file, where the chromosome arms were combined and formed into a GRanges) Source type: TXT Return type: GRanges
---
**GRCh38.chromArm**
**Description**
Chromosome arm locations for GRCh38 genome
**Usage**
```r
data(GRCh38.chromArm, package="biscuiteer")
```
### grToSeg
**Dump GRanges to segmented data data.frame**
#### Description
Output data.frame can be written to a .seg file if supplied with filename input argument.
#### Usage
```r
grToSeg(gr, filename = NULL, minAbs = NULL)
```
#### Arguments
- `gr` A GRanges or GRangesList to dump to .seg file
- `filename` Where to save the result - unsaved if NULL (DEFAULT: NULL)
- `minAbs` Minimum absolute gain/loss cutoff (DEFAULT: NULL)
#### Value
A data.frame with columns:
- (ID, chrom, loc.start, loc.end, num.mark, seg.mean)
#### See Also
segToGr
#### Examples
```r
clock <- getClock(model="horvathshrunk", genome="hg38")
gr <- clock$gr
df <- grToSeg(gr = gr)
```
H9state23unmeth.hg19
Description
Hypermethylated targets in bivalent histone sites from H9 embryonic stem cells which were unmethylated across normal cells for hg19 genome
Usage
data(H9state23unmeth.hg19, package="biscuiteer")
Details
GRanges was generated by taking the HMM-derived CpG islands (described in ?HMM_CpG_islands.hg19) and overlapping with regions that were unmethylated in normal H9 stem cells and had a ChromHMM state of 2 or 3 (see https://www.nature.com/articles/nmeth.1906#MOESM194 for a description of ChromHMM) Return type: GRanges
H9state23unmeth.hg38
Description
Hypermethylated targets in bivalent histone sites from H9 embryonic stem cells which were unmethylated across normal cells for hg38 genome
Usage
data(H9state23unmeth.hg38, package="biscuiteer")
Details
GRanges was generated by taking the HMM-derived CpG islands (described in ?HMM_CpG_islands.hg38) and overlapping with regions that were unmethylated in normal H9 stem cells and had a ChromHMM state of 2 or 3 (see https://www.nature.com/articles/nmeth.1906#MOESM194 for a description of ChromHMM) Return type: GRanges
hg19.chromArm
Description
Chromosome arm locations for hg19 genome
Usage
data(hg19.chromArm, package="biscuiteer")
Details
Source URL: http://hgdownload.cse.ucsc.edu/goldenPath/hg19/database/cytoBand.txt.gz (Chromosome arms were combined to form the final GRanges) Source type: TXT Return type: GRanges
hg38.chromArm
Description
Chromosome arm locations for hg38 genome
Usage
data(hg38.chromArm, package="biscuiteer")
Details
Source URL: http://hgdownload.cse.ucsc.edu/goldenPath/hg38/database/cytoBand.txt.gz (Chromosome arms were combined to form the final GRanges) Source type: TXT Return type: GRanges
**HMM_CpG_islands.hg19**
**Description**
Hidden Markov Model-derived CpG islands from hg19 genome
**Usage**
```r
data(HMM_CpG_islands.hg19, package="biscuiteer")
```
**Details**
Source URL: https://www.ncbi.nlm.nih.gov/pubmed/20212320 (Hidden Markov Model CpG islands were produced using the method described in this paper. The hg19 genome was used for the CpG island production.) Source type: hg19 genome and procedure described in paper Return type: GRanges
---
**HMM_CpG_islands.hg38**
**Description**
Hidden Markov Model-derived CpG islands from hg38 genome
**Usage**
```r
data(HMM_CpG_islands.hg38, package="biscuiteer")
```
**Details**
Source URL: https://www.ncbi.nlm.nih.gov/pubmed/20212320 (Hidden Markov Model CpG islands were produced using the method described in this paper. The hg38 genome was used for the CpG island production.) Source type: hg38 genome and procedure described in paper Return type: GRanges
**makeBSseq**
*Make an in-memory bsseq object from a biscuit BED*
**Description**
Beware that any reasonably large BED files may not fit into memory!
**Usage**
```
makeBSseq(tbl, params, simplify = FALSE, verbose = FALSE)
```
**Arguments**
- `tbl` A tibble (from read_tsv) or a data.table (from fread)
- `params` Parameters from checkBiscuitBED
- `simplify` Simplify sample names by dropping .foo.bar.hg19? (or similar) (DEFAULT: FALSE)
- `verbose` Print extra statements? (DEFAULT: FALSE)
**Value**
An in-memory bsseq object
**Examples**
```r
library(data.table)
library(R.utils)
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
params <- checkBiscuitBED(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE, how = "data.table")
select <- grep("\\.context", params$colNames, invert=TRUE)
tbl <- fread(gunzip(params$tbx$path, remove = FALSE), sep="\t", sep2="", fill=TRUE, na.strings=".", select=select)
unzippedName <- sub("\\.gz$", "", params$tbx$path)
if (file.exists(unzippedName)) {
file.remove(unzippedName)
}
if (params$hasHeader == FALSE) names(tbl) <- params$colNames[select]
names(tbl) <- sub("^#", "", names(tbl))
tbl <- tbl[rowSums(is.na(tbl)) == 0, ]
bsseq <- makeBSseq(tbl = tbl, params = params)
```
**readBiscuit**
*Read biscuit output into bsseq object*
**Description**
Takes BED-like format with 2 or 3 columns per sample. Unmerged CpG files have 2 columns (beta values and coverage), whereas merged CpG files have 3 columns (beta values, coverage, and context).
**Usage**
```r
readBiscuit(
BEDfile,
VCFfile,
merged,
sampleNames = NULL,
simplify = FALSE,
genome = "hg19",
how = c("data.table", "readr"),
hdf5 = FALSE,
hdf5dir = NULL,
sparse = FALSE,
chunkSize = 1e+06,
chr = NULL,
which = NULL,
verbose = FALSE
)
```
```r
loadBiscuit(
BEDfile,
VCFfile,
merged,
sampleNames = NULL,
simplify = FALSE,
genome = "hg19",
how = c("data.table", "readr"),
hdf5 = FALSE,
hdf5dir = NULL,
sparse = FALSE,
chunkSize = 1e+06,
chr = NULL,
which = NULL,
verbose = FALSE
)
```
**Arguments**
- **BEDfile** - A BED-like file - must be compressed and tabix'ed
- **VCFfile** - A VCF file - must be compressed and tabix'ed. Only the header information is needed.
- **merged** - Is this merged CpG data?
- **sampleNames** - Names of samples - NULL: create names, vector: assign names, data.frame: make pData (DEFAULT: NULL)
- **simplify** - Simplify sample names by dropping .foo.bar.hg19? (or similar) (DEFAULT: FALSE)
- **genome** - Genome assembly the runs were aligned against (DEFAULT: "hg19")
- **how** - How to load data - either data.table or readr (DEFAULT: "data.table")
- **hdf5** - Make the object HDF5-backed - CURRENTLY NOT AVAILABLE (DEFAULT: FALSE)
- **hdf5dir** - Directory to store HDF5 files if 'hdf5' = TRUE (DEFAULT: NULL)
- **sparse** - Use sparse Matrix objects for the data? (DEFAULT: FALSE)
- **chunkSize** - Number of rows before readr reading becomes chunked (DEFAULT: 1e6)
- **chr** - Load a specific chromosome? (DEFAULT: NULL)
- **which** - A GRanges of regions to load - NULL loads them all (DEFAULT: NULL)
- **verbose** - Print extra statements? (DEFAULT: FALSE)
Details
NOTE: Assumes alignment against hg19 (use genome argument to override). NOTE: Requires header from VCF file to detect sample names
Value
A bsseq::BSseq object
Functions
• loadBiscuit(): Alias for readBiscuit
See Also
bsseq
checkBiscuitBED
Examples
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
readEpibed
*Read in and decode the RLE representation of the epibed format out of biscuit epiread*
**Description**
Read in and decode the RLE representation of the epibed format out of biscuit epiread
**Usage**
```r
readEpibed(
epibed,
genome = NULL,
chr = NULL,
start = 1,
end = 2^28,
fragment_level = TRUE
)
```
**Arguments**
- `epibed` The path to the epibed file (must be bgzip and tabix indexed)
- `genome` What genome did this come from (e.g. "hg19") (default: NULL)
- `chr` Which chromosome to retrieve (default: NULL)
- `start` The starting position for a region of interest (default: 1)
- `end` The end position for a region of interest (default: 2^28)
- `fragment_level` Whether to collapse reads to the fragment level (default: TRUE)
**Value**
A GRanges object
**Examples**
```r
epibed.nome <- system.file("extdata", "hct116.nome.epibed.gz", package="biscuiteer")
epibed.bsseq <- system.file("extdata", "hct116.bsseq.epibed.gz", package="biscuiteer")
epibed.nome.gr <- readEpibed(epibed = epibed.nome, genome = "hg19", chr = "chr1")
epibed.bsseq.gr <- readEpibed(epibed = epibed.bsseq, genome = "hg19", chr = "chr1")
```
RRBSeq
(e)RRBS settings for dmrseq
Description
(e)RRBS settings for dmrseq
Usage
RRBSeq(bsseq, testCovariate, cutoff = 0.2, bpSpan = 750, ...)
Arguments
bsseq A bsseq object
testCovariate The pData column to test on
cutoff The minimum CpG-wise difference to use (DEFAULT: 0.2)
bpSpan Span of smoother AND max gap in DMR CpGs (DEFAULT: 750)
... Other arguments to pass along to dmrseq
Value
A GRanges object (same as from dmrseq)
Examples
data(BS.chr21, package="dmrseq")
dat <- BS.chr21
rrbs <- RRBSeq(dat[1:500, ], "Rep", cutoff = 0.05, BPPARAM=BiocParallel::SerialParam())
segToGr
Import a segmentation file into GRanges object
Description
Reverse of grToSeg
Usage
segToGr(seg, genome = "hg19", name = "ID", score = "seg.mean")
Arguments
seg The .seg filename
genome Genome against which segments were annotated (DEFAULT: "hg19")
name .seg file column to use as $name metadata (DEFAULT: "ID")
score .seg file column to use as $score metadata (DEFAULT: "seg.mean")
Value
A GRanges object
See Also
grToSeg
Examples
clock <- getClock(model="horvathshrunk", genome="hg38")
gr <- clock$gr
df <- grToSeg(gr = gr, filename = "test_grToSeg.seg")
segs <- segToGr("test_grToSeg.seg", genome="hg38")
if (file.exists("test_grToSeg.seg")) file.remove("test_grToSeg.seg")
seqinfo.hg19
Description
Seqinfo for hg19 genome
Usage
data(seqinfo.hg19, package="biscuiteer")
Details
Source URL: http://hgdownload.cse.ucsc.edu/goldenPath/hg19/bigZips/hg19.chrom.sizes (The output from this site was downloaded into a TXT file and then loaded into a sorted Seqinfo table)
Source type: TXT Return type: Seqinfo
seqinfo.hg38
Description
Seqinfo for hg38 genome
Usage
data(seqinfo.hg38, package="biscuiteer")
Details
Source URL: http://hgdownload.cse.ucsc.edu/goldenPath/hg38/bigZips/hg38.chrom.sizes (The output from this site was downloaded into a TXT file and then loaded into a sorted Seqinfo table)
Source type: TXT
Return type: Seqinfo
seqinfo.mm10
Description
Seqinfo for mm10 genome
Usage
data(seqinfo.mm10, package="biscuiteer")
Details
Source URL: http://hgdownload.cse.ucsc.edu/goldenPath/mm10/bigZips/mm10.chrom.sizes (The output from this site was downloaded into a TXT file and then loaded into a sorted Seqinfo table)
Source type: TXT
Return type: Seqinfo
simplifySampleNames Simplify bsseq sample names
Description
Tries using the longest common subsequence to figure out what can be dropped. Usually used for VCF columns.
Usage
simplifySampleNames(x)
Arguments
x A SummarizedExperiment-derived object, or a character vector
Value
The input object, but with simplified sample names
Examples
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
bisc <- simplifySampleNames(bisc)
summarizeBsSeqOver Summarize methylation over provided regions
Description
Used for bsseq objects. Mostly a local wrap for getMeth.
Usage
summarizeBsSeqOver(bsseq, segs, dropNA = FALSE, impute = FALSE)
Arguments
bsseq The bsseq object to summarize
segs Regions to summarize over (GRanges object, no GRangesList yet)
dropNA Whether to drop rows if more than half of samples are NA (DEFAULT: FALSE)
impute Whether to impute NAs/NaNs (DEFAULT: FALSE)
Value
A matrix of regional methylation fractions
Examples
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
reg <- GRanges(seqnames = rep("chr11",5),
strand = rep("*",5),
ranges = IRanges(start = c(0,2.8e6,1.17e7,1.38e7,1.69e7),
end= c(2.8e6,1.17e7,1.38e7,1.69e7,2.2e7))
)
summary <- summarizeBsSeqOver(bsseq = bisc, segs = reg, dropNA = TRUE)
tabixRetrieve
Description
Read from tabix-indexed bed file to list objects
Usage
tabixRetrieve(
paths,
chr,
start = 1,
end = 2^28,
sample_names = NULL,
is.epibed = FALSE,
BPPARAM = SerialParam()
)
Arguments
- **paths**: path(s) to the bed files
- **chr**: chromosome name
- **start**: start coordinate of region of interest
- **end**: end coordinate of region of interest
- **sample_names**: sample names, just use paths if not specified
- **is.epibed**: whether the input is epibed format
- **BPPARAM**: how to parallelize
Value
A list object with DNA methylation level and depth
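A minimal usage sketch against the bundled example BED (the region bounds here are illustrative assumptions, chosen to cover chr11p15):

```r
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz",
                        package="biscuiteer")
res <- tabixRetrieve(paths = orig_bed, chr = "chr11",
                     start = 1, end = 2.2e7)
```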
unionize
Description
Wrapper for the combine(bsseq1, ...) method in bsseq
Usage
unionize(bs1, ...)
Arguments
- **bs1**: A bsseq object
- **...**: One or more bsseq objects to combine with bs1
Details
Takes provided bsseq objects, the union of their GRanges, fills out the sites not in the union with 0M/0Cov, and returns the even-sparser bsseq holding all of them.
Value
A larger and more sparse bsseq object
Examples
```r
shuf_bed <- system.file("extdata", "MCF7_Cunha_chr11p15_shuffled.bed.gz", package="biscuiteer")
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
shuf_vcf <- system.file("extdata", "MCF7_Cunha_shuffled_header_only.vcf.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc1 <- readBiscuit(BEDfile = shuf_bed, VCFfile = shuf_vcf, merged = FALSE)
bisc2 <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf, merged = FALSE)
comb <- unionize(bisc1, bisc2)
```
---
WGBSage
**Guess ages using Horvath-style 'clock' models**
**Description**
See Horvath, Genome Biology, 2013 for more information
**Usage**
```r
WGBSage(
bsseq,
model = c("horvath", "horvathshrunk", "hannum", "skinandblood"),
padding = 15,
useENSR = FALSE,
useHMMI = FALSE,
minCovg = 5,
impute = FALSE,
minSamp = 5,
genome = NULL,
dropBad = FALSE,
...
)
```
**Arguments**
- **bsseq**: A bsseq object (must have assays named M and Cov)
- **model**: Which model ("horvath", "horvathshrunk", "hannum", "skinandblood")
- **padding**: How many bases +/- to pad the target CpG by (DEFAULT: 15)
useENSR Use ENSEMBL regulatory region bounds instead of CpGs (DEFAULT: FALSE)
useHMMI Use HMM CpG island boundaries instead of padded CpGs (DEFAULT: FALSE)
minCovg Minimum regional read coverage desired to estimate 5mC (DEFAULT: 5)
impute Use k-NN imputation to fill in low-coverage regions? (DEFAULT: FALSE)
minSamp Minimum number of non-NA samples to perform imputation (DEFAULT: 5)
genome Genome to use as reference, if no genome(bsseq) is set (DEFAULT: NULL)
dropBad Drop rows/cols with > half missing pre-imputation? (DEFAULT: FALSE)
...
Arguments to be passed to impute.knn, such as rng.seed
Details
Note: the accuracy of the prediction will increase or decrease depending on how various hyperparameters are set by the user. This is NOT a hands-off procedure, and the defaults are only a starting point for exploration. It will not be uncommon to tune padding, minCovg, and minSamp for each WGBS or RRBS experiment (and the latter may be impacted by whether dupes are removed prior to importing data). Consider yourself forewarned. In the near future we may add support for arbitrary region-coefficient inputs and result transformation functions, which of course will just make the problems worse.
Also, please cite the appropriate papers for the Epigenetic Clock(s) you use:
For the 'horvath' or 'horvathshrunk' clocks, cite Horvath, Genome Biology 2013. For the 'hannum' clock, cite Hannum et al, Molecular Cell 2013. For the 'skinandblood' clock, cite Horvath et al, Aging 2018.
Last but not least, keep track of the parameters YOU used for YOUR estimates. The call element in the returned list of results is for this exact purpose. If you need to recover the GRanges object used to average (or impute) DNAme values for the model, try granges(result$methcoefs) on a result. The methylation fraction and coefficients for each region can be found in the GRanges object, result$methcoefs, where each sample has a corresponding column with the methylation fraction and the coefficients have their own column titled "coefs". Additionally, the age estimates are stored in result$age (named, in case dropBad == TRUE).
Value
A list with call, methylation estimates, coefs, age estimates
Examples
```r
shuf_bed <- system.file("extdata", "MCF7_Cunha_chr11p15_shuffled.bed.gz", package="biscuiteer")
orig_bed <- system.file("extdata", "MCF7_Cunha_chr11p15.bed.gz", package="biscuiteer")
shuf_vcf <- system.file("extdata", "MCF7_Cunha_shuffled_header_only.vcf.gz", package="biscuiteer")
orig_vcf <- system.file("extdata", "MCF7_Cunha_header_only.vcf.gz", package="biscuiteer")
bisc1 <- readBiscuit(BEDfile = shuf_bed, VCFfile = shuf_vcf,
                     merged = FALSE)
bisc2 <- readBiscuit(BEDfile = orig_bed, VCFfile = orig_vcf,
                     merged = FALSE)
comb <- unionize(bisc1, bisc2)
ages <- WGBSage(comb, "horvath")
```
---
**WGBSeq**
**Wrapper for WGBS settings for dmrseq**
### Description
Wrapper for WGBS settings for dmrseq
### Usage
```r
WGBSeq(bsseq, testCovariate, bpSpan = 1000, ...)
```
### Arguments
- `bsseq` A bsseq object
- `testCovariate` The pData column to test on
- `bpSpan` Span of smoother AND 2x max gap in DMR CpGs (DEFAULT: 1000)
- `...` Other arguments to pass along to dmrseq
### Value
A GRanges object (same as from dmrseq)
### Examples
```r
data(BS.chr21, package="dmrseq")
dat <- BS.chr21
wgbs <- WGBSeq(dat[1:500, ], "CellType", cutoff = 0.05,
BPPARAM=BiocParallel::SerialParam())
```
Index
* Biscuit
biscuiteer-package, 3
* DNA Methylation
biscuiteer-package, 3
* Data Import
biscuiteer-package, 3
* data
clocks, 12
ENSR_subset.hg19, 14
ENSR_subset.hg38, 15
GRCh37.chromArm, 22
GRCh38.chromArm, 22
H9state23unmeth.hg19, 24
H9state23unmeth.hg38, 24
hg19.chromArm, 25
hg38.chromArm, 25
HMM_CpG_islands.hg19, 26
HMM_CpG_islands.hg38, 26
seqinfo.hg19, 32
seqinfo.hg38, 33
seqinfo.mm10, 33
_PACKAGE (biscuiteer-package), 3
atRegions, 4
binCoverage, 5
biscuiteer (biscuiteer-package), 3
biscuiteer-methods, 6
biscuiteer-package, 3
biscuitMetadata, 7
BSseq-methods (biscuiteer-methods), 6
byChromArm, 8
byExtremality, 9
checkBiscuitBED, 10
clocks, 12
condenseSampleNames, 12
coverage (biscuiteer-methods), 6
CpGindex, 13
ENSR_subset.hg19, 14
ENSR_subset.hg38, 15
extremality, 15
fexpit, 16
filterLoci, 16
fixAge, 17
fixed, BSseq-method
(biscuiteer-methods), 6
fixNAs, 18
flogit, 19
geno, BSseq, ANY-method
(biscuiteer-methods), 6
getBiscuitMetadata (biscuitMetadata), 7
getClock, 20
getLogitFracMeth, 21
getMvals (getLogitFracMeth), 21
GRCh37.chromArm, 22
GRCh38.chromArm, 22
grToSeg, 23
H9state23unmeth.hg19, 24
H9state23unmeth.hg38, 24
header (biscuiteer-methods), 6
header, BSseq-method
(biscuiteer-methods), 6
hg19.chromArm, 25
hg38.chromArm, 25
HMM_CpG_islands.hg19, 26
HMM_CpG_islands.hg38, 26
info, BSseq-method (biscuiteer-methods), 6
loadBiscuit (readBiscuit), 28
makeBSseq, 27
meta, BSseq-method (biscuiteer-methods), 6
readBiscuit, 28
readEpibed, 30
reference (biscuiteer-methods), 6
RRBSeq, 31
samples, BSseq-method
(biscuiteer-methods), 6
segToGr, 31
seqinfo.hg19, 32
seqinfo.hg38, 33
seqinfo.mm10, 33
simplifySampleNames, 34
summarizeBsSeqOver, 34
tabixRetrieve, 35
unionize, 36
WGBSage, 37
WGBSeq, 39
---
Structure of Management Information
for version 2 of the
Simple Network Management Protocol (SNMPv2)
Status of this Memo
This RFC specifies an IAB standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "IAB Official Protocol Standards" for the standardization state and status of this protocol. Distribution of this memo is unlimited.
Table of Contents
1 Introduction ........................................... 2
1.1 A Note on Terminology ................................ 3
2 Definitions ............................................ 4
3 Information Modules .................................... 13
3.1 Macro Invocation ..................................... 13
3.1.1 Textual Clauses .................................... 14
3.2 IMPORTing Symbols .................................... 14
4 Naming Hierarchy ....................................... 16
5 Mapping of the MODULE-IDENTITY macro ................... 17
5.1 Mapping of the LAST-UPDATED clause ................... 17
5.2 Mapping of the ORGANIZATION clause ................... 17
5.3 Mapping of the CONTACT-INFO clause ................... 17
5.4 Mapping of the DESCRIPTION clause .................... 17
5.5 Mapping of the REVISION clause ....................... 17
5.6 Mapping of the DESCRIPTION clause .................... 18
5.7 Mapping of the MODULE-IDENTITY value ................. 18
5.8 Usage Example ........................................ 19
6.1 Mapping of the STATUS clause ......................... 20
6.2 Mapping of the DESCRIPTION clause .................... 20
6.3 Mapping of the REFERENCE clause ...................... 20
6.4 Mapping of the OBJECT-IDENTITY value ................. 20
6.5 Usage Example ........................................ 21
7.1 Mapping of the SYNTAX clause ......................... 22
7.1.1 Integer32 and INTEGER .............................. 22
7.1.2 OCTET STRING ....................................... 22
7.1.3 OBJECT IDENTIFIER .................................. 23
7.1.4 BIT STRING ......................................... 23
7.1.5 IpAddress .......................................... 23
7.1.6 Counter32 .......................................... 24
7.1.7 Gauge32 ............................................ 24
7.1.8 TimeTicks .......................................... 24
7.1.9 Opaque ............................................. 25
7.1.10 NsapAddress ....................................... 25
7.1.11 Counter64 ......................................... 26
7.1.12 UInteger32 ........................................ 26
7.2 Mapping of the UNITS clause .......................... 26
7.3 Mapping of the MAX-ACCESS clause ..................... 27
7.4 Mapping of the STATUS clause ......................... 27
7.5 Mapping of the DESCRIPTION clause .................... 27
7.6 Mapping of the REFERENCE clause ...................... 28
7.7 Mapping of the INDEX clause .......................... 28
7.7.1 Creation and Deletion of Conceptual Rows ........... 30
7.8 Mapping of the AUGMENTS clause ....................... 31
7.8.1 Relation between INDEX and AUGMENTS clauses ........ 31
7.9 Mapping of the DEFVAL clause ......................... 32
7.10 Mapping of the OBJECT-TYPE value .................... 33
7.11 Usage Example ....................................... 35
8.1 Mapping of the OBJECTS clause ........................ 37
8.2 Mapping of the STATUS clause ......................... 37
8.3 Mapping of the DESCRIPTION clause .................... 37
8.4 Mapping of the REFERENCE clause ...................... 37
8.5 Mapping of the NOTIFICATION-TYPE value ............... 38
8.6 Usage Example ........................................ 39
9 Refined Syntax ......................................... 40
10.1 Object Assignments .................................. 41
10.2 Object Definitions .................................. 41
10.3 Notification Definitions ............................ 42
11 Appendix: de-OSIfying a MIB module ................... 43
11.1 Managed Object Mapping ............................ 43
11.1.1 Mapping to the SYNTAX clause .................... 44
11.1.2 Mapping to the UNITS clause .................... 45
11.1.3 Mapping to the MAX-ACCESS clause ............... 45
11.1.4 Mapping to the STATUS clause ................... 45
11.1.5 Mapping to the DESCRIPTION clause ............... 45
11.1.6 Mapping to the REFERENCE clause ................. 45
11.1.7 Mapping to the INDEX clause ..................... 45
11.1.8 Mapping to the DEFVAL clause ................... 45
11.2 Action Mapping .................................... 46
11.2.1 Mapping to the SYNTAX clause .................... 46
11.2.2 Mapping to the MAX-ACCESS clause ............... 46
11.2.3 Mapping to the STATUS clause ................... 46
11.2.4 Mapping to the DESCRIPTION clause ............... 46
11.2.5 Mapping to the REFERENCE clause ................. 46
11.3 Event Mapping .................................... 46
11.3.1 Mapping to the STATUS clause ................... 47
11.3.2 Mapping to the DESCRIPTION clause ............... 47
11.3.3 Mapping to the REFERENCE clause ................. 47
12 Acknowledgements .................................. 48
13 References .......................................... 52
14 Security Considerations ............................. 54
15 Authors’ Addresses ................................. 54
1. Introduction
A network management system contains several (potentially many) nodes, each with a processing entity, termed an agent, which has access to management instrumentation; at least one management station; and, a management protocol, used to convey management information between the agents and management stations. Operations of the protocol are carried out under an administrative framework which defines both authentication and authorization policies.
Network management stations execute management applications which monitor and control network elements. Network elements are devices such as hosts, routers, terminal servers, etc., which are monitored and controlled through access to their management information.
Management information is viewed as a collection of managed objects, residing in a virtual information store, termed the Management Information Base (MIB). Collections of related objects are defined in MIB modules. These modules are written using a subset of OSI’s Abstract Syntax Notation One (ASN.1)\[1\]. It is the purpose of this document, the Structure of Management Information (SMI), to define that subset.
The SMI is divided into three parts: module definitions, object definitions, and, trap definitions.
(1) Module definitions are used when describing information modules. An ASN.1 macro, MODULE-IDENTITY, is used to concisely convey the semantics of an information module.
(2) Object definitions are used when describing managed objects. An ASN.1 macro, OBJECT-TYPE, is used to concisely convey the syntax and semantics of a managed object.
(3) Notification definitions are used when describing unsolicited transmissions of management information. An ASN.1 macro, NOTIFICATION-TYPE, is used to concisely convey the syntax and semantics of a notification.
1.1. A Note on Terminology
For the purpose of exposition, the original Internet-standard Network Management Framework, as described in RFCs 1155, 1157, and 1212, is termed the SNMP version 1 framework (SNMPv1). The current framework is termed the SNMP version 2 framework (SNMPv2).
2. Definitions
SNMPv2-SMI DEFINITIONS ::= BEGIN
-- the path to the root
internet OBJECT IDENTIFIER ::= { iso 3 6 1 }
directory OBJECT IDENTIFIER ::= { internet 1 }
mgmt OBJECT IDENTIFIER ::= { internet 2 }
experimental OBJECT IDENTIFIER ::= { internet 3 }
private OBJECT IDENTIFIER ::= { internet 4 }
enterprises OBJECT IDENTIFIER ::= { private 1 }
security OBJECT IDENTIFIER ::= { internet 5 }
snmpV2 OBJECT IDENTIFIER ::= { internet 6 }
-- transport domains
snmpDomains OBJECT IDENTIFIER ::= { snmpV2 1 }
-- transport proxies
snmpProxys OBJECT IDENTIFIER ::= { snmpV2 2 }
-- module identities
snmpModules OBJECT IDENTIFIER ::= { snmpV2 3 }
-- definitions for information modules
MODULE-IDENTITY MACRO ::= BEGIN
TYPE NOTATION ::= "LAST-UPDATED" value(Update UTCTime)
"ORGANIZATION" Text
"CONTACT-INFO" Text
"DESCRIPTION" Text
RevisionPart
VALUE NOTATION ::= value(VALUE OBJECT IDENTIFIER)
RevisionPart ::= Revisions | empty
Revisions ::= Revision | Revisions Revision
Revision ::= "REVISION" value(Update UTCTime)
"DESCRIPTION" Text
-- uses the NVT ASCII character set
Text ::= """" string """"
END
OBJECT-IDENTITY MACRO ::=
BEGIN
TYPE NOTATION ::=
"STATUS" Status
"DESCRIPTION" Text
ReferPart
VALUE NOTATION ::=
value(VALUE OBJECT IDENTIFIER)
Status ::=
"current"
| "obsolete"
ReferPart ::=
"REFERENCE" Text
| empty
Text ::= """" string """"
END
-- names of objects
ObjectName ::= OBJECT IDENTIFIER
-- syntax of objects
ObjectSyntax ::= CHOICE {
simple
SimpleSyntax,
-- note that SEQUENCEs for conceptual tables and
-- rows are not mentioned here...
application-wide
ApplicationSyntax
}
-- built-in ASN.1 types
SimpleSyntax ::= CHOICE {
-- INTEGERs with a more restrictive range
-- may also be used
integer-value
INTEGER (-2147483648..2147483647),
string-value
OCTET STRING,
objectID-value
OBJECT IDENTIFIER,
-- only the enumerated form is allowed
bit-value
BIT STRING
}
-- indistinguishable from INTEGER, but never needs more than
-- 32-bits for a two's complement representation
Integer32 ::= [UNIVERSAL 2]
IMPLICIT INTEGER (-2147483648..2147483647)
-- application-wide types
ApplicationSyntax ::= CHOICE {
ipAddress-value
IpAddress,
counter-value
Counter32,
gauge-value
Gauge32,
timeticks-value
TimeTicks,
arbitrary-value
Opaque,
nsapAddress-value
NsapAddress,
big-counter-value
Counter64,
unsigned-integer-value
UInteger32
}
-- in network-byte order
-- (this is a tagged type for historical reasons)
IpAddress ::= [APPLICATION 0]
IMPLICIT OCTET STRING (SIZE (4))
-- this wraps
Counter32 ::=
[APPLICATION 1]
IMPLICIT INTEGER (0..4294967295)
-- this doesn’t wrap
Gauge32 ::=
[APPLICATION 2]
IMPLICIT INTEGER (0..4294967295)
-- hundredths of seconds since an epoch
TimeTicks ::=
[APPLICATION 3]
IMPLICIT INTEGER (0..4294967295)
-- for backward-compatibility only
Opaque ::=
[APPLICATION 4]
IMPLICIT OCTET STRING
-- for OSI NSAP addresses
-- (this is a tagged type for historical reasons)
NsapAddress ::=
[APPLICATION 5]
IMPLICIT OCTET STRING (SIZE (1 | 4..21))
-- for counters that wrap in less than one hour with only 32 bits
Counter64 ::=
[APPLICATION 6]
IMPLICIT INTEGER (0..18446744073709551615)
-- an unsigned 32-bit quantity
UInteger32 ::=
[APPLICATION 7]
IMPLICIT INTEGER (0..4294967295)
-- definition for objects
OBJECT-TYPE MACRO ::= BEGIN
TYPE NOTATION ::= "SYNTAX" type(Syntax)
UnitsPart
"MAX-ACCESS" Access
"STATUS" Status
"DESCRIPTION" Text
ReferPart
IndexPart
DefValPart
VALUE NOTATION ::= value(VALUE ObjectName)
UnitsPart ::= "UNITS" Text
| empty
Access ::= "not-accessible"
| "read-only"
| "read-write"
| "read-create"
Status ::= "current"
| "deprecated"
| "obsolete"
ReferPart ::= "REFERENCE" Text
| empty
IndexPart ::= "INDEX" "{" IndexTypes "}"
| "AUGMENTS" "{" Entry "}"
| empty
IndexTypes ::= IndexType
| IndexTypes "," IndexType
IndexType ::= "IMPLIED" Index
| Index
Index ::= -- use the SYNTAX value of the
-- correspondent OBJECT-TYPE invocation
value(Indexobject ObjectName)
Entry ::= -- use the INDEX value of the
-- correspondent OBJECT-TYPE invocation
value(Entryobject ObjectName)
DefValPart ::= "DEFVAL" "{" value(Defval Syntax) "}"
| empty
-- uses the NVT ASCII character set
Text ::= """" string """"
END
-- definitions for notifications
NOTIFICATION-TYPE MACRO ::=
BEGIN
TYPE NOTATION ::=
ObjectsPart
"STATUS" Status
"DESCRIPTION" Text
ReferPart
VALUE NOTATION ::=
value(VALUE OBJECT IDENTIFIER)
ObjectsPart ::=
"OBJECTS" "{" Objects "}"
| empty
Objects ::=
Object
| Objects "," Object
Object ::=
value(Name ObjectName)
Status ::=
"current"
| "deprecated"
| "obsolete"
ReferPart ::=
"REFERENCE" Text
| empty
-- uses the NVT ASCII character set
Text ::= """" string """"
END
3. Information Modules
An "information module" is an ASN.1 module defining information relating to network management.
The SMI describes how to use a subset of ASN.1 to define an information module. Further, additional restrictions are placed on "standard" information modules. It is strongly recommended that "enterprise-specific" information modules also adhere to these restrictions.
Typically, there are three kinds of information modules:
1. MIB modules, which contain definitions of inter-related managed objects, make use of the OBJECT-TYPE and NOTIFICATION-TYPE macros;
2. compliance statements for MIB modules, which make use of the MODULE-COMPLIANCE and OBJECT-GROUP macros [2]; and,
3. capability statements for agent implementations which make use of the AGENT-CAPABILITIES macros [2].
This classification scheme does not imply a rigid taxonomy. For example, a "standard" information module might include definitions of managed objects and a compliance statement. Similarly, an "enterprise-specific" information module might include definitions of managed objects and a capability statement. Of course, a "standard" information module may not contain capability statements.
All information modules start with exactly one invocation of the MODULE-IDENTITY macro, which provides contact and revision history. This invocation must appear immediately after any IMPORTs or EXPORTs statements.
3.1. Macro Invocation
Within an information module, each macro invocation appears as:
    <descriptor> <macro> <clauses> ::= <value>

where <descriptor> corresponds to an ASN.1 identifier, <macro> names the macro being invoked, and <clauses> and <value> depend on the definition of the macro.
An ASN.1 identifier consists of one or more letters, digits, or hyphens. The initial character must be a lower-case letter, and the final character may not be a hyphen. Further, a hyphen may not be immediately followed by another hyphen.
For all descriptors appearing in an information module, the descriptor shall be unique and mnemonic, and shall not exceed 64 characters in length. This promotes a common language for humans to use when discussing the information module and also facilitates simple table mappings for user-interfaces.
The set of descriptors defined in all "standard" information modules shall be unique. Further, within any information module, the hyphen is not allowed as a character in any descriptor.
Finally, by convention, if the descriptor refers to an object with a SYNTAX clause value of either Counter32 or Counter64, then the descriptor used for the object should denote plurality.
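The identifier and descriptor rules above lend themselves to a mechanical check. The following Python sketch encodes them directly; the function names are mine and not part of any SNMP toolkit:

```python
import re

# One or more letters/digits/hyphens; initial char a lower-case letter;
# no trailing hyphen; no "--" (enforced by requiring each hyphen to be
# immediately followed by an alphanumeric character).
ASN1_IDENT = re.compile(r"^[a-z](?:-?[A-Za-z0-9])*$")

def is_asn1_identifier(s: str) -> bool:
    """Check the general ASN.1 identifier rules from section 3.1."""
    return bool(ASN1_IDENT.match(s))

def is_valid_descriptor(s: str) -> bool:
    """Descriptors additionally must not exceed 64 characters and,
    within an information module, may not contain a hyphen at all."""
    return is_asn1_identifier(s) and len(s) <= 64 and "-" not in s
```

For example, `mib-2` is a legal ASN.1 identifier but not a legal descriptor, since descriptors exclude the hyphen entirely.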
3.1.1. Textual Clauses
Some clauses in a macro invocation may take a textual value (e.g., the DESCRIPTION clause). Note that, in order to conform to the ASN.1 syntax, the entire value of these clauses must be enclosed in double quotation marks, and therefore cannot itself contain double quotation marks, although the value may be multi-line.
3.2. IMPORTing Symbols
To reference an external object, the IMPORTS statement must be used to identify both the descriptor and the module defining the descriptor.
Note that when symbols from "enterprise-specific" information modules are referenced (e.g., a descriptor), there is the possibility of collision. As such, if different objects with the same descriptor are IMPORTed, then this ambiguity is resolved by prefixing the descriptor with the name of the information module and a dot ("."), i.e.,
    "module.descriptor"
(All descriptors must be unique within any information module.)
Of course, this notation can be used even when there is no collision when IMPORTing symbols.
Finally, the IMPORTS statement may not be used to import an ASN.1 named type which corresponds to either the SEQUENCE or SEQUENCE OF type.
4. Naming Hierarchy
The root of the subtree administered by the Internet Assigned Numbers Authority (IANA) for the Internet is:
internet OBJECT IDENTIFIER ::= { iso 3 6 1 }
That is, the Internet subtree of OBJECT IDENTIFIERS starts with the prefix:
1.3.6.1.
Several branches underneath this subtree are used for network management:
mgmt OBJECT IDENTIFIER ::= { internet 2 }
experimental OBJECT IDENTIFIER ::= { internet 3 }
private OBJECT IDENTIFIER ::= { internet 4 }
enterprises OBJECT IDENTIFIER ::= { private 1 }
However, the SMI does not prohibit the definition of objects in other portions of the object tree.
The mgmt(2) subtree is used to identify "standard" objects.
The experimental(3) subtree is used to identify objects being designed by working groups of the IETF. If an information module produced by a working group becomes a "standard" information module, then at the very beginning of its entry onto the Internet standards track, the objects are moved under the mgmt(2) subtree.
The private(4) subtree is used to identify objects defined unilaterally. The enterprises(1) subtree beneath private is used, among other things, to permit providers of networking subsystems to register models of their products.
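The assignments above form a tree of named arcs that resolves to the dotted-number prefixes shown. A small Python sketch of that resolution (the intermediate arcs `org(3)` and `dod(6)` are assumed here from the standard OID tree; the text itself only gives `{ iso 3 6 1 }` directly):

```python
# name -> (parent name, arc number); None marks the root arc.
TREE = {
    "iso": (None, 1),
    "org": ("iso", 3),        # assumed intermediate arc
    "dod": ("org", 6),        # assumed intermediate arc
    "internet": ("dod", 1),   # i.e. { iso 3 6 1 } = 1.3.6.1
    "mgmt": ("internet", 2),
    "experimental": ("internet", 3),
    "private": ("internet", 4),
    "enterprises": ("private", 1),
}

def oid(name: str) -> str:
    """Walk from a named node to the root, collecting arc numbers."""
    parts = []
    while name is not None:
        name, arc = TREE[name]
        parts.append(arc)
    return ".".join(str(p) for p in reversed(parts))
```

So `oid("enterprises")` yields `1.3.6.1.4.1`, the familiar prefix under which vendors register their products.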
5. Mapping of the MODULE-IDENTITY macro
The MODULE-IDENTITY macro is used to provide contact and revision history for each information module. It must appear exactly once in every information module. It should be noted that the expansion of the MODULE-IDENTITY macro is something which conceptually happens during implementation and not during run-time.
5.1. Mapping of the LAST-UPDATED clause
The LAST-UPDATED clause, which must be present, contains the date and time that this information module was last edited.
5.2. Mapping of the ORGANIZATION clause
The ORGANIZATION clause, which must be present, contains a textual description of the organization under whose auspices this information module was developed.
5.3. Mapping of the CONTACT-INFO clause
The CONTACT-INFO clause, which must be present, contains the name, postal address, telephone number, and electronic mail address of the person to whom technical queries concerning this information module should be sent.
5.4. Mapping of the DESCRIPTION clause
The DESCRIPTION clause, which must be present, contains a high-level textual description of the contents of this information module.
5.5. Mapping of the REVISION clause
The REVISION clause, which need not be present, is repeatedly used to describe the revisions made to this information module, in reverse chronological order. Each instance of this clause contains the date and time of the revision.
5.6. Mapping of the DESCRIPTION clause
The DESCRIPTION clause, which must be present for each REVISION clause, contains a high-level textual description of the revision identified in that REVISION clause.
5.7. Mapping of the MODULE-IDENTITY value
The value of an invocation of the MODULE-IDENTITY macro is an OBJECT IDENTIFIER. As such, this value may be authoritatively used when referring to the information module containing the invocation.
5.8. Usage Example
Consider how a skeletal MIB module might be constructed: e.g.,
FIZBIN-MIB DEFINITIONS ::= BEGIN
IMPORTS
MODULE-IDENTITY, OBJECT-TYPE, experimental
FROM SNMPv2-SMI;
fizbin MODULE-IDENTITY
LAST-UPDATED "9210070433Z"
ORGANIZATION "IETF SNMPv2 Working Group"
CONTACT-INFO
" Marshall T. Rose
Postal: Dover Beach Consulting, Inc.
420 Whisman Court
Mountain View, CA 94043-2186
US
Tel: +1 415 968 1052
Fax: +1 415 968 2510
E-mail: mrose@dbc.mtview.ca.us"
DESCRIPTION
"The MIB module for entities implementing the xxxx
protocol."
REVISION "9210070433Z"
DESCRIPTION
"Initial version of this MIB module."
-- contact IANA for actual number
::= { experimental xx }
END
6. Mapping of the OBJECT-IDENTITY macro
The OBJECT-IDENTITY macro is used to define information about an OBJECT IDENTIFIER assignment. It should be noted that the expansion of the OBJECT-IDENTITY macro is something which conceptually happens during implementation and not during run-time.
6.1. Mapping of the STATUS clause
The STATUS clause, which must be present, indicates whether this definition is current or historic.
The values "current" and "obsolete" are self-explanatory.
6.2. Mapping of the DESCRIPTION clause
The DESCRIPTION clause, which must be present, contains a textual description of the object assignment.
6.3. Mapping of the REFERENCE clause
The REFERENCE clause, which need not be present, contains a textual cross-reference to an object assignment defined in some other information module.
6.4. Mapping of the OBJECT-IDENTITY value
The value of an invocation of the OBJECT-IDENTITY macro is an OBJECT IDENTIFIER.
6.5. Usage Example
Consider how an OBJECT IDENTIFIER assignment might be made:
e.g.,
fizbin69 OBJECT-IDENTITY
STATUS current
DESCRIPTION
"The authoritative identity of the Fizbin 69
chipset."
::= { fizbinChipSets 1 }
7. Mapping of the OBJECT-TYPE macro
The OBJECT-TYPE macro is used to define a managed object. It should be noted that the expansion of the OBJECT-TYPE macro is something which conceptually happens during implementation and not during run-time.
7.1. Mapping of the SYNTAX clause
The SYNTAX clause, which must be present, defines the abstract data structure corresponding to that object. The data structure must be one of the alternatives defined in the ObjectSyntax CHOICE.
Full ASN.1 sub-typing is allowed, as appropriate to the underlying ASN.1 type, primarily as an aid to implementors in understanding the meaning of the object. Any such restriction on size, range, enumerations or repertoire specified in this clause represents the maximal level of support which makes "protocol sense". Of course, sub-typing is not allowed for the Counter32 or Counter64 types, but is allowed for the Gauge32 type.
The semantics of ObjectSyntax are now described.
7.1.1. Integer32 and INTEGER
The Integer32 type represents integer-valued information between $-2^{31}$ and $2^{31}-1$ inclusive ($-2147483648$ to $2147483647$ decimal). This type is indistinguishable from the INTEGER type.
The INTEGER type may also be used to represent integer-valued information, if it contains named-number enumerations, or if it is sub-typed to be more constrained than the Integer32 type. In the former case, only those named-numbers so enumerated may be present as a value. Note that although it is recommended that enumerated values start at 1 and be numbered contiguously, any valid value for Integer32 is allowed for an enumerated value and, further, enumerated values needn’t be contiguously assigned.
Finally, the hyphen character is not allowed as a part of the label name for any named-number enumeration.
7.1.2. OCTET STRING
The OCTET STRING type represents arbitrary binary or textual data. Although there is no SMI-specified size limitation for this type, MIB designers should realize that there may be implementation and interoperability limitations for sizes in excess of 255 octets.
7.1.3. OBJECT IDENTIFIER
The OBJECT IDENTIFIER type represents administratively assigned names. Any instance of this type may have at most 128 sub-identifiers. Further, each sub-identifier must not exceed the value $2^{32}-1$ (4294967295 decimal).
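These two constraints (at most 128 sub-identifiers, each at most $2^{32}-1$) are easy to express as a validity check. A hypothetical helper, not taken from any SNMP library:

```python
def is_valid_oid(sub_ids) -> bool:
    """Check the SMI limits on an OBJECT IDENTIFIER value:
    at most 128 sub-identifiers, each in 0 .. 2**32 - 1."""
    return (0 < len(sub_ids) <= 128
            and all(0 <= s <= 2**32 - 1 for s in sub_ids))
```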
7.1.4. BIT STRING
The BIT STRING type represents an enumeration of named bits. This collection is assigned non-negative, contiguous values, starting at zero. Only those named-bits so enumerated may be present in a value.
A requirement on "standard" MIB modules is that the hyphen character is not allowed as a part of the label name for any named-bit enumeration.
7.1.5. IpAddress
The IpAddress type represents a 32-bit internet address. It is represented as an OCTET STRING of length 4, in network byte-order.
Note that the IpAddress type is a tagged type for historical reasons. Network addresses should be represented using an invocation of the TEXTUAL-CONVENTION macro [3].
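Because an IpAddress is just a 4-octet OCTET STRING in network byte order, the conversion to and from dotted-decimal form is mechanical. A sketch using the Python standard library (helper names are mine):

```python
import socket

def encode_ipaddress(dotted: str) -> bytes:
    """Dotted-decimal -> 4 octets in network byte order."""
    return socket.inet_aton(dotted)

def decode_ipaddress(octets: bytes) -> str:
    """4 octets in network byte order -> dotted-decimal."""
    if len(octets) != 4:
        raise ValueError("IpAddress must be exactly 4 octets")
    return socket.inet_ntoa(octets)
```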
7.1.6. Counter32
The Counter32 type represents a non-negative integer which monotonically increases until it reaches a maximum value of $2^{32}-1$ (4294967295 decimal), when it wraps around and starts increasing again from zero.
Counters have no defined "initial" value, and thus, a single value of a Counter has (in general) no information content. Discontinuities in the monotonically increasing value normally occur at re-initialization of the management system, and at other times as specified in the description of an object-type using this ASN.1 type. If such other times can occur, for example, the creation of an object instance at times other than re-initialization, then a corresponding object should be defined with a SYNTAX clause value of TimeStamp (a textual convention defined in [3]) indicating the time of the last discontinuity.
The value of the MAX-ACCESS clause for objects with a SYNTAX clause value of Counter32 is always "read-only".
A DEFVAL clause is not allowed for objects with a SYNTAX clause value of Counter32.
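Since a single Counter sample carries no information, management applications work with differences between samples, taking wrap-around into account. A sketch of that arithmetic (it works for Counter64 as well if `bits=64` is passed; the caveat in the comment is an assumption about polling frequency, not something the SMI guarantees):

```python
def counter_delta(prev: int, curr: int, bits: int = 32) -> int:
    """Difference between two samples of a wrapping counter
    (Counter32 by default; pass bits=64 for Counter64).
    Assumes at most one wrap occurred between the samples --
    multiple wraps between polls cannot be detected."""
    return (curr - prev) % (2 ** bits)
```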
7.1.7. Gauge32
The Gauge32 type represents a non-negative integer, which may increase or decrease, but shall never exceed a maximum value. The maximum value cannot be greater than $2^{32}-1$ (4294967295 decimal). The value of a Gauge has its maximum value whenever the information being modeled is greater than or equal to that maximum value; if the information being modeled subsequently decreases below the maximum value, the Gauge also decreases.
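In other words, a Gauge latches at its maximum rather than wrapping. A literal reading of that behavior as a clamp (a sketch; the function name is mine):

```python
GAUGE32_MAX = 2**32 - 1

def gauge32(modeled_value: int, maximum: int = GAUGE32_MAX) -> int:
    """Latch at the maximum while the modeled value meets or exceeds it;
    track the modeled value again once it falls below."""
    if modeled_value < 0:
        raise ValueError("Gauge32 is non-negative")
    return min(modeled_value, maximum)
```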
7.1.8. TimeTicks
The TimeTicks type represents a non-negative integer which represents the time, modulo $2^{32}$ (4294967296 decimal), in hundredths of a second between two epochs. When objects are defined which use this ASN.1 type, the description of the object identifies both of the reference epochs.
For example, [3] defines the TimeStamp textual convention which is based on the TimeTicks type. With a TimeStamp, the first reference epoch is defined as when MIB-II’s sysUpTime [7] was zero, and the second reference epoch is defined as the current value of sysUpTime.
7.1.9. Opaque
The Opaque type is provided solely for backward-compatibility, and shall not be used for newly-defined object types.
The Opaque type supports the capability to pass arbitrary ASN.1 syntax. A value is encoded using the ASN.1 Basic Encoding Rules [4] into a string of octets. This, in turn, is encoded as an OCTET STRING, in effect "double-wrapping" the original ASN.1 value.
Note that a conforming implementation need only be able to accept and recognize opaquely-encoded data. It need not be able to unwrap the data and then interpret its contents.
A requirement on "standard" MIB modules is that no object may have a SYNTAX clause value of Opaque.
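The "double-wrapping" described above can be illustrated with a minimal BER OCTET STRING encoder (a sketch handling short-form lengths only, not a full BER implementation):

```python
def ber_octet_string(payload: bytes) -> bytes:
    """Minimal BER encoding of an OCTET STRING: tag 0x04, then a
    short-form length octet, then the contents."""
    if len(payload) > 127:
        raise ValueError("this sketch handles short-form lengths only")
    return bytes([0x04, len(payload)]) + payload

# Double-wrapping: a value already encoded with BER (here, INTEGER 5,
# encoded as 02 01 05) is wrapped again as an OCTET STRING.
inner = bytes([0x02, 0x01, 0x05])
opaque = ber_octet_string(inner)   # 04 03 02 01 05
```

A conforming implementation, per the note above, need only carry `opaque` around; it need not decode `inner`.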
7.1.10. NsapAddress
The NsapAddress type represents an OSI address as a variable-length OCTET STRING. The first octet of the string contains a binary value in the range of 0..20, and indicates the length in octets of the NSAP. Following the first octet, is the NSAP, expressed in concrete binary notation, starting with the most significant octet. A zero-length NSAP is used as a "special" address meaning "the default NSAP" (analogous to the IP address of 0.0.0.0). Such an NSAP is encoded as a single octet, containing the value 0. All other NSAPs are encoded in at least 4 octets.
Note that the NsapAddress type is a tagged type for historical reasons. Network addresses should be represented using an invocation of the TEXTUAL-CONVENTION macro [3].
7.1.11. Counter64
The Counter64 type represents a non-negative integer which monotonically increases until it reaches a maximum value of $2^{64}-1$ (18446744073709551615 decimal), when it wraps around and starts increasing again from zero.
Counters have no defined "initial" value, and thus, a single value of a Counter has (in general) no information content. Discontinuities in the monotonically increasing value normally occur at re-initialization of the management system, and at other times as specified in the description of an object-type using this ASN.1 type. If such other times can occur, for example, the creation of an object instance at times other than re-initialization, then a corresponding object should be defined with a SYNTAX clause value of TimeStamp (a textual convention defined in [3]) indicating the time of the last discontinuity.
The value of the MAX-ACCESS clause for objects with a SYNTAX clause value of Counter64 is always "read-only".
A requirement on "standard" MIB modules is that the Counter64 type may be used only if the information being modeled would wrap in less than one hour if the Counter32 type was used instead.
A DEFVAL clause is not allowed for objects with a SYNTAX clause value of Counter64.
7.1.12. UInteger32
The UInteger32 type represents integer-valued information between 0 and $2^{32}-1$ inclusive (0 to 4294967295 decimal).
7.2. Mapping of the UNITS clause
The UNITS clause, which need not be present, contains a textual definition of the units associated with that object.
7.3. Mapping of the MAX-ACCESS clause
The MAX-ACCESS clause, which must be present, defines whether it makes "protocol sense" to read, write and/or create an instance of the object. This is the maximal level of access for the object. (This maximal level of access is independent of any administrative authorization policy.)
The value "read-write" indicates that read and write access make "protocol sense", but create does not. The value "read-create" indicates that read, write and create access make "protocol sense". The value "not-accessible" indicates either an auxiliary object (see Section 7.7) or an object which is accessible only via a notification (e.g., snmpTrapOID[5]).
These values are ordered, from least to greatest: "not-accessible", "read-only", "read-write", "read-create".
If any columnar object in a conceptual row has "read-create" as its maximal level of access, then no other columnar object of the same conceptual row may have a maximal access of "read-write". (Note that "read-create" is a superset of "read-write".)
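The ordering of MAX-ACCESS values can be captured by position in a sequence; a hypothetical helper of the kind a MIB-checking tool might use (names are illustrative):

```python
# Ordered least to greatest, per Section 7.3.
MAX_ACCESS_ORDER = ("not-accessible", "read-only", "read-write", "read-create")

def access_at_least(value: str, minimum: str) -> bool:
    """True if `value` grants at least the `minimum` level of access."""
    return MAX_ACCESS_ORDER.index(value) >= MAX_ACCESS_ORDER.index(minimum)
```

For example, "read-create" satisfies a "read-write" requirement (it is a superset), while "read-only" does not.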
7.4. Mapping of the STATUS clause
The STATUS clause, which must be present, indicates whether this definition is current or historic.
The values "current", and "obsolete" are self-explanatory. The "deprecated" value indicates that the object is obsolete, but that an implementor may wish to support that object to foster interoperability with older implementations.
7.5. Mapping of the DESCRIPTION clause
The DESCRIPTION clause, which must be present, contains a textual definition of that object which provides all semantic definitions necessary for implementation, and should embody any information which would otherwise be communicated in any ASN.1 commentary annotations associated with the object.
7.6. Mapping of the REFERENCE clause
The REFERENCE clause, which need not be present, contains a textual cross-reference to an object defined in some other information module. This is useful when de-osifying a MIB module produced by some other organization.
7.7. Mapping of the INDEX clause
The INDEX clause, which must be present if that object corresponds to a conceptual row (unless an AUGMENTS clause is present instead), and must be absent otherwise, defines instance identification information for the columnar objects subordinate to that object.
Management operations apply exclusively to scalar objects. However, it is convenient for developers of management applications to impose imaginary, tabular structures on the ordered collection of objects that constitute the MIB. Each such conceptual table contains zero or more rows, and each row may contain one or more scalar objects, termed columnar objects. This conceptualization is formalized by using the OBJECT-TYPE macro to define both an object which corresponds to a table and an object which corresponds to a row in that table. A conceptual table has SYNTAX of the form:
SEQUENCE OF <EntryType>
where <EntryType> refers to the SEQUENCE type of its subordinate conceptual row. A conceptual row has SYNTAX of the form:
<EntryType>
where <EntryType> is a SEQUENCE type defined as follows:
<EntryType> ::= SEQUENCE { <type1>, ... , <typeN> }
where there is one <type> for each subordinate object, and each <type> is of the form:
<descriptor> <syntax>
where <descriptor> is the descriptor naming a subordinate
object, and <syntax> has the value of that subordinate
object’s SYNTAX clause, optionally omitting the sub-typing
information. Further, these ASN.1 types are always present
(the DEFAULT and OPTIONAL clauses are disallowed in the
SEQUENCE definition). The MAX-ACCESS clause for conceptual
tables and rows is "not-accessible".
For leaf objects which are not columnar objects, instances of the object are identified by appending a sub-identifier of zero to the name of that object. Otherwise, the INDEX clause of the conceptual row object superior to a columnar object defines instance identification information.
The instance identification information in an INDEX clause must specify object(s) such that value(s) of those object(s) will unambiguously distinguish a conceptual row. The syntax of those objects indicates how to form the instance-identifier:
(1) integer-valued: a single sub-identifier taking the integer value (this works only for non-negative integers);
(2) string-valued, fixed-length strings (or variable-length preceded by the IMPLIED keyword): ‘n’ sub-identifiers, where ‘n’ is the length of the string (each octet of the string is encoded in a separate sub-identifier);
(3) string-valued, variable-length strings (not preceded by the IMPLIED keyword): ‘n+1’ sub-identifiers, where ‘n’ is the length of the string (the first sub-identifier is ‘n’ itself, following this, each octet of the string is encoded in a separate sub-identifier);
(4) object identifier-valued: ‘n+1’ sub-identifiers, where ‘n’ is the number of sub-identifiers in the value (the first sub-identifier is ‘n’ itself, following this, each sub-identifier in the value is copied);
(5) IpAddress-valued: 4 sub-identifiers, in the familiar a.b.c.d notation;
(6) NsapAddress-valued: ‘n’ sub-identifiers, where ‘n’ is the length of the value (each octet of the value is encoded in a separate sub-identifier).
Note that the IMPLIED keyword can only be present for objects having a variable-length syntax (e.g., variable-length strings or object identifier-valued objects). Further, the IMPLIED keyword may appear at most once within the INDEX clause, and if so, is associated with the right-most object having a variable-length syntax. Finally, the IMPLIED keyword may not be used on a variable-length string object if that string might have a value of zero-length.
Instances identified by use of integer-valued objects should be numbered starting from one (i.e., not from zero). The use of zero as a value for an integer-valued index object should be avoided, except in special cases.
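Rules (1)-(3) above can be sketched as follows. This is a simplification: all byte strings are treated here as variable-length, so the length prefix is omitted only when IMPLIED applies; per rule (2), a fixed-length string would omit the prefix as well.

```python
def index_subids(value, implied=False):
    """Sub-identifiers contributed by one INDEX object.

    Handles integer-valued indexes (rule 1) and variable-length
    string-valued indexes with and without IMPLIED (rules 2 and 3).
    """
    if isinstance(value, int):
        # Rule (1): a single sub-identifier taking the integer value.
        if value < 0:
            raise ValueError("index integers must be non-negative")
        return [value]
    octets = list(value)              # one sub-identifier per octet
    if implied:
        # IMPLIED variable-length string: no length prefix (rule 2).
        return octets
    # Rule (3): 'n' itself first, then each octet.
    return [len(octets)] + octets
```

For example, the variable-length string "ab" contributes [2, 97, 98], or [97, 98] if preceded by IMPLIED.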
Objects which are both specified in the INDEX clause of a conceptual row and also columnar objects of the same conceptual row are termed auxiliary objects. The MAX-ACCESS clause for newly-defined auxiliary objects is "not-accessible". However, a conceptual row must contain at least one columnar object which is not an auxiliary object (i.e., the value of the MAX-ACCESS clause for such an object is either "read-only" or "read-create").
Note that objects specified in a conceptual row’s INDEX clause need not be columnar objects of that conceptual row. In this situation, the DESCRIPTION clause of the conceptual row must include a textual explanation of how the objects which are included in the INDEX clause but not columnar objects of that conceptual row, are used in uniquely identifying instances of the conceptual row’s columnar objects.
7.7.1. Creation and Deletion of Conceptual Rows
For newly-defined conceptual rows which allow the creation of new object instances and the deletion of existing object instances, there should be one columnar object with a SYNTAX clause value of RowStatus (a textual convention defined in [3]) and a MAX-ACCESS clause value of read-create. By convention, this is termed the status column for the conceptual row.
7.8. Mapping of the AUGMENTS clause
The AUGMENTS clause, which must not be present unless the object corresponds to a conceptual row, is an alternative to the INDEX clause. Every object corresponding to a conceptual row has either an INDEX clause or an AUGMENTS clause.
If an object corresponding to a conceptual row has an INDEX clause, that row is termed a base conceptual row; alternatively, if the object has an AUGMENTS clause, the row is said to be a conceptual row augmentation, where the AUGMENTS clause names the object corresponding to the base conceptual row which is augmented by this conceptual row extension. Instances of subordinate columnar objects of a conceptual row extension are identified according to the INDEX clause of the base conceptual row corresponding to the object named in the AUGMENTS clause. Further, instances of subordinate columnar objects of a conceptual row extension exist according to the same semantics as instances of subordinate columnar objects of the base conceptual row being augmented. As such, note that creation of a base conceptual row implies the correspondent creation of any conceptual row augmentations.
For example, a MIB designer might wish to define additional columns in an "enterprise-specific" MIB which logically extend a conceptual row in a "standard" MIB. The "standard" MIB definition of the conceptual row would include the INDEX clause and the "enterprise-specific" MIB would contain the definition of a conceptual row using the AUGMENTS clause.
Note that a base conceptual row may be augmented by multiple conceptual row extensions.
7.8.1. Relation between INDEX and AUGMENTS clauses
When defining instance identification information for a conceptual table:
(1) If there is a one-to-one correspondence between the conceptual rows of this table and an existing table, then the AUGMENTS clause should be used.
(2) Otherwise, if there is a sparse relationship between the conceptual rows of this table and an existing table, then an INDEX clause should be used which is identical to that in the existing table.
(3) Otherwise, auxiliary objects should be defined within the conceptual row for the new table, and those objects should be used within the INDEX clause for the conceptual row.
7.9. Mapping of the DEFVAL clause
The DEFVAL clause, which need not be present, defines an acceptable default value which may be used at the discretion of a SNMPv2 entity acting in an agent role when an object instance is created.
During conceptual row creation, if an instance of a columnar object is not present as one of the operands in the correspondent management protocol set operation, then the value of the DEFVAL clause, if present, indicates an acceptable default value that a SNMPv2 entity acting in an agent role might use.
The value of the DEFVAL clause must, of course, correspond to the SYNTAX clause for the object. If the value is an OBJECT IDENTIFIER, then it must be expressed as a single ASN.1 identifier, and not as a collection of sub-identifiers.
Note that if an operand to the management protocol set operation is an instance of a read-only object, then the error 'notWritable' [6] will be returned. As such, for read-only columnar objects, whose instances cannot be supplied in the set operation which creates the conceptual row, the DEFVAL clause can be used to provide an acceptable default value that a SNMPv2 entity acting in an agent role might use.
By way of example, consider the following possible DEFVAL clauses:
   ObjectSyntax        DEFVAL clause
   ------------        -------------
   Integer32           1            -- same for Gauge32, TimeTicks,
                                    -- UInteger32
   INTEGER             valid       -- enumerated value
   OCTET STRING        'ffffffffffff'H
   OBJECT IDENTIFIER   sysDescr
   BIT STRING          { primary, secondary }
                                    -- enumerated values
   IpAddress           'c0210415'H  -- 192.33.4.21
Object types with SYNTAX of Counter32 and Counter64 may not have DEFVAL clauses, since counters have no defined initial value.
7.10. Mapping of the OBJECT-TYPE value
The value of an invocation of the OBJECT-TYPE macro is the name of the object, which is an OBJECT IDENTIFIER, an administratively assigned name.
When an OBJECT IDENTIFIER is assigned to an object:
(1) If the object corresponds to a conceptual table, then only a single assignment, that for a conceptual row, is present immediately beneath that object. The administratively assigned name for the conceptual row object is derived by appending a sub-identifier of "1" to the administratively assigned name for the conceptual table.
(2) If the object corresponds to a conceptual row, then at least one assignment, one for each column in the conceptual row, is present beneath that object. The administratively assigned name for each column is derived by appending a unique, positive sub-identifier to the administratively assigned name for the conceptual row.
(3) Otherwise, no other OBJECT IDENTIFIERS which are subordinate to the object may be assigned.
Note that the final sub-identifier of any administratively assigned name for an object shall be positive. A zero-valued final sub-identifier is reserved for future use.
Further note that although conceptual tables and rows are given administratively assigned names, these conceptual objects may not be manipulated in aggregate form by the management protocol.
7.11. Usage Example
Consider how one might define a conceptual table and its subordinates.
evalSlot OBJECT-TYPE
SYNTAX INTEGER
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"The index number of the first unassigned entry in the evaluation table.
A management station should create new entries in the evaluation table using this algorithm: first, issue a management protocol retrieval operation to determine the value of evalSlot; and, second, issue a management protocol set operation to create an instance of the evalStatus object setting its value to underCreation(1). If this latter operation succeeds, then the management station may continue modifying the instances corresponding to the newly created conceptual row, without fear of collision with other management stations."
::= { eval 1 }
evalTable OBJECT-TYPE
SYNTAX SEQUENCE OF EvalEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION "The (conceptual) evaluation table."
::= { eval 2 }
evalEntry OBJECT-TYPE
SYNTAX EvalEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION "An entry (conceptual row) in the evaluation table."
INDEX { evalIndex }
::= { evalTable 1 }
EvalEntry ::=
SEQUENCE {
evalIndex Integer32,
evalString DisplayString,
evalValue Integer32,
evalStatus RowStatus
}
evalIndex OBJECT-TYPE
SYNTAX Integer32
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION
"The auxiliary variable used for identifying instances of the columnar objects in the evaluation table."
::= { evalEntry 1 }
evalString OBJECT-TYPE
SYNTAX DisplayString
MAX-ACCESS read-create
STATUS current
DESCRIPTION
"The string to evaluate."
::= { evalEntry 2 }
evalValue OBJECT-TYPE
SYNTAX Integer32
MAX-ACCESS read-only
STATUS current
DESCRIPTION
"The value when evalString was last executed."
DEFVAL { 0 }
::= { evalEntry 3 }
evalStatus OBJECT-TYPE
SYNTAX RowStatus
MAX-ACCESS read-create
STATUS current
DESCRIPTION
"The status column used for creating, modifying, and deleting instances of the columnar objects in the evaluation table."
DEFVAL { active }
::= { evalEntry 4 }
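To see how Sections 7.7 and 7.10 combine in this example: the instance of evalString for the conceptual row whose evalIndex is 9 is named by appending the index sub-identifier to the column's administratively assigned name. A sketch, using a hypothetical OBJECT IDENTIFIER prefix for { eval } (the excerpt does not assign one):

```python
# Hypothetical prefix standing in for { eval }.
EVAL = (1, 3, 6, 1, 4, 1, 4242)

eval_table  = EVAL + (2,)         # evalTable  ::= { eval 2 }
eval_entry  = eval_table + (1,)   # evalEntry  ::= { evalTable 1 }
eval_string = eval_entry + (2,)   # evalString ::= { evalEntry 2 }

def instance_oid(column_oid, eval_index):
    # evalIndex is integer-valued, so it contributes a single
    # sub-identifier (Section 7.7, rule 1).
    return column_oid + (eval_index,)
```

Note the "1" sub-identifier beneath the table (the conceptual row) and the "2" beneath the row (the evalString column), as prescribed in Section 7.10.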
8. Mapping of the NOTIFICATION-TYPE macro
The NOTIFICATION-TYPE macro is used to define the information contained within an unsolicited transmission of management information (i.e., within either a SNMPv2-Trap-PDU or InformRequest-PDU). It should be noted that the expansion of the NOTIFICATION-TYPE macro is something which conceptually happens during implementation and not during run-time.
8.1. Mapping of the OBJECTS clause
The OBJECTS clause, which need not be present, defines the ordered sequence of MIB objects which are contained within every instance of the notification.
8.2. Mapping of the STATUS clause
The STATUS clause, which must be present, indicates whether this definition is current or historic.
The values "current", and "obsolete" are self-explanatory. The "deprecated" value indicates that the notification is obsolete, but that an implementor may wish to support the notification to foster interoperability with older implementations.
8.3. Mapping of the DESCRIPTION clause
The DESCRIPTION clause, which must be present, contains a textual definition of the notification which provides all semantic definitions necessary for implementation, and should embody any information which would otherwise be communicated in any ASN.1 commentary annotations associated with the object. In particular, the DESCRIPTION clause should document which instances of the objects mentioned in the OBJECTS clause should be contained within notifications of this type.
8.4. Mapping of the REFERENCE clause
The REFERENCE clause, which need not be present, contains a textual cross-reference to a notification defined in some other information module. This is useful when de-osifying a MIB module produced by some other organization.
8.5. Mapping of the NOTIFICATION-TYPE value
The value of an invocation of the NOTIFICATION-TYPE macro is the name of the notification, which is an OBJECT IDENTIFIER, an administratively assigned name.
Sections 4.2.6 and 4.2.7 of [6] describe how the NOTIFICATION-TYPE macro is used to generate a SNMPv2-Trap-PDU or InformRequest-PDU, respectively.
8.6. Usage Example
Consider how a linkUp trap might be described:
```
linkUp NOTIFICATION-TYPE
OBJECTS { ifIndex }
STATUS current
DESCRIPTION
"A linkUp trap signifies that the SNMPv2 entity, acting in an agent role, recognizes that one of the communication links represented in its configuration has come up."
::= { snmpTraps 4 }
```
According to this invocation, the trap authoritatively identified as
```
{ snmpTraps 4 }
```
is used to report a link coming up.
Note that a SNMPv2 entity acting in an agent role can be configured to send this trap to zero or more SNMPv2 entities acting in a manager role, depending on the contents of the aclTable and viewTable [8] tables. For example, by judicious use of the viewTable, a SNMPv2 entity acting in an agent role might be configured to send all linkUp traps to one particular SNMPv2 entity, and linkUp traps for only certain interfaces to other SNMPv2 entities.
9. Refined Syntax
Some macros allow an object’s syntax to be refined (e.g., the SYNTAX clause in the MODULE-COMPLIANCE macro [2]). However, not all refinements of syntax are appropriate. In particular, the object’s primitive or application type must not be changed.
Further, the following restrictions apply:
                          Restrictions to Refinement on
    object syntax         range   enumeration   size   repertoire
    -----------------     -----   -----------   ----   ----------
    INTEGER                (1)        (2)        -         -
    OCTET STRING            -          -        (3)       (4)
    OBJECT IDENTIFIER       -          -         -         -
    BIT STRING              -         (2)        -         -
    IpAddress               -          -         -         -
    Counter32               -          -         -         -
    Gauge32                (1)         -         -         -
    TimeTicks               -          -         -         -
    NsapAddress             -          -         -         -
    Counter64               -          -         -         -
where:
(1) the range of permitted values may be refined by raising the lower-bounds, by reducing the upper-bounds, and/or by reducing the alternative value/range choices;
(2) the enumeration of named-values may be refined by removing one or more named-values;
(3) the size in characters of the value may be refined by raising the lower-bounds, by reducing the upper-bounds, and/or by reducing the alternative size choices; or,
(4) the repertoire of characters in the value may be reduced by further sub-typing.
Otherwise no refinements are possible.
Note that when refining an object with a SYNTAX clause value of Integer32 or UInteger32, the refined SYNTAX is expressed as an INTEGER and the restrictions of the table above are used.
10. Extending an Information Module
As experience is gained with a published information module, it may be desirable to revise that information module.
To begin, the invocation of the MODULE-IDENTITY macro should be updated to include information about the revision. Usually, this consists of updating the LAST-UPDATED clause and adding a pair of REVISION and DESCRIPTION clauses. However, other existing clauses in the invocation may be updated.
Note that the module’s label (e.g., "FIZBIN-MIB" from the example in Section 5.8), is not changed when the information module is revised.
10.1. Object Assignments
If any non-editorial change is made to any clause of an object assignment, then the OBJECT IDENTIFIER value associated with that object assignment must also be changed, along with its associated descriptor.
10.2. Object Definitions
An object definition may be revised in any of the following ways:
(1) A SYNTAX clause containing an enumerated INTEGER may have new enumerations added or existing labels changed.
(2) A STATUS clause value of "current" may be revised as "deprecated" or "obsolete". Similarly, a STATUS clause value of "deprecated" may be revised as "obsolete".
(3) A DEFVAL clause may be added or updated.
(4) A REFERENCE clause may be added or updated.
(5) A UNITS clause may be added.
(6) A conceptual row may be augmented by adding new columnar objects at the end of the row.
(7) Entirely new objects may be defined, named with previously unassigned OBJECT IDENTIFIER values.
Otherwise, if the semantics of any previously defined object are changed (i.e., if a non-editorial change is made to any clause other than those specifically allowed above), then the OBJECT IDENTIFIER value associated with that object must also be changed.
Note that changing the descriptor associated with an existing object is considered a semantic change, as these strings may be used in an IMPORTS statement.
Finally, note that if an object has the value of its STATUS clause changed, then the value of its DESCRIPTION clause should be updated accordingly.
10.3. Notification Definitions
A notification definition may be revised in any of the following ways:
(1) A REFERENCE clause may be added or updated.
Otherwise, if the semantics of any previously defined notification are changed (i.e., if a non-editorial change is made to any clause other than those specifically allowed above), then the OBJECT IDENTIFIER value associated with that notification must also be changed.
Note that changing the descriptor associated with an existing notification is considered a semantic change, as these strings may be used in an IMPORTS statement.
Finally, note that if a notification has the value of its STATUS clause changed, then the value of its DESCRIPTION clause should be updated accordingly.
11. Appendix: de-OSIfying a MIB module
There has been an increasing amount of work recently on taking MIBs defined by other organizations (e.g., the IEEE) and de-OSIfying them for use with the Internet-standard network management framework. The steps to achieve this are straight-forward, though tedious. Of course, it is helpful to already be experienced in writing MIB modules for use with the Internet-standard network management framework.
The first step is to construct a skeletal MIB module, as shown earlier in Section 5.8. The next step is to categorize the objects into groups. Optional objects are not permitted. Thus, when a MIB module is created, optional objects must be placed in additional groups; if such a group is implemented, then all objects in that group must be implemented. For the first pass, it is wisest to simply ignore any optional objects in the original MIB: experience shows it is better to define a core MIB module first, containing only essential objects; later, if experience demands, other objects can be added.
11.1. Managed Object Mapping
Next for each managed object class, determine whether there can exist multiple instances of that managed object class. If not, then for each of its attributes, use the OBJECT-TYPE macro to make an equivalent definition.
Otherwise, if multiple instances of the managed object class can exist, then define a conceptual table having conceptual rows each containing a columnar object for each of the managed object class’s attributes. If the managed object class is contained within the containment tree of another managed object class, then the assignment of an object is normally required for each of the "distinguished attributes" of the containing managed object class. If they do not already exist within the MIB module, then they can be added via the definition of additional columnar objects in the conceptual row corresponding to the contained managed object class.
In defining a conceptual row, it is useful to consider the optimization of network management operations which will act upon its columnar objects. In particular, it is wisest to avoid defining more columnar objects within a conceptual row than can fit in a single PDU. As a rule of thumb, a conceptual row should contain no more than approximately 20 objects. Similarly, or as a way to abide by the "20 object guideline", columnar objects should be grouped into tables according to the expected grouping of network management operations upon them. As such, the content of conceptual rows should reflect typical access scenarios, e.g., they should be organized along functional lines such as one row for statistics and another row for parameters, or along usage lines such as commonly-needed objects versus rarely-needed objects.
On the other hand, the definition of conceptual rows where the number of columnar objects used as indexes outnumbers the number used to hold information, should also be avoided. In particular, the splitting of a managed object class’s attributes into many conceptual tables should not be used as a way to obtain the same degree of flexibility/complexity as is often found in MIBs with a myriad of optionals.
11.1.1. Mapping to the SYNTAX clause
When mapping to the SYNTAX clause of the OBJECT-TYPE macro:
1. An object with BOOLEAN syntax becomes a TruthValue [3].
2. An object with INTEGER syntax becomes an Integer32.
3. An object with ENUMERATED syntax becomes an INTEGER with enumerations, taking any of the values given which can be represented with an Integer32.
4. An object with BIT STRING syntax but no enumerations becomes an OCTET STRING.
5. An object with a character string syntax becomes either an OCTET STRING, or a DisplayString [3], depending on the repertoire of the character string.
6. A non-tabular object with a complex syntax, such as REAL or EXTERNAL, must be decomposed, usually into an OCTET STRING (if sensible). As a rule, any object with a complicated syntax should be avoided.
Tabular objects must be decomposed into rows of columnar objects.
11.1.2. Mapping to the UNITS clause
If the description of this managed object defines a unit-basis, then mapping to this clause is straight-forward.
11.1.3. Mapping to the MAX-ACCESS clause
This is straight-forward.
11.1.4. Mapping to the STATUS clause
This is straight-forward.
11.1.5. Mapping to the DESCRIPTION clause
This is straight-forward: simply copy the text, making sure that any embedded double quotation marks are sanitized (i.e., replaced with single-quotes or removed).
11.1.6. Mapping to the REFERENCE clause
This is straight-forward: simply include a textual reference to the object being mapped, the document which defines the object, and perhaps a page number in the document.
11.1.7. Mapping to the INDEX clause
If necessary, decide how instance-identifiers for columnar objects are to be formed and define this clause accordingly.
11.1.8. Mapping to the DEFVAL clause
Decide if a meaningful default value can be assigned to the object being mapped, and if so, define the DEFVAL clause accordingly.
11.2. Action Mapping
Actions are modeled as read-write objects, in which writing a particular value results in a state change. (Usually, as a part of this state change, some action might take place.)
11.2.1. Mapping to the SYNTAX clause
Usually the Integer32 syntax is used with a distinguished value provided for each action that the object provides access to. In addition, there is usually one other distinguished value, which is the one returned when the object is read.
11.2.2. Mapping to the MAX-ACCESS clause
Always use read-write or read-create.
11.2.3. Mapping to the STATUS clause
This is straight-forward.
11.2.4. Mapping to the DESCRIPTION clause
This is straight-forward: simply copy the text, making sure that any embedded double quotation marks are sanitized (i.e., replaced with single-quotes or removed).
11.2.5. Mapping to the REFERENCE clause
This is straight-forward: simply include a textual reference to the action being mapped, the document which defines the action, and perhaps a page number in the document.
11.3. Event Mapping
Events are modeled as SNMPv2 notifications using the NOTIFICATION-TYPE macro. However, recall that SNMPv2 emphasizes trap-directed polling. As such, few, and usually no, notifications need be defined for any MIB module.
11.3.1. Mapping to the STATUS clause
This is straight-forward.
11.3.2. Mapping to the DESCRIPTION clause
This is straight-forward: simply copy the text, making sure that any embedded double quotation marks are sanitized (i.e., replaced with single-quotes or removed).
11.3.3. Mapping to the REFERENCE clause
This is straight-forward: simply include a textual reference to the notification being mapped, the document which defines the notification, and perhaps a page number in the document.
12. Acknowledgements
The section on object definitions (and MIB de-osification) is based, in part, on RFCs 1155 and 1212. The IMPLIED keyword is based on a conversation with David T. Perkins in December, 1991.
The section on trap definitions is based, in part, on RFC 1215.
Finally, the comments of the SNMP version 2 working group are gratefully acknowledged:
Beth Adams, Network Management Forum
Steve Alexander, INTERACTIVE Systems Corporation
David Arneson, Cabletron Systems
Toshiya Asaba
Fred Baker, ACC
Jim Barnes, Xylogics, Inc.
Brian Bataille
Andy Bierman, SynOptics Communications, Inc.
Uri Blumenthal, IBM Corporation
Fred Bohle, Interlink
Jack Brown
Theodore Brunner, Bellcore
Stephen F. Bush, GE Information Services
Jeffrey D. Case, University of Tennessee, Knoxville
John Chang, IBM Corporation
Szusin Chen, Sun Microsystems
Robert Ching
Chris Chiotasso, Ungermann-Bass
Bobby A. Clay, NASA/Boeing
John Cooke, Chipcom
Tracy Cox, Bellcore
Juan Cruz, Datability, Inc.
David Cullerot, Cabletron Systems
Cathy Cunningham, Microcom
James R. (Chuck) Davin, Bellcore
Michael Davis, Clearpoint
Mike Davison, FiberCom
Cynthia DellaTorre, MITRE
Taso N. Devetzis, Bellcore
Manual Diaz, DAVID Systems, Inc.
Jon Dreyer, Sun Microsystems
David Engel, Optical Data Systems
Mike Erlinger, Lexcel
Roger Fajman, NIH
Daniel Fauvarque, Sun Microsystems
Karen Frisa, CMU
Shari Galitzer, MITRE
Shawn Gallagher, Digital Equipment Corporation
Richard Graveman, Bellcore
Maria Greene, Xyplex, Inc.
Michel Guittet, Apple
Robert Gutierrez, NASA
Bill Hagerty, Cabletron Systems
Gary W. Haney, Martin Marietta Energy Systems
Patrick Hanil, Nokia Telecommunications
Matt Hecht, SNMP Research, Inc.
Edward A. Heiner, Jr., Synernetics Inc.
Susan E. Hicks, Martin Marietta Energy Systems
Geral Holzhauer, Apple
John Hopprich, DAVID Systems, Inc.
Jeff Hughes, Hewlett-Packard
Robin Iddon, Axon Networks, Inc.
David Itusak
Kevin M. Jackson, Concord Communications, Inc.
Ole J. Jacobsen, Interop Company
Ronald Jacoby, Silicon Graphics, Inc.
Satish Joshi, SynOptics Communications, Inc.
Frank Kastenholz, FTP Software
Mark Kepke, Hewlett-Packard
Ken Key, SNMP Research, Inc.
Zbiginew Kielczewski, Eicon
Jongyeoi Kim
Andrew Knutsen, The Santa Cruz Operation
Michael L. Kornegay, VisiSoft
Deirdre C. Kostik, Bellcore
Cheryl Krupczak, Georgia Tech
Mark S. Lewis, Telebit
David Lin
David Lindemulder, AT&T/NCR
Ben Lisowski, Sprint
David Liu, Bell-Northern Research
John Lunny, The Wollongong Group
Robert C. Lushbaugh, Martin Marietta Energy Systems
Michael Luufer, BBN
Carl Madison, Star-Tek, Inc.
Keith McCloghrie, Hughes LAN Systems
Evan McGinnis, 3Com Corporation
Bill McKenzie, IBM Corporation
Donna McMaster, SynOptics Communications, Inc.
John Medicke, IBM Corporation
Doug Miller, Telebit
Dave Minnich, FiberCom
Mohammad Mirhakkak, MITRE
Rohit Mital, Protools
George Mouradian, AT&T Bell Labs
Patrick Mullaney, Cabletron Systems
Dan Myers, 3Com Corporation
Rina Nathaniel, Rad Network Devices Ltd.
Hien V. Nguyen, Sprint
Mo Nikain
Tom Nisbet
William B. Norton, MERIT
Steve Onishi, Wellfleet Communications, Inc.
David T. Perkins, SynOptics Communications, Inc.
Carl Powell, BBN
Ilan Raab, SynOptics Communications, Inc.
Richard Ramons, AT&T
Venkat D. Rangan, Metric Network Systems, Inc.
Louise Reingold, Sprint
Sam Roberts, Farallon Computing, Inc.
Kary Robertson, Concord Communications, Inc.
Dan Romascanu, Lannet Data Communications Ltd.
Marshall T. Rose, Dover Beach Consulting, Inc.
Shawn A. Routhier, Epilogue Technology Corporation
Chris Rozman
Asaf Rubissa, Fibronics
Jon Saperia, Digital Equipment Corporation
Michael Sapich
Mike Scanlon, Interlan
Sam Schaen, MITRE
John Seligson, Ultra Network Technologies
Paul A. Serice, Corporation for Open Systems
Chris Shaw, Banyan Systems
Timon Sloane
Robert Snyder, Cisco Systems
Joo Young Song
Roy Spitier, Sprint
Einar Stefferud, Network Management Associates
John Stephens, Cayman Systems, Inc.
Robert L. Stewart, Xyplex, Inc. (chair)
Kaj Tesink, Bellcore
Dean Throop, Data General
Ahmet Tuncay, France Telecom-CNET
Maurice Turcotte, Racal Datacom
Warren Vik, INTERACTIVE Systems Corporation
Yannis Viniotis
Steven L. Waldbusser, Carnegie Mellon University
Timothy M. Walden, ACC
Alice Wang, Sun Microsystems
James Watt, Newbridge
Luanne Waul, Timeplex
Donald E. Westlake III, Digital Equipment Corporation
Gerry White
Bert Wijnen, IBM Corporation
Peter Wilson, 3Com Corporation
Steven Wong, Digital Equipment Corporation
Randy Worzella, IBM Corporation
Daniel Woycke, MITRE
Honda Wu
Jeff Yarnell, Protools
Chris Young, Cabletron
Kiho Yum, 3Com Corporation
13. References
14. Security Considerations
Security issues are not discussed in this memo.
15. Authors' Addresses
Jeffrey D. Case
SNMP Research, Inc.
3001 Kimberlin Heights Rd.
Knoxville, TN 37920-9716
US
Phone: +1 615 573 1434
Email: case@snmp.com
Keith McCloghrie
Hughes LAN Systems
1225 Charleston Road
Mountain View, CA 94043
US
Phone: +1 415 966 7934
Email: kzm@hls.com
Marshall T. Rose
Dover Beach Consulting, Inc.
420 Whisman Court
Mountain View, CA 94043-2186
US
Phone: +1 415 968 1052
Email: mrose@dbc.mtview.ca.us
Steven Waldbusser
Carnegie Mellon University
4910 Forbes Ave
Pittsburgh, PA 15213
US
Phone: +1 412 268 6628
Email: waldbusser@cmu.edu
Efficient Procedure Mapping using Cache Line Coloring
Amir H. Hashemi
David R. Kaeli
Brad Calder
The Western Research Laboratory (WRL) is a computer systems research group that was founded by Digital Equipment Corporation in 1982. Our focus is computer science research relevant to the design and application of high performance scientific computers. We test our ideas by designing, building, and using real systems. The systems we build are research prototypes; they are not intended to become products.
There are two other research laboratories located in Palo Alto, the Network Systems Lab (NSL) and the Systems Research Center (SRC). Another Digital research group is located in Cambridge, Massachusetts (CRL).
Our research is directed towards mainstream high-performance computer systems. Our prototypes are intended to foreshadow the future computing environments used by many Digital customers. The long-term goal of WRL is to aid and accelerate the development of high-performance uni- and multi-processors. The research projects within WRL will address various aspects of high-performance computing.
We believe that significant advances in computer systems do not come from any single technological advance. Technologies, both hardware and software, do not all advance at the same pace. System design is the art of composing systems which use each level of technology in an appropriate balance. A major advance in overall system performance will require reexamination of all aspects of the system.
We do work in the design, fabrication and packaging of hardware; language processing and scaling issues in system software design; and the exploration of new applications areas that are opening up with the advent of higher performance systems. Researchers at WRL cooperate closely and move freely among the various levels of system design. This allows us to explore a wide range of tradeoffs to meet system goals.
We publish the results of our work in a variety of journals, conferences, research reports, and technical notes. This document is a research report. Research reports are normally accounts of completed research and may include material from earlier technical notes. We use technical notes for rapid distribution of technical material; usually this represents research in progress.
Research reports and technical notes may be ordered from us. You may mail your order to:
Technical Report Distribution
DEC Western Research Laboratory, WRL-2
250 University Avenue
Palo Alto, California 94301 USA
Reports and technical notes may also be ordered by electronic mail. Use one of the following addresses:
- Digital E-net: JOVE::WRL-TECHREPORTS
- Internet: WRL-Techreports@decwrl.pa.dec.com
- UUCP: decpa!wrl-techreports
To obtain more details on ordering by electronic mail, send a message to one of these addresses with the word “help” in the Subject line; you will receive detailed instructions.
Reports and technical notes may also be accessed via the World Wide Web: http://www.research.digital.com/wrl/home.html.
Efficient Procedure Mapping using Cache Line Coloring
Amir H. Hashemi
David R. Kaeli
Brad Calder
October 1996
Abstract
As the gap between memory and processor performance continues to widen, it becomes increasingly important to exploit cache memory effectively. Both hardware and software approaches can be explored to optimize cache performance. Hardware designers focus on cache organization issues, including replacement policy, associativity, block size and the resulting cache access time. Software writers use various optimization techniques, including software prefetching, data scheduling and code reordering. Our focus is on improving memory usage through code reordering compiler techniques.
In this paper we present a link-time procedure mapping algorithm which can significantly improve the effectiveness of the instruction cache. Our algorithm produces an improved program layout by performing a color mapping of procedures to cache lines, taking into consideration the procedure size, cache size, cache line size, and call graph. We use cache line coloring to guide the procedure mapping, indicating which cache lines to avoid when placing a procedure in the program layout. Our algorithm reduces on average the instruction cache miss rate by 45% over the original mapping and by 14% over the mapping algorithm of Pettis and Hansen.
1 Introduction
The increasing gap between processor and main memory speeds has forced computer designers to exploit cache memories. A cache is smaller than the main memory and, if properly managed, can hold a major part of the working set of a program [6]. The goal of memory subsystem designers is to improve the average memory access time. Reducing the cache miss rate is one factor in improving memory access performance. Cache misses occur for a number of reasons: cold start, capacity, and collisions [12]. A number of cache line replacement algorithms have been proposed to reduce the number of cache collisions [2, 13, 17].
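The conflict (collision) misses at issue can be seen in a minimal direct-mapped cache model. The parameters below (4 lines, 16-byte blocks) and the trace are illustrative only, not taken from the paper's experiments:

```python
# Minimal direct-mapped instruction cache model, illustrating the
# conflict misses that careful code layout tries to avoid.

def simulate(addresses, num_lines=4, line_size=16):
    """Return the number of misses for an address trace."""
    tags = [None] * num_lines          # one tag per cache line
    misses = 0
    for addr in addresses:
        block = addr // line_size      # which memory block
        line = block % num_lines       # direct-mapped index
        tag = block // num_lines
        if tags[line] != tag:          # cold or conflict miss
            misses += 1
            tags[line] = tag
    return misses

# Two code regions exactly one cache size (4 * 16 = 0x40 bytes) apart map
# to the same line and conflict on every alternation; a layout that
# offsets one of them removes the misses.
p, q = 0x000, 0x040
print(simulate([p, q] * 4))            # every access misses: 8
```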
Instead of concentrating on cache organization, we concentrate on the layout of a program on the memory space. Bershad et al. suggested remapping cache addresses dynamically to avoid conflict misses in large direct-mapped caches [3]. An alternative approach is to perform code repositioning at compile or link-time [8, 10, 11, 15, 18]. The idea is to place frequently used sections of a program next to each other in the address space, thereby reducing the chances of cache conflicts while increasing spatial locality within the program.
Code reordering algorithms for improved memory performance can span several different levels of granularity, from basic blocks, to loops, and to procedures. Research has shown that basic block reordering and procedure reordering can significantly improve a program’s execution performance. Pettis and Hansen [11] found that the reduction in execution time when using procedure reordering was around 8%, and the reduction in execution time for basic block reordering was around 12% on an HP-UX 825 architecture with a 16K direct mapped unified cache. When both of the optimizations were applied together an average improvement of 15% was achieved.
The mapping algorithm we propose in this paper improves upon prior work, particularly when a program’s control flow graph is larger than the cache capacity. Since we are interested in dealing with graphs that are larger than the target instruction cache, we concentrate our discussion in this paper on reordering procedures. Even so, our algorithm can also be used with, and can benefit from, basic block reordering and procedure splitting, as described later in §5.
Our research differs from prior research in procedure reordering because our algorithm uses the cache size, cache line size, and the procedure size to perform a color mapping of cache lines to procedures. This color mapping allows our algorithm to intelligently place procedures in the layout by preserving color dependencies with a procedure’s parents and children in the call graph, resulting in fewer instruction cache conflicts.
In this paper we will describe our algorithm and demonstrate its merit through trace-driven cache simulation. In §2 we describe our color mapping algorithm and compare our algorithm with prior work in code reordering. The methodology used to gather our results is described in §3. In §4 we provide quantitative results using our improved procedure ordering algorithm. We then discuss implications and future work for our algorithm in §5, and we summarize our contributions in §6.
2 Procedure Mapping
In this section we describe our procedure mapping algorithm. For the following description, we will assume that the instruction cache is direct mapped (in §5 we discuss how to apply our algorithm to set-associative caches). The basic idea behind the algorithm is to treat the memory address space as two dimensional by breaking up the address space into pieces that are equivalent to the size of the cache, and using the cache blocks occupied by each procedure to guide the mapping. In contrast, previous research has treated memory layout as a one dimensional address space. Employing a second dimension allows our algorithm to intelligently avoid cache conflicts when mapping a procedure for the first time, and it provides the ability to move procedures that have already been mapped in order to eliminate additional conflicts as they arise. To avoid conflicts, we keep track of the colors each procedure is mapped to and a set of colors indicating which colors are currently unavailable to that procedure. We will refer to this set of colors as the unavailable-set.
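The two-dimensional view described above means the cache line colors a procedure occupies depend only on its offset modulo the cache size. A minimal sketch, with all sizes measured in cache lines (our own convention, for illustration):

```python
# Cache line colors (indices 0..num_cache_lines-1) covered by a procedure
# placed at a given line offset in the layout. Wrap-around is what makes
# the address space effectively two-dimensional.

def colors_of(start_line, size_in_lines, num_cache_lines):
    """Set of cache line colors occupied by the procedure."""
    return {(start_line + i) % num_cache_lines for i in range(size_in_lines)}

# With a 4-line cache, a 2-line procedure placed at line 3 wraps around
# and occupies colors {3, 0}.
print(colors_of(3, 2, 4))
```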
For a given procedure, the unavailable-set of colors represents the colors occupied (i.e., cache lines used) by all of the immediate parents and children of that procedure in the call graph which have already been mapped to cache lines. Our algorithm uses a call graph with weighted procedure call edges for indicating the importance of mapping procedures next to each other. The algorithm concentrates on only eliminating first-generation cache conflicts, which are the conflicts between a procedure and the immediate parents and children of that procedure in the call graph. When mapping a procedure, our algorithm tries to avoid cache conflicts by avoiding cache line colors in its unavailable-set. Once a procedure has been mapped, a procedure can later be moved to a new location without causing cache conflicts, as long as it does not move to a location (color) which is in its unavailable-set. In using the color mapping to place and move procedures in this way, we are guaranteed that the new location will not increase the number of first-generation conflicts for the procedures in our call graph.
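The unavailable-set construction just described (a union over already-processed neighbors in the call graph, computed on demand rather than stored) can be sketched as follows. The data structures here are our own illustration, with colors as integer indices:

```python
# Unavailable-set: the union of colors used by a procedure's immediate
# parents and children, counting only call edges the algorithm has
# already processed (first-generation conflicts only).

def unavailable_set(proc, neighbors, mapped_colors, processed_edges):
    """Colors this procedure must avoid when it is placed or moved."""
    colors = set()
    for other in neighbors.get(proc, ()):
        # Only already-processed edges constrain placement.
        if frozenset((proc, other)) in processed_edges:
            colors |= mapped_colors.get(other, set())
    return colors

# After processing E -> C on a 4-color cache (colors 0..3):
neighbors = {"E": ["C", "A", "F"], "C": ["E", "B", "D"]}
mapped = {"E": {0, 1}, "C": {2, 3}}
done = {frozenset(("E", "C"))}
print(unavailable_set("E", neighbors, mapped, done))  # C's colors: {2, 3}
```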
One of the hurdles in a mapping algorithm where code is allowed to move after it has been already mapped, is the problem of how to handle the empty space left behind by the moved procedures. If possible, this gap should be filled since the program is laid out in a contiguous memory space. Therefore, moving a procedure should be followed by filling the space left by the procedure with other procedures, otherwise this can result in a chain of relocations that are hard to manage.
Studies of program behaviors show that 10% to 30% of a program accounts for 90% of its execution time [5]. The rest of the code is not heavily exercised or is often not even executed. Our algorithm takes advantage of this property by dividing each program into frequently executed (popular) and infrequently executed (unpopular) procedures. The unpopular procedures are treated as fluff or glue, and are used to fill the empty space left behind by moved procedures in our algorithm. We will not worry about conflicts when positioning unpopular procedures, since these parts of a program do not significantly contribute to the number of first level cache conflicts.
2.1 Cache Coloring Algorithm
We will now describe the details of our block coloring algorithm and use an example to demonstrate how to layout procedures. Figure 1 presents an example call graph, containing 7 procedures A through G, where nodes represent procedures and the edges represent procedure calls. Each edge contains a weight indicating how many times that procedure was called. The Figure also contains a table indicating the number of cache blocks each procedure occupies. In this example and algorithm description, we assume the instruction cache is direct mapped and contains only 4 cache lines.
Figure 2 shows the steps taken by our algorithm in mapping the example call graph given in Figure 1. The cache is divided into a set of colors, one color for each cache block. The four cache lines are given the colors red, green, blue, and yellow. In Figure 2, the first column shows at each step which edge or procedure is being processed. The second column shows which of the four edge processing cases the current step corresponds to in our algorithm. The third column shows the current mapping of the processed procedures and edges over the colored 4 block cache space. The last column shows the changes to the unavailable-set of colors for the procedures being processed at each step. If a procedure spans multiple
<table>
<thead>
<tr><th>Steps in Color Mapping Algorithm</th><th>Case</th><th>red</th><th>green</th><th>blue</th><th>yellow</th><th>Unavailable-Sets</th></tr>
</thead>
<tbody>
<tr><td>(1) E → C (100)</td><td>I</td><td></td><td></td><td></td><td></td><td>E{b,y}, C{r,g}, A{y}, B{b}</td></tr>
<tr><td>(2) A → B (90)</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>(3) B → C (80)</td><td>II</td><td></td><td></td><td></td><td></td><td>A{r}, B{g,b,y}</td></tr>
<tr><td>(4) C → D (70)</td><td>III</td><td></td><td></td><td></td><td></td><td>D{b,y}</td></tr>
<tr><td>(5) Fill Space with unpopular G</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
<tr><td>(6) A → E (40)</td><td>IV</td><td></td><td></td><td></td><td></td><td>A{r,g}, B{b,y}</td></tr>
<tr><td>(7) Fill Space with unpopular F</td><td></td><td></td><td></td><td></td><td></td><td></td></tr>
</tbody>
</table>
Figure 2: Procedure mapping using cache line coloring. The first column indicates the steps taken in our color mapping algorithm and each edge and procedure processed at each step. The second column shows which of the four edge processing cases the current step corresponds to in our algorithm. The third column shows the address space divided into sizes equal to the instruction cache, and shows the mapping of the program at each step. The instruction cache contains 4 blocks labeled: red, green, blue, and yellow. The last column shows the unavailable-sets as they are changed for the procedures at each step in the algorithm.
cache lines (as does $C$ in our example), it will generate multiple mappable elements (e.g., $C1$ and $C2$), as is shown in Figure 2.
Our algorithm maintains three important pieces of state for each procedure: the number of cache lines (colors) needed to hold the procedure, the cache colors used to map the procedure, and the unavailable-set of colors which represents the cache lines where the procedure should not be mapped to. We do not actually store the unavailable-set of colors. Instead, each procedure contains pointers to its parents and children in the call graph. The unavailable-set of colors is then constructed for a procedure as needed by unioning all the colors used to map each of the procedure’s parents and children, only if the edge joining the procedure to the parent or child has already been processed in the algorithm.
Our algorithm starts by building a procedure call graph, similar to the one shown in Figure 1. Every procedure in the program is represented by a node in the graph, and each edge between nodes represents a procedure call. Multiple call sites to the same procedure from a single procedure are represented as a single edge in our call graph. The edge values represent the number of times each edge (i.e., call path) was traversed. The sum of the edge weights entering and exiting a node indicates the number of incoming and outgoing procedure calls and this determines that procedure’s popularity.
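The graph construction just described can be sketched as follows. The call-trace input format and helper name are illustrative, not from the paper; the key points are that repeated call sites collapse into one weighted edge, and that a procedure's popularity is the sum of the weights on its incident edges:

```python
# Build a weighted call graph from a trace of (caller, callee) call events.
from collections import Counter

def build_call_graph(call_trace):
    """Return (edges, popularity): merged weighted edges and per-procedure
    popularity (sum of incoming and outgoing call counts)."""
    edges = Counter(call_trace)               # merges repeated call sites
    popularity = Counter()
    for (caller, callee), w in edges.items():
        popularity[caller] += w               # outgoing calls
        popularity[callee] += w               # incoming calls
    return dict(edges), dict(popularity)

edges, pop = build_call_graph([("A", "B"), ("A", "B"), ("A", "E")])
print(edges, pop)
```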
After the call graph is built, the popularity of each procedure is considered. Based on popularity, the graph is split into the popular procedures and edges and the unpopular procedures and edges. The popular procedure set contains those procedures which are frequently a caller or a callee, and the popular edge set contains the frequently executed procedure call edges. The unpopular procedures and edges are those not included in the above two popular sets. Note, there is a difference between popular procedures and time consuming procedures (procedures that consume a noticeable portion of a program’s overall execution time). A time consuming procedure may be labeled unpopular because it rarely switches control flow to another procedure. If a procedure rarely switches control flow, one does not have to worry about eliminating cache conflicts between this procedure and the rest of the call graph. In the example in Figure 1, popular procedures are $A$, $B$, $C$, $D$, and $E$, and the unpopular procedures are $F$ and $G$ since they are never executed. The popular edges are $A \rightarrow B$, $B \rightarrow C$, $C \rightarrow D$, $A \rightarrow E$, and $E \rightarrow C$, and the unpopular edges are $E \rightarrow F$ and $F \rightarrow G$. The algorithm then sorts the popular edges in descending order using the edge weights. The unpopular procedures are sorted by procedure size, and are used to fill in spaces created by our color mapping.
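As a concrete check, the popularity split for the example can be reconstructed from the edge weights listed in Figure 2. Treating the never-executed edges $E \rightarrow F$ and $F \rightarrow G$ as weight zero, and using "weight > 0" as the popularity threshold, are our simplifications for illustration:

```python
# Split the example call graph into popular/unpopular edges and procedures,
# sorting popular edges in descending weight order as the algorithm does.

edges = {("E", "C"): 100, ("A", "B"): 90, ("B", "C"): 80,
         ("C", "D"): 70, ("A", "E"): 40, ("E", "F"): 0, ("F", "G"): 0}

popular_edges = sorted((e for e, w in edges.items() if w > 0),
                       key=lambda e: -edges[e])
popular_procs = {p for e in popular_edges for p in e}
all_procs = {p for e in edges for p in e}
unpopular_procs = all_procs - popular_procs   # fill material (fluff)

print(popular_edges[0], sorted(popular_procs), sorted(unpopular_procs))
```

This reproduces the classification in the text: popular procedures $A$ through $E$, with $F$ and $G$ left as fill.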
After the program’s popularity has been decided, we process all of the popular edges starting with the most frequently executed and ending with the least frequently executed. There are four possible cases when processing an edge in our algorithm. The first case occurs when an edge connects two procedures that have not yet been mapped. In this case, the two procedures are merged into a compound node. The two procedures are placed next to each other in the layout and they are assigned cache line colors starting at an arbitrary color (position). Each procedure is assigned the number of cache line colors equal
to \((\text{procedure's size in bytes})/\text{(cache line size in bytes)}\). After the colors have been assigned, the unavailable-set for each procedure includes the colors (cache lines) used by the other procedure at the other end of the call edge. The remaining three cases encountered when processing an edge include: when the call edge links two procedures in two different compound nodes, when the edge is between an unprocessed procedure and a procedure in a compound node, and when the edge being processed is a call between two procedures in the same compound node. The following four paragraphs discuss the details for the four edge processing cases in our algorithm.
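A minimal sketch of this color bookkeeping, assuming a direct-mapped cache; the sizes, starting positions, and helper names here are illustrative, not taken from the paper:

```python
# Sketch of cache line color assignment for a direct-mapped cache.
LINE = 32
NUM_COLORS = 8 * 1024 // LINE           # 256 colors for an 8K cache

def colors_for(start_line, size_bytes):
    """Colors (cache lines) a procedure occupies from `start_line` on."""
    n = -(-size_bytes // LINE)          # ceil(size / line size)
    return {(start_line + i) % NUM_COLORS for i in range(n)}

# Case I: an edge joins two unmapped procedures. They are placed next
# to each other, and each records the other's colors as unavailable.
e_colors = colors_for(0, 64)            # two lines, e.g. "red", "green"
c_colors = colors_for(2, 64)            # two lines, e.g. "blue", "yellow"
unavailable = {"E": set(c_colors), "C": set(e_colors)}
```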
Case I: The first case, when an edge connects two unmapped procedures, is shown in the first two steps of Figure 2. The algorithm starts with the heaviest edge (most heavily traversed) in the call graph’s set of popular edges, \(E \rightarrow C\), and forms a compound node \(E \rightarrow C\). This compound node is arbitrarily mapped to the cache line colors. The unavailable-set of colors for \(E\) now includes \textit{blue} and \textit{yellow} (the colors \(C\) maps to) and the unavailable-set for \(C\) now includes \textit{red} and \textit{green} (the colors \(E\) maps to). The second step in Figure 2 processes the edge \(A \rightarrow B\) between two unmapped procedures. The two procedures are combined into a compound node, and their unavailable-sets are shown in the Figure. Note that the unavailable-set for \(A\) does not include colors \textit{red} and \textit{green}, even though there is an edge \(A \rightarrow E\) in the call graph and node \(E\) is mapped to the colors \textit{red} and \textit{green}. This is because the procedure’s unavailable-set only includes parent and children procedures connected by edges that have been processed, and the edge \(A \rightarrow E\) has not yet been processed. We chose this restriction since the unavailable-set of colors is used to restrict where to place procedures, and when placing a procedure, the procedure should only be restricted by the edges with the heaviest (most important) weights.
Case II: The second case occurs when the edge being processed connects two procedures in different compound nodes. For this case, the two compound nodes are merged together, concatenating the compound node that is shorter in length (number of procedures) to the larger compound node. This is shown in step 3 of Figure 2 for edge \(B \rightarrow C\), which combines two compound nodes \(E \rightarrow C\) and \(A \rightarrow B\). The compound nodes both contain the same number of procedures, so we arbitrarily choose \(A \rightarrow B\) to be the smaller compound node. Our algorithm now decides where to map, and how to order, \(A \rightarrow B\) since there are four possibilities: \(A \rightarrow B \rightarrow E \rightarrow C\), \(B \rightarrow A \rightarrow E \rightarrow C\), \(E \rightarrow C \rightarrow A \rightarrow B\) and \(E \rightarrow C \rightarrow B \rightarrow A\). The first decision to make is on which side of compound node \(E \rightarrow C\) should \(A \rightarrow B\) be placed. This is decided by taking the shortest \(\text{mod(distance to procedure in compound node/cache size)}\). For our example, the distance to \(C\) is used and is calculated to be the distance in the number of cache line colors from the middle of procedure \(C\) to each end of the compound node. From the mapping in step 1 of Figure 2, this distance is 1 cache line to the right of \(C\) in the compound node \(E \rightarrow C\) and 3 cache lines to the left of \(C\) in compound node \(E \rightarrow C\). Therefore the algorithm decides to place \(A \rightarrow B\) to the right of \(E \rightarrow C\). The \(\text{mod(distance to procedure/cache size)}\)
heuristic is used to increase the probability of being able to easily map the second compound node to non-conflicting cache colors. Note that placing $A - B$ to the right of $E - C$ produces a mapping where no cache conflicts occur, whereas had we chosen to put $A - B$ on the left side of $E - C$, this would have caused a cache coloring conflict. The next decision our algorithm makes is in which order to place $A - B$, either $E - C - A - B$ or $E - C - B - A$. This is decided by choosing the ordering so that the two procedures connected by the edge being processed (i.e., $B \rightarrow C$) are closest to each other in the program layout. Thus we arrive at a mapping of $E - C - B - A$. After this is decided, the algorithm makes sure that the two nodes for the edge being processed, $B$ and $C$, have no cache lines that conflict. This is done by comparing the colors used by $C$ with the colors used by $B$. If there is a conflict, the smaller compound node is shifted away from the larger compound node until there is no longer a conflict. The space left in the mapping will be filled with unpopular procedures. If a conflict cannot be avoided, then the original location is used. When the final position for the smaller compound node is determined, the algorithm goes through each procedure and updates the colors (cache lines) used by each procedure. Notice that this changes the unavailable-set of colors: $A$’s set of unavailable colors changes to red and $B$’s changes to green, blue and yellow.
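The side-selection heuristic for Case II can be sketched as below; the distances are those of the running example (measured in cache lines from the middle of $C$ to each end of the larger compound node), and the function name is ours:

```python
# Sketch of the Case II side-selection heuristic: the smaller compound
# node goes on the side of the larger one that minimizes
# mod(distance to the shared procedure, cache size).
CACHE_LINES = 256

def pick_side(dist_left, dist_right):
    """Distances from the middle of the anchor procedure (here C) to
    the left and right ends of the larger compound node."""
    if dist_right % CACHE_LINES <= dist_left % CACHE_LINES:
        return "right"
    return "left"

# In step 3 of Figure 2, C is 1 line from the right end of E-C and
# 3 lines from its left end, so A-B is concatenated on the right.
side = pick_side(dist_left=3, dist_right=1)
```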
Case III: The third type of edge connects an unmapped procedure and a procedure in a compound node. We process this case similarly to case II as described in the previous paragraph. In this situation, the unmapped procedure is placed at one end of the compound node, with the side decided using the shortest $mod(distance to procedure/cache size)$ heuristic as described above. Once a side is chosen, the cache line colors used by the newly mapped procedure are checked against the colors used by its corresponding procedure in the compound node. If there is a conflict, space is inserted in the address space between the newly mapped procedure and the compound node until the newly mapped procedure can be assigned colors which do not conflict. If this is not possible, the procedure is left at its original position, adjacent to the compound node. Step 4 in Figure 2 shows this scenario. The algorithm next processes edge $C \rightarrow D$, where $C$ is contained in a compound node and $D$ has not yet been mapped. The algorithm first decides on which side of the compound node to place $D$. Since both of the distances to the middle of $C$ are the same (3 cache lines), the algorithm arbitrarily chooses a side and $D$ is placed to the left of the compound node. The colors used for $D$ at this location are blue and yellow. This would create a conflict since those colors overlap with the colors used by $C$. Therefore the algorithm shifts $D$ to the left until it finds a suitable location (if possible) where $D$ no longer conflicts with $C$. This location for $D$ is found at the colors red and green. This leaves a space in the compound node, as shown in step 4. If a space is created inside of a compound node, the space is filled with the largest unpopular procedure which will fit. This is shown in step 5 of Figure 2, where the space created by shifting $D$ is filled with the unpopular procedure $G$.
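The Case III shift can be sketched as follows; the line positions and sizes are illustrative stand-ins rather than the exact layout of Figure 2:

```python
# Sketch of the Case III shift: an unmapped procedure placed next to a
# compound node is slid away until its colors no longer overlap those
# of its call partner; if no such spot exists, it stays put.
NUM_COLORS = 256

def colors(start_line, n_lines):
    return {(start_line + i) % NUM_COLORS for i in range(n_lines)}

def place_avoiding(start_line, n_lines, partner_colors,
                   max_shift=NUM_COLORS):
    """Shift one line at a time until the procedure's colors are
    disjoint from its partner's; keep the original spot on failure."""
    for shift in range(max_shift):
        pos = start_line - shift
        if not (colors(pos, n_lines) & partner_colors):
            return pos
    return start_line

# A two-line procedure lands on the same two colors as its partner (a
# conflict); shifting left by two lines reaches conflict-free colors.
d_pos = place_avoiding(start_line=2, n_lines=2, partner_colors={2, 3})
```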
Case IV: The fourth and final case to handle occurs when the edge being processed has both procedures belonging to the same compound node. This is a very important case since the algorithm finally gets to use the unavailable-set to avoid cache conflicts. If the colors used by the two procedures of the edge overlap (conflict), then the procedure closest (in terms of cache lines) to either end of the compound node is moved past the end of the compound node, creating a space or gap in the compound node where it used to be located. This space will later be filled by an unpopular procedure or procedures. The unavailable-set for the procedure that is moved past the end of the compound node is updated to include the colors of the corresponding procedure left inside the compound node. The algorithm then checks to see if the current colors used by the procedure conflict with any of its unavailable colors. If there is a conflict, the procedure is shifted away from the compound node in the address space until there is no longer a conflict with its unavailable-set of colors. If we are unable to find a non-conflicting location for the procedure, the original location inside the compound node is used. This final scenario is shown in step 6 in Figure 2, where the edge from \( A \rightarrow E \) is processed and its two procedures are in the same compound node. In examining the colors used by both \( A \) and \( E \), we see that the two procedures’ colors conflict since they map to the same cache block (green). The algorithm tries to eliminate this conflict by choosing to move \( A \), since it is the closest to an end of the compound node. The algorithm moves \( A \) past the end of the compound node, mapping it to the color blue. When checking \( A \)’s new mapping against its unavailable-set (red and green), no conflicts are found, so this is an acceptable location for procedure \( A \).
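The Case IV check can be sketched with symbolic color names; the sets mirror step 6 of the running example, but the helper and variable names are ours:

```python
# Sketch of the Case IV conflict check within one compound node.
def conflict(x, y):
    """Two procedures conflict if they share any cache line color."""
    return bool(x & y)

a_colors, e_colors = {"green"}, {"red", "green"}
has_conflict = conflict(a_colors, e_colors)      # they share "green"

# A is nearest an end of the compound node, so it is moved past that
# end, landing on "blue"; the new spot must avoid both E's colors and
# A's unavailable-set, which records earlier (heavier) edge constraints.
unavailable_A = {"red", "green"}
new_a_colors = {"blue"}
placement_ok = (not conflict(new_a_colors, e_colors)
                and not conflict(new_a_colors, unavailable_A))
```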
Using the unavailable-set in this way guarantees that previous mappings for \( A \) take precedence over the edge \( A \rightarrow E \), because those mappings were more important. Finally, since \( A \) was moved in step 6, it created a space in the compound node, as shown in Figure 2. After any space is made inside of a compound node, that gap is filled with a procedure(s) from the unpopular list. In our example, the remaining procedure \( F \) is used to fill the gap. We then arrive at the final mapping as shown in step 7, which has no first-generation cache conflicts.
This process is repeated, until all of the edges in the popular set have been processed. Any remaining procedures in the unpopular list are mapped using a simple depth-first traversal of the unpopular edges that join these unpopular procedures. This can create several disjoint compound nodes. These nodes are then ordered in the final layout, from the most frequently executed to the least frequently executed.
2.2 Comparison to Previous Work
There has been considerable work in the area of profile-driven program optimizations and procedure reordering. We now discuss relevant previous work and how it relates to our algorithm.
2.2.1 Knowledge of Cache Size
McFarling examined improving instruction cache performance by not caching infrequently used instructions and by performing code reordering compiler optimizations [10]. The mapping algorithm works at the basic block level and concentrates on laying out the code based on loop structures in the program. The algorithm constructs a control flow graph with basic block, procedure, and loop nodes. It then tries to partition the graph, concentrating on the loop nodes, so that the height of each partitioned tree is less than the size of the cache. If this is the case, then all of the nodes inside of the tree can be trivially mapped since they will not interfere with each other in the cache. If this is not the case, then some nodes in the mapping might conflict with others in the cache.
The notion of wanting the mapped tree size smaller than the cache size also applies to our algorithm when we partition the call graph into popular and unpopular procedures and edges. Partitioning the call graph actually splits the graph into several disjoint subgraphs comprised of the popular procedures and edges. This has the effect of breaking the call graph into smaller, and more manageable, pieces. If the sum of all the procedure sizes in a subgraph is smaller than the size of the instruction cache, then there will be no conflicting colors when laying out all of the procedures in the subgraph and the mapping can be done trivially as suggested by McFarling. The benefit of our algorithm over McFarling’s is that instead of just taking into consideration the cache size we also take into consideration the exact cache lines used by each procedure in the mapping. This allows our algorithm to effectively eliminate first-generation cache conflicts, even when the popular subgraph size is larger than the instruction cache, by using the color mapping and the unavailable-set of colors.
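The fits-in-cache condition above reduces to a one-line check; the procedure names and sizes here are illustrative assumptions:

```python
# Sketch of the fits-in-cache check: if a popular subgraph's procedures
# together are no larger than the cache, they can be laid out
# contiguously with no conflicting colors (a trivial mapping).
CACHE_BYTES = 8 * 1024

def trivially_mappable(proc_sizes):
    return sum(proc_sizes.values()) <= CACHE_BYTES

small = {"main": 2048, "scan": 1024, "emit": 512}     # 3.5K: fits
large = {"sim": 6144, "decode": 4096, "alu": 2048}    # 12K: does not
fits_small = trivially_mappable(small)
fits_large = trivially_mappable(large)
```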
Torrellas, Xia and Daigle [18] (TXD) also described an algorithm for code layout for operating system intensive workloads. Their work takes into consideration the size of the cache and the popularity of code. Their algorithm partitions the operating system code into executed and non-executed parts at the basic block level. It then repeatedly creates sequences of basic blocks from the executed code. All the basic blocks with weights above a threshold value are removed from the graph and put into a sequence, which is a list of basic blocks. All the basic blocks in a sequence are then laid out together in the address space. The threshold value is then lowered and the process is repeated until all the frequently executed basic blocks have been put into sequences. Their algorithm takes into consideration the cache size by mapping the most frequently executed sequence into a special area in the cache. The rest of the sequences are then mapped to areas in the cache, avoiding this special area. This creates gaps in the program layout which are then filled by the non-executed basic blocks. The TXD algorithm is designed for mapping operating system code to increase performance, by keeping commonly used system code in the cache. Our algorithm is designed for application code and tries to eliminate as many first-generation conflicts as possible. These two goals are different and may champion the use of different algorithms. The techniques used by TXD, which work well for operating system code, may not work as well to eliminate first-generation cache conflicts in application code.

Figure 3: Procedure mapping for a greedy depth-first traversal of the call graph.
As described in §2.1, our algorithm uses unpopular procedures in a manner similar to how TXD uses non-executed operating system basic blocks. We use the unpopular code in an application to fill in spaces created when mapping procedures. The two approaches differ in that our algorithm uses the unpopular procedures to try to eliminate cache conflicts for all popular procedures by performing a color mapping that gives priority to the procedures that switch control flow the most in the call graph. In comparison, TXD uses the non-executed code to eliminate cache conflicts for only some of the popular basic blocks: the most frequently executed sequence(s). Keeping track of the colors used by each procedure, and using the unavailable-set to eliminate as many conflicts as possible, makes our algorithm more general for eliminating first-generation conflicts in application code.
Another technique used by TXD which works well for operating system code, but which may not work as well for application code, is recursively breaking up the basic blocks into sequences using a threshold value. This technique does not take into consideration the connectivity of the basic blocks in the sequence. Therefore a sequence could be laid out together in the address space, with the basic blocks having little or no temporal locality, and the basic blocks in one sequence could cause conflict misses with basic blocks in another sequence. For application code, our coloring algorithm offers better performance over a recursive threshold partitioning algorithm since we take into consideration the connectivity of the graph.
2.2.2 Procedure Mapping
Hwu and Chang described an algorithm for improving instruction cache performance using inlining, basic block reordering, and procedure reordering compiler optimizations [8]. Their algorithm builds a call graph with weighted call edges produced by profiling. For the procedure reordering, their algorithm processes the call graph depth first, mapping the procedures to the address space in depth-first order. Their depth-first traversal is guided by the edge weights determined by the profile, where a heavier edge is traversed (laid
Important points of decision in the Pettis and Hansen algorithm

<table>
<thead>
<tr>
<th>Edge processed</th>
<th>Decision</th>
<th>Outcome</th>
</tr>
</thead>
<tbody>
<tr>
<td>(1) B $\xrightarrow{(80)}$ C</td>
<td>How to merge chains E-C and B-A?</td>
<td>Chain E-C is placed next to B-A, since C-B satisfies the “closest is best” strategy</td>
</tr>
<tr>
<td>(2) C $\xrightarrow{(70)}$ D</td>
<td>Where to add procedure D in chain E-C-B-A?</td>
<td>The 2 possible locations for D both cause cache conflicts with C.</td>
</tr>
</tbody>
</table>
Figure 4: Procedure mapping for the Pettis and Hansen greedy algorithm.
out) before an infrequently executed edge. Using the call graph shown in Figure 1, a depth-first traversal following the most frequently executed edges would traverse the edges in order of $A \rightarrow B$, $B \rightarrow C$, $C \rightarrow D$, $A \rightarrow E$, $E \rightarrow C$, $E \rightarrow F$, and $F \rightarrow G$. Figure 3 represents the final mapping achieved by their algorithm. The drawback of this approach occurs when the depth-first traversal follows an unimportant path in the control flow graph, which will then lay out unpopular procedures before considering procedures on a more important path. This is seen in Figure 1 where their algorithm processes the edge $C \rightarrow D$ before the edge $E \rightarrow C$. This can create significant first-generation cache conflicts in the call graph, as seen by the conflict between procedures $E$ and $C$ in Figure 3.
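The weight-guided depth-first order above can be reproduced with a small sketch; the edge weights are assumed (only their relative order matters) and `dfs_layout` is our own illustrative helper, not Hwu and Chang's implementation:

```python
# Sketch of a depth-first layout order guided by edge weights: from the
# root, always follow the heaviest unvisited call edge first.
def dfs_layout(graph, weights, root):
    order, seen = [], set()
    def visit(p):
        if p in seen:
            return
        seen.add(p)
        order.append(p)
        kids = sorted(graph.get(p, []),
                      key=lambda c: weights[(p, c)], reverse=True)
        for c in kids:
            visit(c)
    visit(root)
    return order

graph = {"A": ["B", "E"], "B": ["C"], "C": ["D"],
         "E": ["C", "F"], "F": ["G"]}
weights = {("A", "B"): 90, ("B", "C"): 80, ("C", "D"): 70,
           ("A", "E"): 60, ("E", "C"): 100, ("E", "F"): 1, ("F", "G"): 1}
order = dfs_layout(graph, weights, "A")
# → ["A", "B", "C", "D", "E", "F", "G"]
```

With these weights the traversal visits the edges in the order given in the text, and the resulting procedure layout matches Figure 3.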
Pettis and Hansen [11] also described a number of techniques for improving code layout that include: basic block reordering, procedure splitting, and procedure reordering. Their algorithm employs a closest-is-best strategy to perform procedure reordering. The reordering starts with the heaviest executed call edge in the program call graph. The two nodes connected by the heaviest edge will be placed next to each other in the final link order. This is taken care of by merging the two nodes into a chain. The remaining edges entering and exiting the chain node are coalesced. This process continues until the whole call graph is merged into chains which can no longer be merged. Figure 4 shows the key points of the Pettis and
Hansen [11] procedure mapping algorithm when processing the call graph in Figure 1. Their algorithm starts by processing edge $E \rightarrow C$, merging nodes $E$ and $C$ into a chain $E \rightarrow C$. This is followed by edge $A \rightarrow B$, where $A$ and $B$ are merged into a chain $A \rightarrow B$. The next edge to be processed is $B \rightarrow C$. This brings the algorithm to the first point shown in Figure 4, which is how to merge the chains $E \rightarrow C$ and $A \rightarrow B$. At this point their algorithm uses a closest-is-best heuristic, and chooses to place procedure $B$ next to $C$, since the edge $B \rightarrow C$ has a stronger weight than $A \rightarrow E$. The next edge to be processed is from $C \rightarrow D$. This means procedure $D$ needs to be placed at the front or end of chain $E \rightarrow C \rightarrow B \rightarrow A$. Figure 4 shows that, no matter which side of the chain $D$ is placed on, a first-generation cache conflict will occur with $C$. This illustrates the main drawback of their approach, which is that the algorithm fails to monitor the chain size. Therefore, once a chain becomes larger than the size of the instruction cache, the effectiveness of their closest-is-best and node-merging strategies decreases. In looking at the final mapping in Figure 4, we see that the mapping has first-generation conflicts between procedures $A$ and $E$, and procedures $C$ and $D$.
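The closest-is-best merge can be sketched as below; `merge_chains` is a simplified stand-in for the Pettis and Hansen chain merge that just tries both orientations of each chain and keeps the concatenation placing the edge's endpoints nearest each other:

```python
# Sketch of a closest-is-best chain merge: when an edge joins two
# chains, try the candidate concatenations and keep the one that
# places the edge's two procedures closest together.
def merge_chains(c1, c2, edge):
    a, b = edge
    best, best_gap = None, None
    for x in (c1, c1[::-1]):
        for y in (c2, c2[::-1]):
            cand = x + y
            gap = abs(cand.index(a) - cand.index(b))
            if best is None or gap < best_gap:
                best, best_gap = cand, gap
    return best

# Processing edge B -> C with chains E-C and A-B yields E-C-B-A,
# the merge chosen in Figure 4.
merged = merge_chains(["E", "C"], ["A", "B"], ("B", "C"))
```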
Our algorithm improves on the Hwu and Chang and the Pettis and Hansen procedure reordering algorithms by keeping track of the cache lines (colors) used by each mapped procedure when performing the procedure mapping. This allows us to effectively map procedures, eliminating cache conflicts even when the compound node size grows larger than the instruction cache. Neither of their algorithms takes into consideration the attributes of the cache, such as cache size, line size, and associativity. They also do not consider leaving spaces in their layout, which can be used to reduce the number of cache conflicts. As shown in Figure 2, when using our color mapping algorithm, no first-generation cache conflicts occur for the call graph shown in Figure 1. In comparison, Figure 3 and Figure 4 show that both the Hwu and Chang and the Pettis and Hansen algorithms suffer from first-generation cache conflicts for the reasons discussed above.
3 Methodology
To evaluate the performance of our algorithm, we modified $gcc$ version 2.7.2 to use our new procedure mapping algorithm when linking an application. This has restricted the type of applications we can examine in this study to programs that can be compiled with $gcc$. Therefore, the programs we examined are from the SPECInt95 suite, SPECInt92 suite, and three gnu applications.
Table 1: Measured attributes of traced programs. The input is used to both profile the program and gather performance results. The attributes include the number of instructions traced when simulating the program, the executable size of the program, and the number of static procedures in the program. Also shown is the percentage of the executable and the percentage of static procedures that the popular procedures account for after partitioning the program into popular and unpopular procedures when using the color mapping algorithm. The last column shows the percentage of unpopular procedures in terms of the size of the executable that were used as filler (to fill in spaces) in our color mapping algorithm.

We used trace driven simulation to quantify the instruction cache performance of our algorithm [9]. The trace driven simulations were obtained using ATOM, an execution-driven simulation tool available from Digital Equipment Corporation [16]. ATOM allows instrumentation of binaries on DEC Alpha processors and can produce the necessary information about the frequency of procedure calls, procedure sizes, and the program’s control flow graph. In our simulations we model a direct-mapped 8 kilobyte instruction cache with a 32 byte block size, similar to the size used for the DEC Alpha 21064 and DEC Alpha 21164 first-level instruction cache. Therefore, in our color mapping, the number of colors is equal to 256, which is equal to the number of direct-mapped cache blocks.
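The cache geometry just described fixes the color of every instruction address; a small sketch (the addresses are illustrative):

```python
# Address-to-color mapping for the simulated cache: a direct-mapped 8K
# cache with 32-byte lines gives 256 colors, and an address's color is
# its line index modulo the number of lines.
CACHE_BYTES, LINE = 8 * 1024, 32
NUM_COLORS = CACHE_BYTES // LINE        # 256

def color(addr):
    return (addr // LINE) % NUM_COLORS

c1 = color(0x0000)   # line 0
c2 = color(0x2000)   # 8K away: same color, hence a potential conflict
```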
Table 1 describes the static and dynamic attributes for the programs we studied. The first column contains the program name, and the second column shows the input used to profile each program. The third column shows the number of instructions traced for the input used. The fourth column shows the size of each program in kilobytes, and the fifth column shows the number of static procedures in the program. The next two columns show results for the popular procedures in the program as determined by our color mapping algorithm described in §2.1. The sixth column shows the percentage of the executable that contains only the popular procedures, and the seventh column shows the percentage of static procedures which are considered popular. The final column shows the percentage of the executable consisting of unpopular procedures used as filler to fill in spaces created in the color mapping (as described in §2.1). We used profile information to guide the partitioning of the program into popular and unpopular parts. All the procedures and edges that account for less than 1% of the switches in control flow in the call graph are labeled as unpopular. We can see that by splitting each program into popular and unpopular sets, the popular procedures make up only 3% to 12% of the static executable size, and this accounts for 4% to 21% of the static procedures in the program. Mapping these procedures correctly will eliminate most of the cache conflicts in the application for the inputs we examined.
Table 2: Instruction cache performance for the Original mapping, Pettis and Hansen (P&H) mapping, and our Color mapping algorithm. The first three columns show the instruction cache miss rates. The next two columns show the percent reduction in the miss rates when using our Color mapping algorithm in comparison to the Original and P&H procedure mapping. The last three columns show the number of instruction cache misses.
<table>
<thead>
<tr>
<th rowspan="2">Program</th>
<th colspan="3">I-Cache Miss Rate</th>
</tr>
<tr>
<th>Original</th>
<th>P&H</th>
<th>Color</th>
</tr>
</thead>
<tbody>
<tr>
<td>li</td>
<td>1.4%</td>
<td>0.3%</td>
<td>0.3%</td>
</tr>
<tr>
<td>m88ksim</td>
<td>3.0%</td>
<td>1.7%</td>
<td>1.4%</td>
</tr>
<tr>
<td>perl</td>
<td>7.5%</td>
<td>4.7%</td>
<td>4.4%</td>
</tr>
<tr>
<td>espresso</td>
<td>0.9%</td>
<td>0.9%</td>
<td>0.5%</td>
</tr>
<tr>
<td>eqntott</td>
<td>0.2%</td>
<td>0.3%</td>
<td>0.1%</td>
</tr>
<tr>
<td>bison</td>
<td>1.5%</td>
<td>1.5%</td>
<td>1.1%</td>
</tr>
<tr>
<td>flex</td>
<td>2.2%</td>
<td>1.7%</td>
<td>1.7%</td>
</tr>
<tr>
<td>gzip</td>
<td>1.1%</td>
<td>0.0%</td>
<td>0.0%</td>
</tr>
<tr>
<td><strong>Average</strong></td>
<td><strong>2.2%</strong></td>
<td><strong>1.4%</strong></td>
<td><strong>1.2%</strong></td>
</tr>
</tbody>
</table>
To evaluate the performance of our color mapping algorithm we also implemented the Pettis and Hansen algorithm described in Section 2.2. Table 2 shows the instruction cache miss rates for the original program, the Pettis and Hansen algorithm, and our cache coloring algorithm. For the results shown, the same input used in Table 1 was used for both profiling the program and gathering the results. The second column provides the cache miss ratio for the *Original* program using the standard link order for the benchmark executables as specified in the makefile provided with the programs. The next column indicates the cache miss ratio after applying the Pettis and Hansen (*P&H*) algorithm. The fourth column, labeled *Color*, refers to the new link order produced by our cache color mapping algorithm. The next two columns show the percent reduction in the cache miss rate when using our algorithm in comparison to the original program and the P&H mapping. The last three columns show the number of instruction cache misses for the original program, P&H layout, and our color mapping. ¹
As seen in Table 2, when using the color mapping algorithm the miss rate of the original program is decreased on average by 45%, with reductions as high as 99% for *gzip*. In comparison to the P&H algorithm our color mapping reduces the miss rate on average by 14%. The table shows that in comparison to P&H our algorithm provides a substantial reduction in the cache miss rate for the four programs *m88ksim*, *espresso*, *eqntott*, and *bison*, provides a smaller improvement for *perl*, and has approximately the same instruction cache miss ratio for *li*, *flex*, and *gzip*.
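The headline averages follow directly from the Table 2 miss rates, with the reduction computed as (baseline − color) / baseline:

```python
# Reproducing the average reductions from the Table 2 averages.
orig_avg, ph_avg, color_avg = 2.2, 1.4, 1.2   # average miss rates (%)

vs_original = round((orig_avg - color_avg) / orig_avg * 100)  # → 45
vs_ph = round((ph_avg - color_avg) / ph_avg * 100)            # → 14
```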
¹Only averages are shown for the miss rate columns, since the averages for the other columns in the table are not meaningful.
Our algorithm performs better for programs like m88ksim, espresso, and bison because the size of the popular call graph for these applications is larger than the size of the instruction cache. This allows our algorithm to fully exploit cache line coloring, arriving at a layout that significantly reduces the number of first-generation cache conflicts.
For programs such as flex and gzip, the reason why our algorithm and the P&H algorithm have approximately the same miss rate can be seen by looking at the partitioning part of our algorithm. Here, the program is partitioned into popular and unpopular procedures and edges. In performing this partitioning, these programs are split into disjoint subgraphs where most of the subgraphs are smaller than the size of the cache. Since these popular subgraphs easily fit within the instruction cache, we can arbitrarily map their procedures. For example, gzip visits only a small number of very popular procedures when processing the input file gcc-2.7.2.tar. This is seen in Table 1, where the size of the popular procedures for gzip amount to only 10K (3% of the total executable size), and the simulated instruction cache size we used is 8K. For applications where the popular subgraphs fit within the size of the instruction cache, our color mapping algorithm and the Pettis and Hansen algorithm will have similar performance.
Table 2 shows that for eqntott, the instruction cache miss rate when applying the P&H mapping is larger than the miss rate of the original mapping. This effect occurs for two reasons. One reason is the poor choice made by the P&H algorithm when merging chains that sum to a size larger than the instruction cache, creating cache conflicts within the newly merged chain. The second reason is that both our algorithm and the P&H algorithm only model first-generation conflicts in the call graphs. The call graph used in this study only models the frequency of procedure calls between a procedure and its direct children. It does not model the temporal locality between a procedure and all of the procedures that it can possibly reach in the call graph, and any of these reachable procedures can cause cache conflicts. This emphasizes the fact that finding an optimal mapping to minimize conflicts is NP-complete [10]. In the next section we suggest further optimizations to our algorithm in order to address misses beyond first-generation cache conflicts.
The results in Table 2 are all gathered using the same input that was also used to profile the program. An important issue involving profile-based optimizations is how well a single input captures the typical behavior of future runs of the program. Several researchers have investigated this problem and have found that programs have predictable behavior between different inputs [4, 7, 19]. Even so, care must be taken when choosing the inputs to guide optimizations. In this vein, we took a few of the optimized programs used to produce the results in Table 2 and ran them using different inputs. Table 3 shows the cache miss rates for these programs using different inputs. For these different inputs, the results show that a similar reduction in miss rate of 45% is achieved when comparing our color mapping algorithm to the original layout, and the reduction in miss rate for our algorithm when compared to P&H is about the same at 15%. In general, when examining different inputs our algorithm still shows significant reductions in the original instruction cache miss ratios, while consistently showing an advantage over P&H.
Table 3: Instruction cache performance using multiple inputs for the Original mapping, Pettis and Hansen (P&H) mapping, and our Color mapping algorithm. In calculating the overall average, a value for espresso is included only once, which is the average miss rate for espresso on all of the inputs shown.
To examine the impact procedure reordering optimizations have on the performance of these programs, Figures 5 and 6 show the estimated performance in instructions issued per cycle (IPC) for the original program, P&H mapping, and our color algorithm for two different architectures. The higher the IPC the better. For these results we assume each instruction takes one cycle to execute, and that the only pipeline stalls are due to misses in the instruction cache. Figure 5 shows a conservative estimate of performance using a single issue architecture with a small (5 cycle) first-level instruction cache miss penalty. Figure 6 shows an aggressive 4-way issue architecture with a larger (10 cycle) first-level instruction cache miss penalty. The results in Figure 5 show that for a conservative architecture our color mapping algorithm increases the IPC on average by 5% when compared to the original mapping, and by 1% when compared to P&H. The results in Figure 6 show that for a more aggressive architecture our color mapping algorithm increases the IPC on average by 26% when compared to the original mapping, and by 6% when compared to P&H. These two graphs show the potential increase in performance when using our algorithm. As seen in the two figures, the performance for our algorithm is the same for li, flex, and gzip when compared to P&H for the reasons discussed in the previous paragraphs. For programs like m88ksim, espresso, and bison, which have larger, more complicated call graphs, the figures show that the increase in performance for our algorithm is 2% to 13% when compared to P&H.
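A crude sketch of this performance model under the stated assumptions (one cycle per instruction, stalls only on instruction cache misses); the miss rate plugged in below is the Table 2 color-mapping average, and the exact formula is our simplification, so it need not reproduce the figures' bars exactly:

```python
# Simple IPC model: cycles per instruction = issue time plus the
# expected miss stall, so IPC = 1 / (1/issue_width + miss_rate * penalty).
def ipc(miss_rate, issue_width, penalty):
    cpi = 1.0 / issue_width + miss_rate * penalty
    return 1.0 / cpi

conservative = ipc(0.012, issue_width=1, penalty=5)    # single issue, Fig. 5
aggressive = ipc(0.012, issue_width=4, penalty=10)     # 4-way issue, Fig. 6
```

Under this model the same miss rate costs relatively more on the wider machine, which is why the aggressive architecture shows the larger IPC gains from reordering.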
One issue to consider with our algorithm is that in order to avoid first-generation cache conflicts our color mapping will insert space into compound nodes as described in §2.1. This space is later filled with unpopular procedures. This could possibly have two adverse effects. The first is that, if no unpopular procedure
Figure 5: Instructions issued per cycle for a single issue architecture, with an 8K direct mapped instruction cache which has a 5 cycle cache miss penalty.
Figure 6: Instructions issued per cycle for a 4-way issue processor with an 8K direct mapped instruction cache which has a 10 cycle cache miss penalty.
can be found when trying to fill a space, then this could result in an increase in the executable size. For the programs we examined this was never an issue. As seen in Table 1, on average only 8% of the procedures were labeled as popular, leaving more than enough unpopular procedures to fill in any gaps that were created by the color mapping algorithm. The second effect is that the size of the working set of the program may increase due to the algorithm filling spaces in the compound nodes with unpopular procedures. From our results we do not believe this will be an issue, but further investigation is needed. When performing the color mapping for the programs we examined, on average only 3K worth of unpopular procedures were used as filler and inserted into the popular color mapping, as seen in Table 1. Since the average size for all of the popular procedures in a program was 33K, this increases the size of the popular mapping section of the address space by only 8%.
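The paper does not specify the policy used to fill gaps with unpopular procedures; one natural sketch is a first-fit-decreasing assignment (the function name and data layout here are hypothetical, chosen only to illustrate the idea):

```python
def fill_gaps(gaps, unpopular):
    """Fill the spaces left by the color mapping with unpopular
    procedures, largest procedures first (first-fit decreasing).

    gaps: list of gap sizes, indexed by position in the layout.
    unpopular: list of (name, size) pairs for unpopular procedures.
    Returns (placement, leftovers): which procedures landed in each
    gap, and which procedures did not fit anywhere."""
    placement = {}
    remaining = sorted(unpopular, key=lambda p: -p[1])  # largest first
    for i, gap in enumerate(gaps):
        chosen, space = [], gap
        for name, size in list(remaining):
            if size <= space:          # procedure fits in what's left
                chosen.append(name)
                space -= size
                remaining.remove((name, size))
        placement[i] = chosen
    return placement, remaining
```

Only when `leftovers` is non-empty would the executable grow, which, per the measurements above, never happened for the programs examined.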
5 Discussion and Future Work
In this section we discuss how to apply our color mapping algorithm to associative caches, describe how our algorithm can benefit from basic block reordering and procedure splitting, and describe future work on how to improve the performance of our algorithm by using more information on temporal locality to guide the mapping.
5.1 Color Mapping for Associative Caches
So far we have only described our algorithm as applied to direct-mapped caches and examined its performance for an 8K direct-mapped instruction cache. Our algorithm can easily be applied to set-associative instruction caches. To accomplish this, we treat the associativity of the cache as another dimension in the mapping of the address space. For associative caches our algorithm breaks up the address space into chunks, equal in size to \( \text{the number of cache sets} \times \text{the cache line size} \). Therefore, the number of sets represents the number of available colors in the mapping. The color mapping algorithm can then be applied as described in §2.1, with only a few minor changes. The algorithm changes slightly to keep track of the number of times each color (set) appears in the procedure’s unavailable-set of colors. Therefore, mapping a procedure to a color (set) does not cause any conflicts as long as the number of times that color (set) appears in the unavailable-set of colors is less than the degree of associativity of the cache. This effectively turns the unavailable-set into a multiset, which allows each color to appear in the set up to the associativity of the cache.
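The multiset rule above reduces to a simple count check: a color (set) remains usable as long as its multiplicity in the unavailable-set is below the cache's associativity. A minimal sketch of that check (helper name is ours, not the paper's):

```python
from collections import Counter

def conflict_free_colors(unavailable, num_sets, associativity):
    """Return the colors (cache sets) a procedure may be mapped to
    without conflict. `unavailable` is the procedure's unavailable
    multiset of colors; a color stays usable while it appears there
    fewer times than the cache's degree of associativity."""
    counts = Counter(unavailable)
    return [c for c in range(num_sets) if counts[c] < associativity]
```

With `associativity=1` this degenerates to the direct-mapped rule of §2.1: any color appearing even once in the unavailable-set is excluded.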
5.2 Color Mapping with Basic Block Reordering and Procedure Splitting
The results in §4 do not show the full potential of our coloring algorithm, since our algorithm can benefit from other code reordering techniques such as basic block reordering and procedure splitting [8, 11]. Our color mapping algorithm can benefit from basic block reordering because once the basic blocks have been aligned and condensed into the first part of the procedure, the cache line colors used by the frequently executed basic blocks are the only colors we have to worry about when performing the procedure mapping. Using basic block profiling, each procedure would contain two sets of cache colors: those for the important portions of the procedure, and those for the unimportant. Then the only basic blocks we need to worry about in the unavailable-set of colors are the important basic blocks.
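With basic block reordering in place, each procedure contributes two sets of colors, and only the "important" set needs to enter the unavailable-set. A sketch of that partition, assuming per-block execution counts from profiling (names and threshold are illustrative):

```python
def split_colors(block_colors, block_counts, threshold):
    """Partition a procedure's cache-line colors into 'important'
    (lines holding frequently executed basic blocks) and
    'unimportant' sets. Only the important set would feed the
    unavailable-set during procedure mapping."""
    important = {c for c, n in zip(block_colors, block_counts)
                 if n >= threshold}
    unimportant = set(block_colors) - important
    return important, unimportant
```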
Performing procedure splitting can also be used to improve the performance of our color mapping algorithm, because splitting reduces the coloring constraints between different procedures. For example, suppose one half of a procedure $X$ calls a procedure $Y$, and the other half calls a procedure $Z$. Finding a location for $X$ in the color mapping as described in §2.1 must then avoid the colors used by both $Y$ and $Z$. If procedure splitting divides $X$ into two separate procedures $X_1$ and $X_2$, the coloring constraints on each half are relaxed: the color mapping for $X_1$ only needs to avoid colors used by $X_2$ and $Y$, and the color mapping for $X_2$ only needs to avoid colors used by $X_1$ and $Z$. This can help free up coloring constraints for very large procedures and procedures that have a significant number of different call destinations.
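The effect of splitting on the unavailable-set can be made concrete. In this sketch the color sets for $Y$ and $Z$ are invented for illustration, and the helper simply unions the colors of a procedure's call destinations (we omit the mutual $X_1$/$X_2$ constraint for brevity):

```python
def unavailable_colors(callees, colors_of):
    """Union of the cache-line colors used by a procedure's
    call destinations -- the colors it must avoid."""
    out = set()
    for callee in callees:
        out |= colors_of[callee]
    return out

colors_of = {"Y": {0, 1}, "Z": {2, 3}}
# Before splitting, X must avoid the colors of both Y and Z.
before = unavailable_colors(["Y", "Z"], colors_of)
# After splitting, X1 (which calls only Y) faces half the constraints.
after_x1 = unavailable_colors(["Y"], colors_of)
```

Here `before` covers all four colors while `after_x1` covers only two, leaving the mapper far more freedom to place $X_1$.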
5.3 Using Improved Temporal Locality Data
Our color mapping algorithm, as described in §2.1, concentrates on eliminating conflicts between edges in the control flow graph. For our results, these edges happen to be first-generation cache conflicts because the graph edges represent the call edges between a procedure and its direct parents and children. Our algorithm can easily be applied to more detailed forms of profile and trace information by adding extra edges between procedures, treating these edges as a second set of procedure call edges in our color mapping algorithm. These additional edges, with the appropriate weights, can then be used in the unavailable-set of colors in order to further eliminate cache conflicts.
The call graph and profiles we used to guide the mappings do not provide enough information to determine the temporal locality for a depth greater than one procedure call (first-generation) in the graph. Even for first-generation misses, a call graph does not provide exact information about temporal locality. Therefore, our algorithm tries to remove the worst-case number of first-generation misses. For example, in Figure 1, since the edge $C \rightarrow D$ was executed 70 times, we know that if $C$ and $D$ had overlapping cache lines, then the call to $D$ and the return to $C$ could in the worst case cause $((70 + 70) \times \text{number of overlapping cache lines})$
misses. For future work we are using control flow analysis of the program’s structure to indicate if all the calls from $C \rightarrow D$ were done during one invocation of $C$ or whether they were spread out over several invocations, similar to the control flow analysis used by McFarling [10]. We are also using control flow analysis to determine how much of procedure $C$ can actually overlap with procedure $D$, so we only have to include those cache lines in $D$’s unavailable-set of colors. This will help provide more accurate temporal locality information for first-generation conflicts, but it does not provide the additional temporal locality information we would like for deeper paths in the call graph.
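The worst-case bound used above is a direct product of edge frequency and overlap; written out (function name is ours):

```python
def worst_case_first_gen_misses(edge_count, overlapping_lines):
    """Worst-case first-generation misses for one call edge: each of
    the `edge_count` calls and each matching return can evict every
    overlapping cache line, giving (n + n) * overlapping_lines."""
    return (edge_count + edge_count) * overlapping_lines
```

For the Figure 1 edge $C \rightarrow D$ with 70 executions, one overlapping line already costs up to 140 misses, which is why the mapper weights heavy edges so strongly.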
When profiling just the call edges, there is no way to get a good indication of temporal locality for a path longer than one procedure call edge. For example, in Figure 1 we have no way of knowing for the call edge $C \rightarrow D$ how many of the procedure calls to $D$ came down the path through procedure $B$ and how many went through procedure $E$, nor do we know how much temporal locality there is between $B$ and $D$ or $E$ and $D$. Some of this information can be obtained by using full path profiling, which would allow one to know the frequency of each path [1, 20], although full path profiling still does not provide optimal temporal locality information. One way to obtain additional information on temporal locality is to store the full trace of a program. Capturing, storing, and processing a full trace can be very time and space consuming, but efficient techniques have been proposed to capture and process this information in a compact form, such as the gap model proposed by Quong [14]. We plan on investigating the use of full path profiling and the gap model with our color mapping algorithm in order to eliminate additional cache conflicts for deeper paths in the call graph.
6 Conclusions
The performance of the cache-based memory system is critical in today’s processors. Research has shown that compiler optimizations can significantly reduce memory latency, and the compiler should take every opportunity to do so.
The contribution of this paper is a new algorithm for procedure mapping which takes into consideration the call graph, procedure size, cache size, and cache line size. An improved algorithm is achieved by keeping track of the cache blocks (colors) used by each procedure as it is mapped, in order to avoid cache conflicts. This color mapping allows our algorithm to intelligently place unmapped procedures, and to efficiently move a procedure that has already been mapped, by preserving prior color dependencies with that procedure’s parents and children in the call graph. This provides our main advantage over prior work, in that we can accurately map procedures in a popular call graph even if the size of the graph is larger than the size of the instruction cache. This ability is very important, especially for applications which have large and complicated control flow graphs, which result in large instruction cache miss rates due to conflict misses. Our results showed that we were able to reduce the cache miss rate on average by 45% over the original
procedure mapping. In comparison to prior work, our algorithm reduced the cache miss rate on average 14% below that of the Pettis and Hansen algorithm [11].
In this study we concentrated on applying our color mapping algorithm to procedure reordering. Our algorithm can be combined and benefit from other code reordering techniques such as basic block reordering, taking into consideration looping structures, and procedure splitting. These are topics of future research. In this paper we also concentrated on the performance achieved using call edge profiles to guide the optimizations in order to eliminate first-generation cache conflicts. We are currently investigating how to apply our algorithm to use full path profiling and other trace collection techniques in order to collect improved temporal locality information. We are also examining how to apply our color mapping algorithm to statically formed call graphs using static program estimation.
Acknowledgments
We would like to thank Amitabh Srivastava and Alan Eustace for providing ATOM, which greatly simplified our work, and Jeffrey Dean, Alan Eustace, Waleed Meleis, and Russell Quong for providing useful suggestions and comments on this paper. Brad Calder was supported by Digital Equipment Corporation’s Western Research Lab. David Kaeli was supported by an NSF CAREER Program award No. 9501172.
References
TABLE OF CONTENTS
Contents
List of Figures .................................................................................................................................................... 3
INTRODUCTION .................................................................................................................................................. 4
Prerequisites ................................................................................................................................................. 4
Videos ............................................................................................................................................................ 4
Research Baxter Videos For Examples ......................................................................................................... 4
Baxter Research Robot Example: Move Baxter’s Arms Using Keyboard ............................................................. 4
Baxter Research Robot Examples: Puppet ........................................................................................................... 4
Simulator Videos ........................................................................................................................................... 4
Manufacturing Baxter Videos ....................................................................................................................... 5
Customer Videos- Many Examples of Baxter Applications ............................................................................. 5
Human-Robot Interaction ............................................................................................................................. 5
Machine Learning ....................................................................................................................................... 5
Planning and Manipulation ........................................................................................................................... 5
Manipulation and Mechatronics .................................................................................................................... 5
Computer Vision ......................................................................................................................................... 5
Baxter Research Robot Speaks Out .................................................................................................................... 6
• https://www.youtube.com/watch?v=LOn5WoTnkQU ................................................................................. 6
LOG IN and ENABLE BAXTER ....................................................................................................................... 7
⚠️ ENABLE BAXTER ....................................................................................................................................... 7
Before Baxter responds to any commands, the robot must be enabled. The web page describes the Enable_Robot_Tool and the video shows how Baxter is enabled. ................................................................. 7
http://sdk.rethinkrobotics.com/wiki/Enable_Robot_Tool ............................................................................... 7
Functions- Options ......................................................................................................................................... 7
VERIFY CONNECTION: Echo Baxter’s joint_states ......................................................................................... 8
To Disable the robot: Note – Tuck the arms first ............................................................................................. 8
SDK EXAMPLES (wiki/FOUNDATIONS) ........................................................................................................ 10
Fundamental Examples ................................................................................................................................... 10
Other Baxter Rethink Examples that are not discussed in this section are listed in Appendix I. ................. 10
RUN EXAMPLE PROGRAMS ........................................................................................................................ 10
Example 1 Run an example program to wobble arms: .................................................................................. 10
Example 2. Joint Position Keyboard Example ........................................................................................... 11
Example 3. Joint Position Waypoints Example ................................................................. 13
Example 4. Joint Trajectory Playback Example ............................................................... 15
Example 5. Put a picture on Baxter’s face ........................................................................... 18
Example 6. Baxter’s Cameras ............................................................................................... 19
Camera Control Example ..................................................................................................... 20
Example 7. This example will blink the LED on the Left Navigator on and then off. ............ 22
Carol’s Easy Script for Baxter Commands ........................................................................ 23
SIMULATORS ....................................................................................................................... 24
Turtlesim and Introduction To ROS ..................................................................................... 25
MoveIt .................................................................................................................................. 35
Gazebo .................................................................................................................................. 39
APPENDIX I BAXTER RETHINK EXAMPLES .................................................................. 42
Movement ............................................................................................................................. 42
Input and Output ................................................................................................................... 42
Examples with Links From FOUNDATIONS 8/30/2014
http://sdk.rethinkrobotics.com/wiki/Examples ................................................................ 44
APPENDIX II Summary of Ubuntu and ROS commands ....................................................... 45
APPENDIX III ROS Workspace Directories ....................................................................... 46
APPENDIX IV Carol’s Script for .run_baxter ................................................................. 47
List of Figures
Figure 1 Baxter's Cuff for Zero-G mode ................................................................. 8
Figure 2 Baxter's Arms Untucked and Tucked .......................................................... 9
Figure 3 Baxter's Arm and Joint Designations ......................................................... 11
Figure 4 Baxterworking.png .................................................................................. 18
Figure 5 View from Baxter's Head Camera .............................................................. 20
Figure 6 Turtlesim 1 with the Turtle after the node is executed ............................... 26
Figure 7 Turtlesim After Moving ............................................................................. 27
Figure 8 Four Turtlesim Windows using Terminator ............................................... 29
Figure 9 Turtle responds to published topic ............................................................ 32
Figure 10 Turtlesim graph showing communication ............................................... 34
Figure 11 MoveIt and BaxterInitial View ................................................................. 36
Figure 12 Simulator with Initial and Desired Position of Arms ............................... 37
Figure 13 Baxter Simulator Ready for Execution of Moves ..................................... 37
Figure 14 Baxter Simulation Showing Final Position of Arms ............................... 38
Figure 15 Gazebo Simulation of Baxter before Enabling Baxter ......................... 40
Figure 16 Baxter Enabled in Simulation .................................................................. 40
Figure 17 Baxter Software ....................................................................................... 43
INTRODUCTION
This report describes the Rethink Robotics examples for Baxter the robot. The instructions to enable Baxter and run the examples are presented. The report also describes several types of simulation of Baxter, as well as the Turtlesim simulator, which is useful for learning ROS.
Prerequisites
Before using Baxter in the UHCL lab, you should have read and understood the material covered in the Introduction to Baxter report by T.L. Harman and Carol Fairchild. In the lab, Baxter has been setup and can be commanded by Terminal Commands that execute scripts. You should have an account on the workstation and be able to login.
Videos
As a useful prerequisite to using Baxter, it would be helpful to view a number of videos that show the various examples of Baxter in action.
Research Baxter Videos For Examples
Meet the Baxter Research Robot (General)
http://www.youtube.com/watch?feature=player_embedded&v=G2-4WFr9-X0
Baxter Research Robot Example: Enable Robot
https://www.youtube.com/watch?v=tYpNk5v7wfI
Baxter Research Robot Example: Move Baxter’s Arms Using Keyboard
Baxter Research Robot Examples: Joint Position Using Keyboard
Baxter Research Robot Examples: Puppet
https://www.youtube.com/watch?v=TTgwcczfCJQ
Baxter Research Robot Record-Playback Example
https://www.youtube.com/watch?v=uk1XBMsNhco&feature=player_detailpage
Simulator Videos
MoveIT Video
https://www.youtube.com/watch?feature=player_embedded&v=1Zdkwym42P4
Manufacturing Baxter Videos
Baxter Folds a Shirt
https://www.youtube.com/watch?v=Mr7U9pQtwq8&feature=player_detailpage
Customer Videos- Many Examples of Baxter Applications
http://sdk.rethinkrobotics.com/wiki/Customer_Videos
Human-Robot Interaction
- David Using Jammster
- Magic Robot - The Illusion of the Thinking Machine
- Baxter on wheels retrieving jacket
- Baxter Robot control using body tracking with kinect
- Clothing and Unclothing Assistance by Baxter
- Using a Baxter Robot for Co-Operative Disassembly of a CPU
- Teleoperation of Baxter Robot using Phantom Omni
Machine Learning
- Human touch makes robots smarter: On Learning Context-Driven User Preferences
- Baxter Experiments for Deep Learning for Detecting Robotic Grasps
Planning and Manipulation
- Robot Motion Planning for Reactive Execution of Learned Tasks
- Baxter Coordinated Dual-Arm Force Control
- Baxter Research Robot Solves Rubik's Cube
- Human touch makes robots smarter: On Learning Context-Driven User Preferences
- Baxter Research Robot: Mimicry using Kinect
- Online human upper body imitation using BAXTER robot
Manipulation and Mechatronics
- Baxter Research Robot at WPI with Prof. Dmitry Berenson
- Optimal Parameter Identification of Flexible Objects via Manipulation
- Teleoperating Multiple Baxter Robots Using Kinect v2
- Soft Pneumatic Robot Hand
- Baxter Robot Pipetting Complete
- Packaging Demo of Baxter Robot using 2-Finger Adaptive Robot Grippers from Robotiq
- GelSight sensor gives robots touch
Computer Vision
- Happy Easter from the RRC Robotics and Automation Team
- Automated Lego Sorting
- Bartender Baxter
- Towards an Automated Checked Baggage Inspection System Augmented with Robots
- Handle Localization in 3D Point Clouds
- BAXTER Dunks A Ball
- BAXTER Sort Colored Balls - Author's View
• Baxter Perfect Colored Cube Sort
Baxter Research Robot Speaks Out
• https://www.youtube.com/watch?v=LOn5WoTnkQU
LOG IN and ENABLE BAXTER
Before running examples on Baxter, it is necessary to log in on the workstation using your password. Then, go to the ROS workspace and execute the Baxter shell, which sets up communication between the workstation and Baxter. Our commands appear in **bold text** after the $ command prompt in a terminal window; Baxter's responses appear in DejaVu Sans Mono font.
1. Log in
2. Begin a Terminal Session
3. Go to the ROS workspace and execute Baxter Shell
4. Enable communication with Baxter
5. Enable Baxter and Untuck his arms
```
$ cd /home/tlharmanphd/ros_ws
$ ./baxter.sh
```
```
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$
```
```
$ rosrun baxter_tools tuck_arms.py -u
```
The last command is in the typical ROS format:
```
rosrun <package name> <ROS node name> <options>
```
Now the command line shows baxter and the IP address of the robot. This is one purpose of the baxter shell baxter.sh.
**ENABLE BAXTER**
Before Baxter responds to any commands, the robot must be enabled. The web page describes the Enable_Robot_Tool and the video shows how Baxter is enabled.
This tool is responsible for enabling (powering and state monitoring) Baxter. Enabling the robot is expected and necessary for standard Baxter usage.
A fundamental tool for use when working with Baxter, the enable_robot tool, provided in the baxter_tools SDK package, allows for enabling/disabling/resetting/stopping the robot. Baxter must be enabled in order to actively command any of the motors.
Baxter will now be enabled. The joints will be powered, and Baxter will hold his current joint positions with a position control loop.
**Functions- Options**
To get help on enable_robot use the `-h` argument:
$ rosrun baxter_tools enable_robot.py -h
Help screen:
enable_robot.py [ARGUMENTS]
• -h, --help show this help message and exit
• -s, --state print the current robot state
• -e, --enable enable the robot
• -d, --disable disable the robot
• -r, --reset reset the robot
• -S, --stop stop the robot
VERIFY CONNECTION: Echo Baxter's joint_states
$ rostopic echo /robot/joint_states (LOTS OF STATES)
You should see a continuous stream of Baxter's joint names with measured positions, velocities and torques.
All is well! You have successfully setup communication between your development PC and Baxter if the joint states are displayed in the terminal window.
Move Baxter’s arms by grabbing Baxter's cuff:
Figure 1 Baxter’s Cuff for Zero-G mode
Once Baxter is enabled, Baxter’s arms are in the "Zero-G" mode. The position control loop will be released with solely gravity compensation enabled. This allows for intuitive hand-over-hand guidance of the limbs throughout the workspace. Baxter’s arms will move according to the motion you impart to them. This method is used to train Baxter to reach various positions and is used in the Joint Record and Playback example described later.
To Disable the robot: Note – Tuck the arms first
$ rosrun baxter_tools tuck_arms.py -t
$ rosrun baxter_tools enable_robot.py -d
TO LEAVE BAXTER, PRESS CTRL+D
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ Ctrl+D
tlharmanphd@D125-43873:~/ros_ws$
TUCK AND UNTUCK ARMS http://sdk.rethinkrobotics.com/wiki/Tuck_Arms_Tool
$ rosrun baxter_tools tuck_arms.py -u
$ rosrun baxter_tools tuck_arms.py -t
*Figure 2* Baxter's Arms Untucked and Tucked
SDK EXAMPLES (wiki/FOUNDATIONS)
SDK Examples
The SDK Example Programs from Rethink Robotics are designed to demonstrate using the various interfaces and features of Baxter. By following the Usage Guide on an Example Page on the wiki Website, you can try out some of Baxter's functionality. Each example also has a corresponding Code Walkthrough that will take you through the program and explain how Rethink Robotics uses the interfaces. The code walkthroughs are more advanced and are not covered in this report.
Fundamental Examples
Enable Robot Example - This tool is responsible for enabling (powering and state monitoring) Baxter. Enabling the robot is expected and necessary for standard Baxter usage.
http://sdk.rethinkrobotics.com/wiki/Enable_Robot_Tool
Other Baxter Rethink Examples that are not discussed in this section are listed in Appendix I.
RUN EXAMPLE PROGRAMS
A number of Baxter example programs are provided which use the baxter_interface package which contains Python modules for Baxter Research Robot development.
⚠️
Be sure that the script baxter.sh has been executed in the terminal window and Baxter is Enabled.
Example 1 Run an example program to wobble arms:
Wobbler Example Use wobbler as an example of controlling Baxter using joint velocity control. Arms “wobble”
$ rosrun baxter_examples joint_velocity_wobbler.py
This example will simply move the arms to a neutral position, enter into velocity control mode, moving each joint through a random sinusoidal motion. More about this example on the Joint Velocity Wobbler Example Page. Press Ctrl-C to stop...
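The sinusoidal velocity command at the heart of the wobbler can be sketched without ROS. The function, period, and amplitude below are illustrative, not the example's actual code:

```python
import math

def wobble_velocity(t, period=4.0, amplitude=0.5):
    """Joint velocity command (rad/s) at time t: a simple sinusoid,
    in the spirit of the wobbler's sinusoidal joint motion."""
    return amplitude * math.sin(2.0 * math.pi * t / period)

# The command peaks at +amplitude a quarter period into the cycle.
print(wobble_velocity(1.0))  # 0.5
```

In the real example each joint gets its own randomized sinusoid, and the values are streamed to the arm in velocity control mode.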
Example 2. Joint Position Keyboard Example
This example demonstrates control of Baxter’s joints using the keyboard. It is a good example to show how Baxter’s arms move in response to joint commands. For example, try to move Baxter’s arm with the keys to pick up an object as a simple project. It is not as easy as you might think, at least not the first couple of times!
VIEW the WEB page: Joint Position Example
The Joint Position Control Examples demonstrate how to use position control to move and sense the arm based on joint angles. Examples include keyboard or joystick based user control of arm angles, along with example programs from Rethink to record and playback joint positions.
VIEW VIDEO
Baxter Research Robot Examples: Joint Position Using Keyboard
1. Power on Baxter - white button on rear
2. LOG ON Workstation
3. Go to ~/ros_ws (ROS workspace)
4. ./baxter.sh
5. rosrun baxter_tools enable_robot.py -e
6. rosrun baxter_examples joint_position_keyboard.py
TYPE ? FOR LIST OF COMMAND KEYS TO MOVE JOINTS WITH NAMES SHOWN IN FIGURE:

EXAMPLE AT TERMINAL WINDOW
tlharmannphd@D125-43873:~$ cd ~/ros_ws
tlharmannphd@D125-43873:~/ros_ws$ ./baxter.sh
[baxter - http://172.29.64.200:11311] tlharmannphd@D125-43873:~/ros_ws$ rosrun baxter_tools enable_robot.py -e
[INFO] [WallTime: 1422042340.373074] Robot Enabled
[baxter - http://172.29.64.200:113]
tlharmannphd@D125-43873:~/ros_ws$ rosrun baxter_examples joint_position_keyboard.py
Initializing node...
Getting robot state...
Enabling robot...
[INFO] [WallTime: 1422042352.890294] Robot Enabled
[WARN] [WallTime: 1422042353.064890] left_gripper electric: Gripper Firmware version (3.0.05.5) does not match SDK Version (1.0.0). Use the Robot's Field-Service-Menu to Upgrade your Gripper Firmware.
Controlling joints. Press ? for help, Esc to quit.
key bindings:
<table>
<thead>
<tr>
<th>Left Arm and Gripper</th>
<th>Right Arm and Gripper</th>
</tr>
</thead>
<tbody>
<tr>
<td>?: Help</td>
<td></td>
</tr>
<tr>
<td>/ : left: gripper calibrate</td>
<td>b: right: gripper calibrate</td>
</tr>
<tr>
<td>. : left: gripper close</td>
<td>c: right: gripper close</td>
</tr>
<tr>
<td>m: left: gripper open</td>
<td>x: right: gripper open</td>
</tr>
<tr>
<td>y: left_e0 decrease</td>
<td>q: right_e0 decrease</td>
</tr>
<tr>
<td>o: left_e0 increase</td>
<td>r: right_e0 increase</td>
</tr>
<tr>
<td>u: left_e1 decrease</td>
<td>w: right_e1 decrease</td>
</tr>
<tr>
<td>i: left_e1 increase</td>
<td>e: right_e1 increase</td>
</tr>
<tr>
<td>6: left_s0 decrease</td>
<td>1: right_s0 decrease</td>
</tr>
<tr>
<td>9: left_s0 increase</td>
<td>4: right_s0 increase</td>
</tr>
<tr>
<td>7: left_s1 decrease</td>
<td>2: right_s1 decrease</td>
</tr>
<tr>
<td>8: left_s1 increase</td>
<td>3: right_s1 increase</td>
</tr>
<tr>
<td>h: left_w0 decrease</td>
<td>a: right_w0 decrease</td>
</tr>
<tr>
<td>l: left_w0 increase</td>
<td>f: right_w0 increase</td>
</tr>
<tr>
<td>j: left_w1 decrease</td>
<td>s: right_w1 decrease</td>
</tr>
<tr>
<td>k: left_w1 increase</td>
<td>d: right_w1 increase</td>
</tr>
<tr>
<td>n: left_w2 decrease</td>
<td>z: right_w2 decrease</td>
</tr>
<tr>
<td>. : left_w2 increase</td>
<td>v: right_w2 increase</td>
</tr>
</tbody>
</table>
Esc: Quit
Example 3. Joint Position Waypoints Example
This is a basic example for joint position moves. Move Baxter’s arms using the zero_g mode and record a number of joint position waypoints using the round navigation button. These waypoints will then be played back upon completion of the moves.
Description
The joint position waypoints example demonstrates basic joint position control. A joint position waypoint is a configuration of the arm in joint space (i.e. simultaneous joint angles for all of the arm's seven degrees of freedom). In this example, the robot is enabled to move the specified limb in zero-g mode. Using the arm's navigator buttons, a sequence of joint position waypoints can be recorded. Upon completion of recording, the limb loop will be commanded through the recorded joint sequence.
Usage
Verify that the robot is enabled.
Start the joint position waypoints example program, specifying the left or right arm:
$ rosrun baxter_examples joint_position_waypoints.py -l right
A prompt will provide instructions for recording joint position waypoints:
Initializing node...
Getting robot state...
Enabling robot...
[INFO] [WallTime: 1412620077.078718] Robot Enabled
[INFO] [WallTime: 1412620077.174666] Waypoint Recording Started
Press Navigator 'OK/Wheel' button to record a new joint position waypoint.
Press Navigator 'Rethink' button when finished recording waypoints to begin playback
MOVE THE ARM IN ZERO-G MODE. When at a joint position configuration you would like to record, PRESS THAT LIMB’S NAVIGATOR ‘WHEEL’. Feedback will be displayed that the joint position waypoint has been recorded:
Waypoint Recorded
...
WHEN DONE RECORDING WAYPOINTS, PRESS the limb Navigator's 'Rethink' button and playback will begin:
[INFO] [WallTime: 1399571721.540300] Waypoint Playback Started
Press Ctrl-C to stop...
The program will begin looping through the recorded joint positions. When a joint position waypoint is fully achieved (within the accuracy threshold), the next recorded joint position will be commanded.
Waypoint playback loop #1
Waypoint playback loop #2
...
Pressing Control-C at any time will stop playback and exit the program.
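The advance-on-accuracy logic described above can be sketched as follows. This is a simplified stand-in, not Rethink's actual implementation; `move_to` is a hypothetical arm command that returns the measured joint positions:

```python
def reached(current, goal, threshold=0.008726646):
    """True when every joint is within the accuracy threshold (radians)."""
    return all(abs(c - g) < threshold for c, g in zip(current, goal))

def playback(waypoints, move_to, loops=1):
    """Command each recorded waypoint in turn; advance only once reached."""
    for n in range(loops):
        for goal in waypoints:
            current = move_to(goal)   # command the arm, read back joints
            assert reached(current, goal)
        print("Waypoint playback loop #%d" % (n + 1))

# Hypothetical perfect arm for illustration: it lands exactly on the goal.
playback([[0.0, 0.5], [0.1, 0.4]], move_to=lambda g: list(g), loops=2)
```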
The parameter and options for the joint position waypoints example are:
**Required argument:**
- -l or --limb The limb (arm) on which the waypoints will be captured.
**Optional arguments:**
- -h or --help Help
- -s or --speed The joint position motion speed ratio [0.0–1.0] (default = 0.3)
- -a or --accuracy The threshold in radians at which the current joint position command is considered successful before sending the next joint position command (default = 0.008726646)
⚠️
This ridiculously accurate-looking value is simply half a degree:
0.008726646 * 180/pi = 0.5 degrees
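A quick check of that conversion with Python's standard library:

```python
import math

# The default accuracy threshold from the help text, in radians.
threshold = 0.008726646
print(round(math.degrees(threshold), 6))  # 0.5
```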
Example 4. Joint Trajectory Playback Example
The example shows how to Record and Playback Arm and Gripper Positions. Define a recorder file and then move the arms while holding the cuffs. The joint positions with corresponding timestamps will be recorded for both arms. NOTE: You can open and close the grippers while recording by using Baxter’s cuff buttons:
- Oval = Close, Circle = Open
Press any key to exit when done recording. The movements can be played back one or more times according to the option set for playback.
⚠️ This example uses two terminal windows. One is for recording and one is for playback since the joint_trajectory_action_server.py script must be running in one window for the playback to work. In each window, make sure that the directory is the ROS workspace (ros_ws) and the baxter.sh shell is executed to allow communication with Baxter.
The WEB page describes the application and the YouTube video shows how to record and playback the positions:
http://sdk.rethinkrobotics.com/wiki/Joint_Trajectory_Playback_Example
https://www.youtube.com/watch?v=uk1XBMsNhco&x-yt-ts=1422579428&x-yt-cl=85114404&feature=player_embedded
We have defined the file “my_first_recording.rec” to hold the time and joint and gripper positions for this example. The joint_recorder script is in the baxter_examples package.
1. Execute this command with your file <recordingfile>
(Note the –f option):
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ rosrun baxter_examples joint_recorder.py -f my_first_recording.rec
Initializing node...
Getting robot state...
Enabling robot...
[INFO] [WallTime: 1422556692.423999] Robot Enabled
Recording. Press Ctrl-C to stop.
2. ^C
(Cntl-c after moving arms and manipulating the gripper to stop recording)
Done.
3. Check to see the recording file was created
$ ls
(To view directory and your new file)
-rw-rw-r-- 1 tlharmanphd tlharmanphd 2005277 Jan 29 12:39 my_first_recording.rec
(If viewed using a text editor you can view the positions of the joints and the gripper state.)
4. Execute the Playback Help command:
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ rosrun baxter_examples joint_trajectory_file_playback.py -h
usage: joint_trajectory_file_playback.py [-h] -f PATH [-l LOOPS]
RSDK Joint Trajectory Example: File Playback
Plays back joint positions honoring timestamps recorded
via the joint_recorder example.
Run the joint_recorder.py example first to create a recording
file for use with this example. Then make sure to start the
joint_trajectory_action_server before running this example.
This example will use the joint trajectory action server
with velocity control to follow the positions and times of
the recorded motion, accurately replicating movement speed
necessary to hit each trajectory point on time.
optional arguments:
-h, --help show this help message and exit
-f PATH, --file PATH path to input file
-l LOOPS, --loops LOOPS
number of playback loops. 0=infinite.
Related examples:
joint_recorder.py, joint_position_file_playback.py
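The 0=infinite convention for -l/--loops can be sketched as follows (`loop_indices` is an illustrative helper, not part of the SDK):

```python
import itertools

def loop_indices(loops):
    """Yield playback loop numbers; loops=0 means repeat forever."""
    return itertools.count(1) if loops == 0 else range(1, loops + 1)

print(list(loop_indices(3)))   # [1, 2, 3]
forever = loop_indices(0)
print(next(forever), next(forever))  # 1 2
```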
5. Start the joint_trajectory_action_server
A commonly used ROS method for robot arm motion control is the joint trajectory action interface. The
trajectory_controller and its corresponding joint trajectory action server is the baxter_interface
implementation to support this action interface.
http://sdk.rethinkrobotics.com/wiki/Simple_Joint_trajectory_example
Execute the command:
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ rosrun baxter_interface joint_trajectory_action_server.py
Initializing node...
Initializing joint trajectory action server...
Running. Ctrl-c to quit
6. Playback the movements and gripper states
2nd Terminal Click on Terminal Icon, Open A New Window, and communicate with Baxter
tlharmanphd@D125-43873:~$ cd /home/tlharmanphd/ros_ws
tlharmanphd@D125-43873:~/ros_ws$ ./baxter.sh (Always execute baxter.sh in a new window)
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$
Start the playback:
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ rosrun baxter_examples joint_trajectory_file_playback.py -f my_first_recording.rec
Initializing node...
Getting robot state...
Enabling robot...
[INFO] [WallTime: 1422558116.740928] Robot Enabled
Running. Ctrl-c to quit
Playback loop 1 of 1
Exiting - File Playback Complete
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$
Here is an example of the recorded data starting with time and then listing the states of the joints and grippers. See Figure 3 for a definition of Baxter’s joint and arm designations:
```
3.034468, 0.081300981665, -1.18653413807, 1.94278666564, -0.498543755493, 100.0
(Gripper Open)
15.005002, 0.266529161591, -1.33379629354, 1.46341766997, 0.885490408795, -100.0
Left Gripper Closing at about 15 seconds
15.734782, 0.8846154213, 2.8846154213, left_gripper (CLOSED)
```
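Assuming each record line is a comma-separated time followed by joint angles and a trailing gripper value (100.0 open, -100.0 closed, as annotated above), a hypothetical parser might look like this:

```python
def parse_record(line):
    """Split one recording line into (time, joint_angles, gripper).
    Assumed layout: first field is time in seconds, last is the
    gripper position, the rest are joint angles in radians."""
    fields = [float(x) for x in line.split(",")]
    return fields[0], fields[1:-1], fields[-1]

t, joints, gripper = parse_record(
    "3.034468, 0.081300981665, -1.18653413807, 1.94278666564, "
    "-0.498543755493, 100.0")
print(t, gripper)  # 3.034468 100.0
```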
Example 5. Put a picture on Baxter’s face
$ rosrun baxter_examples xdisplay_image.py -f baxterworking.png (Image in ros_ws)
⚠️ **WARNING:** baxterworking.png 871.6 Kb (Digital Camera jpegs are too LARGE)
Screen Display - Example tool for displaying image files (png, jpeg) on the Head Screen. The image can be any file on your computer; the example will read and convert the image using `cv_bridge`, sending it to the screen as a standard ROS Image Message.
optional arguments:
- `-h`, `--help` Show this help message and exit
- `-d SEC`, `--delay SEC` Time in seconds to wait before publishing image
required arguments:
- `-f PATH`, `--file PATH` Path to image file to send
Notes:
Max screen resolution is 1024x600.
Images are always aligned to the top-left corner.
Image formats are those supported by OpenCV - LoadImage().
Replace the `--file` argument with the path to your own image.
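A small pre-flight check against the 1024x600 limit (`fits_screen` is an illustrative helper; the xdisplay_image example itself does not expose such a function):

```python
SCREEN_W, SCREEN_H = 1024, 600   # max head-screen resolution (from the notes)

def fits_screen(width, height):
    """Images larger than the screen are anchored at the top-left,
    so check the size before sending."""
    return width <= SCREEN_W and height <= SCREEN_H

print(fits_screen(1024, 600), fits_screen(1280, 800))  # True False
```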
*Figure 4 Baxterworking.png*
Example 6. Baxter’s Cameras
http://sdk.rethinkrobotics.com/wiki/Camera_Control_Example
According to the WEB page: “There are three cameras available local to Baxter. A single camera is located in either of Baxter’s hands, the ‘left_hand_camera’ and the ‘right_hand_camera’. A third camera is available on Baxter's head, described as ‘head_camera’. This example shows usage for listing available cameras, opening each of the three cameras with various parameters, and closing the cameras.”
⚠️ WARNING: Due to limited bandwidth capabilities, only two cameras can operate simultaneously. Starting a camera while both of the other cameras are in operation will result in an error, and the camera will not open.
Important Note: Default behavior on Baxter startup is for both of the hand cameras to be in operation at a resolution of 320x200 at a framerate of 25 fps.
Supported Frame Size Modes: Frame sizes at which the cameras will operate.
- 1280x800
- 960x600
- 640x400
- 480x300
- 384x240
- 320x200
If given an unsupported frame size, this example will exit with the ValueError: Invalid Camera mode.
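The frame-size check can be mimicked with a plain Python sketch (SUPPORTED_MODES is taken from the list above; `check_mode` is an illustrative name, not the tool's actual code):

```python
# Supported frame sizes from the list above.
SUPPORTED_MODES = {(1280, 800), (960, 600), (640, 400),
                   (480, 300), (384, 240), (320, 200)}

def check_mode(width, height):
    """Mirror the example's behavior: reject unsupported frame sizes."""
    if (width, height) not in SUPPORTED_MODES:
        raise ValueError("Invalid Camera mode")
    return width, height

print(check_mode(320, 200))  # (320, 200)
```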
For more information on Baxter’s cameras, see Using the Cameras.
Usage
See the camera controller’s usage on the command line by passing camera_control.py the -h, --help argument:
$ rosrun baxter_tools camera_control.py -h
Usage: camera_control.py [-h] [-o CAMERA] [-c CAMERA] [-r RESOLUTION XxY] [-l]
Optional Arguments
- -h, --help This screen
- -o, --open [CAMERA] Open specified camera
- -c, --close [CAMERA] Close specified camera
- -r, --resolution [X]x[Y] Set camera resolution
- -l, --list List available cameras
Camera Control Example
Verify that all cameras are closed using the -c option from a terminal session,
```
$ rosrun baxter_tools camera_control.py -c left_hand_camera
$ rosrun baxter_tools camera_control.py -c right_hand_camera
$ rosrun baxter_tools camera_control.py -c head_camera
```
We close all of the cameras because, as the note above describes, only two cameras can operate simultaneously.
You can Open, Display and Close each of the three available cameras [left_hand_camera, right_hand_camera, head_camera] with various settings.
To list all of the available cameras:
```
$ rosrun baxter_tools camera_control.py -l
(Get List in Terminal Window)
$ rosrun baxter_tools camera_control.py -o head_camera
$ rosrun image_view image_view image:=/cameras/head_camera/image
```

You can also View the camera feed in rviz. See the Camera Control Tool WEB page for more information: [http://sdk.rethinkrobotics.com/wiki/Camera_Control_Example](http://sdk.rethinkrobotics.com/wiki/Camera_Control_Example)
```
$ rosrun rviz rviz
```
Under the displays tab on the left hand side of rviz, change the 'Global Option - Fixed Frame' from '/map' to '/base'.
Select 'Add' in the displays tab of rviz.
Select 'Camera' display topic.
The 'Camera' topic will now be displayed in a new embedded window.
Under the 'Camera' tab on the left display window, choose the 'Image Topic':
/cameras/right_hand_camera/image
You should now see the right_hand_camera image in the embedded camera window. Other data displayed in rviz will also be projected onto this image accordingly (i.e., turning on the 'RobotModel' display and moving the right arm to point at the left arm will display the 'RobotModel' overlaid on your live video feed).
Example 7. This example will blink the LED on the Left Navigator on and then off.
$ rosrun baxter_examples digital_io_blink.py
$ rosrun baxter_examples digital_io_blink.py -h
(Get Help)
RSDK Digital IO Example: Blink
(usage: digital_io_blink.py [-h] [-c COMPONENT_ID])
Turns the output of a DigitalIO component on then off again while printing the state at each step. Simple demonstration of using the baxter_interface.DigitalIO class. Run this example with default arguments and watch the light on the left arm Navigator blink on and off while the console echoes the state. Use the component_id argument or ROS Parameter to change the DigitalIO component used.
Initial state: False New state: True Final state: False
optional arguments:
-h, --help show this help message and exit
-c COMPONENT_ID, --component COMPONENT_ID
name of Digital IO component to use (default: left_itb_light_outer)
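The on-then-off sequence the example prints can be sketched without ROS hardware (`blink` and `set_output` are illustrative stand-ins for the baxter_interface.DigitalIO calls):

```python
import time

def blink(set_output, delay=1.0):
    """Turn a digital output on then off, echoing the state at each
    step, mirroring the example's Initial/New/Final state output."""
    state = False
    print("Initial state:", state)
    state = True
    set_output(state)            # light on
    print("New state:", state)
    time.sleep(delay)
    state = False
    set_output(state)            # light off
    print("Final state:", state)
    return state

# Stand-in for the real DigitalIO output; does nothing here.
blink(set_output=lambda s: None, delay=0.0)
```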
Carol’s Easy Script for Baxter Commands
Carol Fairchild has written a script that runs under Ubuntu with the name `.run_baxter`
(Note the . and space before `run_baxter`)
Here is the Help file for the script (shown in Appendix IV):
```
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ . run_baxter h
Today is Fri Jan 23 15:27:32 CST 2015
run_baxter commands:
enable, disable, state, reset, stop
tuck, untuck
arms_keyboard, record <filename>, playback <filename>
springs <right or left>, arms_wobbler, puppet <right or left>
ik <right or left>, joint_trajectory <right or left>
camera open <right, left or head> res <wide, medium or narrow>
camera close <right, left or head>
head_wobbler, gripper_keyboard, head_display <filename>
digital_io, analog_io
```
For example to execute the Enable command:
```
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ . run_baxter enable
```
SIMULATORS
This section covers several of the simulators that work with ROS. The first is called Turtlesim and is useful to learn about the ROS nodes, topics, and services. Turtlesim is a ROS package that contains executable code and other information to create a simple simulation using a moving “turtle”.
The second simulator considered here is called MoveIt and is used to simulate trajectory planning for Baxter the Robot as well as other popular robots. The trajectory can be sent to the physical Baxter and the simulated movements will be replayed.
The third simulator discussed is called Gazebo and like MoveIt can simulate Baxter and other robots.
Our introduction here is very brief to allow you to just “sample” the capability of these simulators. The use of MoveIt and Gazebo, for example, goes far beyond simulating Baxter’s arm movements. They can be used for scene creation, including planning trajectories with obstacle avoidance for Baxter’s arms.
Turtlesim and Introduction To ROS
TURTLESIM 01/23/2015
LOG ON AND OPEN A TERMINAL WINDOW. This example will use three windows to run the simulation. A fourth window will be used to show the use of ROS to get information about the simulation, such as the position of the turtle.
Terminal 1 Start ROS
tharmanphd@D125-43873:~$ roscore
... logging to /home/tharmanphd/.ros/log/1de04490-a353-11e4-86c8-3417ebba982/roslauncheD125-43873-13463.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://D125-43873:46355/
ros_comm version 1.9.55
SUMMARY
========
PARAMETERS
* /rosdistro
* /rosversion
NODES
auto-starting new master
process[master]: started with pid [13477]
ROS_MASTER_URI=http://D125-43873:11311/
setting /run_id to 1de04490-a353-11e4-86c8-3417ebba982
process[rosout-1]: started with pid [13490]
started core service [/rosout]
You can ignore the messages for this simulation.
Turtlesim Terminal 2
```
tlharmanphd@D125-43873:~$ rosrun turtlesim turtlesim_node
[ INFO] [1422053853.021652635]: Starting turtlesim with node name /turtlesim
[ INFO] [1422053853.024476555]: Spawning turtle [turtle1] at x=[5.544445], y=[5.544445], theta=[0.000000]
```
turtlesim is the package and turtlesim_node is the executable node that creates the picture with the turtle.

*Figure 6 Turtlesim 1 with the Turtle after the node is executed*
TURTLESIM WINDOW – TO MOVE TURTLE, OPEN NEW WINDOW #3 TO ENABLE KEYBOARD.
TERMINAL 3 Enable the Keyboard
In the third window, we execute a node that allows keyboard control of the turtle.
```
tlharmanphd@D125-43873:~$ rosrun turtlesim turtle_teleop_key
Reading from keyboard
Use arrow keys to move the turtle.
Up arrow Turtle up
Down arrow Turtle down
Right arrow Rotate CW
Left arrow Rotate CCW
```

Figure 7 Turtlesim After Moving
TURTLESIM WINDOW AFTER USING TERMINAL 3 WITH KEYBOARD
TERMINAL 4 ROS Nodes, Topics, and Services Using Turtlesim
Before going through this section and especially the tutorials on ros.org, you should have read and understood the material about ROS covered in the *Introduction to Baxter* report by T.L. Harman and Carol Fairchild.
A READ OF THE FIRST FEW CHAPTERS IN BOOKS SUCH AS THE FOLLOWING WILL BE HELPFUL:
The textbook *Learning ROS for Robotics Programming* by Aaron Martinez is useful. The examples are in C++.
*A Gentle Introduction to ROS* by Jason M. O’Kane is very readable and can be downloaded from the site: [http://www.cse.sc.edu/~jokane/agitr/agitr-letter.pdf](http://www.cse.sc.edu/~jokane/agitr/agitr-letter.pdf)
The author’s website is [http://www.cse.sc.edu/~jokane/agitr/](http://www.cse.sc.edu/~jokane/agitr/)
These other ROS books might be helpful as referenced by O’Kane:
- [ROS by Example](http://www.cse.sc.edu/~jokane/agitr/agitr-letter.pdf) by R. Patrick Goebel
- [Learning ROS for Robotics Programming](http://www.cse.sc.edu/~jokane/agitr/agitr-letter.pdf)
by Aaron Martinez and Enrique Fernandez. The examples are in C++.
Always be sure to check for any changes in the Ubuntu or ROS distribution. This User’s Guide is written using Ubuntu 12.04 and ROS Groovy as indicated in the *Introduction to Baxter* report by T.L. Harman and Carol Fairchild.
⚠️ If you are new to ROS - don’t be impatient. There is a great deal to learn but the Turtlesim example shown here should make things easier.
The ROS official tutorials are at these WEB sites:
ROS Tutorials Helpful for the Examples to Follow:
- [ROS/Tutorials/UnderstandingNodes](http://wiki.ros.org/turtlesim/Tutorials)
- [ROS/Tutorials/UnderstandingTopics](http://wiki.ros.org/turtlesim/Tutorials)
- [ROS/Tutorials/UnderstandingServicesParams](http://wiki.ros.org/turtlesim/Tutorials)
We start a fourth terminal window to view the information that is available through ROS for the Turtlesim. The commands in that window elicit data while the other windows keep the turtle active. To move the turtle, use window three.
Figure 8 Four Turtlesim Windows using Terminator
The screen with four windows was created using Terminator. It is downloaded in Ubuntu from the Software Center Icon on the launcher: [http://en.wikipedia.org/wiki/Ubuntu_Software_Center](http://en.wikipedia.org/wiki/Ubuntu_Software_Center)
The terminator is described at this site: [https://apps.ubuntu.com/cat/applications/terminator/](https://apps.ubuntu.com/cat/applications/terminator/)
1. List the ROS parameters to get information about the ROS nodes. The nodes are generally the executable scripts in ROS.
```
tlharmanphd@D125-43873:~$ rosnode
rosnode is a command-line tool for printing information about ROS Nodes.
Commands:
rosnode ping test connectivity to node
rosnode list list active nodes
rosnode info print information about node
rosnode machine list nodes running on a particular machine or list machines
rosnode kill kill a running node
rosnode cleanup purge registration information of unreachable nodes
Type rosnode <command> -h for more detailed usage. e.g. ‘rosnode ping -h’
```
-----------------------------
tlharmanphd@D125-43873:~$ rosnode list -h
Usage: rosnode list
Options:
-h, --help show this help message and exit
-u list XML-RPC URIs
-a, --all list all information
tlharmanphd@D125-43873:~$ rosnode list (Active Nodes)
/rosout
/teleop_turtle
/turtlesim
tlharmanphd@D125-43873:~$
2. Determine what information you can get for the node turtlesim.
tlharmanphd@D125-43873:~$ rosnode info /turtlesim (P44)
--------------------------------------------------------------------------------
Node [/turtlesim]
Publications:
* /turtle1/color_sensor [turtlesim/Color]
* /rosout [rosgraph_msgs/Log]
* /turtle1/pose [turtlesim/Pose]
Subscriptions:
* /turtle1/command_velocity [turtlesim/Velocity]
Services:
* /turtle1/teleport_absolute
* /turtlesim/get_loggers
* /turtlesim/set_logger_level
* /reset
* /spawn
* /clear
* /turtle1/set_pen
* /turtle1/teleport_relative
* /kill
contacting node http://D125-43873:52428/ ... (Our lab workstation)
Pid: 13590
Connections:
* topic: /rosout
* to: /rosout
* direction: outbound
* transport: TCPROS
* topic: /turtle1/command_velocity
* to: /teleop_turtle (http://D125-43873:43507/)
* direction: inbound
* transport: TCPROS
tlharmanphd@D125-43873:~$ rostopic list
/rosout
/rosout_agg
/turtle1/color_sensor
/turtle1/command_velocity
/turtle1/pose
-----------------------------
For a little explanation from the ROS wiki (http://wiki.ros.org/ROS): rosout is the name of the console log reporting mechanism in ROS. It can be thought of as comprising several components:
- The `rosout` node for subscribing, logging, and republishing the messages.
- The /rosout topic
- The /rosout_agg topic for subscribing to an aggregated feed
- The rosgraph_msgs/Log message type, which defines standard fields as well as verbosity levels.
The rosout package only provides the rosout node.
One important topic is /turtle1/command_velocity which will be published using the keyboard or by publishing the topic with the rostopic pub command as shown later.
DETERMINE DATA
This command shows the data sent by the node to control the turtle. As you move the turtle, the data are updated.
tlharmanphd@D125-43873:~$ rostopic echo /turtle1/command_velocity
---
linear: 2.0
(Use Window 3 to move Turtle)
angular: 0.0
---
linear: 2.0
angular: 0.0
---
linear: -2.0
angular: 0.0
---
linear: 2.0
angular: 0.0
---
Services allow nodes to communicate by sending a request and receiving a response. The services can be used in the form rosservice call <option> where option for example is /clear.
tlharmanphd@D125-43873:~$ rosservice list
/clear
/kill
/reset
/rosout/get_loggers
/rosout/set_logger_level
/spawn
/teleop_turtle/get_loggers
/teleop_turtle/set_logger_level
/turtle1/set_pen
/turtle1/teleport_absolute
/turtle1/teleport_relative
/turtlesim/get_loggers
/turtlesim/set_logger_level
We can make the turtle turn in a circle by **publishing** the topic `/turtle1/command_velocity`.
tlharmanphd@D125-43873:~$ rostopic pub /turtle1/command_velocity turtlesim/Velocity -r 1 -- 2.0 -1.8

**Figure 9 Turtle responds to published topic**
The command will publish at a rate (`-r`) of once a second (1 Hz). The topic `/turtle1/command_velocity` is followed by the message type `turtlesim/Velocity` that commands the turtle to turn with linear velocity 2.0 and angular velocity -1.8 according to the ROS tutorial:
As noted before, a turtlesim/Velocity msg has two floating point elements: linear and angular. In this case, 2.0 becomes the linear value, and -1.8 is the angular value. These arguments are actually in YAML syntax, which is described more in the YAML command line documentation.
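Since the turtle's path under a constant velocity command is a circle of radius v/|ω|, the command above traces a circle of radius about 1.11 units:

```python
# Radius of the circle traced by a constant (linear, angular) command.
def circle_radius(linear, angular):
    return abs(linear / angular)

print(round(circle_radius(2.0, -1.8), 3))  # 1.111
```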
When you want to CLEAR THE SCREEN
tlharmanphd@D125-43873:~$ rosservice call /clear
There is another feature of ROS that is useful for those who wish to see a graphical view of the communication between nodes. We know that the `/teleop_turtle` node publishes a message on the topic called `/turtle1/command_velocity` and the node `/turtlesim` subscribes to those messages.
This can be shown in a graphical form with the command:
tlharmanphd@D125-43873:~$ rqt_graph

Figure 10 Turtlesim graph showing communication
The advantages of Turtlesim are as follows:
1. Easy to Learn and Use
2. Shows basic ROS capability
3. Can be downloaded with ROS for use on a laptop
Working with Baxter and ROS is considerably more complicated. The software modules are pictured in Appendix I and the ROS files are listed in Appendix III.
MoveIt
From the official MoveIt site:
“MoveIt! is state of the art software for mobile manipulation, incorporating the latest advances in motion planning, manipulation, 3D perception, kinematics, control and navigation. It provides an easy-to-use platform for developing advanced robotics applications, evaluating new robot designs and building integrated robotics products for industrial, commercial, R&D and other domains.” View the video on that site.
http://moveit.ros.org/
Here is an introduction to ROS and MoveIt together:
https://www.youtube.com/watch?v=eMlGV94c5WU
The Rethink tutorial shows how to download and use MoveIt with Baxter:
https://github.com/RethinkRobotics/sdk-docs/wiki/MoveIt-Tutorial
Rethink Robotics has a good video showing the use of MoveIt:
https://www.youtube.com/watch?v=1Zdkwym42P4
To use MoveIt:
In First terminal window:
1. Go to ros_ws, Execute baxter.sh
2. Enable Baxter and run the joint_trajectory_action_server
```
tlharmanphd@D125-43873:~$ cd ~/ros_ws
tlharmanphd@D125-43873:~/ros_ws$ ./baxter.sh
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ rosrun baxter_tools enable_robot.py -e
[INFO] [WallTime: 1422563798.218496] Robot Enabled
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ rosrun baxter_interface joint_trajectory_action_server.py
Initializing node...
Initializing joint trajectory action server...
Running. Ctrl-c to quit
```
In Second terminal window: Go to ros_ws, run baxter.sh again and launch MoveIt
tlharmanphd@D125-43873:~/ros_ws$ ./baxter.sh
[baxter - http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ roslaunch baxter_moveit_config demo_baxter.launch
... logging to /home/tlharmanphd/.ros/log/adfe730a-a7e3-11e4-bae5-000af72ca0bb/roslaunch-D125-43873-7674.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://172.29.64.201:60551/
After a great deal of information look for the line
All is well! Everyone is happy! You can start planning now!
The roslaunch command causes a number of nodes to execute.
The Rviz gui will then open showing Baxter with interactive markers:
MoveIT first screen with Baxter
Figure 11 MoveIt and Baxter Initial View
SELECT THE PLANNING TAB and move the simulated arms using the arrows and rings.
**Figure 12** Simulator with Initial and Desired Position of Arms
Viewing Baxter with Red arms as current position, Orange as planned position
Now using Execute, Baxter’s physical arms will move to the simulated position.
**Figure 13** Baxter Simulator Ready for Execution of Moves
EXECUTE - Moves Baxter's arms to positions as in the simulation.
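Under the hood, Execute hands MoveIt's plan to the joint_trajectory_action_server as a list of timestamped joint positions. The actual ROS messages need a running robot, but the shape of that data can be sketched in plain Python. This is an illustration only: `make_trajectory` and its layout are made up for this guide, not part of the Baxter SDK.

```python
# Sketch: a time-parameterized joint trajectory, the kind of data the
# joint_trajectory_action_server consumes. Pure Python, no ROS required;
# make_trajectory and its argument names are illustrative only.

def make_trajectory(start, goal, duration, steps):
    """Linearly interpolate joint angles from start to goal.

    Returns a list of (time_from_start, positions) waypoints."""
    if len(start) != len(goal):
        raise ValueError("start and goal must have the same number of joints")
    waypoints = []
    for i in range(steps + 1):
        frac = i / steps
        positions = [s + frac * (g - s) for s, g in zip(start, goal)]
        waypoints.append((duration * frac, positions))
    return waypoints

# Two-joint example: move from (0.0, 0.5) to (1.0, -0.5) in 2 seconds.
traj = make_trajectory([0.0, 0.5], [1.0, -0.5], duration=2.0, steps=4)
for t, pos in traj:
    print(t, pos)
```

A real client would wrap each waypoint in a trajectory point message and send the whole list to the action server, which interpolates between them on the robot.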
Gazebo
Gazebo is used to simulate many robots and Baxter is one of them.
**Baxter Simulation with Gazebo.** Downloading the Baxter Simulator used with Gazebo requires a GitHub user name and permission from Rethink Robotics.
**TURN ON BAXTER AND LOG IN.**
The option `sim` was added to the baxter shell.
In the first terminal window:
```bash
fairchildc@D125-43873:~$ ros_ws
fairchildc@D125-43873:/home/tlharmanphd/ros_ws$ ./baxter.sh sim
[baxter - http://localhost:11311] fairchildc@D125-43873:/home/tlharmanphd/ros_ws$ roslaunch baxter_gazebo baxter_world.launch
... logging to /home/fairchildc/.ros/log/f75eada0-a647-11e4-b327-3417ebca982/roslaunch-D125-43873-8765.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://172.29.64.201:44979/
**SUMMARY**
**PARAMETERS**
* /grav_left_name
* /grav_right_name
* /left_tip_name
...
```
Baxter Simulation is ready when you see the following:
```
[ INFO] [1422380584.22918484, 529.025000000]: Robot is enabled
[ INFO] [1422380584.229174028, 529.025000000]: right_joint_position_controller was started and right_joint_velocity_controller and right_joint_effort_controller were stopped successfully
[ INFO] [1422380584.229194997, 529.025000000]: Gravity compensation was turned on
```
In a second terminal window:
```
fairchildc@D125-43873:~$ ros_ws
fairchildc@D125-43873:/home/tlharmanphd/ros_ws$ ./baxter.sh sim
[baxter - http://localhost:11311] fairchildc@D125-43873:/home/tlharmanphd/ros_ws$ rosrun baxter_tools enable_robot.py -e
[INFO] [WallTime: 1422380502.352324] [447.269000] Robot Enabled
[baxter - http://localhost:11311] fairchildc@D125-43873:/home/tlharmanphd/ros_ws$ . run_baxter untuck
```
```
Today is Tue Jan 27 11:43:03 CST 2015
[INFO] [WallTime: 1422380583.925996] [0.000000] Untucking arms
[INFO] [WallTime: 1422380584.016426] [528.812000] Moving head to neutral position
[INFO] [WallTime: 1422380584.016611] [528.812000] Untucking: Arms already Untucked; Moving to neutral position.
[INFO] [WallTime: 1422380585.581263] [530.374000] Finished tuck
[baxter - http://localhost:11311] fairchildc@D125-43873:/home/tlharmanphd/ros_ws$
```
The command `rosrun baxter_tools tuck_arms.py -u` could have been used, but here we used Carol's script `run_baxter`, discussed previously and in Appendix IV.
TRY AN EXAMPLE TO SEE BAXTER MOVE IN SIMULATION:
```
$ rosrun baxter_examples joint_velocity_wobbler.py
```
APPENDIX I BAXTER RETHINK EXAMPLES
Movement
Joint Position Waypoints Example - The basic example for joint position moves. Teach by hand-over-hand guidance, recording a number of joint position waypoints; these waypoints are then played back indefinitely upon completion.
Joint Position Keyboard Example - This example demonstrates joint position control from the keyboard.
Joint Position Example - Joystick, keyboard and file record/playback examples using joint position control of Baxter's arms.
Joint Torque Springs Example - Joint torque control example applying virtual spring torques.
Joint Velocity Wobbler Example - Simple demo that moves the arm with sinusoidal joint velocities.
Joint Velocity Puppet Example - Demo which mirrors moves of one arm on the other in Zero-G.
Inverse Kinematics Service Example - Basic use of Inverse Kinematics solver service.
Simple Joint Trajectory Example - Simple demo using the joint trajectory interface.
Joint Trajectory Playback Example - Trajectory playback using the joint trajectory interface.
Head Movement Example - Simple demo moving and nodding the head.
Gripper Example - Joystick and Keyboard control for the grippers.
Input and Output
Camera Control Example - Demonstrates usage for listing, opening, and closing the available cameras.
View Cameras Example - Simple tool for viewing camera feed on development machine.
I/O Example - Flash the lights on the digital outputs.
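The record/playback examples above pass joint data through a plain text file: a header row naming the columns, then comma-separated rows of a timestamp followed by joint angles. The exact layout is defined by `joint_recorder.py`; the sketch below assumes that simple CSV format, so check an actual recording on your system before relying on it.

```python
# Sketch: parse a recording like the one joint_recorder.py produces.
# Assumes a simple CSV layout (header of names, then numeric rows);
# verify against a real recorded file before relying on this.

def parse_recording(text):
    """Return (joint_names, rows) where each row is (time, [angles])."""
    lines = [ln for ln in text.strip().splitlines() if ln]
    header = lines[0].split(",")
    names = header[1:]          # first column is the timestamp
    rows = []
    for ln in lines[1:]:
        values = [float(v) for v in ln.split(",")]
        rows.append((values[0], values[1:]))
    return names, rows

# A tiny two-joint recording (made-up values):
sample = """time,left_s0,left_s1
0.0,0.10,-0.55
0.1,0.12,-0.54"""
names, rows = parse_recording(sample)
print(names, rows[0])
```

The playback example reads such a file back and streams the recorded positions to the arms at the recorded times.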
These examples are shown as part of Baxter’s software in Figure 17.
The Figure shows the elements of software loaded on the workstation to download software or operate Baxter. Notice the Baxter Interface, Examples and Tools software modules. The figure is for the SDK version 0.7 and may not be up to date with the latest SDK software from Rethink Robotics.
Figure 17 Baxter Software
Examples with Links From FOUNDATIONS 8/30/2014
http://sdk.rethinkrobotics.com/wiki/Examples
Here are links to the example pages:
- **Wobbler Example** Use wobbler as an example of controlling Baxter using joint velocity control. Arms "wobble"
- **Joint Trajectory Playback Example** Enable the robot joint trajectory interface, parse a file created using the joint position recorder example, and send the resulting joint trajectory to the action server.
- **Puppet Example** Use puppet as an example of controlling Baxter using joint velocity control. One arm follows the other in VELOCITY.
- **Simple Joint Trajectory Example** Enable the robot joint trajectory interface and send a simple joint trajectory for Baxter to follow.
- **Joint Torque Springs Example** This example shows joint torque control usage. After moving to neutral, the robot will enter torque control mode, applying torques representing virtual springs holding the joints to their start position.
- **Gripper Example** Use gripper control as an example of controlling Baxter's grippers.
- **Head Movement Example** The Head "Wobbler" Example randomly moves the head to demonstrate using the head pan and nod interfaces.
- **IK Service Example** The IK Test example shows the very basics of calling the on-robot Inverse-Kinematics (IK) Service to obtain a joint angles solution for a given endpoint Cartesian point & orientation.
- **Input Output Example** The 'input_output' examples provide simple demonstrations of using the DigitalIO and AnalogIO components, in correspondence with the python interfaces included in the 'baxter_interface' code.
## APPENDIX II Summary of Ubuntu and ROS commands
<table>
<thead>
<tr>
<th><strong>UNITY</strong></th>
<th><strong>TERMINAL</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>Dash Help ?</td>
<td>Desktop Guide</td>
</tr>
<tr>
<td>F1 –</td>
<td>Works with many applications</td>
</tr>
</tbody>
</table>
### FILES DIRECTORIES
<table>
<thead>
<tr>
<th><strong>UNITY</strong></th>
<th><strong>Description</strong></th>
<th><strong>TERMINAL</strong></th>
<th><strong>Description</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>Home Folder - File System</td>
<td>Show Root /home/home-user</td>
<td>ls -la</td>
<td>Show all and long (permissions, etc.)</td>
</tr>
<tr>
<td>Right-click Copy, then Paste</td>
<td></td>
<td>cp &lt;options&gt; file1 file2</td>
<td>Copy; -r recurses into directories</td>
</tr>
</tbody>
</table>
### EDIT
<table>
<thead>
<tr>
<th><strong>UNITY</strong></th>
<th><strong>Description</strong></th>
<th><strong>TERMINAL</strong></th>
<th><strong>Description</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>LibreOffice</td>
<td>Save as txt if a program - change suffix</td>
<td>gedit file</td>
<td>Move up to menu bar and File: Save As</td>
</tr>
</tbody>
</table>
### SYSTEM
<table>
<thead>
<tr>
<th><strong>UNITY</strong></th>
<th><strong>Description</strong></th>
<th><strong>TERMINAL</strong></th>
<th><strong>Description</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>Sysinfo in Dash; SystemSettings - Launcher or on right of menu bar</td>
<td>(Installed) - Hardware, Appearance, Network details, User Accounts</td>
<td>lspci -v</td>
<td>More than you need</td>
</tr>
</tbody>
</table>
### NETWORK
<table>
<thead>
<tr>
<th><strong>UNITY</strong></th>
<th><strong>Example</strong></th>
<th><strong>TERMINAL</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>SystemSettings - Network</td>
<td>172.29.64.201</td>
<td>ifconfig</td>
</tr>
</tbody>
</table>
APPENDIX III ROS Workspace Directories
/home/tlharmanphd/ros_ws/baxter.sh
/home/tlharmanphd/ros_ws/baxter_common
/home/tlharmanphd/ros_ws/build
/home/tlharmanphd/ros_ws/devel
/home/tlharmanphd/ros_ws/src
/home/tlharmanphd/ros_ws/src/activerobots
/home/tlharmanphd/ros_ws/src/baxter
/home/tlharmanphd/ros_ws/src/baxter_common
/home/tlharmanphd/ros_ws/src/baxter_examples
/home/tlharmanphd/ros_ws/src/baxter_interface
/home/tlharmanphd/ros_ws/src/baxter_simulator
/home/tlharmanphd/ros_ws/src/baxter_tools
/home/tlharmanphd/ros_ws/src/control_toolbox
/home/tlharmanphd/ros_ws/src/gazebo_ros_pkgs
/home/tlharmanphd/ros_ws/src/moveit_robots
/home/tlharmanphd/ros_ws/src/moveit_robots/clam_moveit_config
/home/tlharmanphd/ros_ws/src/moveit_robots/iri_wam_moveit_config
/home/tlharmanphd/ros_ws/src/moveit_robots/r2_moveit_generated
/home/tlharmanphd/ros_ws/src/realtime_tools
/home/tlharmanphd/ros_ws/src/ros_control
/home/tlharmanphd/ros_ws/src/ros_control/controller_interface
/home/tlharmanphd/ros_ws/src/ros_control/controller_manager
/home/tlharmanphd/ros_ws/src/ros_control/controller_manager_msgs
/home/tlharmanphd/ros_ws/src/ros_control/controller_manager_tests
/home/tlharmanphd/ros_ws/src/ros_control/docs
/home/tlharmanphd/ros_ws/src/ros_control/hardware_interface
/home/tlharmanphd/ros_ws/src/ros_control/joint_limits_interface
/home/tlharmanphd/ros_ws/src/ros_control/ros_control
/home/tlharmanphd/ros_ws/src/ros_control/transmission_interface
/home/tlharmanphd/ros_ws/src/ros_controllers
/home/tlharmanphd/ros_ws/src/ros_controllers/effort_controllers
/home/tlharmanphd/ros_ws/src/ros_controllers/force_torque_sensor_controller
/home/tlharmanphd/ros_ws/src/ros_controllers/forward_command_controller
/home/tlharmanphd/ros_ws/src/ros_controllers/imu_sensor_controller
/home/tlharmanphd/ros_ws/src/ros_controllers/joint_state_controller
/home/tlharmanphd/ros_ws/src/ros_controllers/joint_trajectory_controller
/home/tlharmanphd/ros_ws/src/ros_controllers/position_controllers
/home/tlharmanphd/ros_ws/src/ros_controllers/ros_controllers
/home/tlharmanphd/ros_ws/src/ros_controllers/velocity_controllers
/home/tlharmanphd/ros_ws/src/xacro
APPENDIX IV Carol’s Script for .run_baxter
01/23/15 . run_baxter
[http://172.29.64.200:11311] tlharmanphd@D125-43873:~/ros_ws$ . run_baxter h
Today is Fri Jan 23 15:27:32 CST 2015
run_baxter commands:
enable, disable, state, reset, stop
tuck, untuck
arms_keyboard, record <filename>, playback <filename>
springs <right or left>, arms_wobbler, puppet <right or left>
ik <right or left>, joint_trajectory <right or left>
camera open <right, left or head> res <wide, medium or narrow>
camera close <right, left or head>
head_wobbler, gripper_keyboard, head_display <filename>
digital_io, analog_io
The script runs under Ubuntu/BASH and calls various Python language scripts from the Baxter ROS packages.
#!/bin/bash
echo "Today is `date`"
case "$1" in
enable | Enable | ENABLE )
rosrun baxter_tools enable_robot.py -e
;;
disable | Disable | DISABLE )
rosrun baxter_tools enable_robot.py -d
;;
state | State | STATE )
rosrun baxter_tools enable_robot.py -s
;;
reset | Reset | RESET )
rosrun baxter_tools enable_robot.py -r
;;
stop | Stop | STOP )
rosrun baxter_tools enable_robot.py -S
;;
# The following options will open a new terminal window and execute the command.
# tuck_arms from baxter_tools
tuck | Tuck | TUCK )
rosrun baxter_tools tuck_arms.py -t
# xterm -hold -e "rosrun baxter_tools tuck_arms.py -t"
;;
untuck | Untuck | UNTUCK )
rosrun baxter_tools tuck_arms.py -u
# xterm -hold -e "rosrun baxter_tools tuck_arms.py -u"
# gnome-terminal -e "rosrun baxter_tools tuck_arms.py -u" -- did not work
;;
# joint commands from baxter_examples
arms_keyboard | ARMS_KEYBOARD )
xterm -hold -e "rosrun baxter_examples joint_position_keyboard.py"
;;
record | RECORD )
if [ -n "$2" ]; then # check if second argument is passed
xterm -hold -e "rosrun baxter_examples joint_recorder.py -f $2"
else
echo "No Filename given."
fi
;;
playback | PLAYBACK )
if [ -r "$2" ]; then # check that file exists and is readable
xterm -hold -e "rosrun baxter_examples joint_position_file_playback.py -f $2"
else
echo "No Filename given or file does not exist."
fi
;;
# joint torque command from baxter_examples
springs | Springs | SPRINGS )
if [ "$2" == left -o "$2" == right ]; then
xterm -hold -e "rosrun baxter_examples joint_torque_springs.py -l $2"
else
echo "Specify right or left."
fi
;;
arms_wobbler | Arms_Wobbler | ARMS_WOBBLER | \
arm_wobbler | Arm_Wobbler | ARM_WOBBLER | wobbler )
xterm -hold -e "rosrun baxter_examples joint_velocity_wobbler.py"
;;
puppet | Puppet | PUPPET )
if [ "$2" == left -o "$2" == right ]; then
xterm -hold -e "rosrun baxter_examples joint_velocity_puppet.py -l $2"
else
echo "Specify right or left."
fi
;;
# call on-board Inverse Kinematics (IK) service to obtain joint angle solution
# for a given endpoint Cartesian point & orientation
ik | IK )
if [ "$2" == left -o "$2" == right ]; then
xterm -hold -e "rosrun baxter_examples ik_service_client.py -l $2"
else
echo "Specify right or left."
fi
;;
# enable robot joint trajectory interface using joint trajectory controller
# another terminal session can execute a client to send a joint trajectory
# to the right or left arm
joint_traj* | Joint_Traj* | JOINT_TRAJ* )
if [ "$2" == left -o "$2" == right ]; then
xterm -hold -e "rosrun baxter_interface joint_trajectory_action_server.py" &
rosrun baxter_examples joint_trajectory_client.py -l $2
else
echo "Specify right or left."
fi
;;
# head_wobbler randomly nods and tilts head
head_wobbler | Head_Wobbler | HEAD_WOBBLER )
xterm -hold -e "rosrun baxter_examples head_wobbler.py"
# gripper_keyboard uses keyboard to control grippers
gripper_keyboard | Gripper_Keyboard | GRIPPER_KEYBOARD )
xterm -hold -e "rosrun baxter_examples gripper_keyboard.py"
# camera control tool
camera | Camera | CAMERA )
shift
key="$1"
case $key in
# open camera
open | Open | OPEN )
shift
RESOLUTION=1280x800 # default resolution
case "$1" in
right | Right | RIGHT )
CAMERA=right_hand_camera
;;
left | Left | LEFT )
CAMERA=left_hand_camera
;;
head | Head | HEAD )
CAMERA=head_camera
;;
* )
echo "Camera not specified."
echo "open <right, left or head>"
exit
;; # for case *
esac
shift
# if there are more parameters on the command line do the next steps
while [[ $# > 1 ]];
do
case "$1" in
res | Res | RES )
shift
case "$1" in
wide | Wide | WIDE )
RESOLUTION=1280x800
;;
medium | Medium | MEDIUM )
RESOLUTION=480x300
;;
narrow | Narrow | NARROW )
RESOLUTION=320x200
;;
* )
echo "Resolution not specified."
echo "res <wide, medium or narrow>"
exit
;; # for case *
esac
;; # for case res
esac
done # end of while loop
# open the selected camera at the selected resolution
rosrun baxter_tools camera_control.py -o "${CAMERA}" -r "${RESOLUTION}"
# camera image can be displayed on image_view or rviz
# comment out the option you do not want
rosrun image_view image_view image:=/cameras/"${CAMERA}"/image
# xterm -hold -e "rosrun rviz rviz"
;; # for case open
# close camera
close | Close | CLOSE )
shift
case "$1" in
right | Right | RIGHT )
CAMERA=right_hand_camera
;;
left | Left | LEFT )
CAMERA=left_hand_camera
;;
head | Head | HEAD )
CAMERA=head_camera
;;
* )
echo "Camera not specified."
echo "close <right, left or head>"
exit
;; # for case *
esac
rosrun baxter_tools camera_control.py -c "${CAMERA}"
;; # for case close
* ) echo "camera commands:"
echo "open <right, left or head> res <wide, medium or narrow>"
echo "close <right, left or head>"
;; # for case *
esac
;; # for case camera
# No new terminal window to execute these commands:
# xdisplay_image
# Display an image (e.g. .png or .jpg) to Baxter's head display. Baxter
# display resolution is 1024 x 600 pixels.
head_display | Head_Display | HEAD_DISPLAY )
if [ -n "$2" ]; then
rosrun baxter_examples xdisplay_image.py --f $2
else
echo "No Filename given."
fi
;;
# digital_io
# This will blink the LED on the Left Navigator on and then off while
# printing the status before and after.
digital_io | Digital_IO | DIGITAL_IO )
rosrun baxter_examples digital_io_blink.py
;;
# analog_io
# This will run the robot's fans from 0 to 100 and back down again in
# increments of 10.
analog_io | Analog_IO | ANALOG_IO )
rosrun baxter_examples analog_io_rampup.py
;;
* ) echo "run_baxter commands:"
echo "enable, disable, state, reset, stop"
echo "tuck, untuck"
echo "arms_keyboard, record <filename>, playback <filename>"
echo "springs <right or left>, arms_wobbler, puppet <right or left>"
echo "ik <right or left>, joint_trajectory <right or left>"
echo "camera open <right, left or head> res <wide, medium or narrow>"
echo "camera close <right, left or head>"
echo "head_wobbler, gripper_keyboard, head_display <filename>"
echo "digital_io, analog_io"
esac
#exit 0 -- using exit will end communication with Baxter
(run_baxter in fairchildc/catkin_ws 01/23/15)
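The long `case` statement above is essentially a dispatch table mapping command names to `rosrun` invocations. The same idea in Python is a dict of commands. This is a standalone sketch for comparison only, not part of the Baxter tools: the handlers just return the command string the script would launch.

```python
# Sketch: the run_baxter case statement as a Python dispatch table.
# The command names mirror the bash script; values are the rosrun
# invocations each case would launch (illustrative only).

COMMANDS = {
    "enable":  "rosrun baxter_tools enable_robot.py -e",
    "disable": "rosrun baxter_tools enable_robot.py -d",
    "state":   "rosrun baxter_tools enable_robot.py -s",
    "tuck":    "rosrun baxter_tools tuck_arms.py -t",
    "untuck":  "rosrun baxter_tools tuck_arms.py -u",
}

def dispatch(command):
    """Case-insensitive lookup, like the enable | Enable | ENABLE patterns."""
    try:
        return COMMANDS[command.lower()]
    except KeyError:
        # mirror the script's default clause that prints the command list
        return "run_baxter commands: " + ", ".join(sorted(COMMANDS))

print(dispatch("Untuck"))
```

The bash patterns like `enable | Enable | ENABLE` become a single `.lower()` call, and the `* )` default clause becomes the `KeyError` branch.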
---
Thermo-fluid Analysis Software
V14 Product Guide
scSTREAM
HeatDesigner
scFLOW
SC/Tetra
PICLS
scPOST
The Role of CFD in Engineering
One of the foremost expectations of today's successful product-driven companies is that they bring high value-added products that meet customer needs to market quickly. In addition, successful companies proactively identify application scenarios that could result in unsatisfactory performance, product failures, or customer dissatisfaction, and develop design solutions that mitigate the potential risks.
Thermo-fluid analysis software
Since software simulation enables predicting performance without creating a hardware prototype, the tools can be used early in the planning stage of product development to sift through preliminary design concepts. Simulation can also be used to predict performance of products where it is difficult to make experimental measurements. In addition, simulation software can be used to visualize invisible fluid flow and heat transfer. This results in increased engineering understanding while providing a vehicle for communicating this knowledge to non-experts.
Where does thermo-fluid analysis software come into play?
Thermo-fluid analysis software is indispensable for “Front-loading” product development to ensure the best product concepts that are identified early in the design process. Design quality will be improved during the conceptual design phase by conducting basic studies of fluid and thermal phenomena that directly affect product performance. During the detailed design phase, analyses are conducted under conditions similar to what the actual product will experience. From this work, design engineers can understand the source of problems that limit performance and investigate alternate design solutions before production begins.
Structured and unstructured mesh: the differences
Software Cradle offers two different types of thermo-fluid analysis tools: scSTREAM and HeatDesigner with structured mesh, scFLOW and SC/Tetra with unstructured mesh.
Structured mesh is simple and easy to construct. Structured mesh is comprised of many small cuboids so it can only approximate curved or angled surfaces with stair-case patterns. It is most useful for applications where tiny details and surface curvature or angles do not have a strong effect on the overall results. Examples of applications for structured mesh include electronics cooling, HVAC, and architecture.
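The stair-case effect is easy to see in miniature: rasterize a quarter circle onto a coarse Cartesian grid and the curved boundary becomes steps of whole cells. The toy sketch below is only an illustration of that idea, not how scSTREAM itself generates mesh.

```python
# Toy illustration of structured-mesh stair-casing: mark every cell of a
# coarse Cartesian grid whose center falls inside a quarter circle of
# radius 1. The curved boundary comes out as rectangular steps.

def voxelize_quarter_circle(n):
    """Return n rows of '#' (inside) and '.' (outside) cells."""
    rows = []
    for j in range(n):
        row = ""
        for i in range(n):
            # cell-center coordinates in [0, 1] x [0, 1]
            x = (i + 0.5) / n
            y = (j + 0.5) / n
            row += "#" if x * x + y * y <= 1.0 else "."
        rows.append(row)
    return rows

grid = voxelize_quarter_circle(8)
for line in grid:
    print(line)
```

Refining the grid (larger `n`) shrinks the steps, which is exactly why structured mesh works well where surface curvature has little effect on the result and less well where it matters.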
Unstructured mesh is created using polyhedral elements. Mesh is generated such that it fits along the ridge lines of the original geometry. As a result, unstructured mesh is used for applications where precise representation of geometry is crucial. Examples of applications for unstructured mesh include vehicle aerodynamics, fan blade designs, and flows inside ducts.
Software Cradle develops and provides thermo-fluid simulation software and optional tools that suit various industries and objectives.
### Thermo-fluid simulation software and main peripheral tools
**scSTREAM**
- Structured mesh (Cartesian/cylindrical coordinate systems)
- Designing thermal and air flow inside an office
- Evaluating air flow around buildings
- Evaluating heat island effect
- Heat dissipation design of electronics and precision instruments
- Dust proof and moisture proof review of electronics and precision instruments
- Multi-phase flow analysis such as mixing, spray, solidification, melting, boiling, and condensation
- Analysis involving moving objects such as cars, controlled equipment, hydraulic and pneumatic equipment, and robots
**HeatDesigner**
- Module for electronics
- Designing heat release of a printed circuit board
- Examining heat-releasing fins and the material
- Designing heat release of an enclosure with a fan
**PICLS**
- Dedicated tool for thermal analysis of printed circuit boards
- Real-time thermal analysis
- Board size design
- Layer composition design
- Parts layout design
- Review of the effects of wiring pattern and thermal vias
**scPOST**
- Comprehensive and versatile data visualization software
- Obtaining numerical information
- Creating animation
- Mapping temperature information of a fluid analysis result to a structural analysis
- Comparing multiple analysis results
**scFLOW**
- Unstructured mesh (Polyhedral elements)
- Aerodynamic simulation for automobiles
- Evaluation of rotational devices such as fans and pumps
- Prediction of cavitation and erosion
- Design of household electric appliances such as refrigerators and washing machines
- Internal flow analysis of ducts and nozzles
- Analysis involving chemical reactions including reactor, catalyst, furnace, combustor, and CVD
- Analysis of multiphase flow phenomena such as mixing, blending, sprays, solidification, melting, boiling, and condensation
- Water tank test simulation of a vessel
**SC/Tetra**
- Unstructured mesh (Tetrahedral, pentahedral, and hexahedral elements)
**WindTool**
- Wind environment assessment tool
**Launcher (Autodesk® Revit®)**
- CAD add-in tool
**Launcher (ARCHICAD)**
- CAD add-in tool
**Launcher (SOLIDWORKS®)**
- CAD add-in tool
**Optimus® for Cradle**
- Optional tool for optimum solution search
**ElectronicPartsMaker**
- Tool for semiconductor package modeling
**scWorkSketch**
- Tool for creating automated workflow
**Structural Analysis**
- Structural analysis tool (linear static analysis)
**SmartBlades**
- Tool for modeling fan blades
**Fluid-Structure Interaction (Abaqus®)**
- Tool for two-way coupling
**FluidBearingDesigner**
- Tool for analyzing fluid bearing
**1D/3D Coupling (GT-SUITE)**
- Tool for two-way coupling
---
Autodesk and Revit are registered trademarks of Autodesk, Inc. and/or its affiliates in the United States and other countries.
ARCHICAD is a registered international trademark of GRAPHISOFT R&D Rt.
Is your analysis tool useful in years to come?
scSTREAM and HeatDesigner have proven track records for incorporating the latest leading edge technology.
**scSTREAM HeatDesigner**
scSTREAM thermo-fluid software has served the electronics and architectural industries for more than thirty years. The ever-evolving software is characterized by its highly user-friendly interfaces and high-speed processing. HeatDesigner is based on scSTREAM and is specially developed for the thermal design of electronics products. HeatDesigner provides only the physical functions required for thermal design, with simple interfaces and powerful computing performance.
### Various methods to represent shapes
The shape of a model to be analyzed can be represented by using the following methods: voxel method (slanted faces and curved faces are represented in cuboids), cell method (the shape of a model created with a CAD tool can be represented more accurately), and finite element model method (a model of an arbitrary shape defined with unstructured mesh can be overlapped on a model defined with structured mesh to use the shape created with a CAD tool as is).
### Moving objects
A flow generated by a moving rigid object can be calculated. Conditions including the motions of an object (translation, rotation, and elastic deformation), heat generation/absorption, and air supply/return can be set. The model of a moving object is created on a separate mesh, so there are few restrictions on conditions such as the distance the object can move.
### 6-degree-of-freedom motion (6DOF)
The function can analyze passive translation and rotation of an object receiving a fluid force. The moving object is assumed to be a rigid body, and its motion with up to six degrees of freedom (3D translation + 3D rotation) can be solved. For example, the function can simulate driftwood carried along by a water flow.
Multiblock
Mesh can be refined partially to represent a model shape more accurately and perform a calculation more efficiently.
Discrete element method (DEM)
Multiphase analyses can be performed, which enables coupling of fluid analysis and flow analysis of particles.
Parts library
The shapes and conditions of frequently used parts can be registered. Conditions include the allocation position, material, and heat generation.
HeatPathView
The information on temperature of each part and a comprehensive amount of heat release obtained in post-processing of a general CFD analysis is not enough to know the heat path. HeatPathView displays heat paths and the amount of heat transfer in the whole computational domain in a diagram, a graph, and a table, allowing you to find the bottleneck of the heat paths easily.
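The idea behind such a heat-path view can be sketched with a toy graph: parts are nodes, the heat flow between them sits on the edges, and the bottleneck of a path is its smallest edge. The sketch below is purely conceptual; the part names and wattages are made up and it says nothing about HeatPathView's actual implementation.

```python
# Toy heat-path model: parts as nodes, heat flow (W) on directed edges.
# The bottleneck of a path is the smallest flow along it. All part
# names and numbers here are made up for illustration.

heat_flow = {
    ("chip", "heatsink"): 8.0,
    ("heatsink", "air"): 6.5,
    ("chip", "board"): 2.0,
    ("board", "air"): 1.5,
}

def path_bottleneck(path):
    """Return (min_flow, limiting_edge) along a list of node names."""
    edges = list(zip(path, path[1:]))
    flows = [(heat_flow[e], e) for e in edges]
    return min(flows)

flow, edge = path_bottleneck(["chip", "heatsink", "air"])
print(flow, edge)
```

Reading off the limiting edge is exactly the "find the bottleneck" step: improving any other interface on that path changes the total heat transfer much less than improving the limiting one.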
ElectronicPartsMaker
The tool can create detailed models of semiconductor packages including QFP, SOP, and BGA by specifying parameters, and simplified models using thermal resistor models such as DELPHI models and two-resistor models. Manufacturers of semiconductor packages can provide the data of semiconductor packages as thermal resistor models without releasing the inside information.
Reading wiring patterns
To calculate heat transfer depending on the wiring patterns of a printed circuit board (PCB) in detail, the module can read Gerber data output from an electrical CAD tool and import it as a model for a thermo-fluid analysis. By using Gerber data, a more realistic calculation result can be obtained because heat transfer through the uneven wiring pattern is taken into account.
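A common simplification behind pattern-aware PCB models is to turn the copper coverage of each layer into an effective in-plane thermal conductivity by a rule of mixtures. The sketch below is that textbook estimate with typical handbook values, not Software Cradle's algorithm; the coverage fraction would come from the imported Gerber pattern.

```python
# Rule-of-mixtures estimate of a PCB layer's effective in-plane thermal
# conductivity from its copper coverage. Conductivities in W/(m*K) are
# typical handbook values; the coverage would come from the Gerber data.

K_COPPER = 400.0   # approximate bulk copper
K_FR4 = 0.3        # approximate epoxy-glass substrate

def effective_conductivity(copper_fraction):
    """In-plane estimate: area-weighted parallel mixing of copper and FR-4."""
    if not 0.0 <= copper_fraction <= 1.0:
        raise ValueError("coverage must be between 0 and 1")
    return copper_fraction * K_COPPER + (1.0 - copper_fraction) * K_FR4

# A layer that is 20% copper conducts far better than bare FR-4:
print(effective_conductivity(0.2))
```

This is why an uneven wiring pattern matters: a region with dense traces spreads heat well, while a bare region is nearly insulating, and a uniform-board assumption averages that difference away.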
Radiation
Radiation heat transfer with consideration of diffusion, reflection, transmission, refraction, and absorption can be calculated. The VF (view factor) method and FLUX method\(^*1\) can be used. The lamp function can simulate radiant heat from a filament without detailed shape information of a lamp. In addition to the filament, a laser beam and directive radiation specified by half-value angle can be used as heat source models.
\(^*1\) Only for scSTREAM.
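As a miniature illustration of the VF (view factor) approach, the net exchange between two black surfaces follows the Stefan-Boltzmann law weighted by a view factor, and view factors obey the reciprocity relation A1·F12 = A2·F21. A hedged Python sketch (names are illustrative, not the software's API):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_exchange(a1, f12, t1, t2):
    """Net radiative heat flow [W] from surface 1 to surface 2 for two
    black surfaces: Q = sigma * A1 * F12 * (T1^4 - T2^4), temperatures in K."""
    return SIGMA * a1 * f12 * (t1**4 - t2**4)

def f21_from_reciprocity(a1, f12, a2):
    """View-factor reciprocity: A1 * F12 = A2 * F21."""
    return a1 * f12 / a2
```

Real surfaces additionally need emissivities and a full exchange-matrix solution; the point here is only the role the view factor plays.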
**BIM**
The software interface supports BIM 2.0. Autodesk® Revit® and GRAPHISOFT ARCHICAD have a direct interface (optional) through which a target part can be selected and the tree structure can be kept and simplified. In addition, the module can load files in IFC format, which is the BIM-standard format.
**Illuminance analysis**
The software can calculate illuminance of various types of light; for example, daylight through an opening of a building and artificial lighting with consideration of its directivity. Object surfaces such as walls are treated as diffusive reflection surfaces. In general, the larger an opening of a building is, the larger the heat loss tends to be. By calculating the illuminance, the balance between heat and light can be examined collectively.
**Electronic part model**
A wide range of models is available to facilitate the thermal design of printed circuit boards and electronic enclosures, including the DELPHI (multi-resistor) model, Peltier devices, and heat pipes. Pressure loss characteristics of slits and P-Q characteristics of fans with a swirling component can also be considered. Generated models can be added to the library.
---
* Measurement device is not included
Air-conditioner parts (CFD parts)
The model shapes of parts frequently used for room air-conditioning can be imported. The models include ceiling cassettes, anemostat models, and linear diffusers. The software can import CFD part data, such as air supply characteristics, provided by SHASE. Various parameters can be set to simulate air-conditioning operation in addition to simple air heating and cooling.
Solar radiation (ASHRAE, NEDO)
Climate data published by ASHRAE and NEDO is preset and can be used for condition setting. By entering arbitrary values of longitude, latitude, date, and time, the solar altitude and the azimuth angle of the sun at a specified location and time are calculated automatically. The effect of solar radiation can be examined in detail. Various parameters including absorption and reflectivity of solar radiation and materials which transmit light diffusely, such as frosted glass, can be set.
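The solar altitude and azimuth mentioned above follow from standard textbook formulas (declination via Cooper's approximation, then the hour angle); the preset ASHRAE/NEDO data and the software itself may use more refined models. An illustrative Python sketch:

```python
import math

def solar_position(lat_deg, day_of_year, solar_hour):
    """Approximate solar altitude and azimuth [deg] from latitude,
    day of year, and local solar time (textbook formulas; illustrative only)."""
    # Cooper's approximation for the solar declination
    decl = math.radians(23.45) * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))   # 15 deg per hour from solar noon
    lat = math.radians(lat_deg)
    alt = math.asin(math.sin(lat) * math.sin(decl)
                    + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    az = math.atan2(math.sin(hour_angle),
                    math.cos(hour_angle) * math.sin(lat) - math.tan(decl) * math.cos(lat))
    return math.degrees(alt), math.degrees(az)
```

For example, at the equator around the March equinox the sun passes nearly overhead at solar noon.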
Thermal comfort, heat stress risk and ventilation efficiency indices
Comfort indices PMV and SET* can be derived from the already obtained temperature, humidity, and MRT (Mean Radiant Temperature) as one of the result-processing functions. WBGT (a heat stress risk index) and the scale for ventilation efficiency (SVE), some of whose indices can be converted to real time, can be set with one click, and the range of the calculation area can be selected (for example, either one of two rooms).
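Of the indices above, WBGT has a particularly simple definition (ISO 7243): a weighted sum of the natural wet-bulb, globe, and air temperatures. A minimal Python sketch, not the product's implementation:

```python
def wbgt_outdoor(t_nwb, t_globe, t_air):
    """WBGT heat-stress index with solar load (ISO 7243):
    0.7*Tnwb + 0.2*Tg + 0.1*Ta, all in deg C."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

def wbgt_indoor(t_nwb, t_globe):
    """WBGT without solar load: 0.7*Tnwb + 0.3*Tg, in deg C."""
    return 0.7 * t_nwb + 0.3 * t_globe
```

In a CFD post-processing context, Tnwb and Tg would themselves be derived from the computed temperature, humidity, and radiation fields.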
Humidity / dew condensation
The software can analyze humidity in the air. Dew condensation and evaporation on a wall surface due to temperature change can be considered and the amount of dew condensation and evaporation per time can be obtained. The software supports the analyses of moisture transfer inside a solid, and the function can be used to analyze a permeable object and dew condensation inside a part.
Plant canopy model (flow and heat)
Air resistance caused by plant canopy can be considered by setting the coefficient of friction and the leaf area density. For frequently used plants such as oak tree, their parameters are preset as the recommended values. The software also simulates the cooling effect by the latent heat of vaporization on a leaf surface by using the fixed temperature and setting the amount of absorbed heat. The function can be used for analyses of outdoor wind environment and heat island effect.
WindTool (outdoor wind environment assessment tool)
This tool helps assess outdoor wind environment. The assessment criteria can be selected from the ones proposed by Murakami et al. and by Wind Engineering Institute. By specifying a base shape and parameters required for wind environment evaluation, the parameters for 16 directions are calculated and the wind environment is ranked automatically. Detailed distributions of air current and pressure per direction can be visualized.
**Electrostatic field**
In addition to fluid force, the effect of an electrostatic field, which applies external force to charged particles, can be considered. By setting electric charge of particles and electric potential of a wall surface, the function can be used for analyses to consider area control of electrostatic coating. Velocity at which charged particles do not adhere on a wall surface can also be examined by using the function.
**Flow of foaming resin**
The software calculates the behavior of filling up an object with foaming resin, which is used as a heat insulator for houses and refrigerators. To examine the speed and pressure of filling and the position for injecting the resin, the software simulates the behavior in 3D. The simulation can provide more information in a shorter time than actual measurement.
**Mapping**
When a target phenomenon is in a small range and the phenomenon is affected by a wide range of its surrounding area, analysis results of the surrounding area can be used for an analysis of the target phenomenon as boundary conditions to decrease the calculation load. To analyze only the inside of an enclosure for an electronic device highly affected by its outside, the analysis results of the outside can be used as boundary conditions.
**Free surface**
The software calculates the shape of an interface between a gas and a liquid. Either MARS or VOF method can be used, and the calculation target phase can be selected: both gas and liquid, only gas, or only liquid. The function is useful in a wide range of fields: from an analysis of tsunami in the civil engineering and construction field to an analysis of soldering in the electronic device field.
**Solidification / melting**
The phase change between fluid and solid, for example, water to ice and ice to water, can be considered. The following phenomena related to solidification/melting can be considered: change of flow affected by a solidified region, change of melting speed depending on the flow status, and latent heat at melting. A phenomenon that water in an ice maker becomes ice can be simulated using the function.
**Boiling / condensation**
(bubble nucleation, bubble growth / condensation)
With the function, the user can analyze a boiling flow, which is a gas-liquid two-phase flow caused by temperature difference between a liquid and a heat conduction surface. A boiling flow is analyzed as a free surface analysis using MARS method, and latent heat generation and volume change due to bubble growth / condensation are considered using phase change model.
**Particle tracking**
The software simulates the behavior of particles depending on their characteristics (diameter, density, and sedimentation speed) and the action/reaction between particles and a fluid. This includes sedimentation due to gravity and inertial force for mass particles, movement due to electrostatic force for charged particles, liquefaction upon adhering to a wall surface, evaporation with latent heat, and the behavior of bubbles in a liquid.
**Panel**
(heat conduction / transfer / thermal transport)
Material properties and motion conditions can be applied to a panel having no thickness in model, which allows for heat conduction to other parts and heat dissipation to air. This enables the simulations of paper feeding and film drying processes, where thin objects move and go under heating repetitively.
---
1. Transfer and thermal transport are only available on scSTREAM
## Functions (scSTREAM, HeatDesigner)
Function categories: Preprocessor/Solver; Mesh generation; Conditions; Operation and control environment; Flow types; Turbulence models; Solving; Particle analysis; Multiphase flow analysis; Thermal reaction analysis; Electric field analysis; Thermal source model; Flow conditions
Thermal conditions
- Fixed temperature
- Amount of heat generation
- Heat transfer coefficient
- Contact heat transfer coefficient
- Wall conditions
- No-slip (stationary wall)
- Slip (symmetry wall)
- Log-law condition
- Power-law condition
- Surface roughness
Porous media
Source conditions
- Volume force / pressure loss
- Heat source
- Smoke source (diffusing materials)
- Turbulence generation
- Humidification
User-defined conditions
- Volume force / pressure loss
- Heat source
- Smoke source (diffusing materials)
- Turbulence generation
- Humidification
Calculation control environment
- Monitoring the calculation status
- Automatic translation of the calculation
Output for third party software
- Abaqus, Nastran, Femtet, ADVENTURECluster, JMAG-Designer, OPTIMUS, Isight, modeFRONTIER, Autodesk Revit, ARCHICAD, ThermoRender, EnSight, FieldView
Drawing functions
- Mesh, vector, contour plots
- Isosurface, streamline, pathline, volume rendering
- Geometry display
- Isoparms, Cylinder
- Surface roughness
- Special effects
- Oil flow (on plane / surface)
- Texture mapping (on plane / surface)
- Lighting, gradient, shadowing (preset / arbitrary)
- Arbitrary scaling
- Arbitrary pick
- Animation
- Vector animation
- Flow line animation
- Cut-plane sweeping
- Marker particle (turbulent diffusion effect)
- Automatic translation of viewpoint (view / focus points can be set)
- Key-frame animation
- Animation interpolated between cycles
Analysis results
- Variable registration (function registration)
- Integral (surface / volume)
- Comparison
- Projected area calculation
- Analysis results of mesh elements / subdomains
- Evaluation of flow visualization
- Evaluation of solid visualization
Data/image output
- Microsoft® BMP, JPEG, PNG
- CradleViewer® (support steady-state / transient animation, attach to Office applications)
- AVI, WMV
- VRML
Operation and control environment
- Selectable help function
- Spatial simulation
- Streaming
- Selectable mouse operation modes
- Virtual reality data by video
System Configuration
<table>
<thead>
<tr>
<th>Product</th>
<th>Compliant OS</th>
<th>Recommended environment</th>
<th>Approx. measure of analysis size</th>
<th>Compiler environment (User-defined function)</th>
</tr>
</thead>
</table>
Windows is a trademark of Microsoft Corporation in the United States and other countries.
The official name of Windows is the “Microsoft® Windows® Operating System”.
Microsoft Visual Studio is a registered trademark of Microsoft Corporation in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States and other countries.
Intel is a registered trademark of Intel Corporation in the United States and other countries.
Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries.
SUSE is a registered trademark of SUSE LLC.
All other product and service names mentioned are registered trademarks or trademarks of their respective companies.
* Windows version
* Intel Parallel Studio XE 2018 Composer Edition for Fortran
* Intel Parallel Studio XE 2017 Composer Edition for Fortran
* Intel Parallel Studio XE 2016 Composer Edition for Fortran
* Linux version
* GFortran (GNU Fortran compiler)
* (Linux standard)
The ever-evolving latest CFD solution
Discover what you want from your CFD tool here
**scFLOW SC/Tetra**
SC/Tetra has been characterized by its sophisticated mesh generation, high-speed computing capability, and user-friendly operation throughout. scFLOW has been released as its advanced version, equipped with a more stable Solver that achieves calculation speeds up to three times faster than before and a new Preprocessor that helps entry-level users build complicated models and high-quality mesh. scFLOW, the new-generation software, keeps on evolving.
**Simplification of Preprocessor operations**
From CAD data to analysis mesh data, the required operations are greatly simplified compared to before. The conservation of assembly information and the setting of conditions on parts provide a sense of continuity from the CAD operations and reduce the operational burden on users.
**Polyhedral mesher**
Using polyhedral mesh elements improves stability and calculation accuracy of cell-centered solver. In scFLOWpre, mesh can be generated according to the target number of mesh elements and automatically refined near wall area. The automatic mesher function also enables users to specify mesh refinement level of each part and region.
**Modifying CAD data**
When CAD data to be used for simulation has a problem, the data can be modified with Preprocessor. Boundary conditions can be set based on the part names and color information set in the CAD data. When some regions are missing in the model, shapes such as cuboids and cylinders can be added.
**Viewer mode**
Preprocessor data can be displayed in the viewer mode without the Pre-/Post-processor license, when the license is taken by the mesher or by Postprocessor and is unavailable.
Mesh-adaptation analysis
With this function, mesh will be automatically refined where a flow or pressure changes greatly in a steady-state analysis. After the calculation in Solver is completed, Preprocessor automatically launches and executes gridding and meshing based on the calculation result. By specifying the target number of elements, coarse mesh is generated first and the mesh is automatically refined to be appropriate for the calculation. The function is useful for an analysis of flows in a tube with a complicated shape.
Discontinuous mesh
Flow with object motion can be calculated, including rotation of fans and turbines, and crossing travel of automobiles or trains (translation). The function enables an analysis with consideration on shear heating between rotor and pad in a disk brake. The function also makes it possible to analyze a combination of rotation and translation such as a piston pump.
Free surface (steady-state / transient)
The shape of an interface between a gas and a liquid can be simulated. Calculations by VOF method (new method: FIRM) are fast and accurate, and functions including moving boundary, overset mesh, and particle tracking can be used in combination. Because a phenomenon where the phase interface becomes stable can be analyzed in a steady-state calculation, the result can be obtained in a shorter time than before.
Overset mesh
Free movement of regions that cannot be analyzed using existing functions, such as stretching or rotating elements, can now be simulated by overlapping mesh elements for stationary and moving regions. This function supports an overlap of multiple moving regions, a contact between objects, and a 6-degree-of-freedom motion of rigid bodies. This is useful to analyze opening and closing of a valve of an engine port or a gear pump where gears engage with each other.
Stabilization of calculation
Even for mesh data with elements of extremely low quality, the calculation can be stabilized by the automatic processing to avoid divergence. This function helps Solver be more robust.
6-degree-of-freedom motion (6DOF)
Passive translation and rotation of a rigid body receiving a fluid force can be analyzed. With the function, the user can analyze a ball valve with consideration of the elasticity of the spring (1D translation), and paper airplane with consideration of 6-degree-of-freedom rigid-body motion (3D translation + 3D rotation). In addition, the function is applied to analyses of check valves, wind power generators, and blades of wave power generators.
* Only scFLOW supports FIRM. FIRM cannot be used for overset mesh or steady-state analyses.
Fluid-structure interaction
This option is used for two-way FSI (fluid-structure interaction) with structural analysis software. With this option, not only rigid bodies but also elastic bodies can be treated. Deformation of an object caused by a fluid force and the change of fluid caused by the deformation can be analyzed.
Compressible fluid
The software can analyze phenomena such as supersonic flow and significant expansion/contraction of volume. For a compressible fluid, both the pressure-based and the density-based Solvers can be used. The density-based Solver keeps the calculation stable even with high Mach number. You can select either Solver depending on the analysis target and phenomenon.
Cavitation
This function enables simulation of a vaporization phenomenon called cavitation, which is caused at an area where pressure of a liquid becomes lower than in the surrounding area, such as with a propeller rotating at a high speed under water. The occurrence of cavitation can be predicted by applying the cavitation model based on the pressure values. The software also supports problems caused by cavitation such as erosion.
Aerodynamic noise analysis
Sound caused by pressure oscillation of a fluid, such as wind noise, and sound caused by resonance can be predicted. The calculation can be performed accurately by using LES and the weak compressible flow model. The frequency of aerodynamic noise can also be analyzed using the Fast Fourier Transform (FFT) method from the CFD analysis result.
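The FFT step mentioned above can be illustrated independently of the CFD result: given a pressure-probe signal sampled at a fixed interval, the dominant frequency is the peak of the amplitude spectrum. A sketch using NumPy (illustrative only; not the product's post-processing code):

```python
import numpy as np

def dominant_frequency(pressure, dt):
    """Return the dominant frequency [Hz] of a pressure-fluctuation
    signal sampled at interval dt, via a real FFT."""
    p = pressure - np.mean(pressure)          # remove the mean (DC) component
    spectrum = np.abs(np.fft.rfft(p))
    freqs = np.fft.rfftfreq(len(p), d=dt)
    return freqs[np.argmax(spectrum)]
```

For a transient CFD run, `pressure` would be the time history recorded at a monitoring point, and the frequency resolution is 1/(N·dt), so longer sampling windows resolve tones more finely.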
Evaporation/Condensation
Free surface analysis function (VOF method) of this software can simulate phase change between gas and liquid, such as evaporation and condensation. By considering phase change, not only simple heat conduction but also heat transfer from latent heat can be calculated. For example, this method can be applied to internal flow simulations for heat transfer devices such as heat pipes, in which a refrigerant liquid changes to vapor by absorbing heat from an outer region.
Dispersed multi-phase flow
This function can simulate flows containing many bubbles, droplets, or particles (dispersed phase), which are difficult to analyze using the free surface function. It is a multi-fluid model that can predict the volume fraction distribution and velocity distribution of each phase by solving the governing equations under the assumption that the dispersed phase is a continuous fluid. The function is useful for analyzing the bubble jet effect and aeration tanks.
Particle tracking
The particle tracking function enables analyzing the behavior of particles in a flow. When analyzing small particles that follow the fluid's movement (such as steam and dust), the marker particle function, which assumes that particles move with the fluid velocity, can be used to evaluate particles in a flow that changes over time.
Humidity / dew condensation
The amount of dew condensation on an object surface can be calculated from the surface temperature and water vapor in the air. You can output the amount of dew condensation per unit time in a steady-state analysis and the accumulated dew condensation in a transient analysis. Evaporation from a surface where dew condensation occurs can be calculated simultaneously, and this is useful for an analysis of a windshield defroster.
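The underlying relationship between air temperature, humidity, and condensation can be illustrated with the standard Magnus approximation for the dew point (a textbook formula, not necessarily the model the software uses). Surfaces colder than this temperature will collect condensate:

```python
import math

def dew_point(t_air_c, rel_humidity):
    """Dew-point temperature [deg C] from air temperature [deg C] and
    relative humidity (0-1), using the Magnus approximation over water."""
    a, b = 17.62, 243.12                      # Magnus coefficients
    gamma = math.log(rel_humidity) + a * t_air_c / (b + t_air_c)
    return b * gamma / (a - gamma)
```

At 100 % relative humidity the dew point equals the air temperature, and it drops as the air gets drier, which is exactly why a cold windshield fogs on a humid day.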
Liquid film model
The liquid film model is an extended function of the particle tracking function. By using the model, the user can simulate liquid particles changing into a liquid film (water on a wall) when they reach a wall. The liquid film flows under the influence of gravity and the gas-phase flow, depending on the angle of the wall, and collects at a certain position. The analysis results are output as the thickness of the liquid film.
Thermoregulation-model (JOS)
Combination use of the thermoregulation-model (JOS) and a fluid analysis enables analyses of the surface temperature of a human body under a certain thermal environment. It can also be used to analyze temperature and humidity changes in the surrounding environment of a human body. The user can consider age, clothes, and physiological phenomena of the human body such as heat transfer by blood flow in addition to the surrounding environment of a human body such as temperature and velocity.
LES
LES is one of the turbulence models. It models eddies smaller than the mesh element size and directly calculates larger eddies. Although the calculation load is large, LES enables simulations closer to real phenomena. LES is often used in noise analyses, which are significantly affected by time variation, to simulate the behavior of small eddies. A hybrid model with RANS, a turbulence model with a small calculation load, can also be used.
Radiation
Heat transfer by infrared-ray radiation can be considered by setting emissivity and temperature difference between objects. The user can choose VF (view factor) method or FLUX method as a calculation method. The user can also consider wavelength dependence, transmission, absorption, refraction, diffusion, and reflection of radiation. In FLUX method, the user can also consider directionality.
Mapping
When a target phenomenon is in a small range and the phenomenon is affected by a wide range of its surrounding area, analysis results of the surrounding area can be used for an analysis of the target phenomenon as boundary conditions to decrease the calculation load.
Coupled analysis with GT-SUITE
Coupled analysis with GT-SUITE is available. The entire flow in an intake and exhaust system is calculated with GT-SUITE and small flows of each part are interpolated with scFLOW or SC/Tetra. This will enhance calculation accuracy of the whole system.
Operation logging by VB interface
The operations in Preprocessor can be saved as a log file using the VB interface. Since this makes user scripting unnecessary, an automated system can be constructed in a short period of time based on the files storing the operation logs.
Fan model (rotating blades)
With this model, an average flow field around rotating blades can be simulated only by entering characteristic properties regardless of real shapes of fans or propellers. The user can use the non-dimensional swirl coefficient model, the simplified propeller model, and the simplified rotor model. This model is useful to analyze axial-flow windmills and waterwheels.
Script functions
Previously, complicated settings, including time- and coordinate-dependent material properties or boundary conditions, required coding and compiling a user-defined function in C. With the script functions, no compilation is required: functions can be written in Preprocessor based on JavaScript.
SmartBlades
This function is useful for automating fan analysis, from creating the fan shape (CAD data) through calculating the flow to post-processing. The shape of a fan can be created easily by specifying parameters including the number of blades, fan diameter, rake angle, and skew angle.
FluidBearingDesigner
The function creates groove patterns of fluid bearings (dynamic-pressure bearing) and generates mesh. You can select the shape of grooves such as journal and thrust and materials such as porous material. From calculation results, you can obtain parameters for designing fluid bearings such as axial force and drag coefficient.
<table>
<thead>
<tr>
<th>Category</th>
<th>scFLOW</th>
<th>SC/Tetra</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Modeling</strong></td>
<td>CAD data interface (import)</td>
<td>Parasolid, STEP, IGES, ACIS, CATIA V5, Creo (Pro/ENGINEER), SolidWorks, STL</td>
</tr>
<tr>
<td></td>
<td>Preprocessor</td>
<td>Preprocessor</td>
</tr>
<tr>
<td></td>
<td>Solver</td>
<td>Solver</td>
</tr>
<tr>
<td><strong>Mesh generation</strong></td>
<td>Preprocessing</td>
<td>Preprocessing</td>
</tr>
<tr>
<td></td>
<td>Postprocessing</td>
<td>Postprocessing</td>
</tr>
<tr>
<td></td>
<td>Visualization</td>
<td>Visualization</td>
</tr>
<tr>
<td></td>
<td>Control environment</td>
<td>Control environment</td>
</tr>
<tr>
<td><strong>Conditions</strong></td>
<td>Flow types</td>
<td>Flow types</td>
</tr>
<tr>
<td></td>
<td>Fluid properties</td>
<td>Fluid properties</td>
</tr>
<tr>
<td></td>
<td>Mixture analysis</td>
<td>Mixture analysis</td>
</tr>
<tr>
<td></td>
<td>Dispersion</td>
<td>Dispersion</td>
</tr>
<tr>
<td></td>
<td>Mixing plane</td>
<td>Mixing plane</td>
</tr>
<tr>
<td><strong>Operation and control environment</strong></td>
<td>Selected mouse operation modes</td>
<td>Selected mouse operation modes</td>
</tr>
<tr>
<td></td>
<td>Filtering</td>
<td>Filtering</td>
</tr>
<tr>
<td><strong>Mesh</strong></td>
<td>Structured mesh</td>
<td>Structured mesh</td>
</tr>
<tr>
<td></td>
<td>Hybrid mesh</td>
<td>Hybrid mesh</td>
</tr>
<tr>
<td></td>
<td>Moving mesh</td>
<td>Moving mesh</td>
</tr>
<tr>
<td><strong>Numerical scheme</strong></td>
<td>Convergence term accuracy</td>
<td>Convergence term accuracy</td>
</tr>
<tr>
<td></td>
<td>Iteration scheme</td>
<td>Iteration scheme</td>
</tr>
<tr>
<td></td>
<td>Boundary condition</td>
<td>Boundary condition</td>
</tr>
<tr>
<td></td>
<td>Flow type</td>
<td>Flow type</td>
</tr>
<tr>
<td></td>
<td>Heat radiation</td>
<td>Heat radiation</td>
</tr>
<tr>
<td><strong>Turbulence models</strong></td>
<td>Chen-Othmer model</td>
<td>Chen-Othmer model</td>
</tr>
<tr>
<td></td>
<td>Realizable k-ε model</td>
<td>Realizable k-ε model</td>
</tr>
<tr>
<td></td>
<td>Spalart-Allmaras model</td>
<td>Spalart-Allmaras model</td>
</tr>
<tr>
<td></td>
<td>0-equation model</td>
<td>0-equation model</td>
</tr>
<tr>
<td></td>
<td>LES, DES, VLES</td>
<td>LES, DES, VLES</td>
</tr>
<tr>
<td><strong>Thermal analysis</strong></td>
<td>Heat conduction (fluid model)</td>
<td>Heat conduction (fluid model)</td>
</tr>
<tr>
<td></td>
<td>Convection heat transfer</td>
<td>Convection heat transfer</td>
</tr>
<tr>
<td></td>
<td>Heat conduction (solid model)</td>
<td>Heat conduction (solid model)</td>
</tr>
<tr>
<td></td>
<td>Local radiation</td>
<td>Local radiation</td>
</tr>
<tr>
<td><strong>Particle analysis</strong></td>
<td>Particle properties</td>
<td>Particle properties</td>
</tr>
<tr>
<td></td>
<td>Diffusion</td>
<td>Diffusion</td>
</tr>
<tr>
<td></td>
<td>Converting fluid model</td>
<td>Converting fluid model</td>
</tr>
<tr>
<td><strong>Reactor analysis</strong></td>
<td>Reaction kinetics</td>
<td>Reaction kinetics</td>
</tr>
<tr>
<td></td>
<td>Chemical reaction</td>
<td>Chemical reaction</td>
</tr>
<tr>
<td><strong>Multi-phase flow analysis</strong></td>
<td>Inter-particle interaction model</td>
<td>Inter-particle interaction model</td>
</tr>
<tr>
<td></td>
<td>Inter-particle interaction</td>
<td>Inter-particle interaction model</td>
</tr>
<tr>
<td></td>
<td>Inter-particle interaction model (VOF method, steady state)</td>
<td>Inter-particle interaction model (VOF method, steady state)</td>
</tr>
<tr>
<td><strong>Aerodynamic noise analysis</strong></td>
<td>Inter-particle interaction model (VOF method, transient)</td>
<td>Inter-particle interaction model (VOF method, transient)</td>
</tr>
<tr>
<td><strong>Current analysis</strong></td>
<td>Current density</td>
<td>Current density</td>
</tr>
<tr>
<td><strong>Thermo-regulation model</strong></td>
<td>CON, CON-2</td>
<td>CON, CON-2</td>
</tr>
<tr>
<td><strong>Flow conditions</strong></td>
<td>Viscosity</td>
<td>Viscosity</td>
</tr>
<tr>
<td></td>
<td>Density</td>
<td>Density</td>
</tr>
<tr>
<td></td>
<td>Volume flow rate</td>
<td>Volume flow rate</td>
</tr>
<tr>
<td><strong>Thermal conditions</strong></td>
<td>Amount of heat generation</td>
<td>Amount of heat generation</td>
</tr>
<tr>
<td></td>
<td>Heat transfer coefficient</td>
<td>Heat transfer coefficient</td>
</tr>
<tr>
<td><strong>Wall conditions</strong></td>
<td>Contact heat transfer coefficient</td>
<td>Contact heat transfer coefficient</td>
</tr>
<tr>
<td></td>
<td>Specified heat transfer coefficient</td>
<td>Specified heat transfer coefficient</td>
</tr>
<tr>
<td></td>
<td>Interface heat transfer</td>
<td>Interface heat transfer</td>
</tr>
<tr>
<td></td>
<td>Interface heat transfer (fluid-to-fluid)</td>
<td>Interface heat transfer (fluid-to-fluid)</td>
</tr>
<tr>
<td></td>
<td>Specified heat transfer (solid-to-solid)</td>
<td>Specified heat transfer (solid-to-solid)</td>
</tr>
<tr>
<td></td>
<td>Specified heat transfer (fluid-to-solid)</td>
<td>Specified heat transfer (fluid-to-solid)</td>
</tr>
<tr>
<td></td>
<td>Specified heat transfer (solid-to-fluid)</td>
<td>Specified heat transfer (solid-to-fluid)</td>
</tr>
</tbody>
</table>
Functions (scFLOW, SC/Tetra)
### Solver
Pressure conditions
- Fixed pressure
- Pressure loss
Source conditions
- Volume force / pressure loss
- Heat generation
- Smoke source (diffusing materials)
- Turbulence generation
- Solid shear heating
- Simplified propeller model
- Simplified rotor model
User-defined conditions
- User-defined function (compilation required)
### Postprocessor
Drawing functions
- Mesh, vector, contour plots
- Isosurface, streamline, pathlines, volume rendering
- Surface rendering
- Volume rendering
- Geometry display
- 2D graph
### Calculation control environment
- Monitoring the calculation status
- Email notification of the calculation
- Job management
- VB interface
### Output for third party software
- Abaqus, Nastran, Femtet
- ADVENTURECluster, JMAG-Designer
- OPTIMUS, Isight, modeFRONTIER
- LOGE, LMS Virtual.Lab, Actran, FlowNoise
- EnSight, FieldView, AVS
### Drawing position/orientation
- Arbitrary plane, surface, entire volume, cylinder
- Pathlines
- Arbitrary scaling
- Arbitrary pick
### Special effects
- Texture mapping
- Lighting, luster, gradation
### Animation
- Vector animation
- Flow line animation
- Cut-plane sweeping
- Marker particle (turbulent diffusion effect)
- Automatic translation of view points
- Key-frame animation
- Animation interpolated between cycles
- Velocity path animation
- Customize position / orientation
### Analysis results
- Scalable rendering
- Hexagonal data
- Special effects
- Analysis interpolated between cycles
### Data/image output
- CradleViewer®
- AVI, WMV
- VRML
### Operation and control environment
- Selectable help function
- OpenGL emulation
- Stereoscopic view (side by side)
- Arbitrary plane, surface, entire volume, cylinder
- Streamlines, isosurfaces
- Pathlines
- Arbitrary scaling
- Arbitrary pick
### System Configuration
<table>
<thead>
<tr>
<th>Product</th>
<th>Compliant OS</th>
<th>Recommended environment</th>
<th>Approx. measure of analysis size</th>
<th>Compiler environment (User-defined function)</th>
</tr>
</thead>
</table>
Windows is a registered trademark of Microsoft Corporation in the United States and other countries.
The official name of Windows is the “Microsoft® Windows® Operating System”.
Microsoft Visual Studio is a registered trademark of Microsoft Corporation in the United States and other countries.
Linux is a registered trademark of Linus Torvalds in the United States and other countries.
Red Hat is a registered trademark of Red Hat, Inc. in the United States and other countries.
SUSE is a registered trademark of SUSE LLC.
All other product and service names mentioned are registered trademarks or trademarks of their respective companies.
* Fluid-Structure Interaction (Abaqus®) is not supported for HPC pack.
* Only compliant with partial functions in Solver, Monitor and Preprocessor.
Visualize your multiphysics phenomena in one environment
Postprocessor regularly installed in Software Cradle products can be purchased separately
**scPOST**
**Postprocessor**
In Postprocessor, you can visualize the simulation results calculated in Solver. It is effective for product design reviews because in Postprocessor, you can check, for example, temperature distribution at the places that cannot be measured or observed in the actual products. You can output not only still images but also animations, as well as output files for CradleViewer.
**Characteristics**
You can:
- Obtain numerical information with simple operation.
- Create beautiful animation quickly in Postprocessor.
- Easily map temperature information obtained in a fluid analysis to a structural analysis.
- Easily compare multiple analysis results.
- Easily calculate heat transfer and grasp a whole of heat-related matters.
- Output images supporting VR.
* Output in the equirectangular format with parallax
**Useful functions (example)**
- Creates animation automatically
- Saves display status
- Develops the image on the meridian plane
- Compares results
- Calculates (integral, registering functions)
**Compatible formats for import/export**
<table>
<thead>
<tr>
<th>Formats supported</th>
<th>Other format</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Import</strong></td>
<td><strong>Export</strong></td>
</tr>
<tr>
<td>• MSC Nastran2018 (.h5)</td>
<td>• Generic format for fluid data (.cgns)</td>
</tr>
<tr>
<td>• MSC Marc 2018 (.t19, .t16)</td>
<td>[Only ADF can be exported through scCONVERTER]</td>
</tr>
<tr>
<td>• Generic format for fluid data (.cgns)</td>
<td>• Images (BMP, PNG)</td>
</tr>
<tr>
<td>[ADF only]</td>
<td>• 3D geometry data (STL, OBJ)</td>
</tr>
<tr>
<td></td>
<td>• Generic format for fluid data (.cgns)</td>
</tr>
<tr>
<td></td>
<td>[Only ADF can be exported through scCONVERTER]</td>
</tr>
<tr>
<td></td>
<td>• Images (BMP, JPEG, PNG)</td>
</tr>
<tr>
<td></td>
<td>• Animation (AVI, WMV)</td>
</tr>
<tr>
<td></td>
<td>• 3D geometry data (STL, VRML, FBX)</td>
</tr>
</tbody>
</table>
Co-simulation with MSC Software products
Integration of multidisciplinary analyses – from materials to systems
More realistic coupled fluid – mechanical – structural analyses
Capturing movement and deformation more precisely and expressing boundary conditions in fluid analyses with more reality
Co-simulation platform
The platform for coupled analyses with MSC mechanical and structural analysis solvers provides seamless co-simulation.
Co-simulation using FMI
Co-simulation using FMI, a tool-independent standard for 1D co-simulation interfaces
Co-simulation with Actran, acoustic analysis software
scFLOW and SC/Tetra are used to create fluid sound sources and Actran is used for propagation analysis of sound waves.
Analysis of an exhaust tube of a motorcycle
Analysis of an axial-flow fan
Acoustic analysis using fluid analysis results as a sound source
Compared with a direct solution using fluid analysis software alone, a solution can be obtained with dramatically less calculation load.
Wow! Was it this easy?!
Non-experts can start thermal analysis right away with easy operation in 2D and real-time results
PICLS is a thermal simulation tool which helps designers easily perform thermal simulation of PCBs. Even if you are unfamiliar with thermal simulation, you will obtain a simulation result without stress through the tool’s easy and quick operation in 2D. You can import the data of a PCB created in PICLS to scSTREAM and HeatDesigner, that is, you can pass the analysis data seamlessly from the PCB design stage to the mechanical design stage.
Advantages
- Easy to use
(Operation in 2D, integrated GUI for pre- and post-processing)
- Inexpensive
- Capable of real-time analysis
Thermal countermeasures using PICLS
- Checking the layout of components to avoid interference of heat between them
- Troubleshooting thermal issues of current products
- Considering heat release depending on a wiring pattern (coverage ratio)
- Examining the location and the number of thermal vias
- Examining the performance of a heatsink
- Examining the size of a PCB
- Considering natural/forced air cooling
- Considering radiant heat
- Considering heatsinks (number of fins, size)
- Examining heat dissipation performances by connection to enclosure
- Considering PCB mounting environment
Functions available in PICLS and PICLS Lite
- Multiple layers
- 3D preview
- Real-time display
- Radiation
- IDF3.0 interface
- Library
- Wiring area specification
- Displaying each layer
- Automatic report output
- Contact thermal resistance
- Considering a heatsink
- Wiring data (Gerber) import
- Thermal via
- Cutting out a PCB
- Forced air cooling
- Temperature margin, alert function
- Consideration of simple enclosure
- Drill data import
* PICLS Lite is provided online
http://www.cradle-cfd.com/picls/
Main features of PICLS and PICLS Lite
**Modeling**
- **External file interface**
- You can import IDF 3.0 and Gerber data
- **Heatsink**
- You can allocate and display parts such as plate fins and heat dissipation plates
- **Library**
- You can register and reuse created parts to the library
- **Preview**
- You can check the layout of components in the 3D image
- **Cutting out a PCB**
- You can create a PCB of arbitrary shape using cut-out function
**Calculation and Post-Processing**
- **Real-time display**
- The translation of components is displayed in real time
- **Report output**
- You can output analysis results as reports
- **Alert function**
- You can check parts whose temperature is higher than threshold
**System Configuration**
<table>
<thead>
<tr>
<th>Compliant OS</th>
<th>Recommended environment</th>
</tr>
</thead>
<tbody>
<tr>
<td>Windows 10</td>
<td>[Memory] 2.0 GB or more</td>
</tr>
<tr>
<td>Windows 8.1</td>
<td>[Hard disk] 0.5 GB or more free capacity recommended</td>
</tr>
<tr>
<td>Windows 7</td>
<td>[Display resolution] 1920 x 1080 or more</td>
</tr>
<tr>
<td>Windows Server 2016</td>
<td></td>
</tr>
<tr>
<td>Windows Server 2012 R2</td>
<td></td>
</tr>
<tr>
<td>Windows Server 2012</td>
<td></td>
</tr>
<tr>
<td>Windows Server 2008 R2</td>
<td></td>
</tr>
<tr>
<td>Red Hat Enterprise Linux 7</td>
<td></td>
</tr>
<tr>
<td>Red Hat Enterprise Linux 6 (6.1 onward)</td>
<td></td>
</tr>
<tr>
<td>SUSE Linux Enterprise Server 12</td>
<td></td>
</tr>
<tr>
<td>SUSE Linux Enterprise Server 11 (SP3 onward)</td>
<td></td>
</tr>
</tbody>
</table>
*1 Supports license manager only.
Analysis Procedure
− scSTREAM (HeatDesigner), scFLOW and SC/Tetra
There are three major steps in the workflow for obtaining simulation results.
**STEP.1 Preprocessor**
With Preprocessor, create or import analysis models, set analysis conditions, and generate mesh.
**STEP.2 Solver**
Flow/thermal calculations are performed using input data created in the Preprocessor. During the computation, calculation status can be monitored. The amount of time required for the computation depends on the size of the model (number of mesh elements), quality of the model, and hardware. A parallel Solver is available for reducing the computational time of large-scale models.
* Solver features (examples)
- Setting the degree of parallelism
- Monitoring job status
- Visualizing results in real-time
**STEP.3 Postprocessor**
The Solver outputs field data for visualization using the Postprocessor. This permits examining flow, temperature, pressure, and other analysis results. Visualized results can be converted to images, animations and/or CradleViewer (details on P23) files for later use.
* Various drawing functions
- Vector plot
- Isosurface
- Contour map
- Pathline (available only in SC/Tetra)
- Streamline
- Oil flow
- Still image and animation output
* See page 25 (HPC Solution) for more information about parallel calculation
Main Mutual Features
CAD Interface
Software Cradle analysis software can import native data from major 3D CAD software as well as import most generalized intermediate data formats. This eliminates the cumbersome process of data conversion.
### CAD / geometry data
<table>
<thead>
<tr>
<th>Compliant software</th>
<th>Format</th>
<th>V14 compliant versions</th>
</tr>
</thead>
<tbody>
<tr>
<td>CATIA V5</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>CATIA V4</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>Creo Elements/Pro (Pro/ENGINEER)</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>SOLIDWORKS</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>UG NX</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>SolidEdge</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>Autodesk® Inventor*</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>Autodesk® Revit*</td>
<td>ST</td>
<td>Compliant with Launcher</td>
</tr>
<tr>
<td>ARCHICAD</td>
<td>ST</td>
<td>Compliant with Launcher</td>
</tr>
<tr>
<td>IGES</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>VDAFS</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>ACIS</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>Parasolid</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>STEP</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>IFC</td>
<td>ST</td>
<td>SHP (Polyline, polygon)</td>
</tr>
<tr>
<td>SHAPE</td>
<td>ST</td>
<td>3ds</td>
</tr>
<tr>
<td>3ds</td>
<td>ST</td>
<td>-</td>
</tr>
<tr>
<td>STL</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>Nastran</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>SketchUp</td>
<td>ST</td>
<td>-</td>
</tr>
<tr>
<td>Abaqus*</td>
<td>ST</td>
<td>-</td>
</tr>
<tr>
<td>DesignSpace</td>
<td>ST</td>
<td>-</td>
</tr>
<tr>
<td>Plot3D</td>
<td>ST</td>
<td>-</td>
</tr>
<tr>
<td>CGNS</td>
<td>ST</td>
<td>-</td>
</tr>
<tr>
<td>DXF</td>
<td>ST</td>
<td>-</td>
</tr>
<tr>
<td>IDF</td>
<td>ST</td>
<td>-</td>
</tr>
<tr>
<td>GERBER</td>
<td>ST</td>
<td>-</td>
</tr>
<tr>
<td>Parasolid</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>STL</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>Nastran</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
<tr>
<td>CGNS</td>
<td>ST</td>
<td>HD, FLOW, SCT</td>
</tr>
</tbody>
</table>
### Import
#### CAD Interface
**HPC (High Performance Computing) Solution**
Large-scale, high-speed simulation with parallel computing technologies
Parallel computing makes it possible to solve existing models faster, conduct more analyses, and/or solve more detailed models with a greater number of mesh elements.
#### scSTREAM V14
- **Arena**: Approx. 70 million elements
- **Projector**: Approx. 20 million elements
#### scFLOW V14
- **Converter**: Approx. 15 million elements
- **Sirocco fan**: Approx. 15 million elements
Main Mutual Features
**scMonitor**
You can visualize the progress of the simulations in scMonitor during the Solver calculations. You can check, for example, pressure contour of a registered surface and temperature contour and flow vector on axial planes.
* To use this function, a Postprocessor license is required in some areas.
**LFileView**
LFileView is a dedicated viewer for L files, which are output during the simulations automatically. You can check the progress of the simulations numerically with variable values for each cycle and the maximum/minimum/average values for the specified output.
**VB Interface**
The software supports COM technology provided by Microsoft. You can control the software by using Microsoft Office products and Visual Basic (VB). A tool to create and execute automatic operation flows, scWorkSketch, is bundled with the software. With the tool, you can easily create your own automatic operation flows. In addition, you can register a created flow as a template and reuse it.
**Parametric Study Tool**
Using the parametric study tool, you can set analysis conditions to multiple cases all at once - for instance, when you run several calculations with modified parameters such as flow rate or amount of heat. The interface is user-friendly with spreadsheet-like settings. You can check, in the same interface, the status of each case and the output parameters such as the maximum/minimum temperature or average pressure on a specified plane.
* This tool is available in scSTREAM, HeatDesigner, and SC/Tetra
**Useful functions**
- Tool to create a report automatically
- Unique GUI
- Tool to create a model from the 2D data automatically
CradleViewer
The simulation result visualized in Postprocessor can be saved in a file and the file can be opened in a simple viewer. In the viewer, the viewpoint and the distance can be changed with the mouse and by touch operation*. CradleViewer is provided free of charge. You can share the simulation result even in an environment without Postprocessor installed.
* Operation using two fingers is supported on a multitouch-compatible screen in a Windows 10 or Windows 8.1 environment.
HeatPathView
Using HeatPathView, you can review heat dissipation measures with focus on each component. The tool enables the intuitive and comprehensive evaluation of heat balance and search of heat dissipation paths. By understanding the flow of heat, you can make your heat dissipation designs more reliable.
scCONVERTER
Data (FLD/FPH files) such as pressure, temperature, and heat transfer coefficient obtained in thermo-fluid analyses can be mapped to input data of structural analysis software (Abaqus, I-DEAS, Nastran). In addition, input data of structural analyses can be converted to an FLD or FPH file. scCONVERTER can create an animation file from multiple still images (BMP/PNG files), edit FLD/FPH files, and convert a P file to an FLD file or an FLD file to an iFLD file.
Introducing Optimus®
Optimus is an integration platform of simulation tools with optimization and automation as its cores
1. Optimus®
Optimus has a direct interface with scSTREAM, scFLOW and SC/Tetra, and the optimization can be performed without any additional customization. It also supports creating an original GUI using the API and optimization using Quality Engineering (Taguchi Method).
Automation/Integration
Executes the processes automatically just by constructing a simulation workflow with icons.
Data Mining
Visualizes data immediately. Relationships between the parameters can be grasped easily from sensitivity and correlation analyses.
Optimization
Optimization algorithm automatically searches the parameters yielding the best performance.
Robustness, Reliability
Predicts the variations in performance from the variations of parts. This enables the design with consideration on the variations in advance.
Comparison Table: Optimus® and Optimus® for Cradle
<table>
<thead>
<tr>
<th></th>
<th>Optimus®</th>
<th>Optimus® for Cradle</th>
</tr>
</thead>
<tbody>
<tr>
<td>Linkage to Software Cradle products</td>
<td>Condition setting</td>
<td>(Direct interface)</td>
</tr>
<tr>
<td>Linkage to third party products</td>
<td>Shape modification using CAD</td>
<td>(Direct Interface)</td>
</tr>
<tr>
<td>Calculation method</td>
<td>DOE</td>
<td>Total 23 methods</td>
</tr>
<tr>
<td></td>
<td>Response surface</td>
<td>All 5 + Optional 11 methods</td>
</tr>
<tr>
<td></td>
<td>Single-objective optimization</td>
<td>Local: Total 5 methods</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Global: Total 7 + Optional 5 methods</td>
</tr>
<tr>
<td></td>
<td>Multi-objective optimization</td>
<td>Total 11 + Optional 5 methods</td>
</tr>
<tr>
<td></td>
<td>Robustness / Reliability / Quality Engineering</td>
<td>Total 7 methods, orthogonal table L4-512, static/dynamic characteristics</td>
</tr>
<tr>
<td>Postprocessing</td>
<td>Method</td>
<td>Total 23 types</td>
</tr>
<tr>
<td></td>
<td>Model</td>
<td>Total 13 types</td>
</tr>
</tbody>
</table>
License type
We provide various license types based on customer operations from on-premise to cloud
1. On-premise license: Features
- Cost-effective
- Internally manageable machines
- Existing hardware resources
- No data transfer
- Internally controlled security
- In-house tools (e.g. automation)
- Multiple tools in combination
2. Cloud license: Features
- On-demand offer*
- No hardware required
- Large-scale calculations
- Support for sudden need
- No maintenance required
- Cost-effective for infrequent users
- Not an asset
Recommended for customers who want to...
- Incorporate analysis into design workflow constantly and reduce cost
- Elaborate a combinational use of multiple analysis tools with a simple system
- Simplify analysis operations for obtaining design pointers and allow several users short-term uses
Recommended for customers who want to...
- Finish large-scale calculations in a short time although ordinary calculations can be performed with on-premise licenses
- Use outside resources temporarily because in-house resources are insufficient at the time
- Handle intensive calculation jobs efficiently for one project without placing burden on in-house resources
License type lookup table
<table>
<thead>
<tr>
<th>Software</th>
<th>License</th>
<th>Agreement type, period</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
<td>Floating</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Monthly</td>
</tr>
<tr>
<td>scSTREAM SC/Tetra</td>
<td>Pre/Post</td>
<td>Standard edition</td>
</tr>
<tr>
<td></td>
<td></td>
<td>HPC edition</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Solver</td>
</tr>
<tr>
<td></td>
<td></td>
<td>HPC edition</td>
</tr>
<tr>
<td>HeatDesigner</td>
<td>Pre/Solver/Post</td>
<td>Standard edition</td>
</tr>
<tr>
<td>scFLOW</td>
<td>Pre/Post</td>
<td>Standard edition</td>
</tr>
<tr>
<td></td>
<td></td>
<td>HPC edition</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Solver</td>
</tr>
<tr>
<td></td>
<td></td>
<td>HPC edition</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Unlimited edition</td>
</tr>
<tr>
<td>PICLS</td>
<td></td>
<td>●</td>
</tr>
<tr>
<td>Optimus® for Cradle</td>
<td></td>
<td>●</td>
</tr>
</tbody>
</table>
*1 Only available in a certain region
*2 Agreement type available with Altair Partner Alliance (APA) provided by Altair Engineering, Inc. For details, go to the official website of APA.
Links with other software
1. Electromagnetic Field Analysis Software
Using the data output from the electromagnetic analysis software, the effect of heat source distribution due to an electromagnetic field can be analyzed.
- **JMAG-Designer**
- Developed by JSOL Corporation (Japan)
- **Femtet®**
- Developed by Murata Software Co., Ltd (Japan)
- **EMsolution®**
- Developed by Science Solution International Laboratory, Inc. (Japan)
- **FlowNoise**
- Developed by CEDIC (Korea)
- **LMS Virtual.Lab**
- Developed by Siemens PLM Software (USA)
2. Acoustic Analysis Software
The acoustics of aerodynamic noise can be analyzed using SC/Tetra output data.
- **LMS Virtual.Lab**
- Developed by Siemens PLM Software (USA)
- **FlowNoise**
- Developed by CEDIC (Korea)
- **Actran**
- Developed by Dassault Systèmes S.A. (France)
3. Structural Analysis Software
Using the output data from scFLOW, SC/Tetra and scSTREAM, structural analysis can include the influence of heat transfer and other fluid interactions.
- **Abaqus®**
- Developed by Dassault Systèmes S.A. (France)
- **Nastran**
- Developed by MSC Software Corporation (USA)
- **Femtet®**
- Developed by Murata Software Co., Ltd (Japan)
- **ADVENTURECluster**
- Developed by Allied Engineering Corporation (Japan)
4. Thermal Environment Simulation Software
The surface temperature distribution is output from the thermal environment simulation software. The output data can be used as boundary conditions for scSTREAM calculation to analyze the distribution of wind velocity and temperature.
5. One-Dimensional Analysis Software
Computational load can be reduced by not solving all of the thermo-fluid analysis in three dimensions but using one-dimensional analysis software for some part.
6. Chemical Reaction Analysis Software
Using material property parameters and chemical reaction database of LOGE, coupled analysis with SC/Tetra can be performed. This enables analysis of overall chemical reactions and detailed chemical reactions, which could not be analyzed by SC/Tetra alone.
7. 3D CAD Software
The direct interface (Launcher) equipped with scSTREAM enables the software to directly load original 3D CAD data.
8. Optimization Software
Software Cradle products can be used in conjunction with optimization software for automation and/or optimizing product design.
9. Visualization Software
Read, visualize and edit FLD data (analysis results file from Software Cradle products) using other visualization software.
About Software Cradle
Software Cradle Co., Ltd. is an innovative provider of computational fluid dynamics (CFD) simulation software. Established in 1984, the company has pursued offering unique, innovation-focused, and highly reliable CFD solutions that enhance customers’ product quality and creativity. In 2016, the company joined MSC Software Corporation (headquartered in Newport Beach, California, US), the worldwide leader in the field of multidiscipline simulation. As a truly global company, Software Cradle delivers all-inclusive multi-physics solutions.
For more information about MSC Software Corporation, please visit:
Topics
- UML
- class diagrams
- sequence diagrams
- communication diagrams
- design principles
- software architecture
- design patterns
Development
development: converting the system specification into an executable system
Main concern: how should the system work?
Development = Design + Implementation
Traditionally broken down into several stages:
- architectural design
- interface design
- abstract specification
- coding
- development is an iterative process with feedback between the stages
- design and implementation are typically interleaved
Design vs. Modeling
Design is the process of deciding how the requirements should be implemented.
- guided by design principles
- part of development
Modeling is the process of creating an abstract representation of the domain or the system.
- uses modeling languages
- spans requirements and development
UML (Unified Modeling Language)
- graphical modeling language
- standardized by OMG (Object Management Group)
- semi-formal
- variety of different diagram types
- supports object-oriented designs
- but no fixed methodology
- unified: each diagram gives a different view on the same system
- developed by Rumbaugh, Booch, Jacobson et al.
- starting early-90’s, unification mid-90’s
UML diagram types
- Diagram
- Structure diagram
- Behavior diagram
- Activity diagram
- Use case diagram
- State machine diagram
Structural vs. behavioral modeling
- System = structure + behavior
- Structural models show the system’s organization in terms of its components and their relationships
- can be static (classes) or dynamic (threads)
- Behavioral models show the system’s dynamic behavior as it executes and responds to stimuli
- stimuli can be events or data
Class Diagrams
Class diagram essentials
- Class diagrams describe the data found in a software system.
- Main elements:
- Classes: represent the types of the data
- Attributes: represent the data found in instances
- Associations: show how instances are related
- Generalizations: represent inheritance hierarchies
- Operations: represent the actions by instances
Classes
- A class describes a collection of instances with common properties and similar behavior.
Employee
- Name
- Rank
- GetRank()
- SetRank()
Attributes
- Noun phrase, singular, camelCase
Operations
- Verb phrase, camelCase
More information:
A class describes a collection of instances with common properties and similar behavior.
Class diagram:
- Employee
- name : string
- rank : Rank
+ getRank() : Rank
+ setRank(rank : Rank) : void
Attributes and operations can have additional visibility and type information.
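The Employee box above maps directly to code. A minimal Python sketch (the `Rank` values are illustrative; Python has no enforced visibility, so the "-" private members are marked only by the leading-underscore convention):

```python
class Rank:
    # illustrative rank values; the diagram only names the type
    JUNIOR, SENIOR = "junior", "senior"

class Employee:
    """Employee class box: attributes are private ("-"), operations public ("+")."""
    def __init__(self, name: str, rank: str):
        self._name = name          # - name : string
        self._rank = rank          # - rank : Rank
    def get_rank(self) -> str:     # + getRank() : Rank
        return self._rank
    def set_rank(self, rank: str) -> None:  # + setRank(rank : Rank) : void
        self._rank = rank
```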
An association describes how instances (of two classes) reference each other.
Association diagram:
- Employee
- Company
- workFor
- name
- getRank()
- setRank()
Reading order: left entity <association> right entity.
Note: multiplicities carry a lot of meaning, so they should always be given.
Multiplicities describe how many instances (of the two classes) are related by an association.
Multiplicity diagram:
- Employee
- Company
- Board
- workFor
- name
- rank
- getRank()
- setRank()
- “non-empty”
- “many”
- “optional”
Symbol at an association end indicates how many instances of this end are related to one instance on the other end.
Note: multiplicities are difficult to get right, so always read them in both directions.
**Typical multiplicities**
- **one-to-many**: common, strictly hierarchical relations
- **many-to-many**: common, matrix relations
- **one-to-one**: often abused

**Association directionality**
- associations are normally bi-directional:
- for each **Order**, we know **Items** it contains
- for each **Item**, we know which **Order** it belongs to
- association direction can be restricted (**navigable**):
- can only go from **Order** to **Item**
- ... but not vice versa
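A restricted (navigable) association shows up in code as references on one side only. A minimal Python sketch of the Order/Item example (method names are illustrative):

```python
class Item:
    """Non-navigable end: an Item keeps no reference back to its Order."""
    def __init__(self, name: str):
        self.name = name

class Order:
    """Navigable end: an Order can reach the Items it contains."""
    def __init__(self):
        self._items: list[Item] = []   # Order -> Item only
    def add_item(self, item: Item) -> None:
        self._items.append(item)
    def items(self) -> tuple:
        return tuple(self._items)
```

Making the association bi-directional would mean also storing an `order` back-reference inside `Item` and keeping the two ends consistent.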

**Exercises**
Model with appropriate multiplicities:
1. docents teaching courses taken by students
2. musicians planning performances of different plays at different venues
**Named association ends**
- association ends can be named (**role names**)
- associations can be reflexive

**Association classes**
- some attributes can be difficult to allocate to classes:
- where should **grade** go?
- use **association class**:
- association name disappears
- each **Student-Class** link contains an object of type **Registration**
**Association classes can be eliminated.**
- replace association + association class

*couldn’t get this working in Astah...*
by two (one-to-many) associations:
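The elimination can be sketched in code: the association class becomes a plain link class, with one-to-many collections on both former ends. A minimal Python sketch (names follow the Student/Course/Registration example; `grade` is the attribute that fit neither class):

```python
class Student:
    def __init__(self, name: str):
        self.name = name
        self.registrations = []    # one-to-many: Student -> Registration

class Course:
    def __init__(self, title: str):
        self.title = title
        self.registrations = []    # one-to-many: Course -> Registration

class Registration:
    """Former association class: one link object per Student-Course pair,
    carrying the attribute (grade) that belongs to the link itself."""
    def __init__(self, student: Student, course: Course, grade: str):
        self.student, self.course, self.grade = student, course, grade
        student.registrations.append(self)
        course.registrations.append(self)
```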

**Aggregation**
- Aggregations represent part-whole associations between instances.

Use when
- the parts are partOf the aggregate
- (or the aggregate isComposedOf the parts)
- whoever owns or controls the aggregate, also owns or control the parts
**Aggregation hierarchies**
- typically one-to-many

- aggregation can be arranged hierarchically
- NOTE: not the same as inheritance
**Composite associations**
- Composite associations represent strong part-whole associations between instances.

Use when
- the part cannot exist without the whole aggregate
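In code, composition usually means the whole creates and owns its parts, while plain aggregation merely references independently existing objects. A minimal Python sketch (House/Room and Library/book are illustrative domains, not from the slides):

```python
class Room:
    def __init__(self, name: str):
        self.name = name

class House:
    """Composition: the whole builds and owns its parts; a Room is never
    created outside, or shared with, another House."""
    def __init__(self, room_names):
        self._rooms = [Room(n) for n in room_names]  # parts built inside the whole
    def room_count(self) -> int:
        return len(self._rooms)

class Library:
    """Aggregation for contrast: books exist independently and are
    merely referenced by the aggregate."""
    def __init__(self, books):
        self.books = list(books)
```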
**Composite associations vs. attributes**
- one-to-one composite associations can also be modeled as attributes (and vice versa)
**Exercises**
- Which mechanism (association, aggregation, or composition) should be used to model the following relations:
1. telephone and handset
2. school and teachers
3. book and chapters
4. country and provinces
5. polygon and lines
**Generalization**
Generalization represents inheritance hierarchies. This is a relation on classes, not an association between instances!
Inheritance must obey the *isa rule*:
- "a subclass is a superclass" must make sense!
**Generalization**
- use a discriminator label to denote the criterion used to distinguish different subclasses
- in particular for disjoint generalization
**Pitfalls – multiple generalization**
- UML allows multiple inheritance
- but Java doesn’t...
- and there are problems with repeated attributes
**Pitfalls – overgeneralization**
- no difference other than discriminator label
- better solution:
**Pitfalls – class change**
- instances should never change their class!
**Interfaces**
An **interface** describes a *portion of the visible behaviour* of a set of objects.
- similar to classes, except they lack instance variables and implemented methods
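The difference to classes can be sketched in code (illustrative names): an interface declares visible behaviour only, without instance variables or method bodies.

```java
// An interface: behaviour contract only, no fields, no implemented methods.
interface Printable {
    String print();
}

// A class provides the implementation.
class Report implements Printable {
    public String print() { return "report"; }
}
```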
---
**Domain Class Model**
**A recipe to cook domain class models...**
1. add classes (*without* attributes)
- identify relevant real system objects and represent as classes
2. add generalizations
3. add associations (*with* multiplicities)
- identify relations between identified objects
4. add aggregations and compositions
- check whether really necessary
5. add attributes (*no* operations)
- identify relevant core attributes and add to corresponding classes
6. stir until done
---
**Class diagrams can be used to model the problem at different levels of abstraction.**
- **domain model**
- developed in domain analysis to understand the domain
- models also aspects of the domain that will not be implemented by the system
- also called **exploratory domain model**
- **shrink & refine**
- **system domain model**
- models only aspects of the domain that are implemented by the system
- **grow & refine**
- **system model**
- models also classes outside the domain but used to build the user interface and system architecture
---
**Domain class model**
The **domain class model** contains:
- **relevant entities as classes:**
- physical objects
- persons, organizations (**actors**)
- events, processes, abstractions
- **links between entities as associations:**
- relations
- communications
- part/whole relations (**aggregation, composition**)
**Discovering domain classes**
**Noun phrase analysis:**
- analyze (textual) documents describing the system
- use cases
- requirements documents
- extract the nouns and noun phrases
- eliminate nouns that
- are redundant, vague, or highly general
- are too specific or represent specific instances
- refer to the entire system or objects outside the application
- pay attention to nouns that describe different user types or other actors
**Discovering associations**
- start with central and most important classes
- work outwards towards the less important classes
- add an association if one class
- possesses or controls
- is related to
- communicates with
- is a part of
- is a member of
- some other class in the model
- label it clearly
- specify the multiplicity at both ends
- KISS: keep it simple
**Pitfalls – transient associations**
- an association is only legitimate if the links "survive" beyond execution of an operation
- links are stored in database
- if nothing needs to be stored, rethink the association
**Pitfalls – actions as associations**
- actions should not be modelled as associations but as association classes
- store information associated with action
- different operations access different attributes
Pitfalls – wrong multiplicities
Sanity-check the multiplicities with a few questions:
- Do we model **generic / indistinguishable** or **individual / distinguishable** items?
- Do we model a **static view (snapshot)** or a **dynamic view (history)**?
- Do we model **initial** or **exceptional** situations correctly?
Discovering attributes
- information that must be maintained in each class
- nouns rejected as classes may become attributes
- attributes should generally contain a simple value
- string, number, date, ...
- if a subset of a class’s attributes form a coherent group, then create a new class from these attributes
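The last rule can be sketched in code (illustrative `Address`/`Patient` names): attributes that always travel together are extracted into their own class.

```java
// A coherent group of attributes (street/city/zip) extracted into a new class
// instead of keeping three separate attributes in Patient.
class Address {
    String street, city, zip;
}

class Patient {
    String name;
    final Address address = new Address();
}
```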
Example: meal ordering system
Pitfalls – repeated attributes
It is not good to have repeated attributes:
Discovering generalizations / interfaces
There are two ways to identify generalizations:
- **bottom-up**: group together similar classes creating a new superclass
- **top-down**: look for more general classes first, specialize them if needed
Create an interface, instead of a superclass if:
- some of the classes already have superclasses
- the classes are very dissimilar except for having a few operations in common
- different implementations of the same class might be available
Data Dictionary
Goals of the data dictionary
- **find** terms quickly
- sort and cross-reference entries
- **understand** concepts without knowledge of UML
- use natural language descriptions
- ensure consistency with diagrams
- **eliminate** ambiguities
- give precise definitions
- use terms consistently
- list commonly used alternative terms
- **explain** design decisions
Data dictionary
The **data dictionary** is a **centralized repository** of information about data such as meaning, relations to other data, origin, usage, and format.
- **glossary** that describes artefacts from diagrams
- contains **descriptions** and **restrictions** that cannot be expressed (easily) within the diagrams
- serves as **communication basis** between the different **stakeholders**
Source: Lethbridge/Laganiere, Object-Oriented Software Engineering
Elements of the data dictionary
- **use case diagrams**: description of **actors**
- including different scenarios and exceptions
- **domain class model**: description of **classes**
- including properties and attributes
- description of **associations**
- discussion of multiplicities
---
Example: meal ordering system
**Menu (class)**:
A list that contains the **dishes** for a day. The menu is created by the **kitchen staff**, and released by the **supervisor**. The system only contains one menu at a time.
Possible attributes: date
**orders (association)** between **Patient** and **Dish**
Describes which types of the different **dishes** a **patient** has ordered for the following day. Each **patient** can normally order one **breakfast**, one **lunch**, and one **dinner**; if they get released the following day, they can only order **breakfast**. Each **dish** can be ordered by many **patients**. [...]
**Supervisor (actor, class)**
[...]
---
Recap – what have we modeled so far?
- **use case diagrams** to capture the interactions between actors and system
- dynamic, high-level system model
- **class diagrams** to capture the domain structure
- static, high-level domain model
Neither of the two diagram types reflects:
- the **temporal order** of interactions
- the **internal communication** structure
---
Sequence Diagrams
**Sequence diagram essentials**
Sequence diagrams describe the interactions of related **objects** in **temporal order**.
Main elements:
- **lifeline boxes**: represent the interacting **objects**
- **lifelines**: represent the **temporal order**
- **messages**: represent the object **interactions**
- **control boxes**: represent **control flow** (alternatives, options, loops)
Modeling actor-system interactions
Sequence diagrams can be used to model the scenarios contained in the use cases:
- **communication** between actor and system
- **system operations** (i.e., user requests)
- **system events** (i.e., system responses)
- sequential **order** of operations and events
Modeling actor-system interactions
Sequence diagram notation
- **lifeline box** (actor)
- **lifeline box** (object)
- **operation** (message)
- **event** (message)
- **system operation**
- **system event**
- **actors**
- **system**
- **messages** can have parameters
**Lifeline boxes** represent the interacting **objects**:
- **named** object of given class (default)
- **anonymous** object of given class
- **object without class**
- **actor**
Sequence diagram notation
- **synchronous** communication
- **asynchronous** communication
**Activity zones** can express nested control flows
- **C must respond** before **B**
Sequence diagram notation
- **alternative**: exactly one of the possible responses happens (depending on conditions)
- **option**: a response happens only if its condition holds
Sequence diagrams – further notation
- **loops**
- **object creation, object deletion**
- **not required to model actor-system interaction**
A recipe to cook sequence diagrams...
1. identify scenarios
- look at use cases
- rule of thumb: multiple scenarios from each use case
2. for each scenario:
1. identify the actors
2. name the (single) system operation
- system operations should be independent
- use activity diagrams to compose
3. identify the parameters
4. model the sequence of system events
- find expressive names for the events
5. add exceptions / error cases
- use alt / opt
Example: meal ordering system
3. [Create menu] The system shall allow the kitchen staff to create menu for the following day:
1. System shall allow kitchen staff to add/delete meals to a not yet released menu
2. System shall allow kitchen staff to release menu
3. System shall ensure when releasing the menu that:
1. menu contains at least one dish for each meal
2. menu has a maximum of 4 dishes for any meal
4. System shall ensure release of menu for the following day after the current day ordering process has been finished and before 8am the following day
1. System shall ensure that released menus cannot be changed
2. System shall provide same menu to all wards after release
3. System shall inform ward supervisors about new menu release
Activity diagram essentials
Activity diagrams describe the stepwise flow of activities and actions.
Main purpose:
- model both organisational (i.e., workflows) and computational processes
- identify candidate system use cases
- through the examination of business workflows
- model flow between (or within) use cases
- combine all sequence diagrams for a single actor/use case
- identify pre- and post-conditions of use cases
Activity Diagrams
Activity diagram essentials
Activity diagrams describe the stepwise flow of activities and actions.
Main elements:
- actions / activities: represent the executable steps
- transitions: represent the control flow
- split / merge: represent decisions
- fork / join: represent concurrency
- swim lanes: structure the control flow
Activity diagram – basic structure
Origin:
- flow charts
- program execution plans
Activities:
- sequence of actions
- transitions immediate (triggerless)
- exit at final state
Activity diagram – actions vs. activities
actions:
- executable atomic computation in the control flow
- cannot be decomposed
activities:
- also executable units in the control flow but non-atomic
- can be decomposed
- can be represented by a separate activity diagram
Activity diagram – decisions
decision node:
- only one branch can execute
- use mutually exclusive conditions – [else] possible
merge node:
- reunite alternative control flows
Activity diagram – concurrency
split (fork):
- B and C only start after A
- B and C mutually independent
- both branches execute
synchronization (join):
- D happens only after B and C
- independent control flows are synchronized
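The fork/join semantics above can be sketched in code. This is an illustration, not part of the slides: B and C run independently after A; D happens only after both have finished.

```java
import java.util.concurrent.CompletableFuture;

class ForkJoinDemo {
    static String run() {
        String a = "A";                                       // action A
        CompletableFuture<String> b =
            CompletableFuture.supplyAsync(() -> a + "B");     // fork: branch B
        CompletableFuture<String> c =
            CompletableFuture.supplyAsync(() -> a + "C");     // fork: branch C
        return b.join() + c.join() + "D";                     // join, then D
    }
}
```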
Activity diagram – termination nodes
activity final:
- entire activity is ended
- including other flows
flow final:
- only current flow is ended
- other flows can continue
**Activity diagram – swim lanes**
Use swim lanes to structure activity diagrams:
- one lane per class
- split/merge and fork/join can span multiple lanes
**A recipe to cook activity diagrams...**
1. **create one activity diagram for each** actor / use case combination
- merge all scenarios (i.e., sequence diagrams) that are triggered by each actor
2. **refine** the system operations
- add internal activities and transitions
- add GUI activities
- important: restrict to actor’s point of view
3. **integrate** system events
4. **integrate** conditions / nesting
- take guards from sequence diagrams
---
**Example: meal ordering system**
6. **[Take orders]** System shall let nurse take orders of patients for the following day:
1. System shall only allow taking orders from released menu
2. System shall only allow nurses to take order if patient is from their ward
3. System shall check the patient's availability
1. System shall ensure that patients not discharged the following day order exactly three meals
2. One breakfast, one lunch, one dinner
4. System shall save the ID of nurse taking the order
5. System shall ensure that orders cannot be changed after sending them to the kitchen
6. System shall ensure that orders are never deleted
**Example: meal ordering system**
6. **[Take orders] System shall ensure that patients not discharged the following day order exactly three meals**
1. One breakfast, one lunch, one dinner
2. System shall make sure that patients that are discharged the following day can only order breakfast
---
**Example: meal ordering system**
6.3 System shall check the patient's availability
1. System shall ensure that patients not discharged the following day order exactly three meals
2. One breakfast, one lunch, one dinner
3. System shall make sure that patients that are discharged the following day can only order breakfast
Example: meal ordering system
System Class Models
Class diagrams can be used to model the problem at different levels of abstraction.
- **Domain model**
- developed in domain analysis to understand the domain
- models also aspects of the domain that will not be implemented by the system
- also called **exploratory domain model**
- **System domain model**
- models only aspects of the domain that are implemented by the system
- **System model**
- models also classes outside the domain but used to build the user interface and system architecture
System class model development
- determine the **system boundary**
- actors are outside the system
- determine the **system tiers**
- **presentation tier**
- **application tier** (middle tier, business logic)
- **data tier**
- use UML stereotypes to denote which tier a class belongs to
**Derive the system class model systematically from domain class model.**
<<Boundary>> classes
Boundary classes constitute the interface between system and environment:
- **Presentation layer** (GUI)
- interfaces to other systems
- sensors and switches to control external devices
- actors communicate with the system only via boundary classes
**Design rule: one boundary class per actor.**
Control classes orchestrate the system operations:
- application layer
- encapsulate business processes and business logic
⇒ “glue” between boundary and entity classes
Entity classes manage the application data and the internal system state:
- data layer (persistence, database system)
- includes access methods
- data for actors must be reflected into the system via additional entity classes
⇒ connected to control and other entity classes via relations
Design rule: one control class per use case.
Layered architecture
Associations in the system class model
A recipe to cook system class models...
1. start with the domain class model
2. identify actors
– check in the use case diagrams
3. identify boundary classes for actors
– represent the user interface
4. identify entity classes
– map properties into attributes
5. insert entity classes for actors (if required)
– reflect necessary properties of the actors in the system...
A recipe to cook system class models...
6. identify control classes for use cases
– between boundary and entity classes
– typically one control class per use case
7. ensure 1:1 associations between actor/boundary and boundary/control classes
– ensure that actors only talk to boundary classes
8. check model for completeness
– insert new associations (if necessary)
– model might differ structurally from domain class model
9. complete attributes in all classes
Example: boundary classes for actors
Example: actors
Insert a new boundary class for each actor, or repurpose an existing class!
Example: actors?
Check the use case diagrams for actors!
Example: boundary classes for actors?
A recipe to cook system class models...
1. start with the domain class model
2. identify actors
- check the use case diagrams
3. identify boundary classes for actors
- represent the user interface
4. identify entity classes
- map properties into attributes
5. insert entity classes for actors (if required)
6. identify control classes for use cases
- between boundary and entity classes
- typically one control class per use case
7. ensure 1:1 associations between actor/boundary and boundary/control classes
8. check model for completeness
- insert new associations (if necessary)
9. complete attributes in all classes
Example: entity classes
Example: entity classes for actors
Insert entity classes for actors if the system needs to keep actor data!
Classes that are not actor or boundary classes are entity classes!
Example: control classes for use cases?
Insert a control class for each use case and connect with the boundary classes according to the use case / actor relation!
Example: control classes for use cases
Insert a control class for each use case and connect with the boundary classes according to the use case / actor relation!
Example: enforce 1:1 relations
Move associations from <<actor>> to corresponding <<control>>!
irrelevant for requirements
Example: enforce 1:1 relations
Move associations from <<actor>> to corresponding <<control>>!
all of the supervisor’s associations are in control – replace by actor!
Example: enforce 1:1 relations
Move associations from <<actor>> to corresponding <<control>>!
same procedure for kitchen staff
Example: enforce 1:1 relations
Move associations from <<actor>> to corresponding <<control>>!
integrate NurseData into system; change direction and multiplicity of association
Example: enforce 1:1 relations
integrate NurseData into system; move worksOn association from <<actor>> to <<entity>>
A recipe to cook system class models...
1. start with the domain class model
2. identify actors
- check in the use case diagrams
3. identify boundary classes for actors
- represent the user interface
4. identify entity classes
- map properties into attributes
5. insert entity classes for actors (if required)
- reflect necessary properties of the actors in the system
6. identify control classes for use cases
- between boundary and entity classes
- typically one control class per use case
7. ensure 1:1 associations between actor/boundary and boundary/control classes
8. check model for completeness
- insert new associations (if necessary)
- model might differ structurally from domain class model
9. complete attributes in all classes
Example: check diagram
Example: complete attributes
Result: system class model
Recap – what have we modeled so far?
- use case diagrams and sequence diagrams to capture the interactions between actors and system
- class diagrams to capture the domain and system structure
- activity diagrams to capture the step-wise flow of activities and actions
None of the diagrams focuses on the level of individual objects.
Communication diagram essentials
Communication diagrams describe the flow of communications between objects along the associations.
Main purpose:
- more detailed model of system operations
- serves as blue-print for implementation
Design rule: one communication diagram per system operation.
Message format
number: [guard] variable := name(parameters)
- messages have name and parameters
- message send can be guarded
- guard must be checked locally
- message only sent if guard evaluates to true
- messages can store results in variables
- variables local (typically in control object)
- messages ordered by number
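A message in this format might look as follows (a hypothetical guarded message from the change-seats scenario below; the names are illustrative):

```
2: [enough seats] ok := changeBooking(bNr, seats)
```

Message 2 is only sent if the locally checked guard `enough seats` holds; the result is stored in the local variable `ok`.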
A recipe to cook communication diagrams...
1. create one communication diagram for each system operation
2. identify actor and control object
- identify actor in sequence diagram
- identify control object in system class model
- ignore communication between boundary and control
3. identify the system operation
- identify name and parameters
- ensure consistency with sequence diagram
...
A recipe to cook communication diagrams...
4. identify collaborators
- follow association links in system class model
- introduce collections for set-valued relations
5. derive message flow
- define messages to
▶ process and store data in objects
▶ create and delete objects
▶ response messages (cf. sequence diagram)
- define order of messages (incl. conditionals)
▶ take pre-conditions into account
6. update system class model if necessary
- add links and attributes
Example: change seats on booking
Change number of seats on booked public tour
- An existing booking can be changed if the scheduler enters the booking number and the new number of seats.
- If there are not enough seats available, the scheduler is notified, and nothing changes;
- otherwise, the booking is changed and scheduler is notified.
from the requirements
Example: change seats on booking
2. **identify** actor and control object
- **identify** actor in sequence diagram
- **identify** control object in system class model
Example: change seats on booking
3. **identify** the system operation
- **identify** name and parameters
- ensure consistency with sequence diagram
Example: change seats on booking
4. **identify** collaborators
- follow association links in system class model
- introduce collections for set-valued relations
Example: change seats on booking
Second step:
From the Booking we can access the Tour.
The BookingSystem needs to maintain several added bookings in a collection. It uses the booking number to find the corresponding booking (attribute nr) in the collection.
Example: change seats on booking
Third step:
From the Tour we can access the Bus (and its capacity and current load).
Example: change seats on booking
Fourth step:
4. derive message flow
- define messages to
▶ process and store data in objects
▶ create and delete objects
▶ response messages (cf. sequence diagram)
- define order of messages (incl. conditionals)
▶ take pre-conditions into account
Example: change seats on booking
Fifth step:
Final step – update booking and acknowledge:
Example: change seats on booking
6. **update** system class model if necessary
- add links and attributes
Message flow
- message flow starts with call of system operation
- receiver is control object
- messages are **only** sent as reaction to receiving another message
- messages can **only** be sent along links that are instances of associations in the system class model
- update class model if missing
- variables are always local
- typically in the control object
Implementation Class Models
Class diagrams can be used to model the problem at different levels of abstraction.
- **domain model**
- models also aspects of the domain that will not be implemented by the system
- **system domain model**
- models only aspects of the domain that are implemented by the system
- **system model**
- models also classes outside the domain but used to build the user interface and system architecture
- **implementation model**
- represents system model using programming language constructs
Recap – what have we modeled so far?
- **use case diagrams** and **sequence diagrams** to capture the interactions between actors and system
- **class diagrams** to capture the domain and system structure
- **activity diagrams** to capture the step-wise flow of activities and actions
- **communication diagrams** to capture the flow of communications between objects along the associations
None of the diagrams is aimed at the implementation level.
Implementation class models serve as foundation for the implementation.
Goal: **systematic derivation** of the implementation classes from system class model.
- contains **complete description** of all class elements (attributes and methods)
- types, visibility, multiplicities
- still uses UML syntax...
- ...but relies on constructs available in the programming language
A recipe to cook implementation class models...
1. **identify** attributes
- retain all attributes from (updated) system class model
- represent all associations as attributes
- use navigation order to identify host class
2. **identify** methods
- represent messages from communication diagrams as methods
- use navigation order to identify host class
- control classes host system operations
3. **determine types and properties** of attributes and methods
- including visibility
4. **refactor** class diagram to match programming language
- UML != Java...
- replace association attributes by references
- replace multiple inheritance by interfaces
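The last refactoring step can be sketched in code (illustrative `Vehicle`/`Schedulable`/`Bus` names): Java allows only single class inheritance, so a second "parent" from the UML model becomes an interface.

```java
// The second superclass from the UML model is turned into an interface.
interface Schedulable {
    int capacity();
}

class Vehicle { }

// Single inheritance from Vehicle, plus the interface.
class Bus extends Vehicle implements Schedulable {
    public int capacity() { return 50; }
}
```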
---
Example: Bus booking system
1. **identify** attributes
- retain all attributes from (updated) system class model
- represent all associations as attributes
- use navigation order to identify host class
3. **determine types and properties** of attributes and methods
- including visibility
---
Extended syntax for attributes

---
Unidirectional associations
Communication happens only in one direction:
A is client, B is server
(or: A "knows" B, B "doesn’t know" A)
Realizing unidirectional associations
Optional references:
```java
public class A {
    private B b;               // optional reference: may be null
    public void meth(B b) {
        this.b = b;
    }
    ...
}

public class B {
    ...
    public String print() {
        return "hello B";
    }
    ...
}
```
Realizing unidirectional associations
Mandatory references:
```java
public class A {
    private final B b;         // mandatory reference: set once at construction
    public A(B b) {
        this.b = b;
    }
    ...
}

public class B {
    ...
    public String print() {
        return "hello B";
    }
    ...
}
```
Realizing unidirectional associations
Multiple references (fixed number):
```java
public class A {
    private Collection<B> bs;
    public void add(B b) { bs.add(b); }
    public void remove(B b) { bs.remove(b); }
}
```
Implementation in Java
```java
import java.util.ArrayList;
import java.util.Collection;

public class A {
    private Collection<B> bs = new ArrayList<>();
    public void add(B b) { bs.add(b); }
    public void remove(B b) { bs.remove(b); }
}
```
Realizing unidirectional associations
Multiple references (variable number):
```java
public class A {
    private Collection<B> bs;
    public void add(B b) { bs.add(b); }
    public void remove(B b) { bs.remove(b); }
}
```
Implementation in Java
```java
import java.util.ArrayList;
import java.util.Collection;

public class A {
    private Collection<B> bs = new ArrayList<>();
    public void add(B b) { bs.add(b); }
    public void remove(B b) { bs.remove(b); }
}
```
Bidirectional associations
Communication happens in both directions:
- association is undirected
- messages are sent in both directions
Generating class skeletons
```java
public class Buchung {
    private int sitzplaetze;
    private int nr;
    private Tour tour;

    public int getNr() {
        return nr;
    }
    public Tour getTour() {
        return tour;
    }
    public int getSitzplaetze() {
        return sitzplaetze;
    }
    public void setSitzplaetze(int plaetze) {
        this.sitzplaetze = plaetze;
    }
}
```
getters/setters can be auto-generated
Example: Bus booking
Methods can be read off the communication diagrams:
1. represent messages from communication diagrams as methods
2. use navigation order to identify host class
3. control classes host the system operations
```java
public int getBusPlaetze() {
    return busPlaetze;   // returns the bus capacity, not the booking number
}
```
```java
// The collection of bookings belongs to the booking system, not to Buchung.
public class Buchungssystem {
    private Collection<Buchung> buchungen = ...;
}

public class Buchung {
    private Tour tour;
    public Tour getTour() {
        return tour;
    }
}
```
```java
public class Reisetour extends Tour {
    private int sitzplaetze;
    public int getSitzplaetze() {
        return sitzplaetze;
    }
}
```
Implement generalization through inheritance
```java
public class Tour {
    private int belegtePlaetze;
    private int nr;

    public int getBelegtePlaetze() {
        return belegtePlaetze;
    }
    public void plaetzeAendern(int altePlaetze, int neuePlaetze) {
    }
}
```
Deriving Code from Models
Realizing bidirectional associations
system class model
implementation class model
Implementation in Java is more complicated – constraints...
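One common way to realize a bidirectional one-to-many association is sketched below, using the slides' `Tour`/`Buchung` names. This is an illustration under the assumption that the link is updated in a single method so that both ends stay consistent.

```java
import java.util.ArrayList;
import java.util.List;

class Tour {
    final List<Buchung> buchungen = new ArrayList<>();
}

class Buchung {
    private Tour tour;
    Tour getTour() { return tour; }
    void setTour(Tour t) {                         // maintains both directions
        if (tour != null) tour.buchungen.remove(this);  // unlink old whole
        tour = t;
        if (t != null) t.buchungen.add(this);           // link new whole
    }
}
```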
Generating functions

```java
public Systemereignis gebuchtePlaetzeEinerOeffentlichenReisebustourAendern(
        int bNr, int neuePlaetze) {
    // calls taken from the communication diagram
    Buchung b = findeBuchung(bNr);
    Tour t = b.getTour();
    int belegtePlaetze = t.getBelegtePlaetze();
    int busPlaetze = t.getBusPlaetze();
    int altePlaetze = b.getSitzplaetze();
    if (!(belegtePlaetze + neuePlaetze - altePlaetze <= busPlaetze))
        return Systemereignis.AENDERUNG_AUS_PLATZGRUENDEN_NICHT_MOEGLICH;
    b.setSitzplaetze(neuePlaetze);
    t.plaetzeAendern(altePlaetze, neuePlaetze);
    return Systemereignis.AENDERUNG_DURCHGEFUEHRT; // success event (name assumed)
}
```

Guard holds implicitly after the early return
```java
public class Tour {
    private int belegtePlaetze;
    public void plaetzeAendern(int altePlaetze, int neuePlaetze) {
        belegtePlaetze = belegtePlaetze + neuePlaetze - altePlaetze;
    }
}
```
Software Design Principles
prin-ci-ple (noun):
1. a basic truth or theory;
2. an idea that forms the basis of something.
Software design principles...
• are abstract guidelines
• become practice through methods and techniques
– often methods and techniques are packaged in a methodology
– methodologies can be enforced by tools
• apply to process and product
Key design principles
- Rigor and formality
- Separation of concerns
- Modularity
- coupling and cohesion
- Abstraction
- information hiding
- hierarchical structure
- Design for change
- Generality
- Incrementality
Key principle #1: Rigor and formality
"Software Engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software." (IEEE Std 610.12)
- even creative activities (e.g., design, programming) must be practiced systematically
- rigor: the quality of being very exact, careful, or strict
- any systematic approach
- formality: rigor at the highest degree
- typically mathematical methods
Key principle #1: Rigor and formality
Examples (product):
- systematic (rigorous) transformation of models
- systematic (rigorous) construction of test cases
- mathematical (formal) program correctness proofs
Examples (process):
- documentation of development steps
- cleanroom software development
Key principle #2: Separation of concerns
- aka "divide and conquer"
- develop software in a way that the issues can be addressed one at a time
- supports parallelization of efforts and separation of responsibilities
Key principle #2: Separation of concerns
Examples (product):
- different phases of a compiler
- protocol stack
- model-view-controller architecture
Examples (process):
- waterfall model
- lazy code optimization
Key principle #3: Modularity
- modularity is separation of functional concerns
- modules
- enforce logical boundaries
- separate interface from implementation
- modular languages support separate compilation and deployment (modules as components)
- modularity is the cornerstone of software design:
"Modularity is the single attribute of software that allows a program to be intellectually manageable."
G. J. Myers
Key principle #3: Modularity
Informally, modularity means decomposition into subprograms and tasks.
Problem: this is difficult to achieve:
- What should be a subprogram or task and why?
- What items should be parameters?
- What items should be “global” variables?
- What should be in the “main” program?
Cohesion and coupling
- modules should be highly cohesive
- items in a module are closely related to one another
- modules understandable as a meaningful unit
- modules should exhibit low coupling
- modules have low interactions with others
- modules understandable separately
Cohesion levels
- coincidental cohesion (low)
- module items grouped randomly
- logical cohesion (low)
- module items perform similar functions
- temporal cohesion (low)
- module items are activated at the same time
- communicational cohesion (medium)
- all module items operate on the same input or produce the same output
- sequential cohesion (medium)
- one module item’s output is another one’s input
Cohesion and coupling
- high cohesion and low coupling ⇒ simple interfaces
- simpler communication
- simpler correctness proofs
- changes influence other modules less often
- reusability increases
- comprehensibility improves
Cohesion levels
- functional cohesion (high)
- each item is necessary for the execution of a single function
- object cohesion (high)
- each operation provides functionality which allows object attributes to be modified or inspected
- inheritance weakens cohesion
- to understand a component, the super-classes as well as the component class must be examined
Cohesion is not formally defined and often difficult to classify.
Coupling mechanisms
- content or pathological coupling (high)
- one module modifies or relies on the internals of another module (e.g., accessing local data)
- changing the way the second module produces data requires changing the dependent module
- common or global coupling (high)
- two modules share the same global data (e.g., a global variable)
- changing the shared resource requires changing all the modules using it
- external coupling (high)
- two modules share an externally imposed data format, communication protocol, or device interface
### Coupling mechanisms
- **control coupling** (medium)
- one module controls the flow of another (e.g., passing a what-to-do flag)
- **data coupling** (medium)
- modules share data only through parameters
- **message coupling** (low)
- achieved by state decentralization (as in objects) and communication via parameters or message passing
**High coupling leads to ripple effects when the code is changed.**
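The contrast between control coupling and data coupling can be sketched in Java. This is a minimal illustration with hypothetical class and method names, not code from the slides:

```java
// Control coupling: the caller passes a what-to-do flag that steers
// the callee's internal control flow.
class ReportPrinter {
    String print(String text, boolean asUpperCase) {   // flag = control coupling
        return asUpperCase ? text.toUpperCase() : text;
    }
}

// Data coupling: modules share only the data they need via parameters;
// the callee decides nothing on the caller's behalf.
class UpperCasePrinter {
    String print(String text) {                        // data coupling only
        return text.toUpperCase();
    }
}

class CouplingDemo {
    public static void main(String[] args) {
        System.out.println(new ReportPrinter().print("report", true));  // REPORT
        System.out.println(new UpperCasePrinter().print("report"));     // REPORT
    }
}
```

Changing how `ReportPrinter` interprets its flag ripples into every caller that passes it; the data-coupled version can change its implementation freely.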
### Key principle #4: Abstraction
- **abstraction is separation of hierarchical concerns**
- **abstraction ignores details**
- type and scale of abstraction depends on purpose
- **abstraction produces models**
- (see UML)
- trades reasoning about the system by reasoning about the model
- **procedural abstraction**: stepwise refinement
- **data abstraction**: find hierarchy in the data
- application-oriented data structures
- general data structures
### Information hiding
**Combination of abstraction and modularity**
*All information about a module (and particularly how the module works) should be private to the module, unless it is specifically declared otherwise.*
- world can only see module through interface
- anything beyond that interface should be hidden
- justification: restrict changes to one location.
- real purpose: hide design decisions
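A small sketch of information hiding in Java (hypothetical `Counter` module): clients see only the interface; the representation is a hidden design decision that could later change (say, to a `long` or a database row) without touching any client:

```java
// The world can only see Counter through its public methods;
// the int field is a private design decision of the module.
class Counter {
    private int value;                        // hidden representation
    public void increment() { value++; }
    public int current() { return value; }
}

class CounterDemo {
    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        System.out.println(c.current());      // 2
    }
}
```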
### Key principle #5: Design for change
- change of algorithms
- e.g., replace inefficient sorting algorithm
- change of data representation
- e.g., from custom format to XML
- 17% of maintenance costs attributed to data representation changes
- change of underlying hardware or abstract machine
- new devices
- new release of operating system or DBMS
- change of “social” environment
- new tax regime
- EURO vs national currency in EU
**Change is inevitable, better plan for it!**
⇒ use abstraction and modularity!
**Key principle #6: Generality**
- see whether the current problem is an instance of a more general problem whose solution can be reused in other cases
- carefully balance generality against performance and cost
- sometimes the general problem is easier to solve than a special case
**Key principle #7: Incrementality**
Development should proceed in a stepwise fashion (increments):
- deliver subsets of a system early to get early feedback from expected users, then add new features incrementally
- design for change
- deal first with functionality, then turn to performance
- separation of concerns
- deliver a first prototype and then incrementally add effort to turn prototype into product
**Software Architecture**
**Software architecture by example...**
**Software architecture as abstraction**
Software architecture as abstraction
Software architecture – Definition #1
"The architecture of a software system defines that system in terms of computational components and interactions among those components."
M. Shaw and D. Garlan
statement
procedure
module
(design) pattern
architecture
Problem: purely static view
More information:
Software architecture – Definition #2
"The software architecture of a system is the [...] structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them."
Bass, Clements, and Kazman
Difference to previous definition:
- multiple system structures
- externally visible properties of components
More information:
Architectural structures
- module structure
- conceptual, or logical structure
- process, or coordination structure
- physical structure
- uses structure
- calls structure
- data flow
- control flow
- class structure
Architecture is a set of abstractions.
Other views
- Architecture is conceptual, high-level design
- Architecture is overall structure of the system
- Architecture is the structure, including the principles and guidelines governing their design and evolution over time
- Architecture is components and connectors
- Architecture is the process of designing the global organization of a software system, including:
- dividing software into subsystems
- deciding how these will interact
- determining their interfaces
Why is architecture important?
- enables everyone to better understand the system
- separation of concerns
- computation from communication
- architecture from implementation
- allows people to work on individual pieces of the system in isolation
- explicit system structure (divide and conquer)
- prepares for extension of the system
- facilitates reuse and reusability, reduces development costs
- system families, software product lines
- component-based development
Components and connectors
Standard architectural framework:
- components are connected by connectors
- building blocks with which an architecture can be described
- no standard notation has emerged yet
Types of components
- **computational**: does a computation of some sort
- e.g., function, filter
- **memory**: maintains a collection of persistent data
- e.g., data base, file system, symbol table
- **manager**: contains state + operations, state is retained between invocations of operations
- e.g., ADT, server
- **controller**: governs time sequence of events
- e.g., control module, scheduler
Types of connectors
- procedure call (including RPC)
- data flow
- e.g., pipes
- implicit invocation
- e.g., interrupts
- message passing
- shared data
- e.g., blackboard or shared data base
- instantiation
Terminology
- **component**
- functionality, class, system, subsystem, legacy system, client, server, filter ...
- **connector**
- communication, dependency, relationship, link, call, interaction, ...
- **style**
- configuration, topology, pattern, composition rules, form, ...
Different terms with similar meaning – but many subtleties
Architectural styles
- recurring organizational patterns and idioms
- established, shared understanding of common design forms
- every mature engineering field has architectures
- abstraction of recurring composition and interaction characteristics in a set of architectures
**Style #1: Call-and-return**
**Problem:**
- hierarchy of functions
- result of functional decomposition
- (usually) single thread of control
- context: language with nested procedures
**Solution:**

**Style #1: Call-and-return**
**Components:**
- functions / methods
- usually single-threaded
**Connectors:**
- procedure calls
- usually synchronously
- shared memory
**Topology:**
- hierarchy of (nested) functions
- interaction topologies can vary
**Style #1: Call-and-return**
**Strengths:**
- can change implementation without affecting clients
- can break problems into interacting agents
- distributed across multiple machines / networks
**Weaknesses:**
- components must know their interaction partners
- topology hardwired
- when partner changes, objects that explicitly invoke it must change
- indirect side effects
- if A and B both use C, A’s effects on C can surprise B
**Style #2: Multi-layered architecture**
**Problem:**
- distinct, hierarchical classes of services
- each layer acts as a ...
- service provider to layers “above”
- service consumer from layers “below”
- operating systems, product families, ...
**Solution:**

**Style #2: Multi-layered architecture**
**Components:**
- services
**Connectors:**
- (typically) procedure calls
- API protocol
**Topology:**
- nested
- interaction topologies can be
- **opaque**: layer $n+1$ can only call layer $n$
- **translucent**: layer $n+1$ can call every layer $m \leq n$
- **virtual machine** style if layers are fully opaque
**Style #2: Multi-layered architecture**
**Strengths:**
- can increase levels of abstraction
- can partition complex problems along the layers
- low coupling
- especially for opaque layers
- supports reuse
- implementation of a level can be swapped
**Weaknesses:**
- performance can suffer
- opaque layers require communicating down through several layers
**Style #2: Multi-layered architecture**
**Variant:** protocol stacks
- complementary layered architectures on sender and receiver side
- standard architecture for networks

**Examples:**
- Application program
- User account management
- Kernel (binding processes and swapping)
- Dealing with application protocols
- Dealing with connections
- Dealing with packets
- Encapsulating and routing
**Style #3: Pipe-and-filter**
**Problem:**
- independent components solve simple tasks
- each component reads input in simple format, transforms, and writes output in simple format
- data transfer typically handled by OS
- want to glue components together to build system
**Solution:**
```
ls | tee | post | lpt
file | piped | wc
```
**Components:**
- independent programs
- little local context used, no state maintained between instantiations
**Connectors:**
- pipes (i.e., queues)
- data typically transferred incrementally (compare to sequential *batch processing*)
- data transferred in pure ASCII or XML format
**Topology:**
- data flow graph
**Strengths:**
- supports reuse
- filters only need to agree on the data format
- architecture can easily be reconfigured
**Weaknesses:**
- sharing global data is expensive or limiting
- can be difficult to design incremental filters
- not appropriate for interactive applications
- error handling
- e.g., some intermediate filter crashes
- untyped, low-level data format
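The pipe-and-filter idea can be sketched in Java, with stateless filters glued together by composition. The filter names are hypothetical; the point is that the chain can be reconfigured without changing any filter:

```java
import java.util.List;
import java.util.function.Function;

// Each filter reads simple input and writes simple output;
// the "pipe" is just sequential application, so filters only
// need to agree on the data format (here: String).
class Pipeline {
    static Function<String, String> toLower = String::toLowerCase;
    static Function<String, String> trim    = String::trim;
    static Function<String, String> exclaim = s -> s + "!";

    static String run(String input, List<Function<String, String>> filters) {
        String data = input;
        for (Function<String, String> f : filters) data = f.apply(data);
        return data;
    }

    public static void main(String[] args) {
        System.out.println(run("  Hello World  ", List.of(trim, toLower, exclaim)));
        // Reconfigured chain, same filters, no filter changed:
        System.out.println(run("  Hello World  ", List.of(trim, exclaim)));
    }
}
```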
**Example:**

**Style #4: Repository architecture**
**Problem:**
- long-lived, richly structured, shared data
- to be manipulated in many different ways by different clients
**Solution:**
**Components:**
- central data repository + schema
- can be active (send notifications)
- independent operators
- interact with database by queries and updates
**Connectors:**
- messages / procedure calls
- shared data
**Topology:**
- star (components only communicate with repository)
**Style #4: Repository architecture**
**Strengths:**
- efficient way to share large amounts of data
- data integrity localized to repository module
**Weaknesses:**
- repository data model is compromise between clients
- schema evolution is difficult and expensive
- distribution can be a problem
**Example:**
**Style #5: Client/server**
**Problem:**
- some components are service providers, others are service users
- irregular usage patterns
- asymmetric relation: service requests driven by users (pull)
- context: distributed systems
**Solution:**
**Components:**
- clients: user-facing, little persistent state, active (request services)
- servers: “in the back office”, maintain persistent state and offer services, passive
**Connectors:**
- remote procedure calls or network protocols
- server doesn’t know identity of clients
**Topology:**
- clients surround the server(s)
Style #5: Client/server
**Strengths:**
- makes effective use of networked systems
- easy to add / upgrade servers
- redundancy relatively straightforward.
**Weaknesses:**
- communication may be expensive and slow
- denial of service attacks
- data interchange complicated, data integrity functionality must be implemented for each server
Example:

Style #6: Peer-to-peer
**Variant** of client/server:
- each component acts both as server and client
- no centralized component
- flexible communication structure
**Strengths:**
- efficiency: all clients provide resources
- scalability – system capacity grows with number of clients
- robustness – data is replicated over peers – no single point of failure in the system (in pure peer-to-peer style)
**Weaknesses:**
- architectural complexity – more demanding of peers (compared to client-server).
- resources are distributed and not always available
Style #7: Event-based
**Problem:**
- loosely coupled collection of components
- application likely to be reconfigured
- context: requires event handler – through OS or language
**Solution:**

**Style #7: Event-based**
Example:

**Style #8: Model-view-controller**
- **Problem:**
  - separation of UI from application
- **Solution:**
  - separation of concerns
  - standard architecture
- **Weaknesses:**
  - can be too much overhead for small models or simple UIs
- **Components:**
- **Model:** holds data
- **View:** draws visualization
- **Controller:** manages interaction
- **Connectors:**
- typically procedure calls
- **Topology:**
- typically single-threaded
**Style #8: Model-view-controller**
- **Example:** MVC on the internet
- **Model:** underlying system that manages the information.
- **View:** generates the HTML code to be displayed by the browser
- **Controller:** interprets HTTP POST transmissions coming back from the browser
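A minimal single-threaded MVC sketch in Java (hypothetical names; the "view" renders a fragment of HTML as in the web example above):

```java
import java.util.ArrayList;
import java.util.List;

// Model: holds the data, knows nothing about presentation.
class Model {
    private final List<String> items = new ArrayList<>();
    void add(String item) { items.add(item); }
    List<String> items() { return items; }
}

// View: draws a visualization of the model.
class View {
    String render(Model m) { return "<ul>" + String.join(",", m.items()) + "</ul>"; }
}

// Controller: translates user input into model updates.
class Controller {
    private final Model model;
    Controller(Model model) { this.model = model; }
    void handleInput(String input) { model.add(input); }  // e.g., an HTTP POST
}

class MvcDemo {
    public static void main(String[] args) {
        Model m = new Model();
        Controller c = new Controller(m);
        c.handleInput("first");
        System.out.println(new View().render(m));  // <ul>first</ul>
    }
}
```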
**Design Patterns**
Design patterns
A design pattern is the outline of a reusable solution to a general software design problem encountered in a particular context.
Good patterns...
• are as general as possible
• solve recurring problems
• describe a proven and effective solution
– for the problem in the indicated context
⇒ Studying patterns is an effective way to learn from the experience of others!
Pattern description
Patterns are typically described in the following style:
• Context: general situation where the pattern applies
• Problem: short description of the main difficulty
• Forces: issues to consider when solving the problem
• Solution: recommended way to solve the problem in the given context
– typically a UML class diagram
• Antipatterns: solutions that are inferior or do not work in this context (optional)
• Related patterns (optional)
• References: who developed or inspired the pattern
Example: Abstraction-Occurrence
• Context: in a domain model you find a set of related objects (occurrences) such that the objects
– share common information
– but also differ from each other in important ways
• Problem: what is the best way to represent such occurrences in a class diagram?
• Forces: represent the occurrences without duplicating the common information
• Solution:

Example: Abstraction-Occurrence
• Antipatterns:

Example: General Hierarchy
• Context: objects in a hierarchy can have one or more objects above (superiors) and below (subordinates) them, but some objects cannot have any subordinates
• Problem: how do you represent an object hierarchy where some objects cannot have subordinates?
• Forces:
– you want a flexible way to represent the hierarchy
– the objects have many common properties and operations
**Example: General Hierarchy**
- **Solution:**
![General Hierarchy Solution Diagram]
- **Antipattern:**
![General Hierarchy Antipattern Diagram]
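The General Hierarchy solution can be sketched in Java (hypothetical names): a common abstraction covers all objects, while only the composite subclass can hold subordinates, so a leaf can never acquire children:

```java
import java.util.ArrayList;
import java.util.List;

// Common abstraction for all objects in the hierarchy.
abstract class Node {
    final String name;
    Node(String name) { this.name = name; }
}

// Objects that cannot have subordinates.
class LeafNode extends Node {
    LeafNode(String name) { super(name); }
}

// Objects that may have subordinates.
class CompositeNode extends Node {
    final List<Node> subordinates = new ArrayList<>();
    CompositeNode(String name) { super(name); }
    void add(Node n) { subordinates.add(n); }
}

class HierarchyDemo {
    public static void main(String[] args) {
        CompositeNode boss = new CompositeNode("boss");
        boss.add(new LeafNode("worker"));
        System.out.println(boss.subordinates.size());  // 1
    }
}
```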
**The Gang of Four (GoF)**
- patterns originated in the mid-70’s as architectural concept *(C. Alexander, A Pattern Language)*
- first application to programming in the mid-80’s *(K. Beck and W. Cunningham)*
- became wildly popular in computer science in 1995 *(E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software)*
- known informally as the “Gang of Four” (GoF)
**GoF pattern description template**
- **Pattern Name and Classification:** descriptive and unique name that helps in identifying and referring to the pattern.
- **Intent:** description of the goal behind the pattern and the reason for using it.
- **Also Known As:** other names for the pattern.
- **Motivation:** scenario consisting of a problem and a context in which this pattern can be used.
- **Applicability:** situations in which this pattern is usable; the context for the pattern.
- **Structure:** graphical representation of the pattern: class diagrams and interaction diagrams may be used for this purpose.
- **Participants:** listing of the classes and objects used in the pattern and their role in the design.
- **Collaboration:** description of how classes and objects used in the pattern interact with each other.
- **Consequences:** description of the results, side effects, and trade-offs caused by using the pattern.
- **Implementation:** description of an implementation of the pattern; solution part of the pattern.
- **Sample Code:** illustration of how the pattern can be used in a programming language.
- **Known Uses:** examples of real usages of the pattern.
- **Related Patterns:** other patterns that have some relationship with the pattern; discussion of the differences between the pattern and similar patterns.
**GoF patterns**
- **Creational patterns** for class instantiation
- **Structural patterns** for class & object composition
- use inheritance to
- compose interfaces
- define ways to obtain new functionality via composition
- “wrap” classes to
- modify interface
- extend functionality
- restrict access
- **Behavioural patterns** for object communications
- especially useful when using multiple abstractions
**GoF patterns**
- **By Purpose**
- **Creational:** Factory Method, Abstract Factory, Builder, Prototype, Singleton
- **Structural:** Adapter, Bridge, Composite, Decorator, Facade, Flyweight, Proxy
- **Behavioural:** Chain of Responsibility, Command, Interpreter, Iterator, Mediator, Memento, Observer, State, Strategy, Template Method, Visitor
GoF patterns
Creational patterns
Singleton
- **Context:** it is very common to find classes for which only one instance should exist
- **Problem:** how do you ensure that it is never possible to create more than one instance of a singleton class?
- **Forces:**
- the use of a public constructor cannot guarantee that no more than one instance will be created
- the singleton instance must also be accessible to all classes that require it
Factory Method and Abstract Factory
- **Creational patterns** abstract the object instantiation process (i.e., `new`)
- hide how objects are created
- help make the overall system independent of how its objects are created and composed
- **Class creational patterns** focus on the use of inheritance to decide the object to be instantiated
- **Factory Method**
- **Object creational patterns** focus on the delegation of the instantiation to another object
- **Abstract Factory**
Singleton
- **Solution:**
```java
public class Company {
    private static Company theCompany;       // the single instance
    private Company() { }                    // private constructor: no outside instantiation
    public static Company getInstance() {
        if (theCompany == null)
            theCompany = new Company();
        return theCompany;
    }
}
```
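A self-contained usage sketch of the Singleton idea (hypothetical class name `Registry`, chosen to avoid restating the `Company` example): repeated `getInstance()` calls must return the same object:

```java
// Lazy-initialized singleton: the private constructor blocks
// outside instantiation; getInstance() is the only access path.
class Registry {
    private static Registry theInstance;
    private Registry() { }
    public static Registry getInstance() {
        if (theInstance == null) theInstance = new Registry();
        return theInstance;
    }
}

class SingletonDemo {
    public static void main(String[] args) {
        // Both references point to the one and only instance.
        System.out.println(Registry.getInstance() == Registry.getInstance());  // true
    }
}
```

Note that this lazy variant is not thread-safe as written; a synchronized accessor or eager initialization would be needed under concurrency.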
Maze example
Consider a maze game with the following classes:
Maze example
```java
class MazeGame {
    public Maze makeMaze() { return new Maze(); }         // factory method
    public Wall makeWall(int n) { return new Wall(n); }   // factory method
    public Room makeRoom(int n) { return new Room(n); }   // factory method
}
```
Maze example
```java
class EnchantedMazeGame extends MazeGame {
    public Maze makeMaze() { return new EnchantedMaze(); }         // factory method
    public Wall makeWall(int n) { return new EnchantedWall(n); }   // factory method
    public Room makeRoom(int n) { return new EnchantedRoom(n); }   // factory method
}
```
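The effect of overriding the factory methods can be shown with a small self-contained demo (stub classes stand in for the full maze example): the subclass changes *which* products are created without touching the client code:

```java
// Minimal stand-ins for the maze product classes.
class Maze { }
class EnchantedMaze extends Maze { }

class MazeGame {
    public Maze makeMaze() { return new Maze(); }   // factory method
    // Client code: uses the factory method, never `new` directly.
    public String describe() { return makeMaze().getClass().getSimpleName(); }
}

class EnchantedMazeGame extends MazeGame {
    @Override public Maze makeMaze() { return new EnchantedMaze(); }
}

class FactoryDemo {
    public static void main(String[] args) {
        System.out.println(new MazeGame().describe());           // Maze
        System.out.println(new EnchantedMazeGame().describe());  // EnchantedMaze
    }
}
```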
Maze example
```java
Maze createMaze() {                  // client code using the factory methods
    Maze maze = makeMaze();
    Room r1 = makeRoom(1);
    Room r2 = makeRoom(2);
    Door door = makeDoor(r1, r2);
    …
    return maze;
}
```
Structural patterns
- Proxy
- Provide a surrogate or placeholder for another object to control access to it.
- Bridge
- Decouple abstraction from implementation so they can vary independently.
- Flyweight
- Use sharing to support large numbers of fine-grained objects.
- Decorator
- Attach additional responsibilities to an object dynamically. Enhance or implement new functionality.
- Facade
- Provide a unified interface to a set of interfaces in a subsystem. Facade defines a higher-level interface that makes the subsystem easier to use.
- Composite
- Compose objects into tree structures to represent part-whole hierarchies. Let clients treat individual objects and compositions of objects uniformly.
GoF original patterns
- Adapter
- Context:
- you are building an inheritance hierarchy and want to incorporate it into an existing class.
- the reused class is also often already part of its own inheritance hierarchy
- Problem: how to obtain the power of polymorphism when reusing a class whose methods have the same function but not the same signature as the other methods in the hierarchy?
- Forces: you do not have access to multiple inheritance or you do not want to use it
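An object-adapter sketch in Java (hypothetical names): `LegacyLogger` has the right function but the wrong signature; the adapter delegates, so no multiple inheritance is needed and the reused class participates polymorphically in the `Logger` hierarchy:

```java
// Target interface of the existing hierarchy.
interface Logger {
    String log(String message);
}

// Reused class: same function, different signature.
class LegacyLogger {
    String writeLine(String prefix, String text) { return prefix + ": " + text; }
}

// Object adapter: wraps the adaptee and translates the call.
class LoggerAdapter implements Logger {
    private final LegacyLogger adaptee = new LegacyLogger();
    public String log(String message) { return adaptee.writeLine("LOG", message); }
}

class AdapterDemo {
    public static void main(String[] args) {
        Logger logger = new LoggerAdapter();     // used polymorphically
        System.out.println(logger.log("hello")); // LOG: hello
    }
}
```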
**Behavioral patterns**
- **Iterator**: Provide a way to access the elements of an aggregate object sequentially without exposing its underlying representation.
- **Blackboard**: Generalized observer, which allows multiple readers and writers. Communicates information system-wide.
- **State**: Allow an object to alter its behavior when its internal state changes. The object will appear to change its class.
- **Specification**: Reimplementable business logic in a framework fashion.
- **Restorer**: An alternative to the existing Memento pattern.
- **Null Object**: Designed to act as a default value of an object.
- **Memento**: Without violating encapsulation, capture and externalize an object's internal state so that the object can be restored to this state later.
- **Interpreter**: Given a language, define a representation for its grammar along with an interpreter that uses the representation to interpret sentences in the language.
- **Command**: Encapsulate a request as an object, thereby letting you parameterize clients with different requests, queue or log requests, and support undoable operations.
**GoF original patterns**
**Behavioral patterns**
- **Visitor**: Represent an operation to be performed on elements of an object structure. Lets you define new operation without changing the classes of the elements on which it operates.
- **Observer or Publish/Subscribe**: Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.
- **Strategy**: Define a family of algorithms, encapsulates each one, and makes them interchangeable. Strategy lets the algorithm vary independently from clients that use it.
- **Chain of responsibility**: Avoid coupling the sender of a request to its receiver by giving more than one object a chance to handle the request. Chain the receiving objects and pass the request along the chain until an object handles it.
- **Template method**: Define the skeleton of an algorithm in an operation, deferring some steps to subclasses. Template Method lets subclasses redefine certain steps of an algorithm without changing the algorithm's structure.
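The Observer pattern from the list above can be sketched in Java (hypothetical names): when the subject's state changes, every registered observer is notified automatically:

```java
import java.util.ArrayList;
import java.util.List;

interface Observer {
    void update(int newState);
}

// Subject: holds state and the one-to-many dependency.
class Subject {
    private final List<Observer> observers = new ArrayList<>();
    private int state;
    void attach(Observer o) { observers.add(o); }
    void setState(int s) {
        state = s;
        for (Observer o : observers) o.update(s);  // automatic notification
    }
}

// A concrete observer that remembers the last state it saw.
class Display implements Observer {
    int lastSeen;
    public void update(int newState) { lastSeen = newState; }
}

class ObserverDemo {
    public static void main(String[] args) {
        Subject subject = new Subject();
        Display d = new Display();
        subject.attach(d);
        subject.setState(42);
        System.out.println(d.lastSeen);  // 42
    }
}
```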
**GoF original patterns**
|
{"Source-Url": "http://www.cs.sun.ac.za/rw344/slides/Design.6.pdf", "len_cl100k_base": 15051, "olmocr-version": "0.1.50", "pdf-total-pages": 48, "total-fallback-pages": 0, "total-input-tokens": 301721, "total-output-tokens": 18018, "length": "2e13", "weborganizer": {"__label__adult": 0.0004534721374511719, "__label__art_design": 0.0005960464477539062, "__label__crime_law": 0.0002357959747314453, "__label__education_jobs": 0.0017843246459960938, "__label__entertainment": 7.063150405883789e-05, "__label__fashion_beauty": 0.00016427040100097656, "__label__finance_business": 0.0002149343490600586, "__label__food_dining": 0.00037217140197753906, "__label__games": 0.0007386207580566406, "__label__hardware": 0.0005383491516113281, "__label__health": 0.0002627372741699219, "__label__history": 0.00036406517028808594, "__label__home_hobbies": 0.0001226663589477539, "__label__industrial": 0.00034427642822265625, "__label__literature": 0.000293731689453125, "__label__politics": 0.00028634071350097656, "__label__religion": 0.0004620552062988281, "__label__science_tech": 0.0024700164794921875, "__label__social_life": 0.0001118779182434082, "__label__software": 0.003810882568359375, "__label__software_dev": 0.98486328125, "__label__sports_fitness": 0.0004634857177734375, "__label__transportation": 0.0005087852478027344, "__label__travel": 0.00029349327087402344}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 68535, 0.00293]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 68535, 0.66626]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 68535, 0.79955]], "google_gemma-3-12b-it_contains_pii": [[0, 943, false], [943, 2577, null], [2577, 3613, null], [3613, 4773, null], [4773, 6257, null], [6257, 7016, null], [7016, 8605, null], [8605, 9873, null], [9873, 10913, null], [10913, 12370, null], [12370, 14408, null], [14408, 15376, null], [15376, 17106, 
On The Relation of Test Smells to Software Code Quality
Davide Spadini,†* Fabio Palomba,‡ Andy Zaidman,* Magiel Bruntink,† Alberto Bacchelli‡
†Software Improvement Group, *Delft University of Technology, ‡University of Zurich
Abstract—Test smells are sub-optimal design choices in the implementation of test code. As reported by recent studies, their presence might not only negatively affect the comprehension of test suites but can also lead to test cases being less effective in finding bugs in production code. Although significant steps have been taken toward understanding test smells, there is still a notable absence of studies assessing their association with software quality.
In this paper, we investigate the relationship between the presence of test smells and the change- and defect-proneness of test code, as well as the defect-proneness of the tested production code. To this aim, we collect data on 221 releases of ten software systems and we analyze more than a million test cases to investigate the association of six test smells and their co-occurrence with software quality. Key results of our study include: (i) tests with smells are more change- and defect-prone, (ii) ‘Indirect Testing’, ‘Eager Test’, and ‘Assertion Roulette’ are the most significant smells for change-proneness and, (iii) production code is more defect-prone when tested by smelly tests.
I. INTRODUCTION
Automated testing (hereafter referred to as just testing) has become an essential process for improving the quality of software systems [12], [47]. In fact, testing can help to point out defects and to ensure that production code is robust under many usage conditions [12], [16]. Writing tests, however, is as challenging as writing production code and developers should maintain test code with the same care they use for production code [11].
Nevertheless, recent studies found that developers perceive and treat production code as more important than test code, thus generating quality problems in the tests [9], [10], [57], [82]. This finding is in line with the experience reported by van Deursen et al. [74], who described how the quality of test code was “not as high as the production code [because] test code was not refactored as mercilessly as our production code” [74]. In the same work, van Deursen et al. introduced the concept of test smells, inspired by Fowler et al.’s code smells [23]. These smells were recurrent problems that van Deursen et al. found when refactoring their troublesome tests [45].
Since its inception, the concept of test smells has gained significant traction both among practitioners [18], [42] and the software engineering research community [7], [26], [74], [76]. Bavota et al. presented the earliest and most significant results advancing our empirical knowledge on the effects of test smells [7]. The researchers conducted the first controlled laboratory experiment to establish the impact of test smells on program comprehension during maintenance activities and found evidence of a negative impact of test smells on both comprehensibility and maintainability of test code [7].
Although the study by Bavota et al. [7] made a first, necessary step toward the understanding of maintainability aspects of test smells, our empirical knowledge on whether and how test smells are associated with software quality aspects is still limited. Indeed, van Deursen et al. [74] based their definition of test smells on their anecdotal experience, without extensive evidence on whether and how such smells are negatively associated with the overall system quality.
To fill this gap, in this paper we quantitatively investigate the relationship between the presence of smells in test methods and the change- and defect-proneness of both these test methods and the production code they intend to test. Similar to several previous studies on software quality [24], [62], we employ the proxy metrics change-proneness (i.e., number of times a method changes between two releases) and defect-proneness (i.e., number of defects the method had between two releases). We conduct an extensive observational study [15], collecting data from 221 releases of ten open source software systems, analyze more than a million test cases, and investigate the association between six test smell types and the aforementioned proxy metrics.
Based on the experience and reasoning reported by van Deursen et al. [74], we expect to find tests affected by smells to be associated with more changes and defects, i.e., higher maintenance efforts and lower software quality. Furthermore, since test smells indicate poor design choices [74] and previous studies showed that better test code quality leads to better productivity when writing production code [4], we expect to find production code tested by smelly tests to be associated with more defects.
Our results meet these expectations: Tests with smells are more change- and defect-prone than tests without smells and production code is more defect-prone when tested by smelly tests. Among the studied test smells, ‘Indirect testing’, ‘Eager Test’ and ‘Assertion Roulette’ are those associated with highest change-proneness; moreover, the first two are also related to a higher defect-proneness of the exercised production code. Overall, our results provide empirical evidence that detecting test smells is important to signal underlying software issues as well as studying the interplay between test design quality and effectiveness on detecting defects is of paramount importance for the research community.
II. RELATED WORK
Over the last decade the research community spent a considerable effort in studying (e.g., [1], [3], [32], [39], [51], [55], [59], [61], [66], [72], [78]–[80]) and detecting (e.g., [33], [36], [41], [43], [46], [49], [52], [54], [70]) design flaws occurring in production code, also known as code smells [23]. At the same time, problems concerning the design of test code have only been partially explored and our literature survey showed us that our empirical knowledge is still limited.
In this section, we first discuss the literature related to test smells, then we discuss previous work that analyzed the change- and defect-proneness of code smells, as it can shed light on why test smells can also be problematic.
A. Test Smells
The importance of having well-designed test code was initially put forward by Beck [8]. Beck argued that test cases respecting good design principles are desirable since these test cases are easier to comprehend, maintain, and can be successfully exploited to diagnose problems in the production code. Inspired by these arguments, van Deursen et al. [74] coined the term test smells and defined the first catalog of 11 poor design choices to write tests, together with refactoring operations aimed at removing them. Such a catalog has since been extended by practitioners, such as Meszaros [42], who defined 18 new test smells.
From these catalogs, Greiler et al. [25], [26] showed that test smells affecting test fixtures frequently occur in a company setting. Motivated by this prominence, Greiler et al. presented TESTHOUND, a tool able to identify fixture-related test smells such as ‘General Fixture’ or ‘Vague Header Setup’ [25]. Van Rompaey et al. [76] devised a heuristic code metric-based technique that can identify two test smell types, i.e., ‘General Fixture’ and ‘Eager Test’. However, the empirical study conducted to assess the performance of the technique showed that it often misses instances of the two smells.
Turning the attention to the empirical studies that had test smells as their object, Bavota et al. [7] studied (i) the diffusion of test smells in 18 software projects, and (ii) their effects on software maintenance. They found that 82% of JUnit classes are affected by at least one test smell and that the presence of test smells has a strong negative impact on the comprehensibility of the affected classes. The high diffuseness of test smells was also confirmed in the context of the test cases automatically generated by testing tools [53].
Tufano et al. [71] conducted an empirical study aimed at measuring the perceived importance of test smells and their lifespan during the software life cycle. Key results of the investigation indicated that developers usually introduce test smells in the first commit involving the affected test classes, and in almost 80% of the cases the smells are never removed, primarily because of poor awareness of developers. This study strengthened the case for having tools able to automatically detect test smells to raise developers’ knowledge about these issues.
Finally, Palomba and Zaidman [56] investigated the extent to which test smells can be exploited to locate flaky tests, i.e., test cases having a non-deterministic behavior [40]. The main findings of the work showed that (i) almost 54% of flaky tests contain a test smell that can cause the flakiness and (ii) the refactoring of test smells removed both the design flaws and test code flakiness [56].
The work we present in this paper is complementary to the ones discussed so far: We aim at making a further step ahead by investigating the change- and defect-proneness of test smells, as well as the defect-proneness of production code tested by smelly tests.
B. Change- and Defect-proneness of Code Smells
The software engineering research community has conducted extensive work in the context of code smells in production code. More specifically, Khomh et al. [31] showed that the presence of code smells increases the code’s change-proneness. Later on, they also showed that code components affected by code smells are more fault-prone than non-smelly components [32]. Their results were confirmed by Palomba et al. [50], who found that code smells make classes more change- and defect-prone; in addition, they also found that the class’ change-proneness can benefit from code smell removal, while the presence of code smells in many cases is not necessarily the direct cause of the class defect-proneness, but rather a co-occurring phenomenon [50].
Gatrell and Counsell [24] conducted an empirical study aimed at quantifying the effect of refactoring on class change- and defect-proneness. In particular, they monitored a commercial project for eight months and identified the refactoring operations applied by developers during the first four months. Then, they examined the same classes for the second four months to investigate whether the refactoring results in a decrease of change- and defect-proneness. They compared against classes of the system that were not refactored during the same period. Results revealed that classes subject to refactoring have a lower change- and defect-proneness.
Li and Shatnawi [38] empirically evaluated the correlation between the presence of code smells and the probability that the class contains errors. They studied the post-release evolution process showing that many code smells are positively correlated with class errors. Olbrich et al. [48] studied the maintainability of two specific code smell types, i.e., ‘God Class’ and ‘Brain Class’, reporting that classes affected by such smells change less frequently and have fewer defects than non-smelly classes. D’Ambros et al. [20] studied how ‘Feature Envy’ and ‘Shotgun Surgery’ instances are related to software defects, reporting no consistent correlation between them. Finally, Saboury et al. [63] empirically investigated the impact of code smells on the defect-proneness of JAVASCRIPT modules, confirming the adverse effect of smells on source code maintainability.
III. RESEARCH METHODOLOGY
The goal of our study is to increase our empirical knowledge on whether and how test methods affected by smells are associated with higher change- and defect-proneness of the test code itself, as well as to assess whether and to what extent test methods affected by test smells are associated with the defect-proneness of the production code they test. The perspective is that of both researchers and practitioners who are interested in understanding the possible adverse effects of test smells on test and production code. We structured our study around the two overarching research questions that we describe in the following.
The first research question investigates the relationship between the presence of test smells in test code and its change/defect proneness:
RQ1. Are test smells associated with change/defect proneness of test code?
We, thus, structure RQ1 in three sub-research questions. First, we aim at providing a broad overview of the relationship of test smells and their co-occurrence with change- and defect-proneness of test code:
RQ1.1: To what extent are test smells associated with the change- and defect-proneness of test code?
RQ1.2: Is the co-occurrence of test smells associated with the change- and defect-proneness of test code?
Then, we aim at verifying whether some particular test smells have a stronger association with change- and defect-proneness of test code:
RQ1.3: Are certain test smell types more associated with the change- and defect-proneness of test code?
Considering that defect-proneness has been widely used in previous literature as a proxy metric for software quality (e.g., [20], [24], [32], [50]), in the second research question we aim at making a complementary analysis of the association of test smells with the defect-proneness of the exercised production code. In fact, if the production code exercised by tests with test smells is more defect-prone, this would be an even stronger signal of the relevance of test smells. This goal leads to our second research question:
RQ2. Is the production code tested by tests affected by test smells more defect-prone?
The expectation is that test code affected by test smells might be less effective in detecting defects [4], thus being associated with more defect-prone production code. We structured RQ2 in three sub-research questions:
RQ2.1: Are test smells associated with the defect-proneness of the tested production code?
RQ2.2: Is the co-occurrence of test smells associated with the defect-proneness of the tested production code?
RQ2.3: Are certain test smell types more associated with the defect-proneness of production code?
Similarly to RQ1, we aim at providing an overview of the role of test smells in the defect-proneness of production code, by investigating single test smells and their co-occurrence.
A. Subjects of the Study
In our study, we have to select two types of subjects: software systems and test smells.
Software systems. We consider ten OSS projects and their 221 major releases as subject systems for our study. Specifically, Table I reports the characteristics of the analyzed systems concerning (i) the number of the considered releases and (ii) size, in terms of the number of classes, methods, and KLOCs. Two main factors drive the selection: firstly, since we have to run static analysis tools to detect test smells and compute maintainability metrics, we focus on projects whose source code is publicly available (i.e., OSS); secondly, we analyze systems having different sizes and scopes. After filtering on these criteria, we randomly select ten OSS projects from the list available on GITHUB\(^1\) having different size, scope, and with a number of JUnit test cases higher than 1,000 in all the releases.
For each system, we only consider their major releases. In fact, (i) detecting test smells at commit-level is prohibitively expensive in terms of computational time and (ii) minor releases are too close to each other (in some cases there is more than one minor release per week), so very few changes are made in the source and test code. We mine these major releases directly from the systems’ GITHUB repositories.
Test smells. As subject test smells for our study, we consider those described in Table II. While other test smell types have been defined in literature [42], [74], we select the smells in Table II because: (1) identifying test smells in 221 project releases through manual detection is prohibitively expensive, thus a reliable and accurate automatic detection mechanism must be available; (2) the selected test smells have the greatest diffusion in industrial and OSS projects [7]; and (3) the selected ones compose a diverse catalog of test smells, which are related to different characteristics of test code.
\(^1\)https://github.com
B. Data Extraction
To answer RQ1, we extract data about (i) the test smells affecting the test methods in each system release and (ii) the change/defect proneness of these test cases. To answer RQ2, we extract data about the defect proneness of the production code exercised by the test code. The obtained data and the R script used to analyze the results are both available in our online appendix [14].
**Detecting test smells.** We adopt the test smell detector by Bavota et al. [7] (widely adopted in previous research [7], [53], [56], [71]), which is able to reliably identify the six smells considered in our study with a precision close to 88% and a recall of 100%, by relying on code metrics-based rules.
**Defining the change-proneness of test code.** To compute change- and defect-proneness of test code, we mine the change history information of the subject systems using REPODRILLER [2], a Java framework that allows the extraction of information such as commits, modifications, diffs, and source code. Explicitly, for each test method $T_i$ of a specific release $r_j$ we compute its change-proneness as follows:
$$ change\text{-proneness}(T_i, r_j) = \#\text{commits}(T_i)_{r_{j-1}\rightarrow r_j} $$
where $\#\text{commits}(T_i)_{r_{j-1}\rightarrow r_j}$ represents the number of changes performed by developers on the test method $T_i$ between the releases $r_{j-1}$ and $r_j$. Given the granularity of our analyses (i.e., release-level), we only compute the change-proneness of test methods that were actually present in a release $r_j$; if a new method was added and removed between $r_{j-1}$ and $r_j$, it does not appear in our result set. To identify which test method changed within a commit, we implement the following algorithm:
1) We first identify all test classes modified in the commit. In line with past literature [71], [81], we consider a class to be a test when its name ends with ‘Test’ or ‘Tests’.
2) For each test class, we obtain the source code of the class in both the present commit and the previous one.
3) We parse the source code of the test class to identify the test methods contained in the current and in the previous commit. Then, we compare the source code of each test method from the current commit against all the test methods of the prior version:
a) if we find the same method, it means that it is not changed (i.e., both signature and content of the method in $r_j$ are the same as $r_{j-1}$);
b) if we find a different method, it means that it is changed (i.e., the signature of the method is the same, but the source code in $r_j$ is not equal to $r_{j-1}$);
c) if we do not find the method (i.e., the signature of the method does not exist in the previous version of the file), it means that it has been added or renamed. To capture the latter, we adopt a technique similar to the one proposed by Biegel et al. [13], based on the use of textual analysis to detect rename refactoring operations. Specifically, if the cosine similarity [5] between the current method and one of the methods in the previous version is higher than 95%, then we consider the method as renamed (hence, it inherits all the information of the old test case).
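The matching steps above can be sketched as follows; this is our own simplified illustration, in which the function names (`classify_change`, `cosine_similarity`) and the bag-of-words similarity over whitespace-separated tokens are assumptions, not the paper's actual implementation:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two code snippets."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def classify_change(signature: str, body: str, previous: dict) -> str:
    """Classify a test method of the current commit against the previous
    commit's methods (`previous` maps signature -> body)."""
    if signature in previous:
        return "unchanged" if previous[signature] == body else "changed"
    # Signature not found: either new, or renamed. Following the paper,
    # treat it as a rename when cosine similarity exceeds 95%.
    for old_body in previous.values():
        if cosine_similarity(body, old_body) > 0.95:
            return "renamed"
    return "added"
```

A renamed method thus inherits the change history of its textual twin from the previous commit, while genuinely new methods start with an empty history.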
**Defining the defect-proneness of test code.** To compute the defect-proneness of each test case, we follow a similar procedure to the one for change-proneness, with the exception that to calculate the buggy commits we relied on SZZ [67]. In particular, we first determine whether a commit fixed a defect employing the technique proposed by Fischer et al. [22], which is based on the analysis of commit messages. If a commit message matches an issue ID present in the issue tracker or it contains keywords such as ‘bug’, ‘fix’, or ‘defect’, we consider it as a bug fixing activity. This approach has been extensively used in the past to determine bug fixing changes [29], [34] and it has an accuracy close to 80% [22], [55], thus we deem it as being accurate enough for our study. Once we have detected all the bug fixing commits involving a test method, we employ SZZ to obtain the commits where the bug was introduced.
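The commit-message heuristic can be illustrated with a minimal sketch; the keywords follow the description above, while the issue-ID pattern (e.g., "PROJ-123") and the function name are our assumptions for illustration:

```python
import re

# Keywords as in the Fischer et al.-style heuristic described above;
# the JIRA-like issue-ID pattern is an assumption.
FIX_KEYWORDS = re.compile(r"\b(bug|fix(e[sd])?|defect)\b", re.IGNORECASE)
ISSUE_ID = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def is_bug_fix(message: str, known_issue_ids: set) -> bool:
    """Flag a commit as bug-fixing if its message references an ID
    tracked in the issue tracker or contains a fix-related keyword."""
    if any(i in known_issue_ids for i in ISSUE_ID.findall(message)):
        return True
    return bool(FIX_KEYWORDS.search(message))
```

For example, a message such as "Fixed NPE in parser" is flagged by the keyword rule, whereas "Refactor build scripts" is not.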
To estimate the moment when a bug was likely introduced, the SZZ algorithm relies on the annotation/blame feature of versioning systems [67]. In short, given a bug-fix activity
<table>
<thead>
<tr>
<th>Test smell</th>
<th>Description</th>
<th>Problem</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mystery Guest</td>
<td>A test that uses external resources (e.g., file containing test data)</td>
<td>Lack of information makes it hard to understand. Moreover, using external resources introduces hidden dependencies: if someone deletes such a resource, tests start failing.</td>
</tr>
<tr>
<td>Resource Optimism</td>
<td>A test that makes optimistic assumptions about the state/existence of external resources</td>
<td>It can cause non-deterministic behavior in test outcomes, where tests run fine at one time and fail miserably at another.</td>
</tr>
<tr>
<td>Eager Test</td>
<td>A test method exercising more methods of the tested object</td>
<td>It is hard to read and understand, and therefore more difficult to use as documentation. Moreover, it makes tests more dependent on each other and harder to maintain.</td>
</tr>
<tr>
<td>Assertion Roulette</td>
<td>A test that contains several assertions with no explanation</td>
<td>If one of the assertions fails, it is hard to tell which one caused the failure, making the test harder to debug.</td>
</tr>
<tr>
<td>Indirect Testing</td>
<td>A test that interacts with the object under test indirectly via another object</td>
<td>This smell indicates that there might be problems with data hiding in the production code.</td>
</tr>
<tr>
<td>Sensitive Equality</td>
<td>A test using the ‘toString’ method directly in assert statements</td>
<td>It may depend on many irrelevant details such as commas, quotes, spaces, etc. Whenever the toString method for an object is changed, tests start failing.</td>
</tr>
</tbody>
</table>
TABLE II: SUBJECT TEST SMELLS
identified by the bug ID $k$, the approach works as follows:
- For each file $f_i$, $i = 1 \ldots m_k$ involved in the bug-fix $k$ ($m_k$ is the number of files changed in the bug-fix $k$) and fixed in its revision $rel\text{-}fix_{i,k}$, we extracted the file revision just before the bug fixing ($rel\text{-}fix_{i,k} - 1$).
- Starting from the revision $rel\text{-}fix_{i,k} - 1$, for each source line in $f_i$ changed to fix the bug $k$, we identified the production method $M_j$ to which the changed line belongs. Furthermore, the blame feature of Git is used to identify the revision where the last change to that line occurred. In doing that, blank lines and lines that only contain comments are identified using an island grammar parser [44]. This produces, for each production method $M_j$, a set of $n_{i,k}$ bug-inducing revisions $rel\text{-}bug_{i,j,k}$, $j = 1 \ldots n_{i,k}$. Thus, more than one commit can be indicated by the SZZ algorithm as responsible for inducing a bug.
With the list of bug inducing commits involving every test method, we compute its defect-proneness in a release $r_j$ as the number of bug inducing activities involving the method in the period between the releases $r_{j-1}$ and $r_j$.
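The resulting metric reduces to a simple count over an SZZ-style output; the following is a minimal sketch with assumed data shapes, not the paper's tooling:

```python
from datetime import date

def defect_proneness(method_id, bug_inducing, release_start, release_end):
    """Count bug-inducing commits touching `method_id` whose date falls
    in the window between two consecutive releases (r_{j-1}, r_j].
    `bug_inducing` is a list of (method_id, commit_date) pairs, as an
    SZZ-style analysis might produce."""
    return sum(1 for m, d in bug_inducing
               if m == method_id and release_start < d <= release_end)
```

For example, with two bug-inducing commits on the same method, only those dated inside the release window are counted.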
**Defining the defect-proneness of production code.** For each test method in the considered projects, we first need to retrieve the production method it exercises. For this, we exploit a traceability technique based on naming convention, i.e., it identifies the methods under test by removing the string ‘Test’ from the method name of the JUnit test method. This technique has been previously evaluated by Sneed [68] and by Van Rompaey et al. [75], demonstrating the highest performance (both in terms of accuracy and scalability) with respect to other traceability approaches (e.g., slicing-based approaches [60]).
Once we detect the links between test and production methods, we can compute the defect-proneness of such production methods. Since we calculate test smells at the release level (i.e., we only have information regarding which test is smelly at the specific commit of the release), we have to detect how many defects production methods have within that particular release. To this aim, we rely again on the SZZ algorithm. To detect defects of production code in a specific release, we only consider bug fixing activities related to bugs introduced before the release date. More formally, we compute the fault-proneness of a production method $M_i$ in a release $r_j$ as the number of changes to $M_i$ aimed at fixing a bug in the period between $r_j$ and $r_{j+1}$, where the bug was introduced before the release date, in the period between $r_{j-1}$ and $r_j$. The obtained list of bugs comprises those that were present in the system when it was released, hence not captured by the tests.
By employing SZZ, we can approximate the time periods in which each production method was affected by one or more bugs. We exclude from our analysis all the bugs occurring in a production method $M_i$ after the system was released, because in this case the test smell could have been solved before the introduction of the bug. We also exclude bug-introducing changes that were recorded after the bug was reported, since they represent false positives [19].
C. Data Analysis
To answer RQ1, we analyze the previously extracted information regarding test smells and change- and defect-proneness of test code. In particular, in the context of RQ1.1, we test whether JUnit test methods that contain a test smell are more likely to be change- or defect-prone. To this aim, we compute the Relative Risk (RR) [37], an index reporting the likelihood that a specific cause (in our case, the presence/absence of a test smell) leads to an increase in the amount a test case is subject to a particular property (in our case, number of changes or defects) [30], [58]. The RR is defined as the ratio of the probability of an event occurring in an exposed group (e.g., the probability of smelly tests being defective), to the probability of the event occurring in a non-exposed group (e.g., the probability of non-smelly tests being defective) and it is computed using the following equation:
$$RR = \frac{P_{event\ when\ exposed}}{P_{event\ when\ not\ exposed}}$$
A relative risk of 1 means that the event is equally likely in both samples. A RR greater than 1 indicates that the event is more likely in the first sample (e.g., when the test is smelly), while a RR of less than 1 points out it is more likely in the second sample (e.g., when the test is not smelly). We prefer using this technique rather than alternative statistical tests adopted in previous work (e.g., analysis of box plots [50] or Odds Ratios [6], [32]) because of the findings reported in the statistic field that showed how this method (i) should be preferred when performing exploratory studies such as the one conducted herein [21], [83] and (ii) is equivalent to Odds Ratios analysis [64].
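In code, the RR computation reduces to a ratio of two proportions; the following is a minimal sketch with hypothetical counts:

```python
def relative_risk(exposed_events, exposed_total,
                  unexposed_events, unexposed_total):
    """Relative Risk: probability of the event in the exposed group
    (e.g., smelly tests that are defective) over the probability in the
    non-exposed group (e.g., non-smelly tests that are defective)."""
    p_exposed = exposed_events / exposed_total
    p_unexposed = unexposed_events / unexposed_total
    return p_exposed / p_unexposed

# Hypothetical counts: 30 of 100 smelly tests defective vs. 10 of 100
# non-smelly ones gives RR of about 3, i.e., defects are roughly three
# times as likely among smelly tests.
```
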
Change- and defect-proneness of JUnit test methods might also be due to other factors rather than the presence of a test smell. Indeed, Kitchenham et al. [35] found that both size and number of previous changes might influence the observations on the defect-proneness of source code; additionally, Zhou et al. [84] reported the role of size as a possible confounding effect when studying the change-proneness of code elements. Based on the evidence above, we control our findings for change-proneness by computing the RR achieved considering the size of the test method in terms of lines of code (LOC). Moreover, we control the phenomenon of defect-proneness by considering LOC of test methods and number of times the method changed from the last release (i.e., prior changes). More specifically, the aim is to understand whether the likelihood of a test case being smelly and more change- or defect-prone varies when controlling for size and number of changes. In other words, if smelly tests are consistently more prone to changes and defects than non-smelly tests, independently from their size or number of times they changed in the past, we have higher confidence that the phenomena observed are associated with test smells.
To answer RQ1.2 and analyze the role of test smell co-occurrences, we split the previously extracted dataset into seven groups, each one containing test methods affected by exactly $i$ smells, where $0 \leq i \leq 6$. Then, we compare
change- and defect-proneness of each group using (i) the Wilcoxon rank sum test [77] (with confidence level 95%) and (ii) Cohen’s $d$ [65] to estimate the magnitude of the observed difference. We choose the Wilcoxon test since it is a non-parametric test (it does not have any assumption on the underlying data distribution), while we interpret the results of Cohen’s $d$ relying on widely adopted guidelines [65]: The effect size is considered small for $0.2 \leq d < 0.5$, medium for $0.5 \leq d < 0.8$, and large for $d \geq 0.8$.
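Cohen's $d$ with a pooled standard deviation and the magnitude thresholds above can be sketched as follows; the Wilcoxon rank sum test itself would typically come from a statistics library, so this sketch (with function names of our choosing, and "negligible" as our label for $d < 0.2$) covers only the effect size:

```python
from statistics import mean, stdev

def cohens_d(sample_a, sample_b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(sample_a), len(sample_b)
    pooled = (((na - 1) * stdev(sample_a) ** 2 +
               (nb - 1) * stdev(sample_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / pooled

def magnitude(d):
    """Interpretation thresholds adopted in the paper [65]."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "medium"
    if d >= 0.2:
        return "small"
    return "negligible"  # below 0.2 (our own label)
```
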
To answer RQ1.3, we adopt the same procedure as for RQ1.2, but we consider each smell type separately, i.e., we compare change- and defect-proneness of different smell types by means of Wilcoxon rank sum test [77] and Cohen’s $d$ [65], controlling for size and number of previous changes (only in case of defect-proneness). It is important to note that, as done in earlier work [32], [50], in this analysis we consider test cases affected only by a single test smell, e.g., only Eager test, with the aim of understanding the effect of single test smells on change- and fault-proneness of test code.
For RQ2 we adopt a process similar to that of RQ1. In particular, for RQ2.1 we compute the RR: in this case, we aim to investigate the likelihood that the presence/absence of a test smell is associated with the defect-proneness of the production code being tested. Similarly to RQ1, we control for size and number of changes. Analogously, in RQ2.2 we use (i) the Wilcoxon rank sum test [77] and (ii) Cohen’s $d$ [28] to assess the association of test smell co-occurrences to the defect-proneness of production code. Finally, to answer RQ2.3, we compare the distribution of the number of defects related to the production code tested by different test smell types (considering single test smell types).
D. Threats to Validity
Our research method poses some threats to the validity of the results we obtain.
Construct validity. Threats to construct validity concern our research instruments. To obtain information regarding test smells we use the test smell detector devised by Bavota et al. [7]. Even though this tool has been assessed in previous studies [7], [56] as being extremely reliable, some false positives can still be present in our dataset.
Another threat is related to how we detected which production method is exercised by a test method: specifically, we exploited a traceability technique based on naming convention that has been heavily adopted in the past [6], [53], [71], [81]. This technique has also been evaluated by Sneed [68] and by Van Rompaey and Demeyer [75], and the results reported an average precision of 100% and a recall of 70%.
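As an illustration of the family of naming-convention heuristics (not the exact algorithm of the evaluated techniques), the link between a test and the production method it exercises can be recovered like this; class and method names are hypothetical:

```python
def production_target(test_class, test_method):
    """Naive naming-convention traceability: 'MyClassTest.testParseDate'
    is linked to MyClass.parseDate. This sketches the heuristic family
    evaluated by Sneed and by Van Rompaey and Demeyer, not any tool's
    exact algorithm."""
    cls = test_class[:-len("Test")] if test_class.endswith("Test") else test_class
    method = test_method
    if method.startswith("test") and len(method) > len("test"):
        method = method[len("test"):]
        method = method[0].lower() + method[1:]
    return cls, method
```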
Internal validity. Threats to internal validity concern factors that could affect the variables and the relations being investigated. When we look into the relation between test smells and test defects, many factors can influence the results. For example, a test could contain more defects than others because more complex, bigger, or more coupled, while the studied variable (test smells) could be insignificant. To mitigate this, we control for some of these metrics, namely size of the method (LOC) and number of changes, which have been reported to correlate with code complexity [17]. As shown in the results section, the results generally do not change when controlling for other metrics. Furthermore, at the beginning of this study we also built a Logistic Regression Model to determine whether our explanatory variable was (not) statistically significant in the model. Similarly to Thongtanunam et al. [69], we built a logistic regression model to determine the likelihood of a test being defective (or change prone) using LOC, prior changes, production changes as control variables and being smelly (our new variable) as a binary explanatory variable. We used R scripts provided by Thongtanunam et al. [69] to build the model, and we discovered that test code smelliness was indeed statistically significant for the model. However, we preferred to proceed with RR instead of the model, for better readability of the results.
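The sanity-check model can be sketched as a minimal gradient-ascent logistic regression (an illustrative re-implementation, not the R scripts of Thongtanunam et al. [69]; the toy data below is invented):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, iters=3000):
    """Minimal gradient-ascent logistic regression with an intercept.
    Rows of X hold [LOC, prior changes, smelly flag], mirroring the
    control and explanatory variables described above."""
    w = [0.0] * (len(X[0]) + 1)              # w[0] is the intercept
    for _ in range(iters):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi)))
            err = yi - p
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

# Invented toy data: smelly tests (last feature 1) tend to be defective.
X = [[1.0, 1, 0], [1.2, 2, 0], [1.0, 2, 0],
     [1.1, 1, 1], [1.3, 2, 1], [1.2, 1, 1]]
y = [0, 0, 0, 1, 1, 1]
weights = fit_logistic(X, y)
```

On such data the fitted coefficient on the smelliness flag comes out positive, i.e., smelliness raises the predicted odds of being defective.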
External validity. Threats to external validity concern the generalization of results. We conducted our study taking into account 221 releases of 10 Java systems having different scope and characteristics to strengthen the generalizability of our findings. However, a study investigating different projects and programming languages may lead to differing conclusions.
IV. RQ1 RESULTS: TEST SMELLS AND TEST CODE
This section describes the results of RQ1.
**RQ1.1 To what extent are test smells associated with the change- and defect-proneness of test code?**
Figure 1 depicts the relative risk of being change prone in smelly tests vs. non-smelly tests, controlling for size.

Fig. 1. Relative risk of being change prone in smelly tests vs non-smelly tests, controlling by size. The p-value for all RRs is $< 0.0001$.
<table>
<thead>
<tr>
<th>Size</th>
<th>RR</th>
<th>Conf. Int.</th>
</tr>
</thead>
<tbody>
<tr>
<td>Small</td>
<td>1.31</td>
<td>(1.29-1.32)</td>
</tr>
<tr>
<td>Average</td>
<td>1.95</td>
<td>(1.86-2.04)</td>
</tr>
<tr>
<td>Large</td>
<td>2.02</td>
<td>(1.94-2.19)</td>
</tr>
<tr>
<td>Overall</td>
<td>1.47</td>
<td>(1.46-1.50)</td>
</tr>
</tbody>
</table>
We make two main observations from the results in Figure 1. On the one hand, test methods affected by at least one smell are more change-prone than non-smelly methods, with an RR of 1.47; from a practical perspective, this means that a smelly test runs the risk of being 47% more change-prone than a non-smelly test. On the other hand, larger smelly tests are more change prone: this is intuitive since larger methods are more difficult to maintain (hence more change prone) and more likely to contain smells. An important result to notice is that large smelly tests (LOC > 60) are more than twice as likely to be change prone as non-smelly large tests. This finding is a good incentive for practitioners and developers to write small and concise tests, as recommended by Beck [8].
Concerning defect-proneness, Figure 2 shows how the RR varies when considering (i) the presence of test smells (“Overall”), (ii) the size of test cases—split in the same way as done for change-proneness, and (iii) the number of previous changes applied to test cases (we discriminated between methods that change frequently vs. methods that infrequently change, by adopting the heuristic proposed by Romano and Pinzger [62], i.e., we considered frequently evolving methods to have a number of changes higher than the median of the distribution of all the changes that occurred in test cases — 2, in our case).
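The Romano and Pinzger heuristic mentioned above amounts to a median split over per-method change counts; a minimal sketch (the method names are invented):

```python
def split_by_change_frequency(change_counts):
    """Split methods into frequently vs. infrequently changing ones,
    using the median of all change counts as the threshold (methods
    strictly above the median are 'frequent')."""
    s = sorted(change_counts.values())
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    frequent = {m for m, c in change_counts.items() if c > median}
    return frequent, set(change_counts) - frequent, median

# Hypothetical change counts per test method.
counts = {"testA": 0, "testB": 1, "testC": 2, "testD": 5, "testE": 9}
frequent, infrequent, median = split_by_change_frequency(counts)  # median 2
```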
From Figure 2, we observe that the presence of test smells is associated with the defect-proneness of test cases. Indeed, methods affected by at least one design flaw have the risk of being 81% more defect-prone than non-smelly ones. Additionally, the result does not change when controlling for size and number of changes. Indeed, the difference is even more prominent for large tests: the smelly ones are 3.5 times more defect prone than the non-smelly ones. The number of previous changes, instead, does not discriminate the defect-proneness of test cases: for both frequently and infrequently changing tests, the RR of smelly tests being more defect prone is about 50% higher.
Overall, the results of this first analysis provide empirical evidence that test smells—defined with the aim of describing a set of bad patterns influencing test code maintainability [74]—are indeed associated with higher change- and defect-proneness of the affected test cases.
**Finding 1.** Tests affected by test smells are associated with higher change- and defect-proneness than tests not affected by smells, also when controlling for both the test size and the number of previous changes.
**RQ1.2 Is the co-occurrence of test smells associated with the change- and defect-proneness of test code?**
While in the previous research question we did not discriminate on the number of test smells a test method contained, the goal of this analysis is to assess whether the co-occurrence of test smells is associated with the change- and defect-proneness of test cases. Figures 3 and 4 report box plots showing change- and defect-proneness of test cases affected by a different number of test smells, respectively.
For change-proneness, the median of the different groups is very low (around one) for all test cases: to some extent, this is in line with the findings by Zaidman et al. [82], who found that developers generally do not change test cases as soon as they implement new modifications to the corresponding production code. At the same time, Figure 3 shows that the higher the number of test smells, the more dispersed the distribution of changes is, thus indicating that test cases affected by more design problems tend to be changed more often by developers. This observation is supported by the results of the statistical tests, where we found that the difference between all groups was statistically significant ($p$-value $< 2.2 \times 10^{-16}$), with a negligible effect size between the first 5 groups ($d < 0.2$) and a medium one between the first 5 and the last 2 groups ($0.5 \leq d < 0.8$).
When considering defect-proneness in Figure 4, we notice that test methods having up to four test smells do not show significant differences with respect to methods affected by five or six design flaws. Indeed, the median of the distribution is almost identical in all the groups, and even though the difference is considered statistically significant by the Wilcoxon rank sum test, it has a small effect size ($d < 0.2$). Thus, these findings suggest that the co-occurrence of more test smells is not directly associated with higher defect-proneness; we hypothesize that they are instead a co-existing phenomenon, similarly to what Palomba et al. reported for code smells in production code [50].
In the context of this research question, we controlled for the size of the test method and the number of its changes, finding that these factors are not associated with the investigated outcome. We include a report of this additional analysis in our on-line appendix [14].
**Finding 2.** Test methods affected by more smells are associated with a slightly higher change-proneness than methods with fewer smells. Conversely, the co-presence of more test smells in a test method is not associated with higher defect-proneness.
**RQ1.3 Are certain test smell types more associated with the change- and defect-proneness of test code?**
The final step of the first research question investigates the association of different test smell types with change- and defect-proneness. Figure 5 shows two box plots for each type, depicting its change- and defect-proneness. When analyzing change-proneness, we observe that almost all the test smells have a similar trend, and indeed the magnitude of their differences is negligible, as reported by Cohen's $d$. The only exception regards the Indirect Testing smell: while its median change-proneness is similar to that of other smells, its box plot shows several outliers going up to 55 changes. In this case, the magnitude of the differences with all the other smell types is medium. This result is due to the characteristics of the smell. By definition, an Indirect Testing smell is present when a method performs tests on other objects (e.g., because of external references in the production code tested) [74]: as a consequence, it naturally triggers more changes, since developers may need to modify the test code more often due to changes occurring in the exercised external production classes.
In the case of defect-proneness, the discussion is similar. Indeed, the number of defects affecting the different test smell types is similar: even though the differences between them are statistically significant ($p$-value $< 2.2 \times 10^{-16}$), they are mostly negligible. However, we can see some exceptions in this case as well. The box plots show that the distributions of ‘Indirect Testing’, ‘Eager Test’, and ‘Assertion Roulette’ slightly differ from the others, and indeed these are the smells having the highest number of outliers. This result is due to the fact that these test smells tend to test more than required [74] (i.e., a test method suffering from ‘Indirect Testing’ exercises other objects indirectly, an ‘Eager Test’ method checks several methods of the object to be tested, while an ‘Assertion Roulette’ method contains several assertions checking different behaviors of the exercised production code). Their nature makes them intrinsically more complex to understand [7], likely leading developers to be more prone to introduce faults.
**Finding 3.** Test methods affected by ‘Indirect Testing’, ‘Eager Test’, and ‘Assertion Roulette’ are more change and defect prone than those affected by other smells.
V. RQ2 RESULTS: TEST SMELLS AND PRODUCTION CODE
This section describes the results of our second research question.
**RQ2.1 Are test smells associated with the defect-proneness of the tested production code?**
Figure 6 reports the RR that a smelly test case is exercising a more defect-prone production method (label ‘Overall’), along with the RR obtained when considering size as a control factor.
In the first place, Figure 6 shows that smelly tests have a higher likelihood to test defective code than non-smelly tests (i.e., the RR = 1.71 states that production code executed by smelly tests has 71% higher chances of being defective than production code executed by non-smelly tests).

Fig. 6. Relative risk of the production code being more defect prone when tested by smelly tests vs. non-smelly tests. For all RRs, p-value < 0.0001.

Fig. 7. Relative risk of being defect prone if tested by smelly tests vs. non-smelly tests.

Zooming in on this result, Figure 7 depicts the box plots reporting the distribution of the number of production code bugs, when exercised by smelly test methods vs. non-smelly ones. The difference between the two distributions is statistically significant (p-value < 2.2e−16) with a large effect size (d = 1.40).
The results still hold when controlling for size: size does not weaken the association between smelly tests and the defect-proneness of the exercised production code; as shown in the previous RQ, it actually strengthens it. For instance, methods having a large number of lines of code have an RR = 2.17. Two main factors can explain this result: on the one hand, we suppose that a large size of the test implies a large volume of the production code, and our research community widely recognizes size as a valid proxy measure for software quality [35]; on the other hand, our results corroborate previous findings reported by Palomba et al. [50], who showed that large methods (e.g., the ones affected by a Long Method code smell [23]) are strongly associated with the defect-proneness of production code.
Thus, from our analysis we have empirical evidence that the presence of test smells contributes to the explanation of the defect-proneness of production code. Given our experimental setting, we cannot speculate on the motivations behind the results achieved so far: indeed, our RQ2.1 was meant to be a coarse-grained investigation aimed at understanding whether the presence of design flaws in test code might somehow be associated with the defectiveness of production code. Thus, in this research question we did not focus on the reasons behind the relationship, i.e., whether it holds because the production code is of poor quality (thus difficult to test) or because the tests are of poor quality (thus they do not capture enough defects). Our RQ2.3 makes a first step toward providing additional insights on such a relationship.
**Finding 4.** Production code that is exercised by test code affected by test smells is more defect-prone, also when controlling for size.
**RQ2.2 Is the co-occurrence of test smells associated with the defect-proneness of the tested production code?**
Figure 8 presents the results concerning the association of test smell co-occurrences to the defectiveness of the exercised production code. In this case, the defect-proneness of production code remains almost constant among the different groups, meaning that having more design issues in test code is not associated with a higher number of defects in production.
This result leads to two main observations. First, as observed in RQ2.1, test smells are related to the defect-proneness of the exercised production code, but do not fully explain this phenomenon. Second, while the specific number of test smells is not associated with the defectiveness of production code, the overall presence of test smells is. It is reasonable to think that some specific test smells could contribute more to the observed association with defect-proneness; this reasoning represented the input for RQ2.3.
In this research question, we controlled the findings for size and number of changes, finding that none of them influence the outcome. We include a report of this additional analysis in our on-line appendix [14].
**Finding 5.** The co-occurrence of more test smells in a test case is not strongly associated with higher defect-proneness of the exercised production code.
**RQ2.3 Are certain test smell types more associated with the defect-proneness of production code?**
Figure 9 depicts the box plots reporting the association of different test smell types to the defect-proneness of production code. We observe that the ‘Indirect Testing’ and ‘Eager Test’ smells are related to the production code being more defect-prone with respect to the other test smell types. The differences observed between the ‘Indirect Testing’ and ‘Eager Test’ distributions and the others are all statistically significant ($p$-value $< 2.2 \times 10^{-16}$) with medium effect size, while we found the other smells to be not statistically associated with more production code defect-proneness.
As also explained in the context of RQ1.3, the ‘Indirect Testing’ and ‘Eager Test’ smells lead to test cases that are (i) less cohesive and (ii) poorly focused on the target production code [74]: the former implies testing other objects indirectly, while the latter checks several production methods of the class under test. The lack of focus of such smells may explain why the corresponding production code is associated with defect-proneness: it seems reasonable to consider that the greedy nature of these two smells makes them less able to find defects in the exercised production code.
From a practical point of view, our results provide evidence that developers should carefully monitor test and production code involved with Indirect Testing and Eager Test. In fact, these are the smells that not only are related to more change- and defect-prone test code, but also to more defect-prone production code.
VI. CONCLUSION
Automated testing is nowadays considered to be an essential process for improving the quality of software systems [12], [47]. Unfortunately, past literature showed that test code can often be of low quality and may contain design flaws, also known as test smells [7], [73], [74]. In this paper, we presented an investigation on the relation between six test smell types and test code change/defect proneness on a dataset of more than a million test cases. Furthermore, we delved into the relation between smelly tests and defect-proneness of the exercised production code.
The results we obtained provide evidence toward several findings, including the following two lessons:
Lesson 1. Test smells and their relation with test code quality.
Corroborating what van Deursen et al. [74] conjectured in their study, we bring empirical evidence that test smells are negatively associated with test code quality. Specifically, we found that a smelly test has an 81% higher risk of being defective than a non-smelly test. Similarly, the risk of being change-prone is 47% higher in tests affected by smells. This result is complementary to the findings by Bavota et al. [7], who found that test smells can have a negative impact on program comprehension during maintenance activities. Moreover, we found that test methods with more co-occurring smells tend to be more change-prone than methods having fewer smells and that ‘Indirect Testing’, ‘Eager Test’, and ‘Assertion Roulette’ are those associated with the most change-prone test code.
Lesson 2. Test smells and their relation with software quality.
With our study, we provided empirical evidence that the presence of design flaws in test code is associated with the defect-proneness of the exercised production code; indeed, the production code is 71% more likely to contain defects when tested by smelly tests. ‘Indirect Testing’ and ‘Eager Test’ are the smells related to a higher defect-proneness in production code.
This paper provides initial evidence on the relation between test smells and both change/defect proneness of test code and defect-proneness of exercised production code. As such, it represents a call to arms for researchers and tool vendors: we call upon them to develop practical, automated test smell detection tools, and upon the research community to further investigate the interplay between test design quality and the effectiveness of test code in detecting defects.
VII. ACKNOWLEDGMENT
This project has received funding from the European Union’s H2020 programme under the Marie Skłodowska-Curie grant agreement No 642954. A. Bacchelli and F. Palomba gratefully acknowledge the support of the Swiss National Science Foundation through the SNF Project No. PP00P2_170529.
Crowdsourcing Metadata – Challenges and Outlook
Washington, 10 May 2016
Lars Vilhuber, William Block (Cornell University)
Acknowledgements
Based on work with
• Benjamin Perry (formerly Cornell University)
• Venkata Kambhampaty (formerly Cornell University)
• Kyle Brumsted (McGill University)
• Jeremy Williams (Cornell University)
• Carl Lagoze (University of Michigan)
• John Abowd (Cornell University)
and materials presented in INFO 7470, all of that with funding by NSF Grant #1131848
I’m going to argue that...
- **Replicability** is a problem...
- and (A) easier *deposit* methods could alleviate it
- but progress is slow
I’m going to argue that...
- Having replicable archives *shifts* the problem...
- in time: (B) older articles cannot be linked to data
- in scope: (C) curators need *expert help* in documenting the data
A test
LDI “reproducibility” project:
Kingi, Stanchi, Vilhuber (2016, unpublished)
“The Reproducibility of Economics Research”
Kingi, Stanchi, Vilhuber (2016)
- Simpler test:
Do the provided data and programs yield the published results?
Figure 1: A Breakdown of the Articles
Total Articles (109)
- Confidential Data (44)
- Non Confidential Data (65)
- Unsuccessful (28)
- Successful (37)
The old source of knowledge – and data!
Options are available
• Social and behavioral sciences
[Screenshot of a data repository landing page: “Share your social and behavioral science research data — Get started now”; Maximize Access (be recognized and cited), Store Safely (store your data with confidence), Protect Confidentiality (ensure confidentiality and privacy)]
Options are available
• “Research data”
Open source research data repository software
Researchers
Enjoy full control over your data. Receive web visibility, academic credit, and increased citation counts. A personal dataverse is easy to set up, allows you to display your data on your personal website, can be branded uniquely as your research program, makes your data more discoverable to the research community, and satisfies data management plans. Want to set up your personal dataverse?
Journals
Seamlessly manage the submission, review, and publication of data associated with published articles. Establish an unbreakable link between articles in your journal and associated data. Participate in the open data movement by using Dataverse as part of your journal data policy or list of repository recommendations. Want to find out more about journal dataverses?
Developers
Participate in a vibrant and growing community that is helping to drive the norms for sharing, preserving, citing, exploring, and analyzing research data. Contribute code extensions, documentation, testing, and/or standards. Integrate research analysis, visualization and exploration tools, or other research and data archival systems with Dataverse. Want to contribute?
Institutions
Establish a research data management solution for your community. Federation with a growing list of Dataverse repositories worldwide for increased discoverability of your community’s data. Participate in the drive to set norms for sharing, preserving, citing, exploring, and analyzing research data. Want to install a Dataverse repository?
Researchers don’t use them
Training? Incentives? Ease of use?
A Good Example
Gentzkow, Shapiro, Sinkinson (2014)
American Economic Review, 104(10): 3073-3114. DOI: 10.1257/aer.104.10.3073
- Data at http://doi.org/10.3886/E1361V3
Principal Investigator(s): Gentzkow, Matthew (University of Chicago, Booth School of Business); Shapiro, Jesse (University of Chicago, Booth School of Business); Sinkinson, Michael
<table>
<thead>
<tr>
<th>Title</th>
<th>Creator</th>
<th>Date Entered</th>
<th>File Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>codebook</td>
<td>Gentzkow, Matthew; Shapiro, Jesse; Sinkinson, Michael</td>
<td>2014-04-03 9:08 PM</td>
<td>.txt</td>
</tr>
<tr>
<td>data</td>
<td>Gentzkow, Matthew; Shapiro, Jesse; Sinkinson, Michael</td>
<td>2014-04-03 9:08 PM</td>
<td>.txt</td>
</tr>
<tr>
<td>Orig</td>
<td>Gentzkow, Matthew; Shapiro, Jesse; Sinkinson, Michael</td>
<td>2014-04-07 1:28 PM</td>
<td>.txt</td>
</tr>
</tbody>
</table>
Persistent URL: http://doi.org/10.3886/E1361V3
Project Description
Summary
The focus of this data collection was the historical circulation and subscription prices of US daily newspapers in 1924. These data are obtained from audit reports obtained from the Audit Bureau of Circulations, an independent organization created to verify circulation. They include circulation by town and delivery channel for each newspaper. The sample is all audited daily newspapers by the Audit Bureau of Circulations.
All pdfs and extracted data. Copyright belongs to the Audit Bureau of Circulations. We have obtained written permission from the Audit Bureau of Circulations to post the PDFs and data files.
What’s good about this?
• Permanent URL
• Availability of
– Original data
– Transformed data
– Open availability
• Easy online inspection
Not Perfect
- Archive at openICPSR not actually tied to article and vice-versa
- Conversely, “online appendix” just a “blob”
Journals are starting to use them
The American Journal of Political Science is committed to significant advances in knowledge and understanding of citizenship, governance, and politics, and to the public value of political science research.
1 to 10 of 228 Results
Replication Data for: Navigating the Range of Statistical Tools for Inferential Network Analysis
Apr 21, 2016
Cranmer, Skyler; Leifeld, Philip; McClurg, Scott; Rolfe, Meredith. 2016, "Replication Data for: Navigating the Range of Statistical Tools for Inferential Network Analysis", http://dx.doi.org/10.7910/DVN/3PFR67, Harvard Dataverse, V1
The last decade has seen substantial advances in statistical techniques for the analysis of network data, and a major increase in the frequency with which these tools are used. These techniques are designed to accomplish the same broad goal, statistically valid inference in the presence of dependent data.
Replication Data for: Constitutional Qualms or Politics as Usual? The Factors Shaping Public Support for Unilateral Action
Apr 13, 2016
Kriner, Douglas; Christenson, Dino. 2016, "Replication Data for: Constitutional Qualms or Politics as Usual? The Factors Shaping Public Support for Unilateral Action", http://dx.doi.org/10.7910/DVN/DDSH05, Harvard Dataverse, V1
The formal institutional constraints that Congress and the courts impose on presidential unilateral action are real. As a result, recent scholarship suggests that public opinion may be the strongest check against executive overreach. However, little is known about how the public...
Replication Data for: Can Political Inequalities be Educated Away? Evidence from a Large-scale Reform
Apr 13, 2016
Lindgren, Karl-Oskar; Oskarsson, Sven; Dawes, Christopher. 2016, "Replication Data for: Can Political Inequalities be Educated Away? Evidence from a Large-scale Reform", http://dx.doi.org/10.7910/DVN/3TO01Q, Harvard Dataverse, V1
Over the years, many suggestions have been made on how to reduce the importance of family background in political recruitment. In this...
But the biggest problem...
Figure 1: A Breakdown of the Articles
Total Articles (109)
Confidential Data (44) Non Confidential Data (65)
Unsuccessful (28) Successful (37)
• Articles using confidential data are (weakly) more cited than others
But: for confidential data…
- Data is not available
- Metadata is not available
- Programs? So-so…
Should We Just Trust These Guys?
Some are quite commendable
**Even detailed information**
| **Full Title** | General Social Survey, 2011: Cycle 25, Family |
| **Subtitle** | Cycle 25, Family |
| **Alternative Title** | GSS 2011: Family |
| **Parallel Title** | Enquête sociale générale, 2011 : Cycle 25, Famille |
| **Identification Number** | ca-statcan-58195 |
| **Authoring Entity** | Statistics Canada (StatCan) |
| **Producer** | Statistics Canada (StatCan) |
| **Copyright** | Copyright © Statistics Canada, 2012 |
| **Date of Distribution** | 2012-07-18 |
| **Series Information** | General Social Survey - Family (GSS) [4501] |
| **Version** | 15769.6 |
How many users actually use that?
Data documentation is dry
• How reliable is that question?
Dataset: General Social Survey, 2011: Cycle 25, Family
Cycle 25, Family
Variable PA_Q240: Year parents separated
<table>
<thead>
<tr>
<th>Values</th>
<th>Categories</th>
</tr>
</thead>
<tbody>
<tr>
<td>9997</td>
<td>Not asked</td>
</tr>
<tr>
<td>9998</td>
<td>Not stated</td>
</tr>
<tr>
<td>9999</td>
<td>Don't know</td>
</tr>
</tbody>
</table>
LITERAL QUESTION
In what year did your parents separate?
SUMMARY STATISTICS
This variable is numeric
UNIVERSE
Respondents who answered: PA_Q230 = 1.
NOTES
This variable is suppressed on the public use microdata file.
Don’t (just) liberate the data!
Liberate the data users!
Our contribution
Leverage researcher knowledge
Our Approach
- Rely on open standards, namely the Data Documentation Initiative (DDI) schema
- Provide easy-to-use tools and interfaces to structured metadata
- Build infrastructure that enables data curators to leverage community-driven input to official documentation
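To make the DDI anchor concrete, here is a hand-written sketch of a DDI-Codebook (DDI-C 2.5) variable entry, using the GSS variable shown earlier; the snippet is illustrative, not taken from CED²AR:

```xml
<var ID="PA_Q240" name="PA_Q240">
  <labl>Year parents separated</labl>
  <qstn><qstnLit>In what year did your parents separate?</qstnLit></qstn>
  <catgry><catValu>9997</catValu><labl>Not asked</labl></catgry>
  <catgry><catValu>9998</catValu><labl>Not stated</labl></catgry>
  <catgry><catValu>9999</catValu><labl>Don't know</labl></catgry>
</var>
```

Because the schema is an open standard, entries like this can be generated by tools, hand-edited, and diffed like any other structured text.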
How?
CED²AR
The Comprehensive Extensible Data Documentation and Access Repository
What is CED²AR?
- Metadata curation software
- Designed for documenting existing datasets
- Funded by NSF grant #1131848
- Online at www2.ncrn.cornell.edu/ced2ar-web
What is CED²AR?
CED²AR
Official Server - The Comprehensive Extensible Data Documentation and Access Repository
[Screenshot of the CED²AR web interface: variable search, browse by variable or codebook, codebook filters, and side-by-side variable comparison. © 2012-2015, Cornell Institute for Social and Economic Research]
Basic Information Flow
Staging Area
Datasets → Internal Metadata
Public Facing
Official Metadata → Crowdsourced Metadata
User switches
Crowdsourced Metadata → Official Metadata
Basic Information Flow
Staging Area
Datasets → Internal Metadata
Public Facing
Official Metadata
Crowdsourced Metadata
User switches
Internal Processing
1. Creation of skeletal metadata
- Assuming data is already curated
- Converting data into standardized metadata
• Tools included (for SAS, Stata, SPSS, CSV), not discussed here
2. Hand editing and subsetting
- Adding verbose descriptions
- Applying disclosure limitation
3. User accessible
- These tools can be used directly by regular users
- They could be incorporated into existing workflows
Internal Processing
• Simple editing interface
– Web-based, with limited rich text features
– Math allowed (LaTeX)
• Feedback
– Completeness of codebook?
– Without technical jargon!
– Can be tuned
The SIPP Synthetic Beta (SSB) is a Census Bureau product that integrates person-level micro-data from a household survey with administrative tax and benefit data. These data link respondents from the Survey of Income and Program Participation (SIPP) to Social Security Administration (SSA)/Internal Revenue Service (IRS) Form W-2 records and SSA records of retirement and disability benefit receipt, and were produced by Census Bureau staff economists and statisticians in collaboration with researchers at Cornell University, the SSA and the IRS. The purpose of the SSB is to provide access to linked data that are usually not publicly available due to confidentiality concerns.
To overcome these concerns, Census has synthesized, or modeled, all the variables in a way that changes the record of each individual in a manner designed to preserve the underlying covariate relationships between the variables. The only variables that were not altered by the synthesis process and still contain their original values are gender and a link to the first reported marital partner in the survey. Seven SIPP panels (1990, 1991, 1992, 1993, 1996, 2001, 2004) form the basis for the SSB, with a large subset of variables available across all the panels selected for inclusion and harmonization across the years. Administrative data were added and some editing was done to correct for logical inconsistencies in the IRS/SSA earnings and benefits data.
This field supports ASCII math. See FAQ for details.
• Provide feedback to improve sparse documentation
**Codebook Score**
**Variables**
- 100.0% of variables have labels
- 85.1% of variables have significant full descriptions
*Variables without significant full descriptions ... more*
- 43.0% of variables have values
*Variables without values ... more*
- 0.0% of variables have summary statistics
**Title Page**
- Missing related studies
- Missing access conditions
- Missing bibliographic citation
- Missing related publications
**Overall Score**
80.3%
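The completeness feedback above can be sketched as a simple per-field score; the field names and the unweighted average are invented for illustration, and CED²AR's actual scoring may differ:

```python
# Hypothetical codebook-completeness score: for each documentation field,
# report the share of variables that carry it (invented policy).
def codebook_score(variables, fields=("label", "description", "values")):
    scores = {}
    for f in fields:
        have = sum(1 for v in variables if v.get(f))
        scores[f] = 100.0 * have / len(variables)
    return scores

vars_ = [
    {"label": "Year parents separated", "values": ["9997", "9998"]},
    {"label": "Date of Birth", "description": "Respondent's date of birth"},
]
print(codebook_score(vars_))
# {'label': 100.0, 'values': 50.0, 'description': 50.0}
```

Because the score is computed, not hand-maintained, it can be re-tuned (e.g., weighting full descriptions more heavily) without touching the metadata itself.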
Fine-grained access controls
Important when working with confidential (meta)data
Internal Processing: Access Control
- Marking elements with different restrictions
Select what sub-elements to mark
- Select All
- Mean
- Median
- Mode
- Valid
- Invalid
- Min
- Max
- Standard Deviation
- Other Summary Statistics
Select what access level to apply, then check which variables to apply it to. Finally, click Change Levels.
<table>
<thead>
<tr>
<th>Variable Name</th>
<th>Label</th>
<th>Top Access Level</th>
</tr>
</thead>
<tbody>
<tr>
<td>afdc_MN</td>
<td>Indicator for receipt of AFDC or TANF benefits</td>
<td>released</td>
</tr>
<tr>
<td>afdcamt_MN</td>
<td>Amount of AFDC received</td>
<td>released</td>
</tr>
<tr>
<td>birthdate</td>
<td>Date of Birth</td>
<td>released</td>
</tr>
<tr>
<td>current_enroll_coll</td>
<td>Currently Enrolled in College</td>
<td>released</td>
</tr>
<tr>
<td>current_enroll_hs</td>
<td>Currently Enrolled in HS (or less)</td>
<td>released</td>
</tr>
</tbody>
</table>
Workflow control
• Ability to view additions/subtractions
– Between versions
– Between crowd-sourced information and official information
• Ability to control access
– Editing versus viewing
– Authentication and reputation
All changes are logged externally via Git
<table>
<thead>
<tr>
<th>Author</th>
<th>Commit</th>
<th>Message</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>tomcat7</td>
<td>0FeAsis51e</td>
<td>{ssbv601,<a href="mailto:lars@vilhuber.com">lars@vilhuber.com</a>,cover}</td>
<td>37 minutes ago</td>
</tr>
<tr>
<td>tomcat7</td>
<td>5c824de</td>
<td>{ssbv601,<a href="mailto:lars@vilhuber.com">lars@vilhuber.com</a>,cover}{ssbv601,<a href="mailto:lars@vilhuber.com">lars@vilhuber.com</a>,var,fl...}</td>
<td>an hour ago</td>
</tr>
<tr>
<td>tomcat7</td>
<td>c83c58f9</td>
<td>Committing codebooks retrieved directly from BaseX</td>
<td>4 days ago</td>
</tr>
<tr>
<td>venkata</td>
<td>a61abe3</td>
<td>{testbdv1,anonymous,edit}</td>
<td>4 days ago</td>
</tr>
<tr>
<td>venkata</td>
<td>5p1e51e</td>
<td>{testbdv1,anonymous,edit}</td>
<td>4 days ago</td>
</tr>
<tr>
<td>tomcat7</td>
<td>5edbf9f9</td>
<td>{acs2009,<a href="mailto:bap63@cornell.edu">bap63@cornell.edu</a>,edit}</td>
<td>5 days ago</td>
</tr>
<tr>
<td>tomcat7</td>
<td>d66d3d04</td>
<td>{ssbv601,<a href="mailto:lorreeder@gmail.com">lorreeder@gmail.com</a>,var,phus_ssd_local_benefit_totamt_k}</td>
<td>5 days ago</td>
</tr>
<tr>
<td>tomcat7</td>
<td>1f845c1</td>
<td>{siabv1,<a href="mailto:warren.brown48@gmail.com">warren.brown48@gmail.com</a>,cover}{siabv1,<a href="mailto:bap63@cornell.edu">bap63@cornell.edu</a>,edit}</td>
<td>5 days ago</td>
</tr>
<tr>
<td>tomcat7</td>
<td>cb77f31</td>
<td>{siabv1,<a href="mailto:warren.brown48@gmail.com">warren.brown48@gmail.com</a>,var,bild}{siabv1,warren.brown4...}</td>
<td>cestesting</td>
</tr>
<tr>
<td>tomcat7</td>
<td>b34a118</td>
<td>{siabv1,<a href="mailto:warren.brown48@gmail.com">warren.brown48@gmail.com</a>,var,persnr}{siabv1,warren.brown...}</td>
<td>cestesting</td>
</tr>
<tr>
<td>venkata</td>
<td>2cb6de7d</td>
<td>{lbdv2,anonymous,cover}</td>
<td>vrk4</td>
</tr>
<tr>
<td>tomcat7</td>
<td>12630cf</td>
<td>{siabv1,<a href="mailto:warren.brown48@gmail.com">warren.brown48@gmail.com</a>,var,bnn}{siabv1,warren.brown4...}</td>
<td>cestesting</td>
</tr>
<tr>
<td>tomcat7</td>
<td>af94f3</td>
<td>{siabv1,<a href="mailto:bap63@cornell.edu">bap63@cornell.edu</a>,edit}{bliss2011,<a href="mailto:bap63@cornell.edu">bap63@cornell.edu</a>,edit}</td>
<td>cestesting</td>
</tr>
<tr>
<td>tomcat7</td>
<td>0e52a6e</td>
<td>Committing codebooks retrieved directly from BaseX</td>
<td>cestesting</td>
</tr>
</tbody>
</table>
Basic Information Flow
**Staging Area**
- Datasets
- Internal Metadata
**Public Facing**
- Official Metadata
- Crowdsourced Metadata
User switches
CED²AR
Official Server - The Comprehensive Extensible Data Documentation and Access Repository
You are viewing the official CED²AR crowdsourced contributions.
SIPP Synthetic Beta
v6.02
View Variables (123 variables)
Last update to metadata: 2015-11-24 10:05:15 (upload date)
Document Date: November 12, 2015
Codebook prepared by: Cornell NSF-Census Research Network
Data prepared by: United States Department of Commerce, Bureau of the Census.
Data Distributed by:
Labor Dynamics Institute
http://www2.vrdc.cornell.edu/news/data/sipp-synthetic-beta-file/
United States Department of Commerce, Bureau of the Census.
Crowdsourced view
CED²AR
Community Development Server (Beta) - The Comprehensive Extensible Data Documentation and Access Repository
You are viewing crowdsourced metadata. View the official version.
SIPP Synthetic Beta
v6.02
View Variables (123 variables)
Last update to metadata: 2015-11-24 09:59:07 (auto-generated)
Document Date: November 12, 2015
Codebook prepared by: Cornell NSF-Census Research Network
Data prepared by: United States Department of Commerce, Bureau of the Census.
Data Distributed by:
Labor Dynamics Institute
http://www2.vrdc.cornell.edu/news/data/sipp-synthetic-beta-file/
United States Department of Commerce, Bureau of the Census.
Authentication and Attribution
• When opening up contributions to a wide audience, how to triage between “rants” and meaningful contributions?
• Here: Use of ORCID (academic network) for authentication
• Public attribution with link to (verified) academic ID is key for positive feedback (your effort is recognized) and prevention of negative contribution (your rant is traceable to you!)
Authentication
• Supports OpenID and OAuth2
– Currently using Google and ORCID with OAuth2
– Developing connectors to work with additional providers
• CED\textsuperscript{2}AR handles identity management
Editing made easy
You are viewing crowdsourced metadata. View the official version.
CED2AR / SIPP Synthetic Beta v6.02
SIPP Synthetic Beta v6.02
View Variables (123 variables)
View codebook score
Last update to metadata: 2016-01-26 14:35:26 (auto-generated)
Document Date: November 12, 2015
Codebook prepared by: Cornell NSF-Census Research Network
Data Distributed by:
Labor Dynamics Institute
http://www2.vrdc.cornell.edu/news/data/sipp-synthetic-beta-file/
<table>
<thead>
<tr>
<th>Variable Name</th>
<th>totearn_ser_YYYY</th>
</tr>
</thead>
<tbody>
<tr>
<td>Top Access Level</td>
<td>released</td>
</tr>
<tr>
<td>Label</td>
<td>SER: Capped Earnings from all FICA-covered jobs</td>
</tr>
<tr>
<td>Access Level</td>
<td>released</td>
</tr>
<tr>
<td>Codebook</td>
<td>SIPP Synthetic Beta v6.02</td>
</tr>
<tr>
<td>Concept</td>
<td></td>
</tr>
<tr>
<td>Type</td>
<td>numeric</td>
</tr>
<tr>
<td>Question Text</td>
<td></td>
</tr>
<tr>
<td>Full Description</td>
<td>Person-level annual earnings that were taxed by FICA; these variables include earnings only up to the FICA taxable maximum and cover the years 1951-2011. These earnings are the inputs for calculating the OASDI benefit a person and his or her spouse will receive upon retirement or disability.</td>
</tr>
<tr>
<td>Files</td>
<td></td>
</tr>
</tbody>
</table>
Note #1 - Access Level: released
Having a 0 value on `totearn_ser_yyyy` could mean a couple of things: 1) this individual had no FICA-covered earnings in that year; 2) this individual had no labor income at all in this tax year; 3) this individual worked for an employer that failed to report earnings in this year (that is to say, this has nothing to do with whether a person filed taxes because the earnings are reported by the employer, not the employee). Prior to 1978, if a person has $0 earnings on the Summary Earnings Record (SER), there’s really no way of knowing whether they had no earnings or whether they had non-FICA earnings because the SER only reports FICA-covered earnings reported by employers. For years 1978 and later, you can compare the SER to the Detailed Earnings Record (DER). The DER captures all earnings subject to income tax, so both FICA and non-FICA earnings are reported on the DER.
If you are looking at earnings in earlier years, particularly the 1960s and earlier, there will be more people with $0 earnings because many jobs were not FICA-taxable then. Even today, there are some instances of legitimate non-FICA earnings that would not be reflected on the SER. One example of this is that graduate student stipends are not taxed for FICA or Medicare, so these earnings would not be reflected on the SER (https://www.irs.gov/Charities--Non-Profits/Student-Exception-to-FICA-Tax).
Basic Information Flow
**Staging Area**
- Datasets
- Internal Metadata
**Public Facing**
- Official Metadata
- Crowdsourced Metadata
*User switches*
Everybody can see changes
<table>
<thead>
<tr>
<th>Remote</th>
<th>Current</th>
</tr>
</thead>
<tbody>
<tr>
<td>Variable Name</td>
<td>totearn_ser_YYYY</td>
</tr>
<tr>
<td>Label</td>
<td>SER: Capped Earnings from all FICA-covered jobs</td>
</tr>
<tr>
<td>Codebook</td>
<td>SIPP Synthetic Beta v6.02</td>
</tr>
<tr>
<td>Concept</td>
<td>Concept</td>
</tr>
<tr>
<td>Concept Vocabulary</td>
<td>Concept Vocabulary</td>
</tr>
<tr>
<td>Concept Vocabulary URI</td>
<td>Concept Vocabulary URI</td>
</tr>
<tr>
<td>Type</td>
<td>numeric</td>
</tr>
<tr>
<td>Files</td>
<td>Files</td>
</tr>
</tbody>
</table>
```
sub_v6.0.2_syntheticK_M.sas7bdat
SAS
sub_v6.0.2_syntheticK_M.dta
Stata
```
Question Text
Person-level annual earnings that were taxed by FICA; these variables include earnings only up to the FICA taxable maximum and cover the years 1951-2011. These earnings are the inputs for calculating the OASDI benefit a person and his or her spouse will receive upon retirement or disability.
Notes (0 total)
1. Having a 0 value on totearn_ser_YYYY could mean a couple of things: 1) this individual had no FICA-covered earnings in that year; 2) this individual had no labor income at all in this tax year; 3) this individual worked for an employer that failed to report earnings in this year (that is to say, this has nothing to do with whether a person filed taxes because the earnings are reported by the employer, not the employee). Prior to 1978, if a person has 0 earnings on the Summary Earnings Record (SER), there's really no way of knowing whether they had no earnings or whether they had non-FICA earnings because the SER only reports FICA-covered earnings reported by employers. For years 1978 and later, you can...
Combining Knowledge: Merging
- Curators are given an interface to merge crowdsourced documentation with official
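One possible merge rule, sketched in Python; the field-level policy here (official values win, crowdsourced values fill gaps) is an assumption for illustration — in CED²AR the curator decides per field:

```python
# Hypothetical merge policy: start from the crowdsourced record, then let
# every non-empty official field override it.
def merge(official, crowd):
    merged = dict(crowd)
    merged.update({k: v for k, v in official.items() if v})
    return merged

official = {"label": "Currently Enrolled", "concept": ""}
crowd = {"label": "Currently Enrolled in College", "concept": "Education"}
print(merge(official, crowd))
# {'label': 'Currently Enrolled', 'concept': 'Education'}
```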
Combining Knowledge: Merging
current_enroll_coll
<table>
<thead>
<tr>
<th>Crowdsourced Documentation</th>
<th>Official Documentation</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Variable Name</strong></td>
<td>current_enroll_coll</td>
</tr>
<tr>
<td><strong>Label</strong></td>
<td>Currently Enrolled in College</td>
</tr>
<tr>
<td><strong>Codebook</strong></td>
<td>SIPP Synthetic Beta v6</td>
</tr>
<tr>
<td><strong>Concept</strong></td>
<td></td>
</tr>
<tr>
<td><strong>Concept Vocabulary</strong></td>
<td></td>
</tr>
<tr>
<td><strong>Concept Vocabulary URI</strong></td>
<td></td>
</tr>
<tr>
<td><strong>Type</strong></td>
<td>numeric</td>
</tr>
<tr>
<td><strong>Files</strong></td>
<td></td>
</tr>
<tr>
<td><strong>Variable Name</strong></td>
<td>current_enroll_coll</td>
</tr>
<tr>
<td><strong>Label</strong></td>
<td>Currently Enrolled</td>
</tr>
<tr>
<td><strong>Codebook</strong></td>
<td>SIPP Synthetic Beta v6</td>
</tr>
<tr>
<td><strong>Concept</strong></td>
<td></td>
</tr>
<tr>
<td><strong>Concept Vocabulary</strong></td>
<td></td>
</tr>
<tr>
<td><strong>Concept Vocabulary URI</strong></td>
<td></td>
</tr>
<tr>
<td><strong>Type</strong></td>
<td>numeric</td>
</tr>
<tr>
<td><strong>Files</strong></td>
<td></td>
</tr>
</tbody>
</table>
Combining Knowledge: Merging
Crowdsourced Documentation
Last update to metadata: 2015-08-18 08:43:01 (upload date)
Document Date: June 15, 2014
Citation
Please cite this codebook as:
Please cite this dataset as:
U.S. Census Bureau. SIPP Synthetic Beta: Version 5.1 [Computer file]. Washington DC; Cornell University, Synthetic Data Server [distributor], Ithaca, NY, 2013
Abstract
The SIPP Synthetic Beta (SSB) is a Census Bureau product that integrates person-level micro-data from a household survey with administrative tax and benefit data. These data link respondents from the Survey of Income and Program Participation (SIPP) to Social Security Administration (SSA)/Internal Revenue Service (IRS) Form W-2 records and SSA...
Official Documentation
Last update to metadata: 2015-10-23 11:12:44 (auto-generated)
Document Date: June 15, 2014
Citation
Please cite this codebook as:
Please cite this dataset as:
U.S. Census Bureau. SIPP Synthetic Beta: Version 5.1 [Computer file]. Washington DC; Cornell University, Synthetic Data Server [distributor], Ithaca, NY, 2013
Abstract
The SIPP Synthetic Beta (SSB) is a Census Bureau product that integrates person-level micro-data from a household survey with administrative tax and benefit data. These data link respondents from the Survey of Income and Program Participation (SIPP) to Social...
• Contributors can be tracked for each of their changes
<table>
<thead>
<tr>
<th>Variable Name</th>
<th>Date Changed</th>
<th>Commit Message</th>
<th>User</th>
<th>Origin</th>
</tr>
</thead>
<tbody>
<tr>
<td>phus_ssdii_benefit_totamt_k</td>
<td>November 18, 2015</td>
<td>View commit</td>
<td><a href="mailto:lori.reeder@gmail.com">lori.reeder@gmail.com</a></td>
<td>Remote</td>
</tr>
<tr>
<td></td>
<td>AM</td>
<td></td>
<td></td>
<td>Change</td>
</tr>
<tr>
<td>phus_ssdii_benefit_totamt_k</td>
<td>November 18, 2015</td>
<td>View commit</td>
<td><a href="mailto:lori.reeder@gmail.com">lori.reeder@gmail.com</a></td>
<td>Remote</td>
</tr>
<tr>
<td></td>
<td>AM</td>
<td></td>
<td></td>
<td>Change</td>
</tr>
<tr>
<td>pos_phus_ssdii_benefit_totamt_k</td>
<td>November 18, 2015</td>
<td>View commit</td>
<td><a href="mailto:lori.reeder@gmail.com">lori.reeder@gmail.com</a></td>
<td>Remote</td>
</tr>
<tr>
<td></td>
<td>AM</td>
<td></td>
<td></td>
<td>Change</td>
</tr>
<tr>
<td>phus_ssdii_benefit_totamt_k</td>
<td>November 18, 2015</td>
<td>View commit</td>
<td><a href="mailto:lori.reeder@gmail.com">lori.reeder@gmail.com</a></td>
<td>Remote</td>
</tr>
<tr>
<td></td>
<td>AM</td>
<td></td>
<td></td>
<td>Change</td>
</tr>
<tr>
<td>pos_phus_ssdii_benefit_totamt_k</td>
<td>November 18, 2015</td>
<td>View commit</td>
<td><a href="mailto:lori.reeder@gmail.com">lori.reeder@gmail.com</a></td>
<td>Remote</td>
</tr>
<tr>
<td></td>
<td>AM</td>
<td></td>
<td></td>
<td>Change</td>
</tr>
<tr>
<td>pos_phus_retire_benefit_totamt</td>
<td>November 18, 2015</td>
<td>View commit</td>
<td><a href="mailto:lori.reeder@gmail.com">lori.reeder@gmail.com</a></td>
<td>Remote</td>
</tr>
<tr>
<td></td>
<td>AM</td>
<td></td>
<td></td>
<td>Change</td>
</tr>
<tr>
<td>phus_ssdii_benefit_totamt_k</td>
<td>November 18, 2015</td>
<td>View commit</td>
<td><a href="mailto:lori.reeder@gmail.com">lori.reeder@gmail.com</a></td>
<td>Remote</td>
</tr>
<tr>
<td></td>
<td>AM</td>
<td></td>
<td></td>
<td>Change</td>
</tr>
<tr>
<td>pos_phus_ssdii_benefit_totamt_k</td>
<td>November 18, 2015</td>
<td>View commit</td>
<td><a href="mailto:lori.reeder@gmail.com">lori.reeder@gmail.com</a></td>
<td>Remote</td>
</tr>
<tr>
<td></td>
<td>AM</td>
<td></td>
<td></td>
<td>Change</td>
</tr>
</tbody>
</table>
Combining Knowledge: Citations
Lars Vilhuber
ORCID ID
orcid.org/0000-0001-5733-8932
CED²AR: The Comprehensive Extensible Data Documentation and Access Repository
IEEE/ACM Joint Conference on Digital Libraries
2014-09 | conference-paper
DOI: 10.1109/jcdl.2014.6970178
Source: CrossRef Metadata Search
Next steps
Making CED²AR v2 robust
• Addition of UTF-8 editing support (Spanish, French, Portuguese, etc.)
• Additional fields (link to survey questions, anything within DDI-C)
• Bug fixes
Making CED²AR scalable in V3
• Current implementation of CED²AR is packaged for a single server (=portable)
– Already a scalable archive backend (git)
– Could be fronted by a load balancer
– But live server is not scalable
• Current implementation of CED²AR is tied to a limited schema – adding schema is hard
Thank you!
Questions?
ced2ar-devs-I@cornell.edu
Compiling Polymorphism Using Intensional Type Analysis
Citation
Published Version
http://doi.acm.org/10.1145/199448.199475
Permanent link
http://nrs.harvard.edu/urn-3:HUL.InstRepos:2794950
Terms of Use
This article was downloaded from Harvard University’s DASH repository, and is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Compiling Polymorphism Using Intensional Type Analysis*
Robert Harper† Greg Morrisett‡
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3891
Abstract
Traditional techniques for implementing polymorphism use a universal representation for objects of unknown type. Often, this forces a compiler to use universal representations even if the types of objects are known. We examine an alternative approach for compiling polymorphism where types are passed as arguments to polymorphic routines in order to determine the representation of an object. This approach allows monomorphic code to use natural, efficient representations, supports separate compilation of polymorphic definitions and, unlike coercion-based implementations of polymorphism, natural representations can be used for mutable objects such as refs and arrays.
We are particularly interested in the typing properties of an intermediate language that allows run-time type analysis to be coded within the language. This allows us to compile many representation transformations and many language features without adding new primitive operations to the language. In this paper, we provide a core target language where type-analysis operators can be coded within the language and the types of such operators can be accurately tracked. The target language is powerful enough to code a variety of useful features, yet type checking remains decidable. We show how to translate an ML-like language into the target language so that primitive operators can analyze types to produce efficient representations. We demonstrate the power of the “user-level” operators by coding flattened tuples, marshalling, type classes, and a form of type dynamic within the language.
1 Introduction
Many compilers assume a universal or “boxed” representation of a single machine word if the type of a value is unknown. This allows the compiler to generate one simple piece of code to manipulate the value. But boxed representations often require more space and provide less efficient access than natural representations. For example, an array of small unknown objects, such as booleans or characters, is represented as an array of words, wasting the majority of the space. An object larger than a word, such as a double-precision floating-point value, is allocated and a pointer is used in place of the value. Consequently, accessing the value requires an additional memory access. As word sizes increase from 32 to 64-bits, and memory latencies increase, it becomes increasingly important to minimize boxing.
In modern programming languages such as Modula-3, Standard ML (SML), and Haskell, unknown types and thus boxed representations arise because of two key language features: types imported from a separately compiled program unit and types within polymorphic routines. Polymorphic values are particularly troublesome because we can simultaneously view them as having any one of an infinite number of monomorphic types. For example, a polymorphic routine that maps a function across the elements of an array can be simultaneously seen as a function that works on boolean arrays and a function that works on real arrays. The routine can thus be used in place of a function that was compiled knowing whether the argument array contains booleans or reals. Consequently, monomorphic routines are forced to use the same representations as polymorphic routines and the entire program pays the price of the increased space and execution-time overheads of the universal representations.
1.1 Coercion Implementations
The problem with polymorphism stems from the assumption that viewing a polymorphic value as a monomorphic value should have no computational effect. Recent work by Leroy [30] and others [41, 24, 43] has suggested that the instantiation of a polymorphic value should correspond to a run-time coercion from the universal representation to the appropriate specialized representation. At function types, this requires the dual coercion (for the function argument) that converts specialized representations to the universal representation. For example, when the identity function of type \(\forall a. a \to a\) is instantiated to have type \(\text{int} \to \text{int}\), a coercion is generated that takes an integer argument, boxes it, passes it to the identity function, and unboxes the result. This approach allows monomorphic code to use the natural, efficient representations.
Leroy’s coercions produce an isomorphic copy of a data structure. For example, to coerce a tuple, we project the
components of the tuple, box/unbox them, and then form a new tuple. Unfortunately, copying coercions are impractical for large data structures since the cost of making the copy often outweighs the benefits of the unboxed representation (as pointed out by Leroy [30, page 184]). More problematically, copying coercions do not work for mutable data structures such as arrays. If we make a copy of the value to box the components then updates to the copy will not be reflected in the original array and vice versa.
1.2 Type Passing
An alternative approach to coercions, first suggested by the Napier '88 implementation [37], is to pass the types that are unknown at compile-time to primitive operations at link-time or even run-time. Then the primitive operations can analyze the type in order to select the appropriate code to manipulate the natural representation of an object. For example, a polymorphic subscript function for arrays might be compiled into the following pseudo-code:
\[
\text{sub} = \lambda \alpha. \text{typecase } \alpha \text{ of } \\
\text{ bool } \Rightarrow \text{boolsub} \\
\text{ real } \Rightarrow \text{realsub} \\
\tau \Rightarrow \text{boxedsub}[\tau]
\]
Here, \(\text{sub}\) is a function that takes a type argument \(\alpha\) and does a case analysis to determine the appropriate specialized subscript function that should be returned. For example, \(\text{sub}[\text{bool}]\) returns the boolean subscript function that expects an array of bits, while \(\text{sub}[\text{real}]\) returns the floating point subscript function that expects a double-word aligned array of floating point values. For all other types, we assume the array has boxed components and thus return the boxed subscript function.
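To make the dispatch concrete, here is a small Python sketch of the type-passing idea (our illustration, not the paper's code): monotypes are modeled as tagged tuples, and `sub` analyzes its type argument to select a specialized subscript routine. In Python the three routines coincide because lists erase representation distinctions; the payoff appears only in a lower-level setting where boolean arrays are bit-packed and real arrays are unboxed.

```python
# Hypothetical sketch of type-passing dispatch for a polymorphic array
# subscript.  Monotypes are tagged tuples such as ('bool',) or ('real',);
# any other type is treated as having boxed components.

def bool_sub(arr, i):      # specialized: would index a packed bit array
    return arr[i]

def real_sub(arr, i):      # specialized: would index an unboxed float array
    return arr[i]

def boxed_sub(arr, i):     # generic: array of boxed (pointer) components
    return arr[i]

def sub(ty):
    """sub[ty] selects the subscript routine by analyzing the type
    argument, mirroring the typecase in the text."""
    if ty == ('bool',):
        return bool_sub
    elif ty == ('real',):
        return real_sub
    else:
        return boxed_sub

# sub(('real',)) is the specialized floating-point subscript:
xs = [3.14, 2.71]
assert sub(('real',))(xs, 0) == 3.14
```

When the type argument is known statically, `sub(('real',))` can be resolved ahead of time, which is exactly the duplication-and-specialization optimization discussed below.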
If the sub operation is instantiated with a type that is known at compile-time (or link-time), then the overhead of the case analysis can be eliminated by duplicating and specializing the definition of \text{sub} at the appropriate type. For example, the source expression \(\text{sub}(x, 4) + 3.14\) will be compiled to the target expression \(\text{sub}[\text{real}](x, 4) + 3.14\) since the result of the sub operation is constrained to be a real. If the definition of \text{sub} is inlined into the target expression and some simple reductions are performed, this yields the optimized expression \(\text{realsub}(x, 4) + 3.14\). Thus, parameterizing the primitive operations by type provides a single, consistent methodology for type analysis at compile-time, link-time, and run-time.
In languages where polymorphic definitions are restricted to "computational values" (essentially constants and functions), polymorphic definitions can always be duplicated and specialized or even inlined. Lazy languages such as Haskell satisfy this constraint, and Wright has determined empirically that such a restriction does not affect the vast majority of SML programs [52]. Languages like core-SML and Haskell only allow polymorphic values to arise as the result of a "let" binding and restrict the type of such values to be prenex-quantified. That is, the type must be of the form \(\forall \alpha_1, \ldots, \alpha_n. \tau\) where \(\tau\) contains no quantifier. Thus, the only thing that can be done to a polymorphic value is to instantiate it. Since the scope of a let is closed, it is possible to determine all of the instantiations of the polymorphic value at compile time and eliminate all polymorphism through duplication and specialization. Such an approach is used, for instance, by Blelloch et al. in their NESL compiler [5] and more recently by Jones to eliminate Haskell overloading [27]. Furthermore, Jones reports that this approach does not lead to excessive code-bloat.
Unfortunately, eliminating all of the polymorphism in a program is not always possible or practical. In particular, there is no way to eliminate the polymorphism when separately compiling a definition from its uses because it is impossible to determine the types at which the definition will potentially be used. This prevents us from separately compiling polymorphic libraries or polymorphic definitions entered at a top-level loop. Furthermore, in languages that allow polymorphic values to be "first-class" such as XML [21] and Quest [9], it is impossible to eliminate all polymorphism at compile-time. Therefore, we view duplication and specialization as an important optimization, but consider some run-time type analysis to still be necessary for practical language implementation.
1.3 Type-Checking Type Analysis
In this paper, we show how to compile ML-like polymorphic languages to a target language where run-time type analysis may be used by the primitive operations to determine the representation of a data structure. We are particularly interested in the typing properties of a language that allows run-time type analysis. The sub definition above is ill-typed in ML because it must simultaneously have the types \(\text{boolarray} \times \text{int} \rightarrow \text{bool}\), \(\text{realarray} \times \text{int} \rightarrow \text{real}\), as well as \(\forall \alpha.\, (\alpha)\text{boxedarray} \times \text{int} \rightarrow \alpha\). Since boolarray and realarray are nullary constructors and not instantiations of \((\alpha)\text{boxedarray}\), it is clear that there is no ML type that unifies all of these types.
Our approach to this problem is to consider a type system that provides analysis of types via a type-level "Typecase" construct. For example, the sub definition above can be assigned a type of the form:
\[
\forall \alpha. \text{SpclArray}[\alpha] \times\text{int} \rightarrow \alpha
\]
where the specialized array constructor \(\text{SpclArray}\) is defined using \(\text{Typecase}\) as follows:
\[
\text{SpclArray}[\alpha] = \text{Typecase } \alpha \text{ of } \\
\text{ bool } \Rightarrow \text{boolarray} \\
\text{ real } \Rightarrow \text{realarray} \\
\tau \Rightarrow (\tau)\text{boxedarray}
\]
The definition of the constructor parallels the definition of the term: If the parameter \(\alpha\) is instantiated to bool, then the resulting type is boolarray and if the parameter is instantiated to real, the resulting type is realarray.
In its full generality, our target language allows types to be analyzed not just by case analysis, but also via primitive recursion. This allows more sophisticated transformations to be coded within the language, yet type checking for the target language remains decidable. An example of a more sophisticated translation made possible by primitive recursion is one where arrays of tuples are represented as tuples of arrays. For example, an array of bool \(\times\) real is represented as a pair of a boolarray and a realarray. This representation allows the boolean components of the array to be packed and allows the real components to be naturally aligned. The subscript operation for this representation is defined using a recursive typecase construct called \(\text{typerec}\).
The type of this sub operation is:
\[
\forall \alpha. \text{RecArray}[\alpha] \times \text{int} \rightarrow \alpha
\]
where the recursive, specialized array constructor \(\text{RecArray}\) is defined using a type-level "Typerec":
\[
\begin{align*}
\text{RecArray}[\text{bool}] & = \text{boolarray} \\
\text{RecArray}[\text{real}] & = \text{realarray} \\
\text{RecArray}[\tau_1 \times \tau_2] & = \text{RecArray}[\tau_1] \times \text{RecArray}[\tau_2] \\
\text{RecArray}[\tau] & = (\tau)\text{boxedarray}
\end{align*}
\]
If \(\text{sub}\) is applied at a product type, \(\tau_1 \times \tau_2\), then it returns a function that takes a pair of arrays \((x, y)\) and an index \(i\) and returns the pair of values from both arrays at that index, recursively calling the \(\text{sub}\) operation at the types \(\tau_1\) and \(\tau_2\).
Again, the definition of the constructor parallels the definition of the sub operation. If the parameter is instantiated to bool, then the resulting type is boolarray. If the parameter is instantiated with \(\tau_1 \times \tau_2\), then the resulting type is the product of \(\text{RecArray}[\tau_1]\) and \(\text{RecArray}[\tau_2]\).
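A hypothetical Python model of this representation (the tags and helper names are our own): an "array" whose element type is a product is stored as a pair of arrays, and subscripting recurses on the type structure, mirroring the recursion equations for RecArray.

```python
# Sketch of the RecArray idea: an array whose element type is a product
# tau1 x tau2 is represented as a pair of arrays, recursively.
# Monotypes: ('bool',), ('real',), ('times', t1, t2); others are boxed.

def rec_sub(ty):
    if ty[0] == 'times':
        _, t1, t2 = ty
        s1, s2 = rec_sub(t1), rec_sub(t2)
        # the "array" is a pair (x, y); return the pair of components at i
        return lambda arr, i: (s1(arr[0], i), s2(arr[1], i))
    else:
        # base case: an ordinary (possibly specialized) array
        return lambda arr, i: arr[i]

# An "array" of bool x real, stored as a pair of arrays:
a = ([True, False], [1.5, 2.5])
assert rec_sub(('times', ('bool',), ('real',)))(a, 1) == (False, 2.5)
```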
Run-time type analysis can be used to provide other useful language mechanisms besides efficient representations. In particular, ad hoc polymorphic operators, such as the equality operator of SML, or an overloaded operator exported from a Haskell type class, can be directly implemented in our target language without the need to tag values. Furthermore, the static constraints of SML's equality types and Haskell's type classes may be coded using our Typerec construct. Our target language is also able to express "marshalling" of data structures and a form of type dynamic.
In Section 2 we describe the type-analysis approach to compilation as a type-based translation from a source language, Mini-ML, to our target language, \(\lambda^{ML}_i\). The key properties of \(\lambda^{ML}_i\) are stated, and a few illustrative examples involving \(\text{Typerec}\) and \(\text{typerec}\) are given. In Section 3 we show how many interesting and useful language constructs can be coded using \(\text{typerec}\), including flattened representations, marshalling, type classes, and type dynamic. In Section 4 we discuss related work, and in Section 5 we summarize and suggest directions for future research.
## 2 Type-Directed Compilation
In order to take full advantage of type information during compilation, we consider translations of typing derivations from the implicitly-typed ML core language to an explicitly-typed target language, following the interpretation of polymorphism suggested by Harper and Mitchell [20]. The source language is based on Mini-ML [11], which captures many of the essential features of the ML core language. The target language, \(\lambda^{ML}_i\), is an extension of \(\lambda^{ML}\), also known as XML [21], a predicative variant of Girard's \(F_\omega\) [16, 17, 42], enriched with primitives for intensional type analysis.
### 2.1 Source Language: Mini-ML
The source language for our translations is a variant of Mini-ML [11]. The syntax of Mini-ML is defined by the following grammar:
\[
\begin{align*}
(\text{monotypes}) & \quad \tau ::= t \mid \text{int} \mid \tau_1 \rightarrow \tau_2 \mid \tau_1 \times \tau_2 \\
(\text{polytypes}) & \quad \sigma ::= \tau \mid \forall t.\, \sigma \\
(\text{terms}) & \quad e ::= x \mid \overline{n} \mid \langle e_1, e_2 \rangle \mid \pi_1\, e \mid \pi_2\, e \mid \lambda x.\, e \mid e_1\, e_2 \mid \text{let } x = e_1 \text{ in } e_2 \\
(\text{values}) & \quad v ::= x \mid \overline{n} \mid \langle v_1, v_2 \rangle \mid \lambda x.\, e
\end{align*}
\]
Monotypes \(\tau\) are either type variables \(t\), \(\text{int}\), arrow types, or binary product types. Polytypes \(\sigma\) (also known as type schemes) are either monotypes or prenex quantified types. We write \(\forall t_1, \ldots, t_n.\, \sigma\) as shorthand for the polytype \(\forall t_1.\cdots\forall t_n.\, \sigma\).
The terms of Mini-ML \(e\) consist of identifiers, numerals \(\overline{n}\), pairs, first and second projections, abstractions, applications, and let expressions. Values \(v\) are a subset of the terms and include identifiers, numerals, pairs of values, and abstractions.
The static semantics for Mini-ML is given in Figure 1 as a series of inference rules. The rules allow us to derive a judgement of the form \(\Delta; \Gamma \vdash e : \tau\) where \(\Delta\) is a set of free type variables and \(\Gamma\) is a type assignment mapping identifiers to polytypes. We write \([\tau/t]\sigma\) to denote the substitution of the type \(\tau\) for the type variable \(t\) in the type expression \(\sigma\).
We use \(\Delta \uplus \Delta'\) to denote the union of two disjoint sets of type variables, \(\Delta\) and \(\Delta'\), and \(\Gamma[x{:}\sigma]\) to denote the type assignment that extends \(\Gamma\) so that \(x\) is assigned the polytype \(\sigma\), assuming \(x\) does not occur in the domain of \(\Gamma\).
Let-bound expressions are restricted to values so that our translation, which makes type abstraction explicit, is correct (see below).
### 2.2 Target Language: \(\lambda^{ML}_i\)
The target language of our translations, \(\lambda^{ML}_i\), is based on \(\lambda^{ML}\) [20], a predicative variant of Girard's \(F_\omega\) [16, 17, 42]. The essential departure from the impredicative systems of Girard and Reynolds is that the quantifier \(\forall\) ranges only over "small" types, or "monotypes", which do not include the quantified types. This calculus is sufficient for the interpretation of ML-style polymorphism (see Harper and Mitchell [20] for further discussion of this point). The language \(\lambda^{ML}_i\) extends \(\lambda^{ML}\) with intensional (or structural [19]) polymorphism, which allows non-parametric functions to be defined by intensional analysis of types.
The four syntactic classes of \(\lambda^{ML}_i\), kinds \((\kappa)\), constructors \((\mu)\), types \((\sigma)\), and terms \((e)\), are given below:
\[
\begin{align*}
(\text{kinds}) & \quad \kappa ::= \Omega \mid \kappa_1 \rightarrow \kappa_2 \\
(\text{con's}) & \quad \mu ::= t \mid \text{int} \mid {\rightarrow}(\mu_1, \mu_2) \mid {\times}(\mu_1, \mu_2) \mid \lambda t{:}\kappa.\, \mu \mid \mu_1\, \mu_2 \mid \text{Typerec}\ \mu\ (\mu_{\text{int}}; \mu_{\rightarrow}; \mu_{\times}) \\
(\text{types}) & \quad \sigma ::= T(\mu) \mid \text{int} \mid \sigma_1 \rightarrow \sigma_2 \mid \sigma_1 \times \sigma_2 \mid \forall t{:}\kappa.\, \sigma \\
(\text{terms}) & \quad e ::= x \mid \overline{n} \mid \lambda x{:}\sigma.\, e \mid e_1\, e_2 \mid \langle e_1, e_2 \rangle \mid \pi_1\, e \mid \pi_2\, e \mid \Lambda t{:}\kappa.\, e \mid e[\mu] \mid \text{typerec}\ \mu\ \{t.\sigma\}\ (e_{\text{int}}; e_{\rightarrow}; e_{\times})
\end{align*}
\]
Kinds classify constructors, and types classify terms. Constructors of kind \(\Omega\) name "small types" or "monotypes". The monotypes are generated from \(\text{int}\) and the type variables by closure under the product and function space constructors; constructors of functional kind \(\kappa_1 \rightarrow \kappa_2\) map constructors of kind \(\kappa_1\) to constructors of kind \(\kappa_2\). Types in \(\lambda^{ML}_i\) include the monotypes, and are closed under products, function spaces, and polymorphic quantification. We distinguish constructors from types, writing \(T(\mu)\) for the type corresponding to the constructor \(\mu\). The terms are an explicitly typed \(\lambda\)-calculus with explicit constructor abstraction and application forms.
The official syntax of terms shows that the primitive operations of the language are provided with type information that may be used at run-time. For example, the pairing operation is \(\langle e_1, e_2 \rangle^{\sigma_1, \sigma_2}\), where \(e_i : \sigma_i\), reflecting the fact that there is (potentially) a pairing operation at each pair of types. In a typical implementation, the pairing operation is implemented by computing the size of the components from the types, allocating a suitable chunk of memory, and copying the parameters into that space. However, there is no need to tag the resulting value with type information because the projection operations \(\pi_i^{\sigma_1, \sigma_2}(e)\) are correspondingly indexed by the types of the components so that the appropriate chunk of memory can be extracted from the tuple. Similarly, the application primitive is indexed by the domain type of the function,\(^1\) which is used to determine the calling sequence for the function. Of course, primitive operations can ignore the type if a universal representation is used. Consequently, the implementor can decide whether to use a natural or universal representation. We use a simplified term syntax without the types when the information is apparent from the context. However, it is important to bear in mind that the type information is present in the fully explicit form of the calculus.
The \(\text{Typerec}\) and \(\text{typerec}\) forms provide the ability to define constructors and terms by structural induction on monotypes. These forms may be thought of as eliminatory forms for the kind \(\Omega\) at the constructor and term level. (The introductory forms are the constructors of kind \(\Omega\); there are no introductory forms at the term level in order to preserve the phase distinction [8, 21].) At the term level \(\text{typerec}\) may be thought of as a generalization of \(\text{typecase}\) that provides for the definition of a term by induction on the structure of a monotype. At the constructor level \(\text{Typerec}\) provides a similar ability to define a constructor by induction on the structure of a monotype.
[Figure 1: Static semantics of Mini-ML (typing rules for variables, integers, pairs, projections, abstractions, applications, and let).]
The whole constructor has kind \(\kappa\) if the constructor to be analyzed, \(\mu\), is of kind \(\Omega\) (i.e., a monotype), \(\mu_{\text{int}}\) is of kind \(\kappa\), and \(\mu_{\rightarrow}\) and \(\mu_{\times}\) are each of kind \(\Omega \rightarrow \Omega \rightarrow \kappa \rightarrow \kappa \rightarrow \kappa\). The constructor equivalence rules (see Figure 2) axiomatize definitional equality [47, 31] of constructors to consist of \(\beta\)-conversion together with recursion equations governing the \(\text{Typerec}\) form. Conceptually, \(\text{Typerec}\) selects \(\mu_{\text{int}}\), \(\mu_{\rightarrow}\), or \(\mu_{\times}\) according to the head constructor of the normal form of \(\mu\) and passes it the components of \(\mu\) and the "unrolling" of the \(\text{Typerec}\) on the components. The level of constructors and kinds is a variation of Gödel's T [18]. Every constructor, \(\mu\), has a unique normal form, \(NF(\mu)\), with respect to the obvious notion of reduction derived from the equivalence rules of Figure 2 [47]. This reduction relation is confluent, from which it follows that constructor equivalence is decidable [47].
The type formation, type equivalence, and term formation rules for \(\lambda^{ML}_i\) are omitted due to lack of space, but can be found in a previous report [22]. The rules of type equivalence define the interpretation \(T(\mu)\) of the constructor \(\mu\) as a type. For example, \(T(\text{int}) \equiv \text{int}\) and \(T({\rightarrow}(\mu_1, \mu_2)) \equiv T(\mu_1) \rightarrow T(\mu_2)\). Thus, \(T\) takes us from a constructor which names a type to the actual type. The term formation rules are standard with the exception of the typerec form, which
---
1 In general, application could also depend upon the range type, but our presentation is simplified greatly by restricting the dependency to the domain type.
is governed by the following rule:
\[
\frac{\begin{array}{c}
\Delta \vdash \mu : \Omega \qquad \Delta \uplus \{t{:}\Omega\} \vdash \sigma\ \mathrm{type} \qquad \Delta; \Gamma \vdash e_{\text{int}} : [\text{int}/t]\sigma \\
\Delta; \Gamma \vdash e_{\rightarrow} : \forall t_1{:}\Omega.\, \forall t_2{:}\Omega.\, [t_1/t]\sigma \rightarrow [t_2/t]\sigma \rightarrow [{\rightarrow}(t_1, t_2)/t]\sigma \\
\Delta; \Gamma \vdash e_{\times} : \forall t_1{:}\Omega.\, \forall t_2{:}\Omega.\, [t_1/t]\sigma \rightarrow [t_2/t]\sigma \rightarrow [{\times}(t_1, t_2)/t]\sigma
\end{array}}{
\Delta; \Gamma \vdash \text{typerec}\ \mu\ \{t.\sigma\}\ (e_{\text{int}}; e_{\rightarrow}; e_{\times}) : [\mu/t]\sigma}
\]
The argument constructor \(\mu\) must be of kind \(\Omega\), and the result type of the \(\text{typerec}\) expression is determined as a function of the argument constructor, namely the substitution of \(\mu\) for \(t\) in the type expression \(\sigma\). The \(\{t.\sigma\}\) label provides the type information needed to check the construct without inference. Typically, the constructor variable \(t\) occurs in \(\sigma\) as the argument of a \(\text{Typerec}\) expression so that \([\mu/t]\sigma\) is determined by a recursive analysis of \(\mu\). Similar to normalization of a \(\text{Typerec}\) constructor, the evaluation of a \(\text{typerec}\) expression selects \(e_{\text{int}}\), \(e_{\rightarrow}\), or \(e_{\times}\) according to the head constructor of the normal form of \(\mu\) and passes it the components of \(\mu\) and the "unrolling" of the \(\text{typerec}\) on the components.
Type checking for \(\lambda^{ML}_i\) reduces to equivalence checking for types and constructors. In view of the decidability of constructor equivalence, we have the following important result:
**Proposition 2.1** It is decidable whether or not \( \Delta; \Gamma \vdash e : \sigma \) is derivable in \( \lambda^{ML}_i \).
To fix the interpretation of \(\text{typerec}\), we specify a call-by-value, natural semantics for \( \lambda^{ML}_i \) as a relation of the form \( e \rightarrow v \) where \( v \) is a closed (with respect to both type and value variables), syntactic value. Values are derived from the following grammar:
\[
v ::= \overline{n} \mid \lambda x{:}\sigma.\, e \mid \langle v_1, v_2 \rangle^{\sigma_1, \sigma_2} \mid \Lambda t{:}\kappa.\, e
\]
Type abstractions are values, reflecting the fact that evaluation does not proceed under \( \Lambda \).
Figure 3 defines the evaluation relation with a series of axioms and inference rules. The semantics uses an auxiliary judgment, \( \mu \rightarrow \mu' \), (not formally defined here) that determines the normal forms of constructors. During evaluation, we only need to determine normal forms of closed constructors of kind \(\Omega\). This amounts to evaluating constructors of the form \(\text{Typerec}\ \mu\ (\ldots)\) and \(\mu_1\, \mu_2\) by orienting the equivalences of Figure 2 from left to right and adding the appropriate congruences.
The rest of the semantics is standard except for the evaluation of a \(\text{typerec}\) expression, which proceeds as follows: first, the normal form of the constructor argument is determined. Once the normal form is determined, the appropriate subexpression is selected and applied to the component constructors. The resulting function is in turn applied to the "unrolling" of the \(\text{typerec}\) at each of the component constructors. Some simple examples using \(\text{typerec}\) may be found at the end of this subsection.
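The evaluation procedure just described can be modeled by a short Python function (a sketch under our own encoding of monotypes as tagged tuples; it is not the paper's formal semantics): the head constructor of the normal-form type selects a branch, which receives the component types together with the unrollings of the typerec on the components.

```python
def typerec(ty, e_int, e_arrow, e_times):
    """Evaluate a typerec over a closed monotype in normal form.
    Monotypes: ('int',), ('arrow', t1, t2), ('times', t1, t2)."""
    head = ty[0]
    if head == 'int':
        return e_int
    recur = lambda t: typerec(t, e_int, e_arrow, e_times)
    if head == 'arrow':
        _, t1, t2 = ty
        # branch gets the components and the unrollings on them
        return e_arrow(t1, t2, recur(t1), recur(t2))
    if head == 'times':
        _, t1, t2 = ty
        return e_times(t1, t2, recur(t1), recur(t2))
    raise ValueError('not a closed monotype: %r' % (head,))

# Example instance: render a type as a string by structural recursion.
show = lambda ty: typerec(
    ty, 'int',
    lambda t1, t2, r1, r2: '(%s -> %s)' % (r1, r2),
    lambda t1, t2, r1, r2: '(%s * %s)' % (r1, r2))
```

For instance, `show(('arrow', ('int',), ('times', ('int',), ('int',))))` yields `'(int -> (int * int))'`.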
The semantics uses meta-level substitution of closed values for variables and closed constructors for type variables. In a lower-level semantics where substitution is made explicit, an environment would be needed not only for value variables, but also for type variables. Tolmach [51] describes many of the issues involved in implementing such a language.
**Proposition 2.2** (Type Preservation) If \( \emptyset ; \emptyset \vdash e : \sigma \) and \( e \rightarrow v \), then \( \emptyset ; \emptyset \vdash v : \sigma \).
By inspection of the value typing rules, only appropriate values occupy appropriate types and thus evaluation will not "go wrong". In particular, it is possible to show that when evaluating well-typed programs, we only use the \text{proj} evaluation rule when \( \sigma_1 \equiv \sigma_1' \) and \( \sigma_2 \equiv \sigma_2' \), and we only use the \text{app} rule when \( \sigma' \equiv \sigma \). Furthermore, programs written in pure \( \lambda^{ML}_i \) (i.e., without general recursion operators or recursive types) always terminate.
**Proposition 2.3** (Termination) If \( e \) is an expression such that \( \emptyset ; \emptyset \vdash e : \sigma \), then there exists a value \( v \) such that \( e \rightarrow v \).
A few simple examples will help to clarify the use of \(\text{typerec}\). The function \(\text{sizeof}\) of type \(\forall t{:}\Omega.\, \text{int}\) that computes the "size" of values of a type can be defined as follows:
\[
\text{sizeof} = \Lambda t{:}\Omega.\ \text{typerec}\ t\ \{t'.\text{int}\}\ (e_{\text{int}}; e_{\rightarrow}; e_{\times})
\]
where
\[
\begin{align*}
e_{\text{int}} & = 1 \\
e_{\rightarrow} & = \Lambda t_1{:}\Omega.\, \Lambda t_2{:}\Omega.\, \lambda x_1{:}\text{int}.\, \lambda x_2{:}\text{int}.\, 1 \\
e_{\times} & = \Lambda t_1{:}\Omega.\, \Lambda t_2{:}\Omega.\, \lambda x_1{:}\text{int}.\, \lambda x_2{:}\text{int}.\, x_1 + x_2
\end{align*}
\]
(Here we assume that arrow types are boxed and thus have size one.) It is easy to check that \( \text{sizeof} \) has the type \( \forall t : \Omega . \text{int} \). Note that in a parametric setting this type contains only constant functions.
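In Python, the same sizeof can be written by direct recursion on a tagged-tuple encoding of monotypes (our sketch, not the paper's notation; it also assumes, as above, that arrow types are boxed and occupy one word):

```python
def sizeof(ty):
    # ty is ('int',), ('arrow', t1, t2), or ('times', t1, t2)
    head = ty[0]
    if head == 'int':
        return 1                                  # an int occupies one word
    if head == 'arrow':
        return 1                                  # functions are boxed: one word
    if head == 'times':
        return sizeof(ty[1]) + sizeof(ty[2])      # pair size is the sum
    raise ValueError('not a closed monotype: %r' % (head,))

assert sizeof(('times', ('int',), ('arrow', ('int',), ('int',)))) == 2
```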
As another example, Girard's formulation of System F [16] includes a distinguished constant \(0_\tau\) of type \(\tau\) for each type \(\tau\) (including variable types). We may define an analogue of these constants using \(\text{typerec}\) as follows:
\[
\text{zero} = \Lambda t{:}\Omega.\ \text{typerec}\ t\ \{t'.T(t')\}\ (e_{\text{int}}; e_{\rightarrow}; e_{\times})
\]
where
\[
\begin{align*}
e_{\text{int}} & = 0 \\
e_{\rightarrow} & = \Lambda t_1{:}\Omega.\, \Lambda t_2{:}\Omega.\, \lambda z_1{:}T(t_1).\, \lambda z_2{:}T(t_2).\, \lambda x{:}T(t_1).\, z_2 \\
e_{\times} & = \Lambda t_1{:}\Omega.\, \Lambda t_2{:}\Omega.\, \lambda z_1{:}T(t_1).\, \lambda z_2{:}T(t_2).\, \langle z_1, z_2 \rangle
\end{align*}
\]
It is easy to check that zero has type \(\forall t: \Omega. T(t)\), the “empty” type in System F and related systems. The presence of \(\text{typerec}\) violates parametricity to achieve a more flexible programming language.
To simplify the presentation we usually define terms such as \(\text{zero}\) and \(\text{sizeof}\) using recursion equations, rather than as a \(\text{typerec}\) expression. The definitions of \(\text{zero}\) and \(\text{sizeof}\) are given in this form as follows:
\[
\begin{align*}
\text{sizeof}[\text{int}] & = 1 \\
\text{sizeof}[{\times}(\mu_1, \mu_2)] & = \text{sizeof}[\mu_1] + \text{sizeof}[\mu_2] \\
\text{sizeof}[{\rightarrow}(\mu_1, \mu_2)] & = 1 \\
\text{zero}[\text{int}] & = 0 \\
\text{zero}[{\times}(\mu_1, \mu_2)] & = \langle \text{zero}[\mu_1], \text{zero}[\mu_2] \rangle \\
\text{zero}[{\rightarrow}(\mu_1, \mu_2)] & = \lambda x{:}T(\mu_1).\, \text{zero}[\mu_2]
\end{align*}
\]
Whenever a definition is presented in this form we tacitly assert that it can be formalized using \(\text{typerec}\).
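Read as Python over the same tagged-tuple encoding of monotypes (again our sketch, not the paper's notation), the recursion equations for zero become:

```python
def zero(ty):
    # ty is ('int',), ('arrow', t1, t2), or ('times', t1, t2)
    head = ty[0]
    if head == 'int':
        return 0
    if head == 'times':
        return (zero(ty[1]), zero(ty[2]))      # pair of zeros
    if head == 'arrow':
        _, t1, t2 = ty
        return lambda x: zero(t2)              # constant function into zero of the range
    raise ValueError('not a closed monotype: %r' % (head,))

assert zero(('times', ('int',), ('int',))) == (0, 0)
```

For an arrow type, `zero(('arrow', ('int',), ('int',)))` is a function that ignores its argument and returns 0, matching the last recursion equation.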
### 2.3 Translating Mini-ML into \(\lambda^{ML}_i\)
A compiler from Mini-ML to \(\lambda^\text{ML}\) is specified by a relation \(\Delta; \Gamma \triangleright e_s : \tau \Rightarrow e_t\) that carries the meaning that \(\Delta; \Gamma \vdash e_s : \tau\) is a derivable typing in Mini-ML and that the translation of the source term \(e_s\) determined by that typing derivation is the \(\lambda^\text{ML}\) expression \(e_t\). Since the translation depends upon the typing derivation, it is possible to have many different translations of a given expression. However, all of the translation schemes we consider are coherent in the sense that any two typing derivations produce observationally equivalent translations [7, 20].
Here, we give a straightforward compiler whose purpose is to make types explicit so that the primitive operations such as pairing and projection can potentially analyze their types at run-time. This simple translation does not utilize \(\text{typerec}\) or \(\text{Typerec}\), but subsequent translations take advantage of these constructs.
We begin by defining a translation from Mini-ML types to \(\lambda^\text{ML}\) constructors, written \(|\tau|\):
\[
\begin{align*}
|t| & = t \\
|\text{int}| & = \text{int} \\
|\tau_1 \rightarrow \tau_2| & = {\rightarrow}(|\tau_1|, |\tau_2|) \\
|\tau_1 \times \tau_2| & = {\times}(|\tau_1|, |\tau_2|)
\end{align*}
\]
The translation is extended to map Mini-ML type schemes to \(\lambda^\text{ML}\) types as follows:
\[
\begin{align*}
|\tau| & = T(|\tau|) \\
|\forall t.\, \sigma| & = \forall t{:}\Omega.\, |\sigma|
\end{align*}
\]
Finally, we write \(|\Delta|\) for the kind assignment mapping \(t\) to the kind \(\Omega\) for each \(t \in \Delta\), and \(|\Gamma|\) for the type assignment mapping \(x\) to \(|\Gamma(x)|\) for each \(x \in \text{dom}(\Gamma)\).
Proposition 2.4 The type translation commutes with substitution:
\[
|[\tau_1/t_1, \ldots, \tau_n/t_n]\sigma| = [|\tau_1|/t_1, \ldots, |\tau_n|/t_n]\, |\sigma|
\]
The term translation is given in Figure 4 as a series of inference rules that parallel the typing rules for Mini-ML. The first rule turns Mini-ML implicit instantiation of type variables into \(\lambda^\text{ML}\) explicit type application. Operationally, this corresponds to passing the types to the polymorphic value at run-time. The \(\text{let}\) rule makes the implicit type abstraction of the bound expression explicit. The translation of \(\lambda\)-abstraction, application, pairing, and projection is straightforward except that these primitive operations are labelled with their types.
The term translation may be characterized by the following type preservation property.
Theorem 2.5 If \(\Delta; \Gamma \triangleright e : \tau \Rightarrow e'\), then \(|\Delta|; |\Gamma| \vdash e' : |\tau|\).
Given a standard, call-by-value operational semantics for Mini-ML with the value restriction, and given the stratification between monotypes and polytypes in both Mini-ML and \(\lambda^\text{ML}\), it is possible to modify a standard binary logical relations-style argument for the simply-typed lambda calculus [48, 15, 40, 45, 46] to show the correctness of the translation.
In which nested tuples are represented by a sequence of Mini-ML tuples
\[ \Delta; \Gamma \triangleright e_1 : \tau_1 \Rightarrow e'_1 \quad \Delta; \Gamma \triangleright e_2 : \tau_2 \Rightarrow e'_2 \]
The representation relies on a constructor \(\text{Prod}\) that, informally, computes the right-associated form of a product of two types. Its defining equations are:
\[
\begin{align*}
\text{Prod}[\text{int}][\mu] &= \times(\text{int}, \mu) \\
\text{Prod}[\rightarrow(\mu_1, \mu_2)][\mu] &= \times(\rightarrow(\mu_1, \mu_2), \mu) \\
\text{Prod}[\times(\mu_1, \mu_2)][\mu] &= \times(\mu_1, \text{Prod}[\mu_2][\mu])
\end{align*}
\]
For example, \(\text{Prod}[\text{Prod}[\text{int}][\text{int}]][\text{int}]\) and \(\text{Prod}[\text{int}][\text{Prod}[\text{int}][\text{int}]]\) both denote the right-associated triple \(\times(\text{int}, \times(\text{int}, \text{int}))\), and the equation
\[ \text{Prod}[\text{Prod}[\text{int}][\text{int}]][\text{int}] \equiv \text{Prod}[\text{int}][\text{Prod}[\text{int}][\text{int}]] \]
is derivable in \(\lambda^{\text{ML}}\).
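The right-association computed by \(\text{Prod}\) can be mimicked concretely. The Python sketch below is a hypothetical encoding (tuples standing in for constructors), not the formal system:

```python
# Right-associating product-of-types, mimicking the constructor Prod.
# A type is ('int',) or ('prod', a, b).  prod(a, b) prepends the head of
# a onto b; on types built with prod itself, heads stay atomic, so the
# result is right-associated.

def prod(a, b):
    if a[0] == 'prod':
        # Prod[x(m1, m2)][m] = x(m1, Prod[m2][m])
        return ('prod', a[1], prod(a[2], b))
    return ('prod', a, b)            # Prod[int][m] = x(int, m)

i = ('int',)
# Prod[Prod[int][int]][int] == Prod[int][Prod[int][int]]
assert prod(prod(i, i), i) == prod(i, prod(i, i)) == ('prod', i, ('prod', i, i))
```

Both nestings collapse to the same right-associated triple, which is the point of the derivable equation above.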
The term translation is modified by changing the behavior of the pair and \( \pi \) rules:
\[ \Delta; \Gamma \triangleright e_1 : \tau_1 \Rightarrow e'_1 \quad \Delta; \Gamma \triangleright e_2 : \tau_2 \Rightarrow e'_2 \]
\[ \Delta; \Gamma \triangleright \langle e_1, e_2 \rangle : \tau_1 \times \tau_2 \Rightarrow \text{mkpair}[\tau_1][\tau_2] \ e'_1 \ e'_2 \]
\[ \Delta; \Gamma \triangleright e : \tau_1 \times \tau_2 \Rightarrow e' \]
\[ \Delta; \Gamma \triangleright \pi_i e : \tau_i \Rightarrow \text{proj}_i[\tau_1][\tau_2]\ e' \]
The modified translation makes use of three auxiliary functions, \( \text{mkpair} \), \( \text{proj}_1 \), and \( \text{proj}_2 \), with the following types:
\[ \text{mkpair} : \forall t_1, t_2 : \Omega.\ T(t_1) \rightarrow T(t_2) \rightarrow T(\text{Prod}[t_1][t_2]) \]
\[ \text{proj}_1 : \forall t_1, t_2 : \Omega.\ T(\text{Prod}[t_1][t_2]) \rightarrow T(t_1) \]
\[ \text{proj}_2 : \forall t_1, t_2 : \Omega.\ T(\text{Prod}[t_1][t_2]) \rightarrow T(t_2) \]
The mkpair operation is defined as follows, using the “unofficial” syntax of the language:
\[
\begin{align*}
\text{mkpair}[\text{int}][\mu] &= \lambda x{:}\text{int}.\ \lambda y{:}T(\mu).\ \langle x, y \rangle \\
\text{mkpair}[\rightarrow(\mu_1, \mu_2)][\mu] &= \lambda x{:}T(\rightarrow(\mu_1, \mu_2)).\ \lambda y{:}T(\mu).\ \langle x, y \rangle \\
\text{mkpair}[\times(\mu_1, \mu_2)][\mu] &= \lambda x{:}T(\times(\mu_1, \mu_2)).\ \lambda y{:}T(\mu).\ \langle \pi_1 x,\ \text{mkpair}[\mu_2][\mu]\ (\pi_2 x)\ y \rangle
\end{align*}
\]
The verification that \( \text{mkpair} \) has the required type proceeds by case analysis on the form of its first argument, relying on the defining equations for Prod. For example, we must check that \( \text{mkpair}[\text{int}][\mu] \) has type
\[ T(\text{int}) \rightarrow T(\mu) \rightarrow T(\text{Prod}[\text{int}][\mu]) \]
which follows from the definition of $\text{mkpair}[\text{int}][\mu]$ and the fact that
\[ T(\text{Prod}[\text{int}][\mu]) \equiv \text{int} \times T(\mu). \]
Similarly, we must check that $\text{mkpair}[\times(\mu_1, \mu_2)][\mu]$ has type
\[ T(\times(\mu_1, \mu_2)) \rightarrow T(\mu) \rightarrow T(\text{Prod}[\times(\mu_1, \mu_2)][\mu]) \]
which follows from its definition, the derivability of the equation
\[ T(\text{Prod}[\times(\mu_1, \mu_2)][\mu]) \equiv T(\mu_1) \times T(\text{Prod}[\mu_2][\mu]), \]
and, inductively, the fact that $\text{mkpair}[\mu_2][\mu]$ has type $T(\mu_2) \rightarrow T(\mu) \rightarrow T(\text{Prod}[\mu_2][\mu])$.
The operations $\text{proj}_1$ and $\text{proj}_2$ are defined as follows:
\[
\begin{align*}
\text{proj}_1[\text{int}][\mu] &= \lambda x{:}T(\text{Prod}[\text{int}][\mu]).\ \pi_1 x \\
\text{proj}_2[\text{int}][\mu] &= \lambda x{:}T(\text{Prod}[\text{int}][\mu]).\ \pi_2 x \\
\text{proj}_1[\times(\mu_1, \mu_2)][\mu] &= \lambda x{:}T(\text{Prod}[\times(\mu_1, \mu_2)][\mu]).\ \langle \pi_1 x,\ \text{proj}_1[\mu_2][\mu](\pi_2 x) \rangle \\
\text{proj}_2[\times(\mu_1, \mu_2)][\mu] &= \lambda x{:}T(\text{Prod}[\times(\mu_1, \mu_2)][\mu]).\ \text{proj}_2[\mu_2][\mu](\pi_2 x)
\end{align*}
\]
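The corresponding value-level bookkeeping can be sketched in Python (shapes play the role of the constructor indices; the encoding is illustrative only, not the paper's \(\lambda^{\text{ML}}\) terms):

```python
# Illustrative value-level analogues of mkpair/proj_1/proj_2.  A shape is
# ('int',) or ('prod', s1, s2); the first shape argument guides the
# recursion, exactly as the constructor index does in the typed version.

def mkpair(shape, x, y):
    """Build the right-associated representation of the pair <x, y>."""
    if shape[0] == 'prod':
        head, tail = x                    # x is itself right-associated
        return (head, mkpair(shape[2], tail, y))
    return (x, y)                         # atomic first component

def proj1(shape, p):
    """First Mini-ML component of the flattened pair p."""
    if shape[0] == 'prod':
        head, rest = p
        return (head, proj1(shape[2], rest))
    return p[0]

def proj2(shape, p):
    """Second Mini-ML component of the flattened pair p."""
    if shape[0] == 'prod':
        return proj2(shape[2], p[1])
    return p[1]

shape = ('prod', ('int',), ('int',))      # first component has type int x int
flat = mkpair(shape, (1, 2), 3)           # right-associated triple
assert flat == (1, (2, 3))
assert proj1(shape, flat) == (1, 2)
assert proj2(shape, flat) == 3
```

Note that component access walks the spine, which is the non-constant-time behaviour the discussion of \(n\)-tuples below addresses.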
The verification that these operations have the required types is similar to that of $\text{mkpair}$, keeping in mind the equations governing $T(-)$ and $\text{Prod}[-][-]$.
One advantage of controlling data representation in this manner is that it becomes possible to support a type-safe form of casting that we call a view. Let us define two Mini-ML types $\tau_1$ and $\tau_2$ to be similar, $\tau_1 \approx \tau_2$, iff they have the same representation, i.e., iff $|\tau_1|$ is definitionally equivalent to $|\tau_2|$ in $\lambda^{\text{ML}}$. If $\tau_1 \approx \tau_2$, then every value of type $\tau_1$ is also a value of type $\tau_2$, and vice-versa. For example, in the case of the right-associative representation of nested tuples above, we have that $\tau_1 \approx \tau_2$ iff $\tau_1$ and $\tau_2$ are equivalent modulo associativity of the product constructor, and a value of a (nested) product type is a value of every other association of that type.
In contrast to coercion-based implementations of type equivalence, such an approach is compatible with mutable types (i.e., arrays and refs) in the sense that $\tau_1\ \text{ref} \approx \tau_2\ \text{ref}$ whenever $\tau_1 \approx \tau_2$. This means that we may freely intermingle updates with views of complex data structures, capturing some of the expressiveness of C casts without sacrificing type safety.
The right-associative representation does not capture all aspects of "flatness". In particular, access to components is not constant time, given a standard implementation of the pairing and projection operations. This may be overcome by extending $\lambda^{\text{ML}}$ with $n$-tuples (tuples of variable arity), and modifying the interpretation of the product type appropriately. A rigorous formulation of the target language extended with $n$-tuples is tedious, but appears to be straightforward.
### 3.2 Marshalling
Ohori and Kato give an extension of ML with primitives for distributed computing in a heterogeneous environment [39]. Their extension has two essential features: one is a mechanism for generating globally unique names ("handles" or "capabilities") that are used as proxies for functions provided by servers. The other is a method for representing arbitrary values in a form suitable for transmission through a network. Integers are considered transmissible, as are pairs of transmissible values, but functions cannot be transmitted (due to the heterogeneous environment) and are thus represented by proxy using unique identifiers. These identifiers are associated with their functions by a name server that may be contacted through a primitive addressing scheme.
In this section we sketch how a variant of Ohori and Kato’s representation scheme can be implemented using intensional polymorphism.
To accommodate Ohori and Kato’s primitives the $\lambda^{\text{ML}}$ language is extended with a primitive constructor $\text{id}$ of kind $\Omega \rightarrow \Omega$ and a corresponding type constructor $\text{id}(\sigma)$, linked by the equation $T(\text{id}[\mu]) \equiv \text{id}(T(\mu))$. The Typerec primitives are extended in the obvious way to account for constructors of the form $\text{id}[\mu]$. For example, the following constructor equivalence is added:
\[
\frac{\Delta \triangleright \mu :: \Omega}{\Delta \triangleright \text{Typerec}\ \text{id}[\mu]\ \text{of}\ (\mu_{\text{int}};\ \mu_{\rightarrow};\ \mu_{\times};\ \mu_{\text{id}}) \equiv \mu_{\text{id}}\ [\mu]\ [\text{Typerec}\ \mu\ \text{of}\ (\mu_{\text{int}};\ \mu_{\rightarrow};\ \mu_{\times};\ \mu_{\text{id}})] :: \Omega}
\]
The primitives newid and rpc are added with the following types:
\[
\begin{align*}
\text{newid} &: \forall t_1, t_2 : \Omega.\ (T(\text{Tran}[t_1]) \rightarrow T(\text{Tran}[t_2])) \rightarrow T(\text{Tran}[\rightarrow(t_1, t_2)]) \\
\text{rpc} &: \forall t_1, t_2 : \Omega.\ T(\text{Tran}[\rightarrow(t_1, t_2)]) \rightarrow T(\text{Tran}[t_1]) \rightarrow T(\text{Tran}[t_2])
\end{align*}
\]
where $\text{Tran}$ is a constructor coded using Typerec as follows:
\[
\begin{align*}
\text{Tran}[\text{int}] &= \text{int} \\
\text{Tran}[\rightarrow(\mu_1, \mu_2)] &= \text{id}[\rightarrow(\text{Tran}[\mu_1], \text{Tran}[\mu_2])] \\
\text{Tran}[\times(\mu_1, \mu_2)] &= \times(\text{Tran}[\mu_1], \text{Tran}[\mu_2]) \\
\text{Tran}[\text{id}[\mu]] &= \text{id}[\mu]
\end{align*}
\]
The constructor $\text{Tran}[\mu]$ maps $\mu$ to a constructor where each arrow is wrapped by an $\text{id}$ constructor. Thus, values of type $T(\text{Tran}[\mu])$ do not contain functions and are therefore transmissible. It is easy to check that $\text{Tran}$ is a constructor of kind $\Omega \rightarrow \Omega$.
From an abstract perspective, newid maps a function on transmissible representations to a transmissible representation of the function and rpc is its (left) inverse. Operationally, newid takes a function between transmissible values, generates a new, globally unique identifier and tells the name server to associate that identifier with the function on the local machine. For example, the unique identifier might consist of the machine’s name paired with the address of the function. The rpc operation takes a proxy identifier of a remote function, and a transmissible argument value. The name server is contacted to discover the remote machine where the value actually lives. The argument value is sent to this machine, the function associated with the identifier is applied to the argument, and the result of the function is transmitted back as the result of the operation.
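This operational reading can be sketched in Python; the code below uses hypothetical names, and a single in-process table stands in for the distributed name server:

```python
# Sketch of newid/rpc: a "name server" table maps fresh identifiers to
# locally registered functions; rpc looks a function up by identifier and
# applies it.  Purely illustrative: no network or remote machine here.
import itertools

_name_server = {}                  # identifier -> registered function
_fresh = itertools.count()

def newid(f):
    """Register f and return a transmissible proxy identifier for it."""
    ident = ('id', 'machine-A', next(_fresh))   # assumed globally unique
    _name_server[ident] = f
    return ident

def rpc(ident, arg):
    """Apply the function behind a proxy identifier to an argument."""
    return _name_server[ident](arg)

double = newid(lambda n: 2 * n)    # server side: publish a function
assert rpc(double, 21) == 42       # client side: call through the proxy
```

In the real scheme the identifier, not the function, is what crosses the network, and the lookup and application happen on the owning machine.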
The compilation of Ohori and Kato’s distribution primitives into this extension of $\lambda^{\text{ML}}$ relies critically on a “marshalling” operation $\text{M}$ that converts a value to its transmissible representation and an “unmarshalling” operation $\text{U}$ that
converts a value from its transmissible representation. The types of these operations can be easily expressed in terms of \( \text{Tran} \):
\[
\begin{align*}
M &: \forall t: \Omega.\ T(t) \to T(\text{Tran}[t]) \\
U &: \forall t: \Omega.\ T(\text{Tran}[t]) \to T(t)
\end{align*}
\]
The operations themselves can be defined as follows using the unofficial syntax of \text{Typerec}:
\[
\begin{align*}
M[\text{int}] &= \lambda x{:}\text{int}.\ x \\
M[\rightarrow(\mu_1, \mu_2)] &= \lambda f{:}T(\rightarrow(\mu_1, \mu_2)).\ \text{newid}[\mu_1][\mu_2]\ (\lambda x{:}T(\text{Tran}[\mu_1]).\ M[\mu_2]\ (f\ (U[\mu_1]\ x)))
\end{align*}
\]
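A runnable sketch of the mutually recursive marshalling pair follows, with a local proxy table standing in for the name server behind newid/rpc (all names are illustrative, not the paper's primitives):

```python
# Sketch of the mutually recursive M/U pair, driven by a runtime type
# representation: ('int',), ('prod', a, b), or ('arrow', dom, cod).
proxies = {}                                   # stand-in name server

def marshal(ty, v):
    tag = ty[0]
    if tag == 'int':
        return v                               # integers are transmissible
    if tag == 'prod':
        return (marshal(ty[1], v[0]), marshal(ty[2], v[1]))
    # arrow: register a wrapper and transmit only its identifier,
    # mirroring M[->(m1,m2)] f = newid (\x. M[m2] (f (U[m1] x)))
    dom, cod = ty[1], ty[2]
    ident = ('id', len(proxies))
    proxies[ident] = lambda x: marshal(cod, v(unmarshal(dom, x)))
    return ident

def unmarshal(ty, v):
    tag = ty[0]
    if tag == 'int':
        return v
    if tag == 'prod':
        return (unmarshal(ty[1], v[0]), unmarshal(ty[2], v[1]))
    dom, cod = ty[1], ty[2]
    return lambda x: unmarshal(cod, proxies[v](marshal(dom, x)))

ty = ('arrow', ('int',), ('prod', ('int',), ('int',)))
wire = marshal(ty, lambda n: (n, n + 1))       # proxy identifier
back = unmarshal(ty, wire)                     # usable function again
assert back(4) == (4, 5)
```

The two functions call each other at arrow types, which is why the official-syntax version needs a single Typerec returning both at once.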
To compute \( M \) and \( U \) using the official syntax, we have to use a single \text{Typerec} that returns a pair holding the two functions for that type.

### 3.3 Type Classes

The language Haskell [25] provides the ability to define a class of types with associated operations called methods. The canonical example is the class of types that admit equality (also known as equality types in SML [33, 19]).

Consider adding a distinguished type \text{Void} (with associated constructor \text{Void}) to \( \lambda^{\text{ML}} \) in such a way that \text{Void} is "empty": no closed value has type \text{Void}. We can encode a type class definition by using \text{Typerec} to map types in the class to themselves and types not in the class to \text{Void}. In this fashion, \text{Typerec} may be used to compute a predicate (or in general an \( n \)-ary relation) on types. Definitional equality can be used to determine membership in the class.

For example, the class of types that admit equality can be defined using \text{Typerec} as follows:
\[
\begin{align*}
\text{Eq} &: \Omega \to \Omega \\
\text{Eq}[\text{int}] &= \text{int} \\
\text{Eq}[\rightarrow(\mu_1, \mu_2)] &= \text{Void} \\
\text{Eq}[\times(\mu_1, \mu_2)] &= \times(\text{Eq}[\mu_1], \text{Eq}[\mu_2])
\end{align*}
\]
Here, \( \text{Eq} \) serves as a predicate on types in the sense that a non-\text{Void} constructor \( \mu \) is definitionally equal to \( \text{Eq}[\mu] \) only if \( \mu \) is a constructor that does not contain the constructor \( \rightarrow(-,-) \).

The equality method can be coded using \text{Typerec} as follows:
\[
\begin{align*}
\text{eq}[\text{int}] &= \text{eqInt} \\
\text{eq}[\rightarrow(\mu_1, \mu_2)] &= \lambda x{:}\text{Void}.\ \lambda y{:}\text{Void}.\ \text{true} \\
\text{eq}[\times(\mu_1, \mu_2)] &= \lambda x, y.\ \text{eq}[\mu_1]\ (\pi_1 x)\ (\pi_1 y) \wedge \text{eq}[\mu_2]\ (\pi_2 x)\ (\pi_2 y)
\end{align*}
\]
Consequently, \( \text{eq}[\mu]\ e_1\ e_2 \) can be well-typed only if \( e_1 \) and \( e_2 \) have types that are definitionally equal to \( T(\text{Eq}[\mu]) \). The encoding is not entirely satisfactory because \( \text{eq}[\rightarrow(\mu_1, \mu_2)] \) may be a well-typed expression. However, the function resulting from evaluation of this expression can only be applied to values of type \text{Void}. Since no such values exist, the function can never be applied.

### 3.4 Dynamics

In the presence of intensional polymorphism a predicative form of the type dynamic [2] may be defined to be the existential type \( \exists t{::}\Omega.\ T(t) \). The typing rules for existential types are as follows:
\[
\frac{\Delta, t{::}\kappa \vdash \sigma \quad \Delta \triangleright \mu :: \kappa \quad \Delta; \Gamma \vdash e : [\mu/t]\sigma}{\Delta; \Gamma \vdash \text{pack}\ \mu\ \text{with}\ e : \exists t{::}\kappa.\ \sigma}
\qquad
\frac{\Delta; \Gamma \vdash e : \exists t{::}\kappa.\ \sigma \quad \Delta, t{::}\kappa; \Gamma, x{:}\sigma \vdash e' : \sigma' \quad \Delta \vdash \sigma'}{\Delta; \Gamma \vdash \text{abtype}\ e = t{::}\kappa, x{:}\sigma\ \text{in}\ e' : \sigma'}
\]
The \text{pack} operation introduces existentials by packaging a type with a value. The \text{abtype} operation eliminates existentials by allowing the type and value to be unpacked and used within a certain scope.

Under this interpretation, the introductory form \( \text{dynamic}[\mu](e) \) stands for \( \text{pack}\ \mu\ \text{with}\ e \). The eliminatory form, \( \text{typecase}\ d\ \text{of}\ (e_1; e_2; e_3) \), where \( d \) : dynamic, \( e_1 : \sigma \), and \( e_2, e_3 : \forall t_1, t_2 : \Omega.\ \sigma \), is defined as follows:
\[
\text{abtype}\ d = t{::}\Omega, x{:}T(t)\ \text{in}\ \text{typecase}\ t\ \text{of}\ \{t.\sigma\}(e'_1; e'_2; e'_3)
\]
where \( e'_1 \), \( e'_2 \), and \( e'_3 \) are obtained from \( e_1 \), \( e_2 \), and \( e_3 \) by supplying the components of the analyzed type in each branch.
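This reading of dynamic can be sketched concretely. The Python below models a dynamic value as a packed pair of a runtime type representation and a value, with typecase as dispatch on the representation (an illustrative encoding, not the formal calculus):

```python
# Sketch of the predicative type "dynamic": pack a type representation
# with a value; typecase unpacks and dispatches on the representation.

def dynamic(ty, v):
    return (ty, v)                       # pack ty with v

def typecase(d, on_int, on_arrow, on_prod):
    ty, v = d                            # abtype: unpack type and value
    tag = ty[0]
    if tag == 'int':
        return on_int(v)
    if tag == 'arrow':
        return on_arrow(ty[1], ty[2], v) # branch sees the components
    return on_prod(ty[1], ty[2], v)

d = dynamic(('prod', ('int',), ('int',)), (3, 4))
s = typecase(d,
             lambda n: n,
             lambda t1, t2, f: 0,
             lambda t1, t2, p: p[0] + p[1])
assert s == 7
```

The branch functions receive the component representations, mirroring the quantified types of \(e_2\) and \(e_3\) above.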
4 Related Work
There are two traditional interpretations of polymorphism, the explicit style (due to Girard [16, 17] and Reynolds [42]), in which types are passed to polymorphic operations, and the implicit style (due to Milner [32]), in which types are erased prior to execution. In their study of the type theory of Standard ML, Harper and Mitchell [20] argued that an explicitly-typed interpretation of ML polymorphism has better semantic properties and scales more easily to cover the full language. Harper and Mitchell formulated a predicative type theory, XML, a theory of dependent types augmented with a universe of small types, adequate for capturing many aspects of Standard ML. This type theory was refined by Harper, Mitchell, and Moggi [21], and provides the basis for this work. The idea of intensional type analysis exploited here was inspired by the work of Constable [12, 13], from which the term "intensional analysis" is taken. The rules for typecase and Typerec are derived from the "universe elimination" rules in NuPRL (described only in unpublished work of Constable).
The idea of passing types to polymorphic functions is exploited by Morrison et al. [37] in the implementation of Napier '88. Types are used at run-time to specialize data representations in roughly the manner described here.\(^4\) The authors do not, however, provide a rigorous account of the type theory underlying their implementation technique. The work of Ohori on compiling record operations [38] is similarly based on a type-passing interpretation of polymorphism, and was an inspiration for the present work. Ohori's solution is ad hoc in the sense that no general type-theoretic framework is proposed, but many of the key ideas in his work are present here. Jones [28] has proposed a general framework for passing data derived from types to "qualified" polymorphic operations, called evidence passing. His approach differs from ours in that whereas we pass types to polymorphic operations, that are then free to analyze them, Jones passes code corresponding to a proof that a type satisfies the constraints of the qualification. From a practical point of view it appears that both mechanisms can be used to solve similar problems, but the exact relationship between the two approaches is not clear.
Recently Duggan and Ophel [14] and Thatte [50] have independently suggested semantics for type classes that are similar in spirit to our proposal. In particular both approaches represent the restriction of a class as a user-defined, possibly recursive, kind definition in a predicative language. Both sets of authors are concerned with providing a source-level overloading facility and consequently examine hard issues such as type inference and open-scoped definitions that do not directly concern us, since we are primarily concerned with a target-level type-analysis facility. The implementation technique proposed by Duggan and Ophel is similar to ours in that polymorphic routines are passed type names at run-time and a typecase construct is used to determine the behavior of an overloaded operation. As with type classes and Jones's qualified types, it appears that we can code many of their kind definitions using Typerec via the approach sketched in Section 3.3. However, Typerec can also be used to transform types — a facility crucial for representation transformations such as flattening and marshalling. That is, neither Duggan and Ophel nor Thatte provide a facility for coding constructors such as Prod or Tran that map types to types.
A number of authors have considered problems pertaining to representation analysis in the presence of polymorphism. The boxing interpretation of polymorphism has been studied by Peyton Jones and Launchbury [29], by Leroy [30], by Poulson [41], by Henglein and Jørgensen [24], and by Shao [43] with the goal of minimizing the overhead of boxing and unboxing at run-time. All but the first of these approaches involve copying coercions. Of a broadly similar nature is the work on “soft” type systems [3, 10, 23, 49, 53] that seek to improve data representations through global analysis techniques. All of these methods are based on the use of program analysis techniques to reduce the overhead of box and tag manipulation incurred by the standard compilation method for polymorphic languages. Many (including the soft type systems, but not Leroy’s system) rely on global analysis for their effectiveness. In contrast we propose a new approach to compiling polymorphism that affords control over data representation without compromising modularity.
Finally, a type-passing interpretation of polymorphism is exploited by Tolmach [51] in his implementation of a tag-free garbage collection algorithm. Tolmach's results demonstrate that it is feasible to build a run-time system for ML in which no type information is associated with data in the heap. Morrisett, Harper, and Felleisen [36] give a semantic framework for discussing garbage collection, and provide a proof of correctness of Tolmach's algorithm.
5 Summary and Future Directions
We have presented a type-theoretic framework for expressing computations that analyze types at run-time. The key feature of our framework is the use of structural induction on types at both the term and type level. This allows us to express the typing properties of non-trivial computations that perform intensional type analysis. When viewed as an intermediate language for compiling ML programs, much of the type analysis in the translations can be eliminated prior to run-time. In particular, the prenex quantification restriction of ML ensures good binding-time separation between type arguments and value arguments and the “value restriction” on polymorphic functions, together with the well-foundedness of type induction, ensures that a polymorphic instantiation always terminates. This provides important opportunities for optimization. For example, if a type variable occurring as the parameter of a functor is the subject of intensional type analysis, then the type can be simplified when the functor is applied and becomes known. Similarly, link-time specialization is possible whenever a term is defined in a separately-compiled module. Inductive analysis of type variables arising from let-style polymorphism is ordinarily handled at run-time, but it is possible to expand each instance and perform type analysis in each case separately.
The type theory considered here extends readily to inductively defined types such as lists and trees. However, extending typecase and Typerec to handle general recursive types is problematic because of the negative occurrence of \(\Omega\) in the kind of a recursive-type constructor. In particular, termination can no longer be guaranteed, which presents problems not only for optimization but also for type checking.
The restriction to predicative polymorphism is sufficient for compiling ML programs. More recent languages such as Quest [1] extend the expressive power to admit impredicative polymorphism, in which quantified types may be instantiated by quantified types. (Both Girard's [16] and Reynolds's [42] calculi exhibit this kind of polymorphism.) It is natural to consider whether the methods proposed here may be extended to the impredicative case. Since the universal quantifier may be viewed as a constant of kind \( (\Omega \to \Omega) \to \Omega \), similar problems arise as for recursive types. In particular, we may extend type analysis to the quantified case, but only at the expense of termination, due to the negative occurrence of \( \Omega \) in the kind of the quantifier. Ad hoc solutions are possible, but in general it appears necessary to sacrifice termination guarantees.

\(^4\)However, types are passed independently as data and associated with code.
Compiling polymorphism using intensional type analysis enables data representations that are impossible using type-free techniques. Setting aside the additional expressiveness of the present approach, it is interesting to consider the performance of a type-passing implementation of ML as compared to the type-free approach adopted in SML/NJ [5]. As pointed out by Tolmach [51], a type-passing implementation need not maintain tag bits on values for the sake of garbage collection. The only remaining use of tag bits in SML/NJ is for polymorphic equality, which can readily be implemented using intensional type analysis. Thus tag bits can be eliminated, leading to a considerable space savings.
On the other hand, it costs time and space to pass type arguments at run-time, and it is not clear whether type analysis is cheaper in practice than carrying tag bits. An empirical study of the relative performance of the two approaches is currently planned by the second author, and will be reported elsewhere.
The combination of intensional polymorphism and existential types [35] raises some interesting questions. On the one hand, the type dynamic [2] may be defined in terms of existentials. On the other hand, data abstraction may be violated since a "client" of an abstraction may perform intensional analysis of the abstract type, which is replaced at run-time by the implementation type of the abstraction. This suggests that it may be advantageous to distinguish two kinds of types, those that are analyzable and those that are not. In this way parametricity and representation independence can be enforced by restricting the use of type analysis.
The idea of intensional analysis of types bears some resemblance to the notion of reflection [44, 4] — we may think of type-passing as a "reflection" of the meta-level notion of types. It is interesting to speculate that the type theory proposed here is but a special case of a fully reflective type theory. The reflective viewpoint may provide a solution to the problem of intensional analysis of recursive and quantified types since, presumably, types would be reified in a syntactic form that is more amenable to analysis, using first-order, rather than higher-order, abstract syntax.
Acknowledgments
We are grateful to Martin Abadi, Andrew Appel, Lars Birkedal, Luca Cardelli, Matthias Felleisen, Andrzej Filinski, Mark Jones, Simon Peyton Jones, Mark Leone, Phil Wadler, Jeannette Wing and the reviewers for their comments and suggestions.
References
Leads-to Properties and the But-not-yet Operator
Ken Calvert
College of Computing
Georgia Institute of Technology
Atlanta, Georgia 30332-0280
calvert@cc.gatech.edu
GIT–CC–93/60
October 1993
Abstract
We define a predicate transformer, in terms of which finite disjunctions of leads-to properties can be rewritten as single leads-to properties. Although disjunctions of leads-to properties do not typically arise naturally in progress specifications, an example shows how they may be introduced through the use of nested implications of leads-to properties; such implications allow subtle dependencies between a program’s progress and that of its environment to be conveniently specified. After introducing the predicate transformer, which is called the but-not-yet operator, we show how to define a single leads-to property equivalent to a given disjunction of two leads-to properties. An alternative definition of the but-not-yet operator is shown to be equivalent to the first, and some properties of the operator are proved. Finally, the predicate transformer is generalized to any finite number of arguments.
1 Introduction
Many specification formalisms use some fragment of linear temporal logic to specify program properties. In such formalisms, sequences of states represent the possible behaviors of a program. A temporal property is a predicate on sequences of states; a program satisfies such a property $P$ if and only if $P$ is true for each sequence representing a possible program behavior. In recent years it has been demonstrated that a wide range of properties useful in practice can be specified using a few simple temporal forms [4, 5, 6, 7, 8].
This paper considers a class of properties built up from basic temporal properties using boolean connectives: the temporal predicate $P \land Q$ is true for a sequence iff both $P$ and $Q$ are; $P \lor Q$ is true iff either $P$ or $Q$ is, and $P \Rightarrow Q$ is true iff either $P$ is not true or $Q$ is true. The familiar laws of propositional logic apply in the manipulation of such predicates. The basic temporal properties considered in this paper are of the form $p \leadsto q$ ("$p$ leads-to $q"$), where $p$ and $q$ are predicates in the program’s variables. A sequence of states satisfies $p \leadsto q$ if each state in the sequence satisfying $p$ is followed by a state satisfying $q$; thus a program satisfies $p \leadsto q$ if, whenever it reaches a state where $p$ holds, at that state or some later state $q$ will hold. Leads-to properties are useful for specification of progress, i.e. asserting that a program will do something. Simple, easy-to-use proof systems, which allow elementary leads-to properties to be established for particular programs, are well known [4].
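Over a finite trace, this reading of leads-to can be checked directly. The Python sketch below is only a finite-trace approximation of the temporal property (real behaviors may be infinite):

```python
# Finite-trace check of "p leads-to q": every state satisfying p must be
# followed, at that state or some later state, by a state satisfying q.

def leads_to(trace, p, q):
    for i, state in enumerate(trace):
        if p(state) and not any(q(s) for s in trace[i:]):
            return False
    return True

# States are values of the shared variable b from the bit-generator
# example: 'bot' (a request) or a produced bit 0 or 1.
trace = ['bot', 0, 'bot', 1, 0]
assert leads_to(trace, lambda b: b == 'bot', lambda b: b in (0, 1))
assert not leads_to(trace, lambda b: b == 1, lambda b: b == 'bot')
```

Because `trace[i:]` includes position `i` itself, a state satisfying both `p` and `q` discharges its own obligation, matching the "at that state or some later state" reading.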
A property of the form $(x \leadsto y) \Rightarrow (p \leadsto q)$ means that every behavior of the program that satisfies $x \leadsto y$ also satisfies $p \leadsto q$. In a compositional theory of program specifications—one in which composition preserves all temporal properties of each component—such a property expresses the dependence of a program’s progress on that of its environment. (Such a theory is described, for example, in [1, 2].) In other words, $(x \leadsto y) \Rightarrow (p \leadsto q)$ is a weaker specification than $p \leadsto q$, but when a program known to satisfy that specification is placed in an environment where only behaviors satisfying $x \leadsto y$ can occur, the resulting system satisfies $p \leadsto q$. If the environment does not satisfy $x \leadsto y$, the composite of program and environment may not satisfy $p \leadsto q$; however, the original component can still be considered correct with respect to the specification $(x \leadsto y) \Rightarrow (p \leadsto q)$.
More complicated progress dependencies are sometimes useful. To see how such dependencies can arise, consider the specification of a random bit generator that communicates with its environment via a variable $b$. When the environment requests a bit by setting $b$ to the special value $\bot$, the bit generator responds by setting $b$ to either 0 or 1. Both 0’s and 1’s are guaranteed to be produced, but only over an infinite number of bits. In other words, there is no limit on the number of requests the environment may have to make before receiving a particular value. The conjunction of the following two implications constitutes the progress specification for the bit generator:
\begin{align*}
(true \leadsto b = \bot) & \Rightarrow (true \leadsto b = 0) \quad (1) \\
(true \leadsto b = \bot) & \Rightarrow (true \leadsto b = 1) \quad (2)
\end{align*}
Now consider a natural number generator (NNG) that uses a bit generator as a subcomponent. NNG communicates with its environment via two variables: $n$, whose value is either a natural number or $\bot$, and $b$, through which it interacts with the bit generator. The environment sets $n$ to $\bot$ to request a natural number. NNG then sets $b$ to $\bot$ and waits for its environment (i.e. the bit generator) to set $b$ to 0 or 1; it repeats this step until $b$ has the value 1. Then
NNG sets $n$ to the number of times $b$ was 0 before it became 1. Different natural number distributions can be obtained by composing NNG with bit generators having different 0-1 distribution characteristics, so long as they satisfy the specification above.
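The loop just described can be sketched directly. The Python rendering below is an illustration, not part of the paper; the biased coin standing in for the bit generator is an invented environment that (with probability one, for `0 < bias < 1`) produces both 0s and 1s, as the bit generator's specification demands:

```python
import random

def bit_generator(bias):
    """Stand-in environment: each call answers one request for a bit.
    With 0 < bias < 1 it produces both 0s and 1s with probability one."""
    return lambda: 1 if random.random() < bias else 0

def nng(respond):
    """One response of the natural number generator: request bits until a 1
    arrives, then return the number of 0s seen before it (the value n)."""
    zeros = 0
    while True:
        b = respond()       # set b to ⊥ and wait for the environment's answer
        if b == 1:
            return zeros
        zeros += 1

random.seed(0)
print(nng(bit_generator(0.5)))  # a sample from a geometric distribution
```

Composing `nng` with generators of different bias yields different natural-number distributions, mirroring the remark above; if the environment never produces a 1, the loop never terminates, which is exactly why the unconditional specification $n = \bot \leadsto n \neq \bot$ is too strong.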
Observe that a specification of NNG requiring $n = \bot \leadsto n \neq \bot$ is too strong. There is no way for NNG to make progress unless the bit generator does: if the latter eventually produces a 1, then NNG eventually sets $n$ to some natural number. Moreover, requiring NNG to satisfy $true \leadsto b = \bot$ is also too strong: NNG need never change $b$ if the environment never changes $n$. The correct progress specification for NNG says that if its environment satisfies the progress specification of the bit generator, NNG will eventually respond to any request:
$$((true \leadsto b = \bot) \implies (true \leadsto b = 1)) \implies (n = \bot \leadsto n \neq \bot)$$ \hspace{1cm} (3)
Although the operational meaning of simple boolean combinations of leads-to properties is generally intuitively clear, the meaning of nested implications like (3) is not. A more serious objection to admitting such properties in specifications is that it is not clear how one would prove them for a given program.
The nested implication above is propositionally equivalent to the conjunction of the following properties:
$$true \leadsto b = \bot \lor (n = \bot \leadsto n \neq \bot)$$ \hspace{1cm} (4)
$$true \leadsto b = 1 \implies (n = \bot \leadsto n \neq \bot)$$ \hspace{1cm} (5)
Any such nested implication can be eliminated in favor of a disjunction and a simple implication. Indeed, it can be shown that any positive boolean combination of elementary properties is propositionally equivalent to a property in the form of a conjunction of (simple) implications [3]. However, it is also not clear how to prove such disjunctions of leads-to properties.
It turns out that a straightforward extension to the UNITY proof rules for leads-to [4] is adequate for proving simple implications; moreover, it is not necessary to prove disjunctions. The rules for deriving and using implications will be dealt with in a separate report; the rest of this paper is devoted to showing that under a reasonable assumption about the set of sequences representing program behavior, finite disjunctions of leads-to properties can be eliminated from specifications. In particular, for any disjunction like (4) there exists a single, semantically equivalent leads-to property.
The next section introduces notation and formal definitions. Section 3 presents the main result, which asserts the existence of a specification equivalent to any disjunction of two leads-to properties. In order to state this result, we introduce a predicate transformer; Section 4 gives an alternative definition of this operator. Section 5 explores properties of the operator, and Section 6 generalizes the result to finite disjunctions of more than two properties. Section 7 offers some concluding remarks.
2 Definitions and Notation
Let $\Phi$ be a set of infinite sequences\(^1\) and call the elements of these sequences states. $\Phi$ represents the set of possible behaviors of some (fixed) program. Note well that the set of states under consideration contains exactly the elements that occur in some sequence in $\Phi$,
\(^1\)The assumption of infinite sequences is made for simplicity; the results are easily extended for the case where finite sequences are also included.
i.e. the *reachable* states of the program. The Greek letters $\alpha$, $\beta$, and $\gamma$ will denote arbitrary sequences in $\Phi$.
The letters $i$, $j$, and $k$ will range over positions in sequences. For sequence $\alpha$, $\alpha[i]$ denotes the $i^{th}$ element of $\alpha$, with the initial element being $\alpha[0]$. The notation $\alpha[..i]$ denotes the finite sequence consisting of the first $i$ elements of $\alpha$, while $\alpha[i..]$ denotes the suffix remaining when the first $i$ elements of $\alpha$ are removed. Note that the last element of $\alpha[..i]$ is $\alpha[i - 1]$, and the first element of $\alpha[i..]$ is $\alpha[i]$.
The set $\Phi$ is postulated to have the following characteristic:
If $\alpha$ and $\beta$ are in $\Phi$, and $\alpha[i] = \beta[j]$, then there exists $\gamma \in \Phi$ such that $\gamma[..i] = \alpha[..i]$ and $\gamma[i..] = \beta[j..].$ \hfill (\ast)
In terms of program semantics, this can be interpreted as saying that a program’s future behavior is completely determined by its current state.
A *state predicate* is a function from the set of states to $\{true, false\}$. The letters $p$, $q$, $x$ and $y$ denote state predicates; $p.s$ denotes the value of predicate $p$ at state $s$. The boolean connectives are used as operators on predicates in the usual way: $(p \land q).s$ is equal to $p.s \land q.s$, etc. A state $s$ such that $p.s = true$ is called a $p$-state. Universal quantification of a predicate over the entire state space is denoted by the square brackets $[\,]$:
$$[p] \stackrel{\text{def}}{=} \langle \forall s :: p.s = true \rangle$$
For state predicates $p$ and $q$, we say a sequence $\alpha$ satisfies the property $p \leadsto q$ and write $\alpha \models p \leadsto q$, according to the following definition:
$$\alpha \models p \leadsto q \stackrel{\text{def}}{=} \langle \forall i :: p.\alpha[i] \Rightarrow \langle \exists j : i \leq j : q.\alpha[j] \rangle \rangle$$
Capital letters $P$, $X$, etc. denote leads-to properties. We write $\alpha \not\models P$ to mean that $\alpha$ does not satisfy the leads-to property $P$. For arbitrary leads-to properties $P$ and $Q$ the property $P \lor Q$ is defined by
$$\alpha \models P \lor Q \stackrel{\text{def}}{=} (\alpha \models P) \lor (\alpha \models Q)$$
We will also deal with *unless* properties, written $p \text{ unl } q$; for any predicates $p$ and $q$, a sequence satisfies $p \text{ unl } q$ iff every state satisfying $(p \land \neg q)$ is immediately followed by a state satisfying $p \lor q$.\footnote{The symbols used in this paper differ from those used for UNITY properties (*unless* and $\rightarrow$), to emphasize that these are temporal predicates defined in terms of *sequences* and not properties defined in terms of the program text itself as in UNITY.} Formally:
$$\alpha \models p \text{ unl } q \stackrel{\text{def}}{=} \langle \forall i :: (p \land \neg q).\alpha[i] \Rightarrow (p \lor q).\alpha[i + 1] \rangle$$
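As with leads-to, the *unless* definition transcribes directly into a finite-prefix check (an illustrative Python sketch, not part of the paper; the last state of a prefix has no successor and so imposes no obligation):

```python
# Check "p unl q" on a finite prefix: every state satisfying p ∧ ¬q must be
# immediately followed by a state satisfying p ∨ q.

def unl(prefix, p, q):
    return all(
        p(prefix[i + 1]) or q(prefix[i + 1])
        for i in range(len(prefix) - 1)
        if p(prefix[i]) and not q(prefix[i])
    )

# A counter that, once positive, stays positive until it is reset to -1:
trace = [0, 1, 2, 3, -1, 0]
print(unl(trace, lambda s: s > 0, lambda s: s == -1))  # True
print(unl(trace, lambda s: s >= 0, lambda s: s == 5))  # False: 3 is followed by -1
```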
We have one more notational convention regarding quantification: the double square brackets $\llbracket \, \rrbracket$ stand for universal quantification over $\Phi$: for a leads-to or *unless* property $P$, we have
$$\llbracket P \rrbracket \stackrel{\text{def}}{=} \langle \forall \alpha : \alpha \in \Phi : \alpha \models P \rangle$$
For a program whose possible behaviors are described by $\Phi$, $\llbracket P \rrbracket$ means the program satisfies the specification $P$.
3 Eliminating Disjunctions
We would like to define, for a disjunction $P \lor Q$, a single leads-to property $W$ such that
$$\llbracket P \lor Q \rrbracket \equiv \llbracket W \rrbracket$$
that is, a property that is equivalent to the disjunction as a specification for any program (whose behaviors satisfy (*)).
Clearly the form of any such $W$ will depend upon the predicates from which $P$ and $Q$ are constructed. Moreover, because the desired equivalence is relative to the set $\Phi$, the form of $W$ may depend upon the structure of $\Phi$ as well.
The key step in constructing $W$ will be to define a new state predicate that is true wherever the leads-to property $p \leadsto q$ is “undischarged” in a sequence—that is, a predicate true at states occurring in some sequence after a $p$-state has occurred but without any intervening $q$-state. The idea is that a sequence fails to satisfy $P \lor Q$ if it fails to satisfy $P$ and it fails to satisfy $Q$, i.e. if $p \leadsto q$ and $x \leadsto y$ remain undischarged forever in some infinite suffix of the sequence. We denote this predicate by $p \odot q$ and define it as follows:
$$(p \odot q).s \overset{\text{def}}{=} \langle \exists \alpha, i, n :: \alpha \in \Phi \land \alpha[i] = s \land n \leq i \land p.\alpha[n] \land \langle \forall k : n \leq k \leq i : \neg q.\alpha[k] \rangle \rangle$$
(A similar predicate was defined by Pachl in proving completeness of the UNITY proof rules for leads-to [9].)
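For a finite-state transition system whose infinite runs play the role of $\Phi$, this definition is effectively computable: a state satisfies $p \odot q$ exactly when it is reachable from a $p$-state along a path on which $q$ never holds. The sketch below is illustrative only (the toy system, predicates, and names are invented, and every state is assumed to have a successor, so every finite path extends to a run):

```python
def undischarged(states, succ, p, q):
    """Compute the set of states satisfying p (.) q for a finite-state
    system: forward reachability from (p ∧ ¬q)-states through ¬q-states."""
    frontier = {s for s in states if p(s) and not q(s)}
    result = set(frontier)
    while frontier:
        frontier = {t for s in frontier for t in succ[s]
                    if not q(t) and t not in result}
        result |= frontier
    return result

# Toy system: 0 -> {0, 1}, 1 -> {2}, 2 -> {2}.
succ = {0: {0, 1}, 1: {2}, 2: {2}}
p = lambda s: s == 0   # "a request was issued"
q = lambda s: s == 2   # "the request was served"
print(sorted(undischarged(succ.keys(), succ, p, q)))  # [0, 1]
```

State 2 is excluded because every path reaching it passes through a $q$-state, i.e. the pending request has been discharged there.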
Henceforth let $p, q, x$ and $y$ be arbitrary fixed state predicates, and let $P, X$ and $W$ stand for the following leads-to properties:
$$P \overset{\text{def}}{=} p \leadsto q$$
$$X \overset{\text{def}}{=} x \leadsto y$$
$$W \overset{\text{def}}{=} (((p \odot q) \land x) \lor ((x \odot y) \land p)) \leadsto (q \lor y)$$
We shall establish that $W$ is equivalent to $P \lor X$ as a specification. The first result implies that $W$ is no weaker than $P \lor X$.
**Lemma 0.** For any sequence $\alpha \in \Phi$, if $\alpha \not\models P \lor X$ then $\alpha \not\models W$.
**Proof of Lemma 0.** If $\alpha \not\models P \lor X$, then $\alpha$ satisfies neither $p \sim q$ nor $x \sim y$, and in particular there exist states $\alpha[i]$ and $\alpha[j]$ such that
$$p.\alpha[i] \land \langle \forall k : i \leq k : \neg q.\alpha[k] \rangle \quad (6)$$
$$x.\alpha[j] \land \langle \forall k : j \leq k : \neg y.\alpha[k] \rangle \quad (7)$$
Now suppose $i \leq j$. We observe:

$i \leq j \land (6) \land (7)$
$= \quad \{ \text{predicate calculus—split the range in (6)} \}$
$i \leq j \land p.\alpha[i] \land \langle \forall k : i \leq k \leq j : \neg q.\alpha[k] \rangle \land x.\alpha[j] \land \langle \forall k : j \leq k : \neg q.\alpha[k] \rangle \land \langle \forall k : j \leq k : \neg y.\alpha[k] \rangle$
$\Rightarrow \quad \{ \text{existential generalization and predicate calculus} \}$
$\langle \exists n : n \leq j : p.\alpha[n] \land \langle \forall k : n \leq k \leq j : \neg q.\alpha[k] \rangle \rangle \land x.\alpha[j] \land \langle \forall k : j \leq k : (\neg q \land \neg y).\alpha[k] \rangle$
$\Rightarrow \quad \{ \text{definition of } p \odot q \text{; DeMorgan's law} \}$
$(p \odot q).\alpha[j] \land x.\alpha[j] \land \langle \forall k : j \leq k : \neg (q \lor y).\alpha[k] \rangle$
$= \quad \{ \text{predicate calculus} \}$
$((p \odot q) \land x).\alpha[j] \land \langle \forall k : j \leq k : \neg (q \lor y).\alpha[k] \rangle$
$\Rightarrow \quad \{ \text{weakening the first conjunct} \}$
$(((p \odot q) \land x) \lor ((x \odot y) \land p)).\alpha[j] \land \langle \forall k : j \leq k : \neg (q \lor y).\alpha[k] \rangle$
$\Rightarrow \quad \{ \text{definition of } W \}$
$\alpha \not\models W$

For the case $j \leq i$, a symmetric argument again proves $\alpha \not\models W$. $\square$

$^3$Note that $\odot$ does depend on $\Phi$; because $\Phi$ is assumed fixed, this dependence will be left implicit.
**Lemma 1.** If there exists a sequence \( \beta \in \Phi \) and a state \( \beta[j] \) such that
\[ ((p \odot q) \wedge x).\beta[j] \wedge (\forall k : j \leq k : \neg (q \lor y).\beta[k]) \]
then there exists a sequence \( \gamma \in \Phi \) such that \( \gamma \not\models (p \leadsto q) \lor (x \leadsto y) \).
**Proof of Lemma 1.** Let \( \beta \) and \( j \) satisfy the hypothesis of the Lemma. Because \( \beta[j] \) is a \( p \odot q \)-state, there exists a sequence \( \alpha \) in \( \Phi \) and \( i \) and \( n \) such that \( \alpha[i] = \beta[j] \), and
$$n \leq i \land p.\alpha[n] \land \langle \forall k : n \leq k \leq i : \neg q.\alpha[k] \rangle \quad (8)$$
Because \( \alpha \) and \( \beta \) are in \( \Phi \) and \( \alpha[i] = \beta[j] \), by property \((*)\) there exists a sequence \( \gamma \) in \( \Phi \) such that \( \gamma[..i] = \alpha[..i] \) and \( \gamma[i..] = \beta[j..] \). From this fact, the hypothesis, and \( (8) \) we have the following properties of \( \gamma \):
\[ x.\gamma[i] \tag{9} \]
\[ (\forall k : i \leq k : \neg (q \lor y).\gamma[k]) \tag{10} \]
\[ p.\gamma[n] \wedge (\forall k : n \leq k \leq i : \neg q.\gamma[k]) \tag{11} \]
Now we observe:

$(10) \land (11)$
$\Rightarrow \quad \{ \text{predicate calculus} \}$
$p.\gamma[n] \land \langle \forall k : n \leq k : \neg q.\gamma[k] \rangle$
$\Rightarrow \quad \{ \text{definition of } \leadsto \}$
$\gamma \not\models p \leadsto q$

The symmetric argument establishes \( \gamma \not\models x \leadsto y \), and thus \( \gamma \not\models (p \leadsto q) \lor (x \leadsto y) \).
\[ \square \]
Now by a similar argument we can prove
**Lemma 2.** If there exists a sequence \( \beta \) in \( \Phi \), and a position \( j \) in \( \beta \) such that
\[ ((x \odot y) \wedge p).\beta[j] \wedge (\forall k : j \leq k : \neg (q \lor y).\beta[k]) \]
then there exists a sequence \( \gamma \in \Phi \) such that \( \gamma \not\models (p \leadsto q) \lor (x \leadsto y) \).
Using the foregoing Lemmata, we can show that $W$ is no stronger than $P \lor X$.
**Lemma 3.** $\llbracket P \lor X \rrbracket \Rightarrow \llbracket W \rrbracket$.
**Proof of Lemma 3.** We prove the contrapositive, i.e. if there exists a sequence $\beta \in \Phi$ such that $\beta \not\models W$, then there exists a sequence $\gamma \in \Phi$ such that $\gamma \not\models P \lor X$. Let $\beta$ be any sequence such that $\beta \not\models W$. By definition, there exists a state $\beta[j]$ such that
$$(((p \odot q) \land x) \lor ((x \odot y) \land p)).\beta[j] \land \langle \forall k : j \leq k : \neg(q \lor y).\beta[k] \rangle$$
which is equivalent to the disjunction of the following:
$$((p \odot q) \land x).\beta[j] \land \langle \forall k : j \leq k : \neg(q \lor y).\beta[k] \rangle$$
$$((x \odot y) \land p).\beta[j] \land \langle \forall k : j \leq k : \neg(q \lor y).\beta[k] \rangle$$
Thus $\beta[j]$ satisfies the hypothesis of either Lemma 1 or Lemma 2. The result follows from the definitions of $P$ and $X$ and the appropriate Lemma. \(\square\)
**Theorem 0.** For fixed $\Phi$ with property $(\ast)$, and $P$, $X$ and $W$ as defined above:
$$\llbracket P \lor X \rrbracket \equiv \llbracket W \rrbracket$$
**Proof of Theorem 0.** The right side implies the left by the contrapositive of Lemma 0 and the monotonicity of universal quantification; the left side implies the right by Lemma 3. \(\square\)
For the example given in the first section, we can replace property (4) in the specification by
$$(((true \odot b = \bot) \land n = \bot) \lor (n = \bot \odot n \neq \bot)) \leadsto (b = \bot \lor n \neq \bot) \quad (12)$$
(the conjunct $p$ in the second disjunct of the antecedent drops out because here $p = true$).
4 Alternative Definition of $p \odot q$
The definition of the predicate $p \odot q$ given above is tailored to the proof of Theorem 0. In practice, however, we want a definition that is conveniently representable. Therefore we give an alternative characterization of the predicate using *unless* properties.
For given predicates $p$ and $q$, consider predicates $z$ satisfying the following:
$$[(p \land \neg q) \Rightarrow z] \quad (13)$$
$$\llbracket z \text{ unl } q \rrbracket \quad (14)$$
Henceforth we write $\text{BNY}(p, q, z)$ to indicate that predicates $p$, $q$ and $z$ satisfy (13) and (14); similarly, we write $\text{BNY}(x, y, w)$ to indicate that $x$, $y$ and $w$ satisfy
$$[(x \land \neg y) \Rightarrow w]$$
$$\llbracket w \text{ unl } y \rrbracket$$
For any $p$ and $q$, there is at least one predicate $z$ such that $\text{BNY}(p, q, z)$: *true* satisfies the conditions trivially. The following theorem states that $p \odot q$ can be characterized as an extreme solution to the “equation”
$$z : \text{BNY}(p, q, z).$$
**Theorem 1.** The predicate \( p \odot q \) is the strongest predicate \( z \) satisfying (13) and (14), that is:
$$[(p \land \neg q) \Rightarrow (p \odot q)] \quad (15)$$
$$\llbracket (p \odot q) \text{ unl } q \rrbracket \quad (16)$$
$$\langle \forall z : \text{BNY}(p, q, z) : [(p \odot q) \Rightarrow z] \rangle \quad (17)$$
**Proof of (15).** We observe for any state \( s \):

$(p \land \neg q).s$
$= \quad \{ \text{every state occurs in some sequence in } \Phi \}$
$\langle \exists \alpha, j :: \alpha[j] = s \land (p \land \neg q).\alpha[j] \rangle$
$= \quad \{ \text{predicate calculus} \}$
$\langle \exists \alpha, j :: \alpha[j] = s \land j \leq j \land p.\alpha[j] \land \langle \forall k : j \leq k \leq j : \neg q.\alpha[k] \rangle \rangle$
$\Rightarrow \quad \{ \text{existential generalization} \}$
$\langle \exists \alpha, j, n :: \alpha[j] = s \land n \leq j \land p.\alpha[n] \land \langle \forall k : n \leq k \leq j : \neg q.\alpha[k] \rangle \rangle$
$= \quad \{ \text{definition} \}$
$(p \odot q).s$

$\square$
**Proof of (16).** Our proof obligation is:
$$\langle \forall \beta, j :: ((p \odot q) \land \neg q).\beta[j] \Rightarrow ((p \odot q) \lor q).\beta[j + 1] \rangle$$
The term of this formula can be rewritten:

$((p \odot q) \land \neg q).\beta[j] \Rightarrow ((p \odot q) \lor q).\beta[j + 1]$
$= \quad \{ [p \odot q \Rightarrow \neg q] \text{ by definition; predicate calculus} \}$
$(p \odot q).\beta[j] \Rightarrow ((p \odot q) \lor q).\beta[j + 1]$
$= \quad \{ \text{predicate calculus} \}$
$(p \odot q).\beta[j] \land \neg q.\beta[j + 1] \Rightarrow (p \odot q).\beta[j + 1]$

Now let \( \beta \) and \( j \) be such that \( (p \odot q).\beta[j] \land \neg q.\beta[j + 1] \). We shall establish \( (p \odot q).\beta[j + 1] \). According to the definition of \( p \odot q \), there exist a sequence \( \alpha \), and \( n \) and \( i \), such that
$$\alpha[i] = \beta[j] \land n \leq i \land p.\alpha[n] \land \langle \forall k : n \leq k \leq i : \neg q.\alpha[k] \rangle$$
Thanks to property \((\ast)\) and the above, there exists a sequence \( \gamma \) such that \( \gamma[..i] = \alpha[..i] \) and \( \gamma[i..] = \beta[j..] \); from the properties of \( \alpha \) and \( \beta \), we have the following property of this sequence:
$$n \leq i \land p.\gamma[n] \land \langle \forall k : n \leq k \leq i : \neg q.\gamma[k] \rangle \land \neg q.\gamma[i + 1] \quad (18)$$
Now we calculate:

$(18)$
$\Rightarrow \quad \{ \text{definition of } \odot \text{, with witnesses } \gamma, n \text{ and } i + 1 \}$
$(p \odot q).\gamma[i + 1]$
$= \quad \{ \gamma[i..] = \beta[j..], \text{ so } \gamma[i + 1] = \beta[j + 1] \}$
$(p \odot q).\beta[j + 1]$

$\square$
Proof of (17). Let \( z \) be any predicate such that \( \text{BNY}(p, q, z) \), and let \( s \) be any state such that \( (p \odot q).s \); we shall establish \( z.s \).
By hypothesis there exist \( \alpha, j \) and \( n \leq j \) such that \( \alpha[j] = s \) and
\[
p.\alpha[n] \land (\forall k : n \leq k \leq j : \neg q.\alpha[k])
\]
(19)
From this follows \( (p \land \neg q).\alpha[n] \), which implies (because \( z \) satisfies (13))
\[
z.\alpha[n]
\]
(20)
In case \( n = j \), we have established \( z.\alpha[j] \) as required. Otherwise, we have \( n < j \). For this case, we shall establish that for each \( k \) in the range \( n \leq k < j \),
\[
z.\alpha[k] \Rightarrow z.\alpha[k + 1].
\]
(21)
From this and (20), \( z.\alpha[j] \) follows by induction.
Because \( z \) satisfies (14) we have for this \( \alpha \) and any \( k' \),
\[
(z \land \neg q).\alpha[k'] \Rightarrow (z \lor q).\alpha[k' + 1]
\]
(22)
Now assume that \( z.\alpha[k] \) holds for \( n \leq k < j \). We observe:

$z.\alpha[k]$
$= \quad \{ k \text{ is in the range of the quantification in (19)} \}$
$(z \land \neg q).\alpha[k]$
$\Rightarrow \quad \{ (22) \}$
$(z \lor q).\alpha[k + 1]$
$= \quad \{ k < j, \text{ so } k + 1 \text{ is in the range of the quantification in (19)} \}$
$((z \lor q) \land \neg q).\alpha[k + 1]$
$\Rightarrow \quad \{ \text{predicate calculus} \}$
$z.\alpha[k + 1]$
Thus we have established (21), and hence \( z.\alpha[j] \), that is, \( z.s \). □
As a consequence of these results, if we want to prove that every sequence in \( \Phi \) satisfies \( W \), it is sufficient to find predicates \( z \) and \( w \) such that \( \text{BNY}(p, q, z) \) and \( \text{BNY}(x, y, w) \), and such that the following leads-to property is provable:
$$((z \land x) \lor (w \land p)) \leadsto (q \lor y)$$
By the foregoing, we have \( p \odot q \Rightarrow z \) and \( x \odot y \Rightarrow w \) for any such \( z \) and \( w \); the leads-to property \( W \) then follows by the operational definition of leads-to.
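In the finite-state setting the side conditions $\text{BNY}(p, q, z)$ are directly checkable, which makes the recipe above mechanical: guess a candidate $z$, check $\text{BNY}$, then prove the single leads-to property. A sketch (the toy system and all names are invented for illustration):

```python
def bny(states, succ, p, q, z):
    """Check BNY(p, q, z) over a finite-state transition system:
    (13) every (p ∧ ¬q)-state satisfies z, and
    (14) z unl q: every successor of a (z ∧ ¬q)-state satisfies z ∨ q."""
    cond13 = all(z(s) for s in states if p(s) and not q(s))
    cond14 = all(z(t) or q(t)
                 for s in states if z(s) and not q(s)
                 for t in succ[s])
    return cond13 and cond14

# Toy system: 0 -> {0, 1}, 1 -> {2}, 2 -> {2}; p: s == 0, q: s == 2.
succ = {0: {0, 1}, 1: {2}, 2: {2}}
p, q = (lambda s: s == 0), (lambda s: s == 2)
print(bny(succ.keys(), succ, p, q, lambda s: s != 2))  # True
print(bny(succ.keys(), succ, p, q, lambda s: s == 0))  # False: not closed under unl
```

The first candidate ($z = \neg q$) succeeds; the second fails condition (14) because state 0 has a successor, state 1, satisfying neither $z$ nor $q$.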
5 Properties of $\odot$
In this section we briefly explore some of the simple properties of $\odot$ as a predicate transformer, using only the “definition” given in Theorem 1.
We begin with some simple identities.
$$[\text{true} \odot q \equiv \neg q]$$
(23)
**Proof of (23).** $\text{BNY}(true, q, \neg q)$ holds trivially, and thus (by Theorem 1) we have $[true \odot q \Rightarrow \neg q]$. But by the same result we have also $\text{BNY}(true, q, true \odot q)$, and thus $[true \land \neg q \Rightarrow true \odot q]$, i.e. $[\neg q \Rightarrow true \odot q]$.
\[\square\]
By similar arguments, we can prove
$$[\neg q \odot q \equiv \neg q]$$
(24)
$$[p \odot true \equiv false]$$
(25)
**Proof of (25).** From Theorem 1 $p \odot true$ is the strongest $z$ such that $\text{BNY}(p, true, z)$, i.e. such that $[false \Rightarrow z]$ and $[z \text{ unl true}]$. Evidently $false$ satisfies both of these, and is the strongest predicate to do so.
\[\square\]
A similar argument proves
$$[false \odot q \equiv false]$$
(26)
The predicate $p \odot false$ corresponds to the strongest stable predicate weaker than $p$, which has been named (in the context of UNITY) $\text{sst}_p$ by Sanders [10].
We turn now to monotonicity properties. The first of these is that $\odot$ is monotonic in its first argument, i.e.
$$[x \Rightarrow y] \Rightarrow [x \odot q \Rightarrow y \odot q]$$
(27)
**Proof of (27).** It is sufficient to show $\text{BNY}(x, q, y \odot q)$ using $[x \Rightarrow y]$; the result then follows by Theorem 1. Our proof obligations are $[x \land \neg q \Rightarrow y \odot q]$ and $\llbracket (y \odot q) \text{ unl } q \rrbracket$. The latter property holds by Theorem 1. For the former, we observe:
$$x \land \neg q$$
$$\Rightarrow \quad \{ [x \Rightarrow y] \}$$
$$y \land \neg q$$
$$\Rightarrow \quad \{ \text{BNY}(y, q, y \odot q), \text{Theorem 1} \}$$
$$y \odot q$$
\[\square\]
With respect to its second argument, $\odot$ is antimonotonic, i.e.
$$[x \Rightarrow y] \Rightarrow [p \odot y \Rightarrow p \odot x]$$
(28)
**Proof of (28).** Again, it is sufficient to show $\text{BNY}(p, y, p \odot x)$. Our first obligation is $[p \land \neg y \Rightarrow p \odot x]$; we observe:

$p \land \neg y$
$\Rightarrow \quad \{ [x \Rightarrow y] \}$
$p \land \neg x$
$\Rightarrow \quad \{ \text{BNY}(p, x, p \odot x) \text{ and Theorem 1} \}$
$p \odot x$
Our second proof obligation is \(\llbracket (p \odot x) \text{ unl } y \rrbracket\). But this follows because *unl* is monotonic in its second argument:
$$[x \Rightarrow y] \Rightarrow (\llbracket (p \odot x) \text{ unl } x \rrbracket \Rightarrow \llbracket (p \odot x) \text{ unl } y \rrbracket)$$
and we have \(\llbracket (p \odot x) \text{ unl } x \rrbracket\) by Theorem 1. $\square$
As for junctivity properties of \(\odot\), it is universally disjunctive in its first argument, i.e. for any set \(I\) such that \(x.i\) is a predicate for each \(i \in I\):
$$[\langle \exists i : i \in I : x.i \rangle \odot q \equiv \langle \exists i : i \in I : x.i \odot q \rangle] \quad (29)$$
However, \(\odot\) enjoys no other interesting junctivity properties. It is not conjunctive in its first argument, because \((x \odot q) \land (y \odot q)\) may hold at a state that is not reachable via any sequence that goes through a state where \(x \land y\) holds—for example if \(y = \neg x\). By considering that \(p \odot (x \land y)\) may hold at a state where \(x\) holds, and therefore \(p \odot x\) does not hold, we can see that it is not conjunctive in its second argument either.
That \(\odot\) is not disjunctive in its second argument can be seen operationally by considering that if \(p \odot (x \lor y)\) holds at a state, neither \(x\) nor \(y\) holds at that state; the latter is not necessarily true of a state satisfying \((p \odot x) \lor (p \odot y)\). However, we do have the following:
\([p \odot (x \lor y) \Rightarrow (p \odot x) \land (p \odot y)]\) (30)
Finally, we observe that if \(\llbracket p \text{ unl } q \rrbracket\) holds, we have also (by properties of *unl*) \(\llbracket (p \land \neg q) \text{ unl } q \rrbracket\), and thus \(\text{BNY}(p, q, p \land \neg q)\); because \(p \land \neg q\) is obviously the strongest predicate for which this can be true, we have
$$[(p \odot q) \equiv (p \land \neg q)]$$
For the example given in the first section, every sequence trivially satisfies both \(true \text{ unl } b = \bot\) and \(n = \bot \text{ unl } n \neq \bot\); thus \(true \odot b = \bot\) is equivalent to \(b \neq \bot\), and \(n = \bot \odot n \neq \bot \equiv n = \bot\). Property (12) thus becomes:
$$((b \neq \bot \land n = \bot) \lor n = \bot) \leadsto (b = \bot \lor n \neq \bot)$$
which simplifies to \(n = \bot \leadsto (b = \bot \lor n \neq \bot)\).
6 Generalization to Finite Disjunctions
Theorem 0 can be generalized to disjunctions of more than two properties. Doing so requires that the operator \(\odot\) be generalized to define a predicate true at any state where each of a collection of leads-to properties is undischarged. That is, to eliminate a finite disjunction of \(N\) leads-to properties
\(\langle \lor m : 0 \leq m < N : p_m \leadsto q_m \rangle\)
we have to define a predicate that is true at any state in a sequence where each \(p_m\) has occurred earlier in the sequence, and there is no intervening \(q_m\)-state. To this end, let \(A\) be
the set of predicate pairs \((p_m, q_m)\) from the above disjunction. Define the predicate \(\Gamma.A\) as follows:
$$(\Gamma.A).s \overset{\text{def}}{=} \langle \exists \alpha, k : \alpha \in \Phi \land \alpha[k] = s : \langle \forall m :: \langle \exists i : i \leq k : p_m.\alpha[i] \land \langle \forall j : i \leq j \leq k : \neg q_m.\alpha[j] \rangle \rangle \rangle \rangle$$
We then have:
**Theorem 2.** For \(N > 0\), let \(A\) be the set of predicate pairs \((p_m, q_m)\), \(0 \leq m < N\), and let \(\Gamma.A\) be the predicate defined above. For any set \(\Phi\) of sequences with property \((\ast)\):
$$\langle \forall \alpha : \alpha \in \Phi : \alpha \models p_0 \leadsto q_0 \lor \ldots \lor p_{N-1} \leadsto q_{N-1} \rangle \equiv \langle \forall \alpha : \alpha \in \Phi : \alpha \models \Gamma.A \leadsto (q_0 \lor \ldots \lor q_{N-1}) \rangle$$
A proof of Theorem 2, along with additional details, may be found in [3].
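In the finite-state setting used earlier, \(\Gamma.A\) can be computed by searching over pairs (state, set of indices currently undischarged); the sketch below is illustrative only (the toy system and names are invented, every state is assumed reachable, and every state is assumed to have a successor):

```python
def gamma(states, succ, pairs):
    """Compute Γ.A for a finite-state transition system: the states s such
    that some path ends at s with every (p_m, q_m) in `pairs` undischarged.
    Breadth-first search over (state, pending-index-set) configurations."""
    n = len(pairs)

    def update(pending, t):
        # m stays pending while q_m has not held; p_m ∧ ¬q_m makes m pending.
        keep = {m for m in pending if not pairs[m][1](t)}
        new = {m for m in range(n) if pairs[m][0](t) and not pairs[m][1](t)}
        return frozenset(keep | new)

    frontier = {(s, update(frozenset(), s)) for s in states}  # paths may start anywhere
    seen = set(frontier)
    while frontier:
        nxt = {(t, update(pending, t))
               for s, pending in frontier for t in succ[s]}
        frontier = nxt - seen
        seen |= frontier
    full = frozenset(range(n))
    return {s for s, pending in seen if pending == full}

# Toy system 0 -> {0, 1}, 1 -> {2}, 2 -> {2}, with two tracked properties.
succ = {0: {0, 1}, 1: {2}, 2: {2}}
pairs = [(lambda s: s == 0, lambda s: s == 2),
         (lambda s: s == 1, lambda s: s == 2)]
print(sorted(gamma(succ.keys(), succ, pairs)))  # [1]
```

Only state 1 is returned: the path 0, 1 leaves both pairs simultaneously undischarged there, while at state 2 both are discharged by \(q_0\) and \(q_1\).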
7 Discussion
The results presented here can be used to show that complex dependencies among progress properties can be represented by specifications expressed using only conjunctions and simple implications of leads-to properties. The advantage of stating specifications in this restricted form is that only a few small extensions to the simple and elegant proof rules for leads-to (e.g., as in [4]) are required to obtain a complete proof system for such properties.
**Acknowledgement**
This paper has benefitted from helpful comments by J. R. Rao and Ted Herman.
**References**
|
olmocr_science_pdfs
|
2024-12-11
|
2024-12-11
|
16cac8489b4732b2d44d01fed562caba37a4d248
|
Transforming Constrained Horn Clauses for Program Verification
*Maurizio Proietti* (IASI-CNR, Rome, Italy)
Joint work with Emanuele De Angelis, Fabio Fioravanti, Alberto Pettorossi
Overview
1. Rule-based transformations: Fold/Unfold transformations, CHC specialization, predicate tupling
2. Generating verification conditions via CHC specialization
3. CHC specialization as CHC solving
4. Increasing the power of CHC solving via predicate tupling
Constrained Horn Clauses (CHCs)
• First order formulas of the form:
\[ A_1 \land \ldots \land A_n \land c \rightarrow A_0 \]
where \( A_0, A_1, \ldots, A_n \) are atomic formulas and \( c \) is a formula in a theory \( Th \) of constraints (any first-order theory). All variables are assumed to be universally quantified in front.
• Prolog-like syntax:
\[ A_0 : - \ c, A_1, \ldots, A_n. \]
CHCs for Program Verification
Program *sumupto*: summing the first $n$ integers
$x=0; \ y=0; \text{while } (x<n) \{ x=x+1; \ y=x+y\}$
**Specification**
\[ \{n \geq 1\}\ \textit{sumupto}\ \{y \geq x\} \]
**Translation**
\[
\begin{align*}
\text{false} \ &:- \ N \geq 1, \ X=0, \ Y=0, \ p(X, Y, N). & \%\text{Init} \\
p(X, Y, N) \ &:- \ X<N, \ X1=X+1, \ Y1=X1+Y, \ p(X1, Y1, N). & \%\text{Loop} \\
p(X, Y, N) \ &:- \ X \geq N, \ Y<X. & \%\text{Exit}
\end{align*}
\]
- The program satisfies the specification iff the set of CHCs is **satisfiable**.
- Satisfiability of CHCs is **undecidable**: no ultimate verifier exists.
- Two ways for improving the effectiveness of CHC-based verification:
1. Designing smart heuristics for satisfiability **solvers**;
2. **Transforming** difficult CHC problems into equisatisfiable, easy ones.
(1 and 2 not mutually exclusive)
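As a quick sanity check of the running example (plain testing on sample inputs, not a satisfiability proof), one can execute *sumupto* and confirm the postcondition y ≥ x. A minimal Python sketch:

```python
# Run the sumupto loop for sample values of n and test the
# postcondition y >= x. Satisfiability of the CHCs would establish
# this for *all* n >= 1; here we only test finitely many inputs.

def sumupto(n):
    x, y = 0, 0
    while x < n:
        x = x + 1
        y = x + y
    return x, y

for n in range(1, 50):
    x, y = sumupto(n)
    assert y >= x, f"postcondition violated for n={n}"
```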
1. Rule-based Transformations
Transformations of Functional and Logic Programs
Main idea of this talk: Transformation techniques introduced for improving functional and logic programs [Burstall-Darlington 1977, Tamaki-Sato 1984] can be adapted to ease satisfiability proofs for CHCs.
- Each rule application preserves the semantics:
\[ M(P_0) = M(P_1) = \ldots = M(P_n) \]
- The application of the rules is guided by a strategy that guarantees that \( P_n \) is more efficient than \( P_0 \).
Transformation Rules for CHCs

Initial clauses \( S_0 \rightarrow S_1 \rightarrow \ldots \rightarrow S_n \) Final clauses, where each \( \rightarrow \) is an application of a transformation rule.

R1. **Definition.** Introduce a new predicate definition
introduce \( C: \text{newp}(X) :- c, G \)
\[ S_{i+1} = S_i \cup \{C\} \qquad \text{Defs} := \text{Defs} \cup \{C\} \]

R2. **Unfolding.** Apply a resolution step
given \( C: H :- c, A, G \) in \( S_i \) and all clauses \( A :- d_1, G_1. \ \ldots \ A :- d_m, G_m. \) in \( S_i \) whose head unifies with \( A \),
derive \( S = \{ H :- c, d_1, G_1, G. \ \ldots \ H :- c, d_m, G_m, G. \} \)
\[ S_{i+1} = (S_i - \{C\}) \cup S \]
... Transformation rules for CHCs

R3. **Folding.** Replace a conjunction by a new predicate
given \( C: H :- d, B, G \) in \( S_i \) and \( \text{newp}(X) :- c, B. \) in Defs, with \( d \rightarrow c \),
derive \( D: H :- d, \text{newp}(X), G. \)
\[ S_{i+1} = (S_i - \{C\}) \cup \{D\} \]

R4. **Constraint replacement.** Replace a constraint by an equivalent one
given \( C: H :- c, B, G \) in \( S_i \) with \( Th \models c \leftrightarrow d \),
derive \( D: H :- d, B, G. \)
\[ S_{i+1} = (S_i - \{C\}) \cup \{D\} \]

R5. **Clause removal.** Remove a clause \( C \) with an unsatisfiable constraint or subsumed by another clause
\[ S_{i+1} = S_i - \{C\} \]
**Theorem** [Tamaki-Sato 84, Etalle-Gabbrielli 96]: If every new definition is unfolded at least once in \( S_0 \rightarrow S_1 \rightarrow \ldots \rightarrow S_n \) then \( S_0 \) satisfiable iff \( S_n \) satisfiable.
Transformation Strategies
• Transformation rules need to be guided by suitable strategies.
• Main idea: exploit some knowledge about the query to produce a customized, easier to verify set of clauses.
• **Specialization** [Gallagher, Leuschel, FPP, ...]: Given a set of clauses $S$ and a query $\text{false} : - c, A$, where $A$ is atomic, transform $S$ into a set of clauses $S_{\text{SP}}$ such that
$$S \cup \{\text{false} : - c, A\} \text{ satisfiable} \iff S_{\text{SP}} \cup \{\text{false} : - c, A\} \text{ satisfiable.}$$
• **Predicate Tupling** (also known as **Conjunctive Partial Deduction**) [PP, Leuschel, ...]: Given a set of clauses $S$ and a query $\text{false} : - c, G$, where $G$ is a (non-atomic) conjunction, introduce a new predicate definition $\text{newp}(X) : - G$ and transform $S$ into a set of clauses $S_T$ such that
$$S \cup \{\text{false} : - c, G\} \text{ satisfiable} \iff S_T \cup \{\text{false} : - c, \text{newp}(X)\} \text{ satisfiable.}$$
Specialization Strategy: An Example

false :- X<0, p(X,b).                 % ∀X. p(X,b) → X≥0     S_0
p(X,C) :- X=Y+1, p(Y,C).
p(X,a).
p(X,b) :- X≥0, tm_halts(X).           % the X-th Turing machine halts on X

Define: q(X) :- X<0, p(X,b).          % q(X) is a specialization of p(X,C)
                                      % to a specific constraint on X and value of C
Unfold: q(X) :- X<0, X=Y+1, p(Y,b).
        q(X) :- X<0, X≥0, tm_halts(X).   % removed: unsatisfiable constraint

Fold:   false :- X<0, q(X).           % S_3
        q(X) :- X<0, X=Y+1, q(Y).
Satisfiability of $S_3$ is easy to check: $q(X) \equiv false$ makes all clauses true (no facts for q)
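This check can also be done mechanically. A small Python sketch (exhaustive evaluation over a finite range, not a proof) confirms that interpreting q(X) as false satisfies the two clauses involving q:

```python
# Candidate interpretation: q(X) = false (there are no facts for q).
# Check the two q-clauses over a small range of integers:
#   false :- X<0, q(X).          (the body must be unsatisfiable)
#   q(X)  :- X<0, X=Y+1, q(Y).   (body -> head must hold)

def q(x):
    return False

R = range(-10, 11)
assert not any(x < 0 and q(x) for x in R)
assert all(q(x) for x in R for y in R if x < 0 and x == y + 1 and q(y))
```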
Tupling Strategy: An Example
asum(A,I,N,S) : “the sum of the elements of array A from index I to N is S”
amax(A,I,N,M) : “the largest element of array A from index I to N is M”
asum(A,I,N,S) :- I=N, S=0.
asum(A,I,N,S) :- 0≤I<N, I1=I+1, read(A,I,X), X≥0, S=S1+X, asum(A,I1,N,S1).
amax(A,I,N,M) :- I=N, M=0.
amax(A,I,N,M) :- 0≤I<N, I1=I+1, read(A,I,X), X≥0,
((X≥M1, M=X) ∨ (X<M1, M=M1)), amax(A,I1,N,M1).
Proof of satisfiability:
amax(A,I,N,M) → ∃K.(I≤K<N, read(A,K,M)) → S≥M
The proof uses quantified array constraints. Eldarica (with the SimpleArray(1) theory) does not solve these clauses.
Tupling Strategy: An Example

false :- S<M, 0≤I<N, asum(A,I,N,S), amax(A,I,N,M).      % S_0

Define: asummax(A,I,N,S,M) :- asum(A,I,N,S), amax(A,I,N,M).

Unfold & CR:
asummax(A,I,N,S,M) :- I=N, S=0, M=0.
asummax(A,I,N,S,M) :- 0≤I<N, I1=I+1, read(A,I,X), X≥0, S=S1+X,
                      ((X≥M1, M=X) ∨ (X<M1, M=M1)),
                      asum(A,I1,N,S1), amax(A,I1,N,M1).

Fold:
asummax(A,I,N,S,M) :- I=N, S=0, M=0.
asummax(A,I,N,S,M) :- 0≤I<N, I1=I+1, read(A,I,X), X≥0, S=S1+X,
                      ((X≥M1, M=X) ∨ (X<M1, M=M1)), asummax(A,I1,N,S1,M1).
false :- S<M, 0≤I<N, asummax(A,I,N,S,M).
Proof of satisfiability:
asummax(A,I,N,S,M) → S≥M
The proof only uses linear arithmetic constraints.
Eldarica (with the SimpleArray(1) theory) solves these clauses.
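The invariant behind the tupled predicate can also be tested by execution (random testing, an illustration rather than a proof): for arrays of non-negative elements, the sum of a segment is at least its maximum.

```python
import random

def asummax(a, i, n):
    # mirrors the recursive structure of the tupled predicate
    # asummax(A,I,N,S,M): returns (sum, max) of a[i..n-1], with (0, 0)
    # for the empty segment, as in the base case of the clauses
    if i == n:
        return 0, 0
    s1, m1 = asummax(a, i + 1, n)
    x = a[i]
    return s1 + x, (x if x >= m1 else m1)

random.seed(0)
for _ in range(200):
    a = [random.randint(0, 9) for _ in range(random.randint(1, 8))]
    s, m = asummax(a, 0, len(a))
    assert s >= m
```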
A Generic U/F Transformation Strategy

From \(S_0\) to \(S_n\):
1. Define
2. Unfold
3. Replace Constraints
4. Remove Clauses
5. Fold? If no, return to step 1; if yes, the strategy terminates with the final clauses \(S_n\).
Some Issues About the U/F Strategy
• **Unfolding:** Which atoms should be unfolded? When to stop?
• **Constraint replacement:** A suitable constraint reasoner is needed
• **Definition:** Suitable new predicates need to be introduced to guarantee termination and effectiveness of strategy
2. Generating Verification Conditions via CHC Specialization
CHC Specialization as a Verification Condition Generator
The CHC Specializer takes as input a program P in L, a property F, and Interp_L, and produces VC.

L: Programming language
Interp_L: CHC interpreter for L
VC: Verification Conditions, i.e., a set of CHCs independent of L

F holds for P iff VC is satisfiable
The CHC specializer is parametric with respect to the programming language L and the class of properties.
Translating Imperative Programs into CHC
- C-like imperative language with assignments, conditionals, jumps. While-loops translated to conditionals and jumps.
- Commands encoded as atomic assertions: \texttt{at(Label, Cmd)}.
\begin{verbatim}
x=0;                      0. x=0;
y=0;                      1. y=0;
while (x<n) {             2. if (x<n) 3 else 6;
  x=x+1;                  3. x=x+1;
  y=x+y                   4. y=x+y;
}                         5. goto 2;
                          6. halt
\end{verbatim}
A Small-Step Operational Semantics
• The operational semantics is a one-step transition relation between configurations
\[ <n_1: cmd_1, \text{env}_1> \Rightarrow <n_2: cmd_2, \text{env}_2> \]
where: \( n: cmd \) is a labelled command and \( \text{env} \) is an environment mapping variable identifiers to values;
• Assignment
\[ <n: x=e, \text{env}> \Rightarrow <\text{next}(n), \text{update}(\text{env}, x, [e]\text{env})> \]
where: \( \text{next}(n) \) is the next labelled command and \( \text{update}(\text{env}, x, [e]\text{env}) \) updates the value of \( x \) in \( \text{env} \) to the value of expression \( e \) in \( \text{env} \);
• Conditional
\[ <n: \text{if } (e) n_1 \text{ else } n_2, \text{env}> \Rightarrow <\text{next}(n_1), \text{env}> \quad \text{if } [e]\text{env} \neq 0 \]
\[ <n: \text{if } (e) n_1 \text{ else } n_2, \text{env}> \Rightarrow <\text{next}(n_2), \text{env}> \quad \text{if } [e]\text{env} = 0 \]
• Jump
\[ <n: \text{goto } n_1, \text{env}> \Rightarrow <\text{next}(n_1), \text{env}> \]
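A minimal Python rendering of this small-step semantics, specialized to the labelled *sumupto* commands (an illustrative sketch: the environment is a dict, labels are integers):

```python
# One-step transition relation for the labelled sumupto program:
# 0. x=0; 1. y=0; 2. if (x<n) 3 else 6; 3. x=x+1; 4. y=x+y; 5. goto 2; 6. halt

def step(label, env):
    if label == 0: env['x'] = 0; return 1
    if label == 1: env['y'] = 0; return 2
    if label == 2: return 3 if env['x'] < env['n'] else 6
    if label == 3: env['x'] += 1; return 4
    if label == 4: env['y'] = env['x'] + env['y']; return 5
    if label == 5: return 2          # goto 2
    return 6                         # 6: halt (fixpoint)

def run(n):
    env, label = {'n': n, 'x': 0, 'y': 0}, 0
    while label != 6:
        label = step(label, env)
    return env

assert run(4)['y'] == 1 + 2 + 3 + 4
```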
A CHC Interpreter for the Small-Step Semantics
- **Configurations**: cf(LC, Env)
where:
- LC is a labelled command represented by a term of the form cmd(L,C), where L is a label, C is a command
- Env is an environment represented as a list of (variable-id,value) pairs: [(x,X),(y,Y),(z,Z)]
- **One-step transition relation** between configurations:
\( \text{tr}( \text{cf}(LC1,Env1), \text{cf}(LC2,Env2) ) \)
**Assignment** x=e; (source configuration ⇒ target configuration)

\begin{verbatim}
tr( cf(cmd(L, asgn(X,E)), Env1),  cf(cmd(L1, C), Env2) ) :-
    nextlab(L,L1),           % next label
    at(L1,C),                % next command
    eval(E,Env1,V),          % evaluate expression
    update(Env1,X,V,Env2).   % update environment
\end{verbatim}

More clauses for predicate \texttt{tr} encode the semantics of the other commands.
Encoding Partial Correctness Properties
- **Partial correctness** specification (Hoare triple):
\[
\{ \varphi \} \text{prog} \{ \psi \}
\]
If the initial values of the program variables satisfy the precondition \( \varphi \) and \( \text{prog} \) terminates, then the final values of the program variables satisfy the postcondition \( \psi \).
- **CHC encoding** of partial correctness:
```prolog
false :- initConf(Cf), reach(Cf).
reach(Cf1) :- tr(Cf1,Cf2), reach(Cf2).
reach(Cf) :- errorConf(Cf).
initConf(cf(C, Env)) :- at(0,C), \varphi(Env).
errorConf(cf(C, Env)) :- at(h,C), \neg \psi(Env).
tr(cf1,cf2) :- ...
```
\textit{PC-prop}
- \( \{ \varphi \} \text{prog} \{ \psi \} \) is **valid** iff **PC-prop** is satisfiable.
VCGen: Generating Verification Conditions
- **VCGen** is a transformation strategy that specializes **PC-prop** to a given \( \{\varphi\} \text{ prog} \{\psi\} \), and removes explicit reference to the interpreter (function \( \text{cf} \), predicates \( \text{at}, \text{tr} \), etc.).
- All new definitions are of the form \( \text{newp}(X) :- \text{reach}(\text{cf}) \), corresponding to a program point.
- Limited reasoning about constraints at specialization time (satisfiability only).
- VCGen is parametric wrt \( \text{Interp}_L \) (to a large extent).
- If \( \text{PC-prop} \xrightarrow{VCGen} \text{VC} \) then **PC-prop** is **satisfiable** iff **VC** is **satisfiable**
Generating Verification Conditions: An Example
PC property:
\{n \geq 1\}\ \textit{sumupto}\ \{y \geq x\}
CHC encoding:
\begin{align*}
\text{false} & :- \text{initConf}(\text{Cf}), \text{reach}(\text{Cf}). \\
\text{reach}(\text{Cf1}) & :- \text{tr}(\text{Cf1}, \text{Cf2}), \text{reach}(\text{Cf2}). \\
\text{reach}(\text{Cf}) & :- \text{errorConf}(\text{Cf}). \\
\text{initConf}(\text{cf}(C, [(x,X),(y,Y),(n,N)])) & :- \text{at}(0, C), \ N \geq 1. \\
\text{errorConf}(\text{cf}(C, [(x,X),(y,Y),(n,N)])) & :- \text{at}(6, C), \ Y < X. \\
\text{tr}(\text{Cf1}, \text{Cf2}) & :- \ldots \\
\ldots & \\
\text{at}(0, \text{asgn}(\text{int}(x), \text{int}(0))). &
\end{align*}
VCGen
Verification Conditions:
\begin{align*}
\text{false} & :- \ N \geq 1, \ X=0, \ Y=0, \ p(X, Y, N). \\
p(X, Y, N) & :- \ X < N, \ X1=X+1, \ Y1=X1+Y, \ p(X1, Y1, N). \\
p(X, Y, N) & :- \ X \geq N, \ Y < X.
\end{align*}
Experimental evaluation
- Other semantics: multi-step for recursive functions, exceptions, etc.
- Checking the satisfiability of the VCs using QARMC, Z3 (PDR), MathSAT (IC3), Eldarica
- VCGen+QARMC compares favorably to HSF+QARMC
3. CHC Specialization as CHC Solving
VCTransf: Specializing Verification Conditions
VCTransf specializes the verification conditions by propagating constraints: from a query false :- c, p(X) a new predicate definition newp(X) :- c, p(X) is introduced, and the theory of constraints is applied during unfolding.

Introduction of new predicates by generalization (e.g., widening and convex hull techniques).

VC is satisfiable iff VC' is satisfiable.
Eindhoven, April 3rd, 2016
VCTransf as CHC Solving
The effect of applying VCTransf can be:
1. A set VC’ of verification conditions without constrained facts for the predicates on which the queries depend (i.e., no clauses of the form p(X) :- c).
VC’ is satisfiable.
2. A set VC’ of verification conditions including false :- true.
VC’ is unsatisfiable.
3. Neither 1 nor 2 (constrained facts of the form p(X) :- c, but not false :- true).
Satisfiability is unknown.
For the specialization example above:

\begin{verbatim}
false :- X<0, p(X,b).
p(X,C) :- X=Y+1, p(Y,C).
p(X,a).
p(X,b) :- X>=0, tm_halts(X).
\end{verbatim}

VCTransf yields (case 1: no constrained facts for q, hence satisfiable):

\begin{verbatim}
false :- X<0, q(X).
q(X) :- X<0, X=Y+1, q(Y).
\end{verbatim}
Iterated CHC Specialization
- If the satisfiability of $VC'$ is unknown, $VCTransf$ can be iterated.
- Between two applications of $VCTransf$ we can apply the Reversal transformation (a particular case of the query-answer transformation [Kafle, Gallagher 15] for linear programs) that interchanges premises and conclusions of clauses (backward reasoning from queries simulates forward reasoning from facts).
\[
\begin{array}{lcl}
\text{false} :- a(X), p(X). & & p(X) :- a(X). \\
p(X) :- c(X,Y), p(Y). & \xrightarrow{\;\text{Reversal}\;} & p(Y) :- c(X,Y), p(X). \\
p(X) :- b(X). & & \text{false} :- b(X), p(X). \\
\end{array}
\]

VC is satisfiable iff VC' is satisfiable.
\[
VC_0 \xrightarrow{\text{VCTransf}} VC_1 \xrightarrow{\text{VCTransf}} VC_2 \xrightarrow{\text{VCTransf}} VC_3 \longrightarrow \cdots \longrightarrow VC_n
\]
Iterated CHC Specialization: *SumUpto* Example
\[
\begin{align*}
\text{false} &: - N \geq 1, X=0, Y=0, p(X, Y, N). & \text{VC}_0 \\
p(X, Y, N) &: - X < N, X1=X+1, Y1=X1+Y, p(X1, Y1, N). \quad & \\
p(X, Y, N) &: - X \geq N, Y < X. \\
\end{align*}
\]
\[\text{VCTransf}\]
\[
\begin{align*}
\text{false} &: - N \geq 1, X1=1, Y1=1, \text{new2}(X1, Y1, N). & \text{VC}_1 \\
\text{new2}(X, Y, N) &: - X=1, Y=1, N > 1, X1=2, Y1=3, \text{new3}(X1, Y1, N). \\
\text{new3}(X, Y, N) &: - X1 \geq 1, Y1 \geq X1, X < N, X1=X+1, Y1=X1+Y, \text{new3}(X1, Y1, N). \quad & \\
\text{new3}(X, Y, N) &: - Y \geq 1, N \geq 1, X \geq N, Y < X. \\
\end{align*}
\]
\[\text{Reversal}\]
\[
\begin{align*}
\text{new2}(X1, Y1, N) &: - N \geq 1, X1=1, Y1=1. & \text{VC}_2 \\
\text{new3}(X1, Y1, N) &: - X=1, Y=1, N > 1, X1=2, Y1=3, \text{new2}(X, Y, N). \\
\text{new3}(X1, Y1, N) &: - X1 \geq 1, Y1 \geq X1, X < N, X1=X+1, Y1=X1+Y, \text{new3}(X, Y, N). \\
\text{false} &: - N \geq 1, Y \geq 1, X \geq N, Y < X, \text{new3}(X, Y, N). \quad & \\
\end{align*}
\]
\[\text{VCTransf}\]
\[
\begin{align*}
\text{false} &: - N \geq 1, Y \geq 1, X \geq N, Y < X, \text{new4}(X, Y, N). & \text{VC}_3 \\
\end{align*}
\]
No constrained facts. *VC*$_3$ is satisfiable.
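The same conclusion can be sanity-checked directly on the original clauses: a bounded bottom-up evaluation of p over a small finite range (an under-approximation, so an illustration rather than a proof, using the loop update y=x+y of *sumupto*) never derives the body of the query.

```python
# Least model of p over a finite range, computed bottom-up:
#   p(X,Y,N) :- X>=N, Y<X.                        (exit/error case)
#   p(X,Y,N) :- X<N, X1=X+1, Y1=X1+Y, p(X1,Y1,N). (loop case)
# Tuples whose successors fall outside the range are treated as absent.
from itertools import product

R = range(0, 8)
model = {(x, y, n) for x, y, n in product(R, R, R) if x >= n and y < x}
changed = True
while changed:
    changed = False
    for x, y, n in product(R, R, R):
        if x < n and (x + 1, x + 1 + y, n) in model and (x, y, n) not in model:
            model.add((x, y, n))
            changed = True

# query: false :- N>=1, X=0, Y=0, p(X,Y,N) -- body never satisfiable here
assert not any((0, 0, n) in model for n in R if n >= 1)
```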
Experiments with VeriMAP
216 examples taken from: DAGGER, TRACER, InvGen, and TACAS 2013 Software Verification Competition.
- ARMC [Podelski, Rybalchenko PADL 2007]
- HSF(C) [Grebenshchikov et al. TACAS 2012]
- TRACER [Jaffar, Murali, Navas, Santosa CAV 2012]
                     VeriMAP (Gen_ph)     ARMC       HSF(C)     TRACER
 1  correct               185              138         159         91
 2    safe                154              112         137         74
 3    unsafe               31               26          22         17
 4  incorrect               0                9           5         13
 5    false                 0                8           3         13
 6    missed                0                1           2          0
 7  errors                  0               18           0         20
 8  timed-out              31               51          52         92
 9  total score         339 (0)        210 (-40)   268 (-28)  113 (-52)
10  total time         10717.34        15788.21    15770.33   27757.46
11  average time          57.93          114.41       99.18     305.03
Table 1: Verification results using VeriMAP, ARMC, HSF(C) and TRACER. For each column the sum of the values of lines 1, 4, 7, and 8 is 216, which is the total number of the verification problems we have considered. The timeout limit is five minutes. Times are in seconds.
4. Improving CHC Solving via Predicate Tupling
Limitations of Linear Arithmetic Specifications
• Not very expressive.
• Example: computing Fibonacci numbers
\[\text{fibonacci: while } (n > 0) \{\]
\[\quad t = u;\]
\[\quad u = u + v;\]
\[\quad v = t;\]
\[\quad n = n - 1\]
\[\}\]
\[ \{n=N, \ N \geq 0, \ u=1, \ v=0, \ t=0\} \ \textit{fibonacci} \ \{\text{fib}(N, u)\} \]
• The postcondition of \text{fibonacci} cannot be specified by using linear arithmetic constraints \textit{only}.
• Recursive Horn specifications:
\{z_1 = P_1, \ldots, z_s = P_s, \textbf{pre}(P_1, \ldots, P_s)\} \textit{prog} \{\textbf{post}(P_1, \ldots, P_s, z)\}
where:
- \(z_1, \ldots, z_s\) are global variables of \textit{prog};
- \(P_1, \ldots, P_s\) are parameters;
- \(z\) is a variable in \{\(z_1, \ldots, z_s\}\};
- \textbf{pre} and \textbf{post} are defined by a set of CHCs;
- \textbf{post} is a functional relation: \(z = F(P_1, \ldots, P_s)\) for some function \(F\) defined for all \(P_1, \ldots, P_s\) that satisfy \textbf{pre}.
• All computable functions on integers can be specified by sets of CHCs.
Recursive Horn Specification for Fibonacci
Fibonacci specification:
\{n=N, N\geq 0, u=1, v=0, t=0\} fibonacci \{fib(N,u)\}
where:
\textit{Fib}:
\begin{align*}
\text{fib}(0,F) & : - F=1. \\
\text{fib}(1,F) & : - F=1. \\
\text{fib}(N3,F3) & : - N1\geq 0, N2=N1+1, N3=N2+1, F3=F1+F2, \text{fib}(N1,F1), \text{fib}(N2,F2).
\end{align*}
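The CHC definition of fib and the imperative fibonacci loop can be cross-checked by execution (testing on sample inputs, not a correctness proof):

```python
# fib as defined by the clauses: fib(0)=1, fib(1)=1, fib(n+2)=fib(n)+fib(n+1)
def fib(n):
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# the imperative fibonacci program from the specification
def fibonacci_loop(n):
    u, v, t = 1, 0, 0
    while n > 0:
        t = u
        u = u + v
        v = t
        n = n - 1
    return u

for n in range(15):
    assert fibonacci_loop(n) == fib(n)
```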
Translating Partial Correctness into CHCs
A recursive Horn specification can be translated into CHCs in two steps:
Step 1. Translate the operational semantics into CHCs;
Step 2. Generate verification conditions as a set of CHCs.
Translating the Operational Semantics
• Define a relation \textit{fibonacci\_prog}(N,U) such that, for all integers N, if the program variables satisfy the precondition
\[
n=N, \text{ } N \geq 0, \text{ } u=1, \text{ } v=0, \text{ } t=0
\]
then the final value of u computed by program \textit{fibonacci} is U.
• \textit{fibonacci\_prog} is defined by a set \textit{OpSem} of clauses that encode the operational semantics:

\begin{verbatim}
fibonacci_prog(N,U) :- initConf(Cf0,N), reach(Cf0,Cfh), finalConf(Cfh,U).
initConf(cf(LC, [(n,N),(u,U),(v,V),(t,T)]), N) :- N>=0, U=1, V=0, T=0, at(0,LC).
finalConf(cf(LC, [(n,N),(u,U),(v,V),(t,T)]), U) :- at(last,LC).
reach(Cf,Cf).
reach(Cf0,Cf2) :- tr(Cf0,Cf1), reach(Cf1,Cf2).
\end{verbatim}
\text{tr(Cf0,Cf1)} is the interpreter of the imperative language.
Specializing the Operational Semantics
- Apply VCGen and specialize $OpSem$ w.r.t. the program $fibonacci$.
- $OpSem_{SP}$:

\begin{verbatim}
fibonacci_prog(N,F) :- N>=0, U=1, V=0, T=0, r(N,U,V,T, N1,F,V1,T1).
r(N,U,V,T, N2,U2,V2,T2) :- N>=1, N1=N-1, U1=U+V, V1=U, T1=U,
                           r(N1,U1,V1,T1, N2,U2,V2,T2).
r(N,U,V,T, N,U,V,T) :- N=<0.
\end{verbatim}

- For every query false :- $G$, $OpSem \cup \{\text{false} :- G\}$ is satisfiable iff $OpSem_{SP} \cup \{\text{false} :- G\}$ is satisfiable.
- $OpSem_{SP}$ is linear recursive (at most one predicate in the premise).
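The specialized clauses can likewise be read as an executable function (a Python sketch of $OpSem_{SP}$, assuming the intended reading of r as threading the state (N,U,V,T) to the final state):

```python
# r(N,U,V,T, N2,U2,V2,T2): iterate the loop body until N < 1
def r(n, u, v, t):
    while n >= 1:
        n, u, v, t = n - 1, u + v, u, u
    return n, u, v, t

def fibonacci_prog(n):
    assert n >= 0           # precondition N >= 0
    _, u, _, _ = r(n, 1, 0, 0)
    return u

assert fibonacci_prog(0) == 1
assert fibonacci_prog(5) == 8
```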
Generating Verification Conditions (Nonlinear recursive)
Q: false :- F1≠F2, fibonacci_prog(N,F1), fib(N,F2).

plus clauses for fibonacci_prog ($OpSem_{SP}$) and fib ($Fib$).

Program fibonacci is partially correct iff $OpSem_{SP} \cup Fib \cup \{Q\}$ is satisfiable.

Q is not linear recursive; the clauses in Fib are not linear recursive.
Generating Verification Conditions
(Almost linear recursive)
- Under suitable assumptions, all clauses become linear recursive, except for the queries.
- Transform each clause $\text{post}(P_1, \ldots, P_s, Z) :- B$ defining the postcondition, into a query for $\text{prog}$:
1. Replace $\text{post}$ by $\text{prog}$ in the head and in the body
$$\text{prog}(P_1, \ldots, P_s, Z) :- B'$$
2. Move the conclusion to the premise (exploiting functionality of $\text{prog}$):
$$\text{false} :- Y \neq Z, \text{prog}(P_1, \ldots, P_s, Y), B'$$
where $Y$ is a new variable
3. Case split:
$$\text{false} :- Y > Z, \text{prog}(P_1, \ldots, P_s, Y), B'$$
$$\text{false} :- Y < Z, \text{prog}(P_1, \ldots, P_s, Y), B'$$
- If for all generated queries $\text{false} :- G$, $\text{OpSem}_{sp} \cup \{\text{false} :- G\}$ is satisfiable, then
$$\{z_1= P_1, \ldots, z_s= P_s, \text{pre}(P_1, \ldots, P_s)\} \text{prog} \{\text{post}(P_1, \ldots, P_s, z)\}$$
is valid.
Verification Conditions for Fibonacci
Generating the verification conditions for fibonacci
fib(0,F) :- F=1.
(1) Replace fib by fibonacci_prog in the head and in the body
fibonacci_prog(0,F) :- F=1.
(2) Move the conclusion to the premise:
false :- F≠1, fibonacci_prog(0,F).
(3) Case split
Q1: false :- F>1, fibonacci_prog(0,F).
Q2: false :- F<1, fibonacci_prog(0,F).
Verification Conditions for Fibonacci
- **Verification conditions for fibonacci**
Q1: false :- F>1, fibonacci_prog(0,F).
Q2: false :- F<1, fibonacci_prog(0,F).
Q3: false :- F>1, fibonacci_prog(1,F).
Q4: false :- F<1, fibonacci_prog(1,F).
Q5: false :- N1>=0, N2=N1+1, N3=N2+1, F3>F1+F2, fibonacci_prog(N1,F1), fibonacci_prog(N2,F2), fibonacci_prog(N3,F3).
Q6: false :- N1>=0, N2=N1+1, N3=N2+1, F3<F1+F2, fibonacci_prog(N1,F1), fibonacci_prog(N2,F2), fibonacci_prog(N3,F3).
- Program fibonacci is partially correct if, for i=1,...,6, OpSem_{SP} U {Qi} is satisfiable.
Satisfiability via LA-Solvability
- Consider constraints $C_{LA}$ in Linear (Integer) Arithmetic (linear equalities and inequalities over the integers). An LA-solution of a set $S$ of CHCs is a mapping
\[ \Sigma : \text{Atom} \rightarrow C_{LA} \]
such that, for every clause $A_0 :- c, A_1, \ldots, A_n$ in $S$,
\[ \text{LA} \models \forall (c \land \Sigma(A_1) \land \ldots \land \Sigma(A_n) \rightarrow \Sigma(A_0)) \]
- A set of CHCs is LA-solvable if it has an LA-solution.
- LA-solvability implies satisfiability, but not vice versa.
- Satisfiability is undecidable and not semidecidable. LA-solvability is semidecidable. ($C_{LA}$ is r.e. and entailment is decidable.) Complete LA-solving methods exist (e.g., CEGAR).
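The definition can be illustrated on a tiny clause set (a hypothetical example, not from the slides), replacing the entailment check by exhaustive evaluation over a small range:

```python
# Clause set (hypothetical example):
#   p(X)  :- X=0.
#   p(X1) :- X1=X+1, p(X).
#   false :- X<0, p(X).
# Candidate LA-solution: sigma(p(X)) = (X >= 0).

def sigma_p(x):
    return x >= 0

R = range(-20, 21)
assert all(sigma_p(x) for x in R if x == 0)                      # fact clause
assert all(sigma_p(x1) for x in R for x1 in R
           if x1 == x + 1 and sigma_p(x))                        # step clause
assert not any(x < 0 and sigma_p(x) for x in R)                  # query clause
```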
Limitations of LA-solving
- Program \textit{fibonacci} is partially correct and each $OpSem_{SP} \cup \{Qi\}$ is satisfiable.
- However, there is no LA-solution for $OpSem_{SP} \cup \{Q5\}$ (nor for $OpSem_{SP} \cup \{Q6\}$).
\textit{Proof} (see details in ICLP-15 paper): there exists no LA constraint \(c(N,F)\) which is an LA-solution of the clauses for \textit{r_fibonacci} and:
\[
\text{LA} \models \forall (N1 \geq 0, N2=N1+1, N3=N2+1, F3>F1+F2, c(N1,F1), c(N2,F2), c(N3,F3) \rightarrow \text{false})
\]
- LA-solvers cannot prove the partial correctness of \textit{fibonacci}.
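To see the limitation concretely, the sketch below (our illustration) tries the hypothetical single-atom candidate c(N,F) := F >= 1. It is inductive for the fibonacci clauses, yet there is a valuation satisfying Q5's premise together with c on each atom, so this candidate cannot refute Q5. (The paper proves the stronger fact that no LA constraint at all works.)

```python
# Hypothetical candidate LA-solution: assign to fibonacci_prog(N, F)
# the single-atom constraint c(N, F) := F >= 1.
def c(n, f):
    return f >= 1

# c is inductive for the fibonacci clauses:
assert c(0, 1) and c(1, 1)            # base clauses: F = 1
for f1 in range(1, 20):
    for f2 in range(1, 20):
        assert c(2, f1 + f2)          # recursive clause: F3 = F1 + F2

# ...but c cannot refute Q5: this valuation satisfies Q5's premise
# (N1 >= 0, N2 = N1+1, N3 = N2+1, F3 > F1+F2) and c on every atom.
n1, n2, n3, f1, f2, f3 = 0, 1, 2, 1, 1, 5
assert n1 >= 0 and n2 == n1 + 1 and n3 == n2 + 1 and f3 > f1 + f2
assert c(n1, f1) and c(n2, f2) and c(n3, f3)
```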
Improving LA-solving by Transforming Verification Conditions
• Solution 1: More powerful constraint theories, but decidability of entailment is lost for non-linear polynomials [Matijasevic 70].
• Solution 2: Transform the verification conditions into equisatisfiable CHCs that are (sometimes) LA-solvable.
• Transformation: Linearization via predicate tupling.
Linearization
Q5: false :- N1>=0, N2=N1+1, N3=N2+1, F3>F1+F2,
fibonacci_prog(N1,F1), fibonacci_prog(N2,F2), fibonacci_prog(N3,F3).
• No LA-solution assigning a constraint to single fibonacci_prog atoms is able to prove that the premise of Q5 is false.
• An “LA-solution” for the conjunction of the three fibonacci_prog atoms does exist: the conjunction of the three atoms implies the LA-constraint
N1>=0, N2=N1+1, N3=N2+1, F3=F1+F2
which contradicts F3>F1+F2 in the premise of Q5, and hence implies the satisfiability of Q5.
• Apply predicate tupling and transform conjunctions of atoms into single atoms, i.e., transform OpSem_{sp} U \{Q5\} into a set of linear recursive clauses.
The Linearization Transformation
Nonlinear queries → Nonlinear clauses
Nonlinear clauses:
H :- c, p1(X1), ..., pk(Xk).
Unfold using OpSem_RI:
H :- d, q1(X1), ..., qj(Xj).
H :- e, r1(X1), ..., rm(Xm).
Define:
newq(X1, ..., Xj) :- q1(X1), ..., qj(Xj).
newr(X1, ..., Xm) :- r1(X1), ..., rm(Xm).
Fold:
H :- d, newq(X1, ..., Xj).
H :- e, newr(X1, ..., Xm).
Linear clauses
false :- N1>= 0, N2=N1+1, N3=N2+1, F3>F1+F2, U=1, V=0, new1(N3,U,V,F3,N2,F2,N1,F1).
new1(N1,U,V,U,N2,U,N3,U) :- N1=<0, N2=<0, N3=<0.
new1(N1,U,V,U,N2,U,N3,F3) :- N1=<0, N2=<0, N4=N3-1, W=U+V, N3>=1, new2(N4,W,U,F3).
new1(N1,U,V,U,N2,F2,N3,U) :- N1=<0, N4=N2-1, W=U+V, N2>=1, N3=<0, new2(N4,W,U,F2).
new1(N1,U,V,U,N2,F2,N3,F3) :- N1=<0, N4=N2-1, N2>=1, N5=N3-1, N3>=1, new3(N4,W,U,F2,N5,F3).
new1(N1,U,V,F1,N2,U,N3,U) :- N4=N1-1, W=U+V, N1>=1, N2=<0, N3=<0, new2(N4,W,U,F1).
new1(N1,U,V,F1,N2,U,N3,F3) :- N4=N1-1, N1>=1, N2=<0, N5=N3-1, W=U+V, N3>=1, new3(N4,W,U,F1,N5,F3).
new1(N1,U,V,F1,N2,F2,N3,U) :- N4=N1-1, N1>=1, N5=N2-1, W=U+V, N2>=1, N3=<0, new3(N4,W,U,F1,N5,F2).
new1(N1,U,V,F1,N2,F2,N3,F3) :- N4=N1-1, N1>=1, N5=N2-1, N2>=1, N6=N3-1, W=U+V, N3>=1, new1(N4,W,U,F1,N5,F2,N6,F3).
plus linear clauses for new2 and new3.
new1, new2, new3 have been introduced by the following definitions:
new1(N1,U,V,F1,N2,F2,N3,F3) :- r(N1,U,V,V,X1,F1,Y1,Z1), r(N2,U,V,V,X2,F2,Y2,Z2), r(N3,U,V,V,X3,F3,Y3,Z3).
new3(N2,U,V,F2,N1,F1) :- r(N1,U,V,V,X1,F1,Y1,Z1), r(N2,U,V,V,X2,F2,Y2,Z2).
The linearized clauses for fibonacci are LA-solvable.
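Functionally, tupling for fibonacci amounts to computing the triple (fib(N), fib(N+1), fib(N+2)) by a linear recursion. The sketch below (our illustration, not the slides' clauses) checks that the LA constraint F3 = F1 + F2, which refutes the premises of Q5 and Q6, is an invariant of that recursion:

```python
def t(n):
    # tupled predicate: t(n) = (fib(n), fib(n+1), fib(n+2)),
    # computed by a *linear* (single-call) recursion on n
    if n == 0:
        return (1, 1, 2)
    f1, f2, f3 = t(n - 1)
    return (f2, f3, f2 + f3)

# The LA invariant F3 = F1 + F2 holds for every n >= 0,
# which is exactly the fact an LA-solver needs against Q5/Q6.
for n in range(20):
    f1, f2, f3 = t(n)
    assert f3 == f1 + f2
```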
Properties of Linearization
• $OpSem_{sp}$ is a set of linear recursive clauses if the imperative language has no recursive functions.
• For every query $\text{false} : - G$, linearization terminates for the input $OpSem_{sp} \cup \{\text{false} : - G\}$.
• Let $TransfCls$ be the output of linearization. $OpSem_{sp} \cup \{\text{false} : - G\}$ is satisfiable iff $TransfCls$ is satisfiable.
• If $OpSem_{sp} \cup \{\text{false} : - G\}$ is LA-solvable, then $TransfCls$ is LA-solvable. Not vice versa: LA-solvability can be increased.
## Experiments
<table>
<thead>
<tr>
<th rowspan="2">Program</th>
<th rowspan="2">VCG</th>
<th colspan="2">LA-solving-1</th>
<th rowspan="2">LIN + LA-solving-2: VeriMAP</th>
</tr>
<tr>
<th>Z3</th>
<th>Eldarica</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. binary_division</td>
<td>0.02</td>
<td>4.16</td>
<td>TO</td>
<td>0.04</td>
</tr>
<tr>
<td>2. fast_multiplication_2</td>
<td>0.02</td>
<td>TO</td>
<td>3.71</td>
<td>0.01</td>
</tr>
<tr>
<td>3. fast_multiplication_3</td>
<td>0.03</td>
<td>TO</td>
<td>4.56</td>
<td>0.02</td>
</tr>
<tr>
<td>4. fibonacci</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.01</td>
</tr>
<tr>
<td>5. Dijkstra_fusc</td>
<td>0.01</td>
<td>1.02</td>
<td>3.80</td>
<td>0.05</td>
</tr>
<tr>
<td>6. greatest_common_divisor</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.01</td>
</tr>
<tr>
<td>7. integer_division</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.01</td>
</tr>
<tr>
<td>8. 91-function</td>
<td>0.01</td>
<td>1.27</td>
<td>TO</td>
<td>0.06</td>
</tr>
<tr>
<td>9. integer_multiplication</td>
<td>0.02</td>
<td>TO</td>
<td>TO</td>
<td>0.01</td>
</tr>
<tr>
<td>10. remainder</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.01</td>
</tr>
<tr>
<td>11. sum_first_integers</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.01</td>
</tr>
<tr>
<td>12. lucas</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.01</td>
</tr>
<tr>
<td>13. padovan</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.01</td>
</tr>
<tr>
<td>14. perrin</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.02</td>
</tr>
<tr>
<td>15. hanoi</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.01</td>
</tr>
<tr>
<td>16. digits10</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.01</td>
</tr>
<tr>
<td>17. digits10-itmd</td>
<td>0.06</td>
<td>TO</td>
<td>TO</td>
<td>0.04</td>
</tr>
<tr>
<td>18. digits10-opt</td>
<td>0.08</td>
<td>TO</td>
<td>TO</td>
<td>0.10</td>
</tr>
<tr>
<td>19. digits10-opt100</td>
<td>0.01</td>
<td>TO</td>
<td>TO</td>
<td>0.02</td>
</tr>
</tbody>
</table>
Conclusions
• CHC transformations are useful for CHC satisfiability
– For generating verification conditions from the program semantics
– For proving the satisfiability of CHCs
– For pre-processing the input of CHC solvers: CHC solving < transformation + CHC solving
• Future work:
– Characterization of the power of fold/unfold: How much improvement?
– Other applications (more languages, properties, etc.)
– Integration with CHC solvers
Bidirectional Domain Names
Steven Atkin
IBM, Austin, TX
atkin@us.ibm.com
Ryan Stansifer
Florida Tech, Melbourne, FL
ryan@cs.fit.edu
Mohsen Alsharif
Florida Tech, Melbourne, FL
alsharif@usa.com
Abstract
Unicode's ability to represent multilingual text makes it a good candidate for establishing the basis for a domain name structure. Unicode brings not only an encoding framework, but also support for things like bidirectional scripts. The collection of Unicode's character equivalences is both desirable and at times necessary given Unicode's goal of encoding natural language text. These equivalences, however, may present problems in the context of domain names.
Unicode's Bidirectional Algorithm as currently specified is unsuitable for determining an appropriate display ordering for multilingual domain names. The Bidirectional Algorithm possesses a set of implicit assumptions about the usage of common characters that are not applicable to domain names. Domain names use the same repertoire of characters that appears in ordinary text, but require a different algorithm for handling them.
In this paper we propose how domain names can accommodate different reading orders. In particular, this paper offers an algorithm for determining the display order “reading” of multilingual domain names. Additionally, we relate this notion to Unicode’s Bidirectional Algorithm.
Keywords
Domain Names, Unicode, Bidirectional Data, Multilingual Display
I. INTRODUCTION
The transition from the now ubiquitous monolingual ASCII based domain name system to a truly multilingual extendable system has been long awaited [3]. Indeed, it may have already begun without waiting for standards [2]. This move brings the dream of the multilingual web one step closer. Nevertheless, this transition must be approached cautiously as decisions made today may have long lasting effects.
These decisions include the set of characters for constructing names, the base character encoding, and the code point transmission protocol. Nonetheless, there are certain constraints that must be honored regardless of these decisions. For example, domain names that are legal today must still remain legal in the new domain name system.
The most natural starting point for choosing the allowable set of characters from which domain names may be constructed is to start with the character repertoire available in Unicode/ISO10646 [4]. The range of characters available in Unicode is vast and accommodates most modern written scripts. In contrast to ASCII, this includes scripts such as Arabic, Farsi, and Hebrew. On the surface extending the current domain name system may not seem to be much of a challenge, given that all we are doing is adding more characters. However, unlike ASCII which only encodes scripts written left-to-right, Unicode encodes scripts written right-to-left as well as those written left-to-right. It may well be necessary to combine characters from different scripts. However, when these scripts are intermixed their display becomes uncertain, due to the conflicting directions.
In creating a new domain name system display ambiguities cannot be tolerated. The display of domain names cannot simply be left totally to the discretion of the user or application. This would certainly lead to confusion. Unfortunately, this problem has already occurred in the display of bidirectional natural language text [1]. In order to alleviate this situation an algorithm must be created that guarantees that there are no such ambiguities. Additionally, this algorithm must be simple to understand, easy to implement, and inexpensive to execute.
This paper presents an algorithm for unambiguously determining the display order of bidirectional domain names. This paper will not delve into all aspects of creating domain names. In particular we do not discuss encoding of Unicode into domain name octets. One obvious strategy is UTF-8, but other encoding forms may be more beneficial. The paper starts by examining the class of characters that should be included along with those that should be excluded from bidirectional domain names. This is followed by an analysis of the current approaches for displaying bidirectional data. The bidirectional domain name algorithm is then presented.
In order to simplify comprehension of the examples in this paper, the following convention is used: lowercase Latin letters a-z indicate Latin letters, uppercase Latin letters A-M represent Arabic letters, uppercase Latin letters N-Z represent Hebrew letters, the digits 0-4 indicate European numerals, the digits 5-9 indicate Arabic numerals, and the hyphen-minus (European terminator) is represented by "-". See Table 1.
This is the same convention used by Unicode to discuss the input and output of the Unicode Bidirectional Algorithm [5].
**Table 1: Bidirectional character mappings**
<table>
<thead>
<tr>
<th>Direction Type</th>
<th>Mapping</th>
</tr>
</thead>
<tbody>
<tr>
<td>L</td>
<td>a-z</td>
</tr>
<tr>
<td>AL</td>
<td>A-M</td>
</tr>
<tr>
<td>R</td>
<td>N-Z</td>
</tr>
<tr>
<td>AN</td>
<td>5-9</td>
</tr>
<tr>
<td>EN</td>
<td>0-4</td>
</tr>
<tr>
<td>ET</td>
<td>-</td>
</tr>
</tbody>
</table>
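Under the stated convention (the mapping of Table 1 is the paper's notational device, not real Unicode character properties), the character classes can be expressed as a small classifier:

```python
def direction_type(ch):
    # Table 1 convention: lowercase Latin = L, A-M = Arabic letters (AL),
    # N-Z = Hebrew letters (R), 5-9 = Arabic numerals (AN),
    # 0-4 = European numerals (EN), hyphen-minus = European terminator (ET)
    if 'a' <= ch <= 'z':
        return 'L'
    if 'A' <= ch <= 'M':
        return 'AL'
    if 'N' <= ch <= 'Z':
        return 'R'
    if '5' <= ch <= '9':
        return 'AN'
    if '0' <= ch <= '4':
        return 'EN'
    if ch == '-':
        return 'ET'
    raise ValueError(f'character {ch!r} is outside the convention')
```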
**II. Proposed Domain Name Character Set**
The richness of characters available in Unicode is certainly an asset when used to encode natural language text. Nevertheless, this richness is something that is not necessarily desirable when encoding domain names. The various ways in which characters can be constructed in Unicode, precomposed and decomposed makes the representation of domain names unnecessarily complex.
This complexity presents two significant problems for encoding domain names, name registration and name equivalence. Historically these have never been a problem, because it made no difference whether the registration of a domain name was based upon characters or code points. In ASCII there is no distinction between characters and code points, however in Unicode such a distinction becomes necessary at times.
In Unicode, characters that contain diacritic marks may be represented in two ways, precomposed form and decomposed form. Characters in precomposed form are represented by a single code point, while characters in decomposed form are constructed from multiple code points. For example, the Latin capital letter u with diaeresis and acute can be encoded in three ways. See Figure 1, lines 1-3. In all cases the same visual output is produced irrespective of the sequence of code points. [4]
**Figure 1: Latin capital letter u with diaeresis and acute**
(1) U01D7 (precomposed Ǘ)
(2) U00DC, U0301 (Ü followed by combining acute accent)
(3) U0055, U0308, U0301 (U followed by combining diaeresis and combining acute accent)
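Python's standard `unicodedata` module reproduces the equivalence of Figure 1: all three code point sequences share the same canonical normalization, the single precomposed code point U+01D7.

```python
import unicodedata

precomposed = '\u01D7'              # latin capital letter u with diaeresis and acute
partial     = '\u00DC\u0301'        # U with diaeresis + combining acute accent
decomposed  = '\u0055\u0308\u0301'  # U + combining diaeresis + combining acute

# Normal Form C maps all three encodings to the precomposed code point
for s in (precomposed, partial, decomposed):
    assert unicodedata.normalize('NFC', s) == precomposed
```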
This has a big impact on the clear representation of data, and especially domain names. If domain names are registered by characters and not by code points then domain name servers or clients will be required to perform some form of normalization. If domain names are registered via code points then normalization becomes a non-problem. On the other hand, it forces the registration of multiple names that really represent the same name.
To complicate matters Unicode also encodes some characters that are merely glyph variants of other characters. This situation also requires some form of normalization. For example, the two character sequence “fi” may be represented in two ways in Unicode. See Figure 2. Line 1 of Figure 2 encodes the “fi” sequence using a single code point, while on line 2 the “fi” sequence is encoded using two code points. In either case both character sequences encode the same semantic content; the only difference is the glyph used to render the sequence.
**Figure 2: The fi ligature**
(1) UFB01 (the fi ligature, a single code point)
(2) U0066, U0069 (the letters f and i)
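The ligature in Figure 2 is a compatibility equivalence: canonical normalization (NFC) leaves U+FB01 alone, while compatibility normalization (NFKC) folds it to the two-letter sequence. This is one concrete reason for excluding compatibility characters from domain names.

```python
import unicodedata

# NFC is canonical-only: the ligature code point survives
assert unicodedata.normalize('NFC', '\uFB01') == '\uFB01'
# NFKC applies compatibility mappings: the ligature folds to 'f' + 'i'
assert unicodedata.normalize('NFKC', '\uFB01') == 'fi'
```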
In order to simplify the construction of domain names the authors recommend that decomposed characters be used only in cases where there is no corresponding precomposed character, i.e., Unicode Normal Form C [6]. This greatly simplifies the task of determining name equivalence, as each domain name has a unique representation. Additionally, those characters that are glyph variants of other characters (compatibility characters) should not be used in domain names either. At first this may seem too restrictive; however, it is nothing more than an artificial restriction. The authors argue that there is no need for compatibility characters, as domain name distinction is not based upon visual appearance. Naturally, some may argue that these characters are necessary for legacy data conversion. This is not a concern for domain names, as they are encoded in ASCII now.
The authors view multilingual domain names simply as an extension of the current domain name character set [3]. In keeping with this strategy only additional letters and digits are added. The Ayna domain name registration system, however, greatly restricts the characters that can be used in domain names [1]. Their registration system only allows European numerals and does not permit the intermixing of different script systems within a domain name1. Our approach does not have such limitations.
Control codes are excluded from domain names today (sensibly enough), and there is no reason to include them in multilingual domain names. These include the bidirectional controls as well (LRE, LRO, LRM, RLE, RLO, RLM, and PDF) [5]. The purpose of these controls is to override the behavior of Unicode’s Bidirectional Algorithm. In most situations Unicode’s Bidirectional Algorithm produces acceptable results when rendering natural language text. The controls are required only in the rarest of situations, and thus the simplicity gained by eliminating them outweighs any potential benefit.
---
1. Information about Ayna can be found at [http://registrar.ayna.com/ayna_html](http://registrar.ayna.com/ayna_html)
Naturally the set of allowable domain name characters must expand to include Arabic and Hebrew letters, however Unicode has many code points for the Arabic writing system and the Hebrew writing system. Not all of these code points are required in the context of domain names.
There are a number of Arabic characters that can be safely excluded from domain names. Specifically, these include the Arabic presentation forms, UFB50-UFDF5 and UFE70-UFEFC. It is safe to exclude these characters, as they only represent ligatures and glyph variants of the base nominal Arabic characters. Additionally, the Arabic points U064B-U0652, U0653-U0655, and U0670 should also be excluded. In most cases the Arabic points are only used as pronunciation guides. If the points were to be included, then names that differed only in their use of points would be treated as if they were distinct and different names. This is like the English homograph “bow” (the arrow) and “bow” (the ship) which are ambiguous. Removing the Arabic points eliminates such problems, with the understanding that not every Arabic word would be able to be represented. The Koranic annotation signs U06D6-U06ED can also be eliminated from domain names, as they are not used to distinguish one name from another.
In Hebrew the cantillation marks U0591-U05AF and the Hebrew points U05B0-U05C4 can be excluded, as they are predominantly used as pronunciation guides and for indicating the underlying structure of text. Additionally, the Arabic and Hebrew punctuation characters are also excluded from domain names as they are currently not permitted. The list of acceptable Arabic and Hebrew characters is given in Table 2.
Table 2: Acceptable Arabic and Hebrew characters
<table>
<thead>
<tr>
<th>Unicode Range</th>
<th>Script</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td>U05D0-U05F4</td>
<td>Hebrew</td>
<td>ISO8859-8</td>
</tr>
<tr>
<td>U0621-U064A</td>
<td>Arabic</td>
<td>ISO8859-6</td>
</tr>
<tr>
<td>U0660-U0669</td>
<td>Arabic</td>
<td>Arabic-Indic digits</td>
</tr>
<tr>
<td>U0671-U06D3, U06D5</td>
<td>Arabic</td>
<td>Extended Arabic letters</td>
</tr>
<tr>
<td>U06F0-U06FE</td>
<td>Arabic</td>
<td>Persian, Urdu, and Sindhi</td>
</tr>
</tbody>
</table>
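A label validator built from Table 2 might look as follows. This is a sketch: the ASCII ranges and the treatment of hyphen-minus are our assumptions carried over from today's domain names, and a real registry would fix the exact repertoire.

```python
# Code point ranges per Table 2 (Hebrew letters; Arabic letters;
# Arabic-Indic digits; extended Arabic letters; Persian, Urdu, and
# Sindhi digits), plus the ASCII repertoire of current domain names.
ALLOWED_RANGES = [
    (0x0030, 0x0039), (0x0041, 0x005A), (0x0061, 0x007A),  # ASCII digits/letters
    (0x05D0, 0x05F4),                                      # Hebrew
    (0x0621, 0x064A), (0x0660, 0x0669),                    # Arabic, Arabic-Indic digits
    (0x0671, 0x06D3), (0x06D5, 0x06D5),                    # extended Arabic letters
    (0x06F0, 0x06FE),                                      # Persian, Urdu, Sindhi
]

def label_is_valid(label):
    # Hyphen-minus is allowed; bidirectional controls (LRM, RLM, RLO, ...)
    # and anything else outside the ranges above are rejected.
    return all(
        ch == '-' or any(lo <= ord(ch) <= hi for lo, hi in ALLOWED_RANGES)
        for ch in label
    )
```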
### III. Bidirectional Text in Domain Names
Unicode’s ability to intermix the various script systems of the world makes the creation of multilingual documents no more difficult than the creation of monolingual documents. This new freedom, however, does come with a cost. When various script systems are intermixed, their display (generally) becomes unclear. We consider typically left-to-right writing systems (English, etc.) and the typically right-to-left writing system of Arabic.
Unicode provides an algorithm for determining the appropriate display order given an arbitrary sequence of characters in logical order. The algorithm is based upon a set of implicit heuristics along with a set of explicit control code overrides. These control codes are used in cases where the implicit rules do not yield an appropriate display order. [5]
Naturally one would assume that since Unicode characters are going to be used in domain names then Unicode’s Bidirectional Algorithm should also be used. Upon closer examination, however, this assumption proves false. In unidirectional domain names the overall reading direction and the hierarchy of the labels can be determined directly from the display. This statement is not true in multilingual domain names, however.
In many cases it is impossible to tell the overall reading direction by merely looking at the output. It turns out that it is possible to obtain the same output (display order) given two distinct inputs in logical order. In the example of Figure 3, the inputs on lines 1 and 2 produce the same output on line 3. In this case the most specific part of the name on line 1 is “ABC”, while on line 2 it is “IBM”. This does not indicate that there is a flaw in Unicode’s algorithm, rather it only further illustrates the hidden assumptions concerning the intended use of the Unicode Bidirectional Algorithm.
Normally in natural language text processing this is not a problem given that the two can be distinguished by their physical justification on the screen, either right or left. This luxury, however, is not afforded to domain names. When a domain name appears in printed text there is no generally accepted way to indicate the overall reading direction.
Nonetheless, some may argue that if the entire domain name is in Arabic then the label hierarchy should be reversed. The problem in adopting this strategy occurs when the entire domain name is not from the same script, as is the case in this example. The authors suggest that the output on line 4 in Figure 3 is more desirable. This output is consistent with the current structure of domain names. In this case the full stop characters are ignored, and the Bidirectional Algorithm is applied to each of the individual labels of the domain name. Naturally one might assume that Unicode’s Bidirectional Algorithm may still be appropriate, given that it is run independently on each of the individual labels. This strategy also presents problems, however.
**Figure 3: Using a full stop in a domain name**
- ABC.ibm.com (1)
- ibm.com.ABC (2)
- ibm.com.CBA (3)
- CBA.ibm.com (4)
The problem with this approach involves the use of the hyphen-minus character “-”, U002D. In the Unicode Bidirectional Algorithm the hyphen-minus is assigned to the European Terminator character class. Unfortunately, this causes the character to behave as if it were a European numeral when adjacent to European numerals. See rule W5 in Unicode Standard Annex #9 [5]. This behavior may be acceptable when processing natural language, but is unacceptable when processing domain names. In domain names the predominant usage of the hyphen-minus is as white space and not as a European terminator. The example in Figure 4 illustrates the effect of European digits surrounding the hyphen-minus characters.
Line 1 of Figure 4 is a single domain name label in logical order. Line 2 is the same label in display order; this is the output of the Unicode Bidirectional Algorithm. The text on line 3 is also in display order; this output is obtained when the hyphen-minus characters are treated as white space characters.
**Figure 4: Using a hyphen minus in a domain name**
- NOP--123 (1)
- --123PON (2)
- 123--PON (3)
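The two treatments in Figure 4 can be mimicked with the paper's letter convention (uppercase = right-to-left letters). In this sketch (ours, not the paper's algorithm), the only difference is whether a hyphen run is glued to the adjacent numerals (European-terminator behavior, line 2) or kept as a separate whitespace-like run (line 3):

```python
import re

def display(label, hyphen_as_space):
    # Split into directional runs: uppercase = RTL letters (paper's
    # convention); hyphens form their own run only in whitespace mode,
    # otherwise they are absorbed into the adjacent non-letter run.
    pattern = r'[A-Z]+|-+|[^A-Z-]+' if hyphen_as_space else r'[A-Z]+|[^A-Z]+'
    runs = re.findall(pattern, label)
    # Overall RTL reading: emit runs in reverse order, mirroring RTL runs
    return ''.join(r[::-1] if r[0].isupper() else r for r in reversed(runs))

assert display('NOP--123', hyphen_as_space=False) == '--123PON'  # Figure 4, line 2
assert display('NOP--123', hyphen_as_space=True)  == '123--PON'  # Figure 4, line 3
```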
The last remaining problem occurs when an individual label contains characters with varying directions. In this situation the reading order of a label may become ambiguous. This is illustrated in Figure 5. Line 1 in Figure 5 is an individual label in display order. Unfortunately there are two possible readings (logical orders) associated with this output, lines 2 and 3 in Figure 5. If, however, we assume that in this mixed case a label always takes a general left-to-right reading, then there is only one possible reading. The authors contend that this policy is consistent with the overall left-to-right reading of a domain name. Nevertheless, the Unicode algorithm still maps the two logical inputs to the single display output even when the overall reading direction is fixed to left-to-right (a higher-order protocol). This situation potentially causes problems for domain name resolution.
**Figure 5: Label with varying directions**
- abcFED (1)
- abcDEF (2)
- DEFabc (3)
The authors believe that domain name registration will be made in logical order. This policy is consistent with how bidirectional data is generally stored in files today. If we permit the Unicode Bidirectional Algorithm to be used for the display of domain names, then there may be situations when a domain name cannot be resolved even when it appears to be entered correctly. One solution to this situation is to register multiple logical names that yield the same display order. The authors argue that a better approach is to create an algorithm that is one-to-one. In this algorithm each display order is mapped to one and only one logical input and each logical input is mapped to one and only one display output. This policy comes with some associated cost, however. There may be cases where the reading may seem unnatural. The authors believe that this will occur infrequently and that the benefits outweigh any potential misreading.
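A one-to-one display function in the authors' spirit can be sketched as follows (our illustration): fix each label's reading to left-to-right and mirror only maximal runs of right-to-left letters (uppercase in the paper's convention). The two logical inputs of Figure 5 then map to distinct outputs:

```python
import re

def display_ltr(label):
    # Fixed left-to-right reading: runs keep their logical order;
    # only runs of RTL letters (uppercase, per the convention) are mirrored.
    # Mirroring each run is an involution, so the mapping is one-to-one.
    return ''.join(
        run[::-1] if run[0].isupper() else run
        for run in re.findall(r'[A-Z]+|[^A-Z]+', label)
    )

assert display_ltr('abcDEF') == 'abcFED'   # Figure 5, line 2 -> line 1
assert display_ltr('DEFabc') == 'FEDabc'   # line 3 maps to a *different* display
```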
**IV. Algorithm for Domain Names**
The authors believe the primary goal of the domain name display algorithm is to unambiguously represent multilingual
domain names. There are additional goals, however, that the authors judge necessary for a successful solution:
- The algorithm must provide a one-to-one mapping between names in logical order and names in display order.
- The output should be consistent with Unicode’s Bidirectional Algorithm when possible.
- The algorithm should be easy to understand and simple to implement.
- The algorithm should not require any form of normalization.
- The algorithm should minimize impact to the current DNS architecture.
- The algorithm should maximize the readability of multilingual labels.
As we have seen, Unicode's algorithm is inappropriate because different inputs give the same output, and its assumptions about syntax and punctuation are inappropriate for domain names.
Our algorithm is divided into two phases, inferencing and reordering. Inferencing resolves the direction of indeterminate characters (full stop, hyphen-minus, Arabic numeral, and European numeral). During this phase each character is assigned a strong direction, either left or right. The reordering phase takes the fully resolved characters and generates a display ordering for them.
The inferencing phase is accomplished in several passes. Implementers may wish to optimize this phase. In the first pass Arabic and Hebrew letters are assigned the right-to-left direction, while full stops and other alphabetic characters are assigned the left-to-right direction. The next set of passes resolves the directions of digits.
There are two rules for resolving the direction of Arabic and European numerals. All Arabic numerals are assigned the right-to-left direction. European numerals are assigned the left-to-right direction, unless the European numeral is surrounded by right-to-left characters (Arabic or Hebrew letters), in which case it takes the right-to-left direction. This is accomplished in two passes—a forward pass and a reverse pass. The final set of passes resolves the directions of hyphen-minus characters.
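The two digit-resolution passes above can be sketched as follows. This is our condensed illustration, not the appendix implementation: 'L' and 'R' stand for resolved left-to-right and right-to-left letters, 'E' for a European numeral, and 'A' for an Arabic numeral; in the output, numerals carry lowercase markers ('l' or 'r') so that letters and digits stay distinguishable for later phases.

```java
public class DigitResolve {
    // Resolve numeral directions over a string of type codes.
    // 'L'/'R' are strong left/right letters, 'E' a European numeral,
    // 'A' an Arabic numeral. Output marks numerals in lowercase:
    // 'l' left-to-right, 'r' right-to-left. (Codes are illustrative only.)
    public static String resolve(String types) {
        char[] t = types.toCharArray();
        boolean unresolved = false;
        char last = 'L';
        // Forward pass: Arabic numerals are always right-to-left; a
        // European numeral following a left-to-right run is left-to-right.
        for (int i = 0; i < t.length; ++i) {
            if (t[i] == 'L' || t[i] == 'R') last = t[i];
            else if (t[i] == 'A') t[i] = 'r';
            else if (t[i] == 'E' && last == 'L') t[i] = 'l';
            else if (t[i] == 'E') unresolved = true;   // decided in reverse pass
        }
        // Reverse pass: a remaining European numeral is right-to-left only
        // when a right-to-left letter also follows it; otherwise left-to-right.
        if (unresolved) {
            last = 'L';
            for (int i = t.length - 1; i >= 0; --i) {
                if (t[i] == 'L' || t[i] == 'R') last = t[i];
                else if (t[i] == 'E') t[i] = (last == 'R') ? 'r' : 'l';
            }
        }
        return new String(t);
    }
}
```

For example, a label typed as two right-to-left letters followed by two European digits ("RREE") resolves to "RRll": the digits stay left-to-right because no right-to-left letter follows them.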
There are two rules for the resolution of hyphen-minus characters. All hyphen-minus characters become left-to-right, unless the hyphen-minus is surrounded by characters whose direction is right-to-left in which case the hyphen-minus becomes right-to-left. This is the same resolution as digits, but occurs after digit resolution. At this point each character in the domain name has a strong direction.
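The hyphen-minus rule admits a sketch in the same style, assuming digits have already been resolved. Again the codes are our illustrative convention: 'L' and 'R' are resolved characters, '-' a hyphen-minus, and a hyphen that resolves right-to-left is shown as '=' so both outcomes are visible in the output.

```java
public class HyphenResolve {
    // Resolve hyphen-minus direction after digit resolution. 'L'/'R' are
    // resolved characters, '-' a hyphen-minus. An RTL hyphen is emitted
    // as '=' purely so the result is readable (illustrative convention).
    public static String resolve(String types) {
        char[] t = types.toCharArray();
        boolean unresolved = false;
        char last = 'L';
        // Forward pass: a hyphen after a left-to-right run stays LTR; a
        // hyphen after a right-to-left run is only a candidate for RTL.
        for (int i = 0; i < t.length; ++i) {
            if (t[i] == 'L' || t[i] == 'R') last = t[i];
            else if (t[i] == '-' && last == 'R') { t[i] = '?'; unresolved = true; }
        }
        // Reverse pass: a candidate becomes RTL only when a right-to-left
        // character also follows it, i.e. it is surrounded on both sides.
        if (unresolved) {
            last = 'L';
            for (int i = t.length - 1; i >= 0; --i) {
                if (t[i] == 'L' || t[i] == 'R') last = t[i];
                else if (t[i] == '?') t[i] = (last == 'R') ? '=' : '-';
            }
        }
        return new String(t);
    }
}
```

Only the middle case ("R-R") yields a right-to-left hyphen; "L-R" and "R-L" both leave the hyphen left-to-right, matching the surrounded-on-both-sides rule.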
The reordering of the resolved characters makes use of a few simple data structures:
- The digit accumulator — holds a sequence of European or Arabic numerals that have a right-to-left direction.
- The character stack — holds Arabic letters, Hebrew letters, and sequences of digits.
- The mode variable — keeps track of the current direction.
The algorithm makes use of a few simple operations on these data structures:
- The clear operation — outputs each digit from the accumulator, then outputs each character from the character stack, and finally outputs the current character. After this operation the digit accumulator and the character stack are empty.
- The empty operation — outputs each character from the character stack, then outputs each digit from the accumulator, and finally outputs the current character. After this operation the digit accumulator and the character stack are empty. Empty is like clear, but the order of operations is reversed.
- The push operation — pushes the contents of the digit accumulator onto the character stack, and then pushes the current character onto the stack. After this operation the accumulator is empty.
- The accumulate operation — appends the current character onto the digit accumulator.
The algorithm for reordering is:
- Current character direction is left-to-right (includes European numerals with a left-to-right direction).
a. If mode is left-to-right, then “empty” else “clear”.
a. Set mode to left-to-right.
- Current character direction is right-to-left and character is not a digit.
a. Perform “push”.
b. Set mode to right-to-left.
- Current character is a numeral (European and Arabic) with a right-to-left direction.
a. Perform “accumulate”.
b. Set mode to right-to-left.
At the end of the input stream if the mode is left-to-right, then “empty” else “clear”.
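The three cases above can be sketched directly. This is our condensed version of the appendix's reorderStrong, assuming the directions have already been resolved: 'L' marks a left-to-right character, 'R' a right-to-left letter, and 'D' a numeral carrying a right-to-left direction (the single-letter direction codes are our convention only).

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class Reorder {
    // Reorder resolved characters into display order. dirs.charAt(i) gives
    // the resolved direction of text.charAt(i): 'L' left-to-right, 'R'
    // right-to-left letter, 'D' right-to-left numeral (illustrative codes).
    public static String reorder(String text, String dirs) {
        StringBuilder result = new StringBuilder();
        StringBuilder digits = new StringBuilder();   // the digit accumulator
        Deque<String> stack = new ArrayDeque<>();     // the character stack
        char mode = 'L';                              // the mode variable
        for (int i = 0; i < text.length(); ++i) {
            char c = text.charAt(i), d = dirs.charAt(i);
            if (d == 'L') {                           // "empty" or "clear"
                if (mode == 'L') {                    // empty: stack, digits, char
                    while (!stack.isEmpty()) result.append(stack.pop());
                    result.append(digits);
                } else {                              // clear: digits, stack, char
                    result.append(digits);
                    while (!stack.isEmpty()) result.append(stack.pop());
                }
                result.append(c);
                digits.setLength(0);
                mode = 'L';
            } else if (d == 'R') {                    // right-to-left letter: "push"
                stack.push(digits.toString());
                stack.push(String.valueOf(c));
                digits.setLength(0);
                mode = 'R';
            } else {                                  // RTL numeral: "accumulate"
                digits.append(c);
                mode = 'R';
            }
        }
        if (mode == 'L') {                            // final empty or clear
            while (!stack.isEmpty()) result.append(stack.pop());
            result.append(digits);
        } else {
            result.append(digits);
            while (!stack.isEmpty()) result.append(stack.pop());
        }
        return result.toString();
    }
}
```

With the convention that uppercase letters stand in for right-to-left characters, reorder("abDE", "LLRR") reverses only the trailing run, and a label whose interior digits resolved right-to-left keeps them as a unit inside the reversed run.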
The bidirectional domain name display algorithm converts a string of characters in logical order to a string of the same length in display order. In fact, the algorithm is its own inverse; in other words, $A(A(x)) = x$. Hence $A$ is a one-to-one function.
To see that this is the case we make the following argument. First, it is obvious that the function $A$ loses no characters, so the output is a string of the same length and a permutation of the original characters. Second, all left-to-right runs (including full stop and certain hyphen-minus characters), are preserved in exactly their original positions. Third, all right-to-left runs are permuted within their own run. No characters “leak”, “flop” or move to another run and the right-to-left runs are preserved in their same order. Finally, the right-to-left runs are reversed (approximately).
The nature of reversing right-to-left runs requires further explanation as the numerals (Arabic and European) complicate the matter. Consider the logical right-to-left run on line 1 in Figure 6 and its corresponding display on line 2 in Figure 6. The output on line 2 is a string reversal treating digits as units.
Hence, this sort of reversal is its own inverse. Therefore, the whole algorithm is its own inverse.
**Figure 6: String reversal**
- AB12CDE678FGHI (1)
- IHGF678EDC12BA (2)
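The "digits as units" reversal of Figure 6 can be sketched on plain ASCII, with uppercase letters standing in for right-to-left characters (an illustrative convention only, not part of the algorithm itself):

```java
public class RunReversal {
    // Reverse a string while keeping maximal runs of digits intact,
    // mirroring the right-to-left run reversal described above.
    public static String reverse(String s) {
        StringBuilder out = new StringBuilder();
        int i = s.length() - 1;
        while (i >= 0) {
            if (Character.isDigit(s.charAt(i))) {
                int end = i;                            // last digit of the run
                while (i >= 0 && Character.isDigit(s.charAt(i))) --i;
                out.append(s, i + 1, end + 1);          // digit run in original order
            } else {
                out.append(s.charAt(i--));              // single non-digit character
            }
        }
        return out.toString();
    }
}
```

Because reversing a string whose digit runs are kept intact is itself an involution, applying the sketch twice returns the input, which is the heart of the one-to-one argument.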
This algorithm can be used to accommodate two different groups of domain name creators. One group knows what it wants to register but is unsure how it will be displayed. On the other hand, there are creators who know what they want to see displayed but are unsure what logical sequence of characters should be registered. This single universal algorithm addresses both of these situations and eliminates the need for specialized individual algorithms (logical to display and display to logical).
**V. Display Order**
In this section we provide sample input (logical order) along with corresponding output (display order) of domain name labels. This set of input represents some of the numerous ways in which domain name labels can be created. In the following tables we do not use entire domain names such as www.label.label.com. This is unnecessary, as each label is rendered independently of the others.
The output was tested for readability by a number of native Arabic readers (at Florida Tech). These readers are from various countries (Saudi Arabia, Libya, Egypt, and Lebanon) and represent a wide audience of potential domain name creators.
The sample test cases are divided into two classes. One class contains sequences of letters and letters with hyphens. The other class contains sequences of letters, digits, and hyphens. All of the sample input tables contain three columns. The logical column contains the logical sequence of typed characters, the display column contains the output from the domain name display algorithm, and the comment column indicates the type of the logical input sequence. An explanation of these types can be found in Table 1. Table 3 and Table 4 contain labels from the first and second input classes respectively.
We anticipate that most Arabic domain name creators will construct labels that are comprised of Arabic letters, Arabic numerals, European numerals, and hyphen-minus characters. On the other hand, we foresee Hebrew domain name creators constructing labels that are solely comprised of Hebrew letters, European numerals, and hyphen-minus characters. Naturally, we do not expect Latin based labels to dramatically change. We consider the correct output in these cases to be essential.
There are other inputs that are "contrived" or "artificial". These contrived inputs are cases where it is difficult to determine an appropriate display. Such situations include the intermixing of right-to-left and left-to-right characters with Arabic and European numerals. These cases are discussed here to illustrate the behavior of the algorithm, and also to examine trade-offs made during the construction of the algorithm. Certainly, the algorithm must correctly display both legacy domain names and single-script domain names. Legacy conformance is illustrated in test case 1 in Table 3 and test case 1 in Table 4. In the case of single-script (Arabic or Hebrew) labels, it is essential that their display be consistent with user expectations. This is shown in test cases 2 and 3 in Table 3.
Consider the contrived cases, in particular test case 5 in Table 3. This test case consists of Arabic letters (right-to-left) followed by left-to-right characters. Normally, a reader would expect the Arabic characters to appear at the rightmost end of the displayed label. This is the output that Unicode's Bidirectional Display Algorithm yields. Our algorithm does not generate this display order, as doing so requires the adoption of rules that cause the algorithm to no longer remain one-to-one. This problem was explored earlier; see Figure 5. Our algorithm always generates a display order that has an implicit left-to-right embedding. The authors claim that this is an acceptable trade-off given the goal of creating a one-to-one algorithm. Additionally, this policy is consistent with the overall left-to-right reading of a domain name.
The next example examines the algorithm's treatment of digits bordering conflicting directional boundaries. For example, when the Unicode Bidirectional Algorithm with the embedding level fixed to left-to-right is applied to test cases 5 and 6 in Table 4 the output (12BA) is the same. This occurs because digits bind with the last logical strong directional run. See rules W2 and W7 in Unicode Standard Annex #9 [5]. In test case 5 in Table 4 the "12" binds with the "AB", while in test case 6 in Table 4 the "12" binds with the left-to-right embedding. Our algorithm, however, always breaks directional ties by examining the types of the digits. In other words, European numerals always bind with a left-to-right run, while Arabic numerals always bind with a right-to-left run. The authors argue that this trade-off is acceptable, as this rule is easier to comprehend and implement despite the discrepancy with Unicode's output.
<table>
<thead>
<tr>
<th>Test Case #</th>
<th>Logical</th>
<th>Display</th>
<th>Comment</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>abc</td>
<td>abc</td>
<td>L,L,L</td>
</tr>
<tr>
<td>2</td>
<td>ABC</td>
<td>CBA</td>
<td>AL,AL,AL</td>
</tr>
<tr>
<td>3</td>
<td>NOP</td>
<td>PON</td>
<td>R,R,R</td>
</tr>
<tr>
<td>4</td>
<td>abDE</td>
<td>abED</td>
<td>L,L,AL,AL</td>
</tr>
<tr>
<td>5</td>
<td>DEab</td>
<td>EDab</td>
<td>AL,AL,L,L</td>
</tr>
<tr>
<td>6</td>
<td>abNO</td>
<td>abON</td>
<td>L,L,R,R</td>
</tr>
<tr>
<td>7</td>
<td>NOab</td>
<td>ONab</td>
<td>R,R,L,L</td>
</tr>
<tr>
<td>8</td>
<td>abDfgh</td>
<td>abDfgh</td>
<td>L,L,AL,L,L,L</td>
</tr>
<tr>
<td>9</td>
<td>ABDeGH</td>
<td>DBAeHG</td>
<td>AL,AL,AL,L,AL,AL</td>
</tr>
<tr>
<td>10</td>
<td>ABNOde</td>
<td>ONBAde</td>
<td>AL,AL,R,R,L,L</td>
</tr>
<tr>
<td>11</td>
<td>ab-de</td>
<td>ab-de</td>
<td>L,L,ET,L,L</td>
</tr>
<tr>
<td>12</td>
<td>AB-DE</td>
<td>ED-BA</td>
<td>AL,AL,ET,AL,AL</td>
</tr>
<tr>
<td>13</td>
<td>NO--QR</td>
<td>RQ--ON</td>
<td>R,R,ET,ET,R,R</td>
</tr>
<tr>
<td>14</td>
<td>ab-DE--NO</td>
<td>ab-ON--ED</td>
<td>L,L,ET,AL,AL,ET,ET,R,R</td>
</tr>
<tr>
<td>15</td>
<td>AB--dc-NO</td>
<td>BA--dc-ON</td>
<td>AL,AL,ET,ET,L,L,ET,R,R</td>
</tr>
</tbody>
</table>
Table 3: Labels with letters and hyphens
Most significantly, this trade-off enables the algorithm to remain one-to-one.
Table 4: Labels with letters, digits, and hyphens
<table>
<thead>
<tr>
<th>Test Case #</th>
<th>Logical</th>
<th>Display</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>ab12</td>
<td>ab12</td>
</tr>
<tr>
<td>2</td>
<td>56-ab</td>
<td>56-ab</td>
</tr>
<tr>
<td>3</td>
<td>56-AB</td>
<td>BA-56</td>
</tr>
<tr>
<td>4</td>
<td>56-NO</td>
<td>ON-56</td>
</tr>
<tr>
<td>5</td>
<td>AB12</td>
<td>BA12</td>
</tr>
<tr>
<td>6</td>
<td>12AB</td>
<td>12BA</td>
</tr>
<tr>
<td>7</td>
<td>12-34-AB</td>
<td>12-34-BA</td>
</tr>
<tr>
<td>8</td>
<td>12NO</td>
<td>12ON</td>
</tr>
<tr>
<td>9</td>
<td>1256AB</td>
<td>12BA56</td>
</tr>
<tr>
<td>10</td>
<td>5612AB</td>
<td>5612BA</td>
</tr>
<tr>
<td>11</td>
<td>AB-56-78</td>
<td>78-56-BA</td>
</tr>
<tr>
<td>12</td>
<td>AB-12-34</td>
<td>BA-12-34</td>
</tr>
<tr>
<td>13</td>
<td>AB-12-34-CD</td>
<td>DC-34-12-BA</td>
</tr>
<tr>
<td>14</td>
<td>AB-56-78-CD</td>
<td>DC-78-56-BA</td>
</tr>
<tr>
<td>15</td>
<td>NO-12-34-AB</td>
<td>BA-34-12-ON</td>
</tr>
<tr>
<td>16</td>
<td>ab-56-78-cd</td>
<td>ab-78-56-cd</td>
</tr>
<tr>
<td>17</td>
<td>ab-12-56-CD</td>
<td>ab-12-DC-56</td>
</tr>
<tr>
<td>18</td>
<td>ab-56-12-CD</td>
<td>ab-56-12-DC</td>
</tr>
<tr>
<td>19</td>
<td>NO1256PQ</td>
<td>QP1256ON</td>
</tr>
<tr>
<td>20</td>
<td>NO5612ab</td>
<td>56ON12ab</td>
</tr>
<tr>
<td>21</td>
<td>NO1256ab</td>
<td>ON1256ab</td>
</tr>
<tr>
<td>22</td>
<td>12-34</td>
<td>12-34</td>
</tr>
<tr>
<td>23</td>
<td>56-78</td>
<td>78-56</td>
</tr>
</tbody>
</table>
**VI. Conclusion**
The contributions of this paper are:
- An exposition of the essence of bidirectional reordering.
- An illustration of separating inferencing from reordering.
- An argument for the importance of a one-to-one algorithm for domain names.
- A proposal for multilingual domain names and their display that honors legacy names, is one-to-one, and is simple.
When domain names are interspersed within natural language text, the problem of displaying the text and domain names becomes rather complex. This complexity, however, can be managed if the problem is broken into separate and distinct phases. The problem with simply modifying the Unicode Bidirectional Algorithm to accommodate domain names is that it makes an already complex algorithm even more difficult to manage.
The essence of the Unicode Bidirectional Algorithm is first to perform contextual analysis on the text and then to determine where the boundaries of the directional runs are. The general problem with this strategy is that as technology continues to expand, greater demands will be placed upon the bidirectional algorithm to correctly render any and all textual data, causing the algorithm to be in a constant state of flux.
When the Unicode Bidirectional Algorithm performs contextual analysis on text, it overrides the static properties assigned to some of the characters. Specifically, this occurs during the processing of weak and neutral types. Separating this portion of the algorithm from resolving implicit levels and reordering levels greatly extends the applicability of the algorithm. Ideally the analysis of the text should be distinct from the actual determination of directional boundaries.
During the analysis phase, domain names, mathematical expressions, phone numbers, and other higher order data elements are detected. Nevertheless, it is impossible to create an algorithm that can always correctly identify such elements. The real issue is whether or not it is possible to create an algorithm that identifies such elements within some reasonable range of error and under a set of acceptable constraints for the elements themselves.
The determination as to whether a stream contains a domain name is rather straightforward if the domain name is preceded by some special identifier, specifically "http://", "ftp://", or "telnet://". When these identifiers are not present, however, the ability to recognize a domain name becomes more challenging. The authors believe it is unreasonable to force every domain name to be preceded by some special signal. There are many cases where it is inappropriate to specify the protocol. For example, consider the case where a domain name appears in a printed advertisement on a bus. The authors therefore recommend that there be a clear separation between natural language element detection and the rendering of those elements. In the future we plan to examine such issues.
**VII. References**
**VIII. Appendix**
```java
// DomainName.java version 1.0
// Converts domain names between logical and display order.
// Steven Atkin
// 6/15/01

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.IOException;
import java.util.LinkedList;
import java.util.Stack;

public class DomainName {

    private class AttributedCharacter {
        private char character;
        private byte direction;
        private boolean digit;

        public AttributedCharacter (char ch, byte type) {
            character = ch;
            digit = false;
            direction = type;
            // set all full stop characters to left
            if (type == CS)
                direction = L;
            else if (type == EN || type == AN)
                digit = true;
        }
        public byte getDir () { return direction; }
        public void setDir (byte dir) { direction = dir; }
        public boolean isDigit () { return digit; }
        public char getCharacter () { return character; }
    }

    private static final byte L  = 0;
    private static final byte R  = 1;
    private static final byte AL = 2;
    private static final byte EN = 3;
    private static final byte ES = 4;
    private static final byte ET = 5;
    private static final byte AN = 6;
    private static final byte CS = 7;
    private static final byte BN = 8;
    private static final byte B  = 9;
    private static final byte S  = 10;
    private static final byte WS = 11;
    private static final byte ON = 12;

    private static final byte[] mixedMap = {
        BN, BN, BN, BN, BN, BN, BN,
        BN, S,  B,  S,  WS, B,  BN, BN,
        BN, BN, BN, BN, BN, BN, BN, BN,
        BN, BN, BN, BN, B,  B,  S,
        WS, ON, ON, ET, ET, ET, ON, ON,
        ON, ON, ON, ET, CS, ET, CS, ES,
        EN, EN, EN, EN, AN, AN, AN,
        AN, AN, CS, ON, ON, ON, ON, ON,
        ON, AL, AL, AL, AL, AL, AL,
        AL, AL, AL, AL, AL, R,  R,
        ON, L,  L,  L,  L,  L,  L,
        L,  L,  L,  L,  L,  L,  L,
        L,  L,  L,  L,  L,  L,  L,
        L,  L,  L,  ON, ON, ON, ON, BN
    };

    private byte[] activeMap = mixedMap;

    public DomainName () {
        activeMap = mixedMap;
    }

    // Convert a logical or display domain name
    public String convert (String domainName) {
        LinkedList attribs = assignAttributes(domainName);
        resolveDigits(attribs);
        resolveHyphenMinus(attribs);
        return reorderStrong(attribs);
    }

    // Use the character map to get the character attributes
    private LinkedList assignAttributes (String label) {
        LinkedList list = new LinkedList();
        for (int i = 0; i < label.length(); ++i) {
            final char character = label.charAt(i);
            final byte type = activeMap[character];
            list.add(new AttributedCharacter(character, type));
        }
        return list;
    }

    private String emptyStack (Stack stack) {
        StringBuffer result = new StringBuffer();
        while (!stack.empty())
            result.append(stack.pop());
        return result.toString();
    }

    private void resolveDigits (LinkedList label) {
        byte lastStrong = L;
        boolean remaining = false;
        int len = label.size();
        for (int i = 0; i < len; ++i) {
            final byte type = ((AttributedCharacter) label.get(i)).getDir();
            if (type == L || type == AL || type == R)
                lastStrong = type;
            else if (type == EN && lastStrong == L)
                ((AttributedCharacter) label.get(i)).setDir(L);
            else if (type == EN)
                remaining = true;
            else if (type == AN)
                ((AttributedCharacter) label.get(i)).setDir(AL);
        }
        // If there are any unresolved European numerals, make the second pass.
        if (remaining) {
            lastStrong = L;
            for (int i = len - 1; i >= 0; --i) {
                final byte type = ((AttributedCharacter) label.get(i)).getDir();
                final boolean isdigit = ((AttributedCharacter) label.get(i)).isDigit();
                if ((type == L || type == AL || type == R) && !isdigit)
                    lastStrong = type;
                else if (type == EN && (lastStrong == R || lastStrong == AL))
                    ((AttributedCharacter) label.get(i)).setDir(R);
                else if (type == EN)
                    ((AttributedCharacter) label.get(i)).setDir(L);
            }
        }
    }

    private void resolveHyphenMinus (LinkedList label) {
        byte lastStrong = L;
        boolean remaining = false;
        int len = label.size();
        for (int i = 0; i < len; ++i) {
            final byte type = ((AttributedCharacter) label.get(i)).getDir();
            if (type == L || type == AL || type == R)
                lastStrong = type;
            else if (type == ET && lastStrong == L)
                ((AttributedCharacter) label.get(i)).setDir(L);
            else if (type == ET)
                remaining = true;
        }
        // If there are any hyphen-minus characters left, make the second pass.
        if (remaining) {
            lastStrong = L;
            for (int i = len - 1; i >= 0; --i) {
                final byte type = ((AttributedCharacter) label.get(i)).getDir();
                if (type == L || type == AL || type == R)
                    lastStrong = type;
                else if (type == ET && (lastStrong == R || lastStrong == AL))
                    ((AttributedCharacter) label.get(i)).setDir(R);
                else if (type == ET)
                    ((AttributedCharacter) label.get(i)).setDir(L);
            }
        }
    }

    // Reorder the characters once their directions have been resolved
    private String reorderStrong (LinkedList attribs) {
        byte mode = L;
        StringBuffer result = new StringBuffer(attribs.size());
        StringBuffer digits = new StringBuffer();
        Stack rightStack = new Stack();
        for (int i = 0; i < attribs.size(); ++i) {
            final char character = ((AttributedCharacter) attribs.get(i)).getCharacter();
            final byte dir = ((AttributedCharacter) attribs.get(i)).getDir();
            final boolean isdigit = ((AttributedCharacter) attribs.get(i)).isDigit();
            // left-to-right characters
            if (dir == L) {
                if (mode == AL || mode == R) {
                    result.append(digits);
                    result.append(emptyStack(rightStack));
                }
                else {
                    result.append(emptyStack(rightStack));
                    result.append(digits);
                }
                result.append(character);
                mode = L;
                digits = new StringBuffer();
            } // end if left
            // right-to-left characters
            else if ((dir == AL || dir == R) && !isdigit) {
                rightStack.push(digits);
                rightStack.push(new StringBuffer().append(character));
                mode = AL;
                digits = new StringBuffer();
            } // end if Arabic or Hebrew
            // Numerals
            else if (isdigit && (dir == AL || dir == R)) {
                digits.append(character);
                mode = dir;
            } // end if Arabic or European numeral
        } // end for loop
        // cleanup
        if (mode == R || mode == AL) {
            result.append(digits);
            result.append(emptyStack(rightStack));
        } else {
            result.append(emptyStack(rightStack));
            result.append(digits);
        }
        return result.toString();
    }

    public static void main (String args[]) {
        DomainName domain = new DomainName();
        String line = new String();
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        do {
            try {
                line = in.readLine();
            } catch (IOException e) {
                System.out.println("Error on input line");
            }
            if (line != null && !line.equals(""))
                System.out.println(domain.convert(line));
        } while (line != null && !line.equals(""));
    }
}
```
Sudo for Windows (sudowin)
GCWN Gold Certification
Author: Schley Andrew Kutz, a.kutz@its.utexas.edu
Adviser: Jim Purcell
Accepted: January 20, 2007
Outline
1. Abstract
2. Document Conventions
3. Introduction / Executive Summary
4. History
5. Implications
6. Design
   - Server
     - Configuration
       - Application Settings
       - Remoting Settings
         - Remoting Object
         - Remoting Channel
       - Diagnostics Settings
   - Client
     - Command Line Client
       - Configuration
     - GUI Client
       - Configuration
   - Plugins
     - Configuration
       - Plugin Configuration Schema
     - Plugin Types
       - Authentication
         - NT
       - Authorization
         - XML
       - CredentialsCache
       - LocalServer
       - CallbackApplication
7. Walk Through
   - Service Startup
   - Client Invocation
8. Implementation
   - Requirements
   - Installing
   - Upgrading
   - Configuring
     - The Sudoers Group
     - The Sudoers File
   - Uninstalling
   - Locations
     - Files
     - Registry
     - Groups
   - Active Directory
   - Known Issues
9. Conclusion
1. **Abstract**
The original Sudo application was designed by Bob Coggeshall and Cliff Spencer in 1980 within the halls of the Department of Computer Science at SUNY/Buffalo.\(^1\) For twenty-six years, Sudo has provided the foundation of secure computing on UNIX and Linux platforms by allowing systems administrators to delegate privileged commands to trusted users and audit their use. A trusted user can execute a privileged command in their own user context by reaffirming their identity through confirming their passphrase, and this execution will then be recorded in an auditable log. Sudo encourages the principle of least privilege – that is, a user operates with a bare minimum number of privileges on a system until the user requests a higher level of privilege in order to accomplish some task.
Sudo was developed in reaction to the standard UNIX security model where although some granularity is possible with group and file permissions, delegating security is largely all or nothing. If a user was designated an administrator this usually meant giving them access to the root account’s password. The problem with this model was that it provided no accountability for actions taken on the system since all actions were being executed under the auspices of one user account. In summary, Sudo provides delegation and accountability.
The current versions of Microsoft Windows lack equivalent functionality to that which Sudo provides. Instead, the Windows security model delegates a fixed privilege level to each distinct user account. This model encourages, and in fact requires, separate user accounts for any user who logs into a computer system as both a non-privileged and a privileged user. This security model is a hassle, and to circumvent the frustrations associated with multiple user accounts, privileged users inevitably begin to log into a computer system with their privileged user accounts only. This behavior is very insecure because it results in applications running with unnecessarily high levels of privilege. The security model that Microsoft Windows encourages is therefore flawed.

\(^1\) http://www.gratisoft.us/sudo/history.html
Sudo for Windows, or sudowin, was developed in the summer of 2005 in order to provide Sudo functionality to the Microsoft Windows operating system. This practical will review the history of sudowin, its implications and use in practical application, how it is designed, and most importantly, how everyday users can obtain, install, and begin using sudowin.
2. **Document Conventions**
When you read this practical assignment, you will see that certain words are represented in different fonts and typefaces. The types of words that are represented this way include the following:
- **command**: Operating system commands are represented in this font style. This style indicates a command that is entered at a command prompt or shell.
- **filename**: Filenames, paths, and directory names are represented in this style.
- **URL**: Web URLs are shown in this style.
3. **Introduction / Executive Summary**
Microsoft Windows has a broken security model that implicitly encourages users to log onto computer systems with administrative privileges at all times. Users are forced to choose between being hindered by the frustrations of a normal user account or being exposed to the viruses and Windows exploits of the world by logging in as an administrative account. This is a choice between bad and worse.
Other operating systems, such as UNIX/Linux and Apple OS X, solve this problem by employing privilege escalation. This privilege escalation is founded upon a command known as sudo. The sudo command allows a user to escalate her privileges when needed by entering her own passphrase. Not only that, but the user also retains her original user context. Microsoft Windows lacks this functionality.
Many mistakenly believe the Windows RunAs command provides Windows with sudo functionality. The RunAs command does not enable a user to escalate her privileges; it allows the user to assume the identity of another account, provided the user knows the passphrase for that account. For this reason RunAs should be thought of as an equivalent to the UNIX/Linux su command, not sudo. Hence, Windows remains without sudo functionality.
**Sudo for Windows (sudowin)** attempts to fix the Microsoft Windows security model by providing Sudo functionality to the Windows operating system. This practical will explore the history of sudowin, its implications on the world of Windows security, the complex design of this project, and finally, how end-users may implement sudowin.
4. **History**
Sudowin is an invention of necessity. In the summer of 2005 Andrew Kutz decided to stop logging into his Windows XP SP2 workstation as an administrative user. This caused an immediate problem. Andrew used a software package called DVD Decrypter to back up his DVDs, but DVD Decrypter would not run as a non-administrator because of the level of access it required to the DVD-ROM drive. The obvious solution was to use the RunAs command to launch DVD Decrypter as an administrator so that it had the privileges that it needed, except this would not work because of Andrew's permissions scheme.
Andrew did not set explicit permissions inside of his home directory; therefore Andrew's user account, akutz, did not own the directories or files within his home directory, the special user CREATOR OWNER did. This meant that the user account that created a file or directory inside of Andrew's home directory would own it. If Andrew had launched DVD Decrypter as another user (the administrator) and backed up his DVDs to ISO images inside of his home directory, the other user would own the files, not Andrew's user account. There were two obvious solutions:
1. Change the permissions scheme so that Andrew’s user account was the explicit owner of all the directories and files inside of his home directory.
2. Develop sudo functionality for Windows so that DVD Decrypter, and other applications like it that require elevated privileges, could be launched with temporary elevated privileges while retaining the original user context.
Always the road less traveled.
5. **Implications**
There are other tools available that provide similar functionality, but they are all lacking in some respect.\(^2\)\(^3\)\(^4\)\(^5\) These tools are not configurable, they are not extensible, or worse yet, they actually create security holes because they are subject to man-in-the-middle attacks.
Sudo for Windows is configurable out of the box. There is hardly a setting that cannot be changed by simply editing a text file. And because of its plugin architecture, anyone with some programming experience can develop custom authentication, authorization, and credentials caching plugins, extending its capabilities even further.
Most importantly, sudowin does not decrease security by creating man-in-the-middle attacks for malicious users and disgruntled administrators to exploit. Sudo for Windows increases overall security by enabling your entire enterprise to run in Least User Access (LUA) mode.
The Windows desktop environment would benefit greatly from Sudo for Windows. Windows could ship with the Administrator account disabled and the first user a member of the Sudoers group (much like Ubuntu Linux). Whenever a user needed to make a system change she could simply escalate her own privileges to do so instead of running the command with the Administrator account. Sudowin could bring the learned security practices of Linux to the world of Windows.

---

\(^2\) SuDown - http://sudown.sourceforge.net/

\(^4\) WinSudo - http://winsudo.toadlife.net/

Schley Andrew Kutz
Sudowin is also ideal for enterprise deployment. An Active Directory administrator who delegates organizational unit (OU) management to other administrators is typically assigned two separate accounts – one unprivileged, everyday account, and one privileged account used for system administration. Keeping up with two accounts is a huge pain for administrators and inevitably results in most of them staying logged into their computers as their privileged account, a massive security problem.
With sudowin, OU permissions can be delegated to groups that OU administrators are not normally members of. The OU administrators could use sudowin to launch Active Directory Users and Computers with privileges elevated to the level needed to manage a particular OU.
This is just one example, but since every object in Active Directory has a permission set, sudowin could be used to manage each and every one of those objects. Sudowin will create happier administrators and a more secure Active Directory.
6. **Design**
Sudowin is a distributed application that has several components. It must be, because Windows as an operating system is not conducive to sudo-like functionality, and thus a program that provides it must by necessity be tricky.
At its core, sudowin is composed of a client and a server. The client asks the server to escalate a user’s privileges and execute a given command, and the server either complies or refuses. Obviously this is a great simplification of a complex process.
Sudowin is designed to be completely extensible and therefore has a rich plugin architecture. It currently supports three types of plugins – authentication, authorization, and credentials cache.
Finally, the linchpin of sudowin may very well be the callback application. This is what enables the entire project to actually work. This is sudowin’s flux capacitor.
What follows is a review of all the components that make sudowin work. This includes:
- Server
- Client
- Plugins
  - Authentication
  - Authorization
  - CredentialsCache
- CallbackApplication
**Server**
The sudowin server is hosted inside of a Windows service named sudowin. This service runs as the local system account. The server exposes a SingleCall object that clients can access so long as the user account invoking the client is a member of the configured Sudoers group. The server also loads the configured plugins in order to authenticate users, authorize their requests, and cache their credentials.
**Configuration**
The sudowin server’s configuration file is a standard .NET application configuration file and must reside in the same directory as the sudowin server binary, Sudowin.Server.exe. Because a .NET configuration file has the same name as its corresponding binary but with an extension of .config, the name of the configuration file is Sudowin.Server.exe.config. The configuration file is really divided into three sections: the application settings, the remoting settings, and the diagnostics settings.
**Application Settings**
The application settings section should appear towards the top of the configuration file and is contained by the <appSettings></appSettings> node. Values are added to it in the following format:
```xml
<add key="keyName" value="settingValue" />
```
The following is a list of the application settings that the sudowin server is configured to use:
- **pluginConfigurationUri** This is the Uniform Resource Identifier (URI) of the file that stores the server's plugin configuration. This URI can point to a local file or one accessible via HTTP. The plugin configuration file must adhere to the plugin configuration schema as defined in PluginConfigurationSchema.xsd.
- **pluginConfigurationSchemaUri** This is the URI of the schema file the server uses to validate the plugin configuration file. This URI can point to a local file or one accessible via the Hyper Text Transfer Protocol (HTTP), and it must point to the plugin configuration schema as defined in PluginConfigurationSchema.xsd.
- **callbackApplicationPath** This is the fully qualified path to the sudowin callback application. This is used by the sudowin server to launch processes in the context of the user that invoked the sudowin client.
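Taken together, the three settings above might appear in the appSettings section as follows. This is an illustrative sketch: the INSTALLDIR-relative paths and the callback binary name Sudowin.CallbackApplication.exe are assumptions for the example, not documented defaults.

```xml
<appSettings>
  <!-- Local plugin configuration file and the schema used to validate it -->
  <add key="pluginConfigurationUri" value="INSTALLDIR\Server\pluginConfiguration.xml" />
  <add key="pluginConfigurationSchemaUri" value="INSTALLDIR\Server\PluginConfigurationSchema.xsd" />
  <!-- Application the server uses to launch processes as the invoking user -->
  <add key="callbackApplicationPath" value="INSTALLDIR\Server\Sudowin.CallbackApplication.exe" />
</appSettings>
```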
**Remoting Settings**
The remoting settings section appears directly after the application settings section and is contained by the <system.runtime.remoting></system.runtime.remoting> node. Although there are a few things going on in this section, there are two settings that someone might want to configure differently than the defaults:
1. The name of the sudowin service remoting object.
2. The remoting channel.
**Remoting Object**
The sudowin server remoting object is defined in the configuration file by the following stanza:
```xml
<service>
<wellknown
mode="SingleCall"
objectUri="sudowinserver.rem"
/>
</service>
```
- **wellknown** The sudowin server is configured as a WellKnown remoting object. This means that this is a server activated object (SAO), as opposed to a client activated object (CAO). The difference is that the server controls the lifetime of the object as opposed to the client. To change this to a CAO, change the name of the node from wellknown to activated.
- **mode** The sudowin server object is configured as a SingleCall object. This means that a new sudowin server is instantiated for every remote call made to it. The alternative is to configure the server as a Singleton object, in which case the server is instantiated once and that object handles all remote calls. Changing this setting is not recommended. The sudowin server is configured as a SingleCall object because it does not have a high creation penalty (the case in which a Singleton would be preferable), and because a Singleton synchronizes access to itself, meaning that two users logged into the same computer would be prevented from invoking sudowin at the same time.
- **objectUri** This is the URI of the remoted sudowin server object. If you change this value then you will need to change the URI that clients attempt to connect to.
**Remoting Channel**
The sudowin remoting channel is an inter-process communication (IPC) channel that the server exposes to allow clients to establish a communications link to the remoted objects registered on this channel, in this case the sudowin server object. The channel is defined by the following stanza:
```xml
<channel
type="System.Runtime.Remoting.Channels.Ipc.IpcServerChannel,
System.Runtime.Remoting, Version=2.0.0.0,
Culture=neutral, PublicKeyToken=b77a5c561934e089"
portName="sudowin"
secure="True"
authorizedGroup="Sudoers">
...
</channel>
```
- **portName** This is the name of the port that the server listens for incoming communications on. If this value is changed then the port name the clients are configured to talk to will need to be changed as well.
- **secure** By setting this value to True, communications between the sudowin server and any clients are secured.
- **authorizedGroup** The security context in which the client is run must be a member of this group, or else the client will be refused when it attempts to establish a communications channel with the server channel. By default this is set to Sudoers, and the Sudoers group is created when sudowin is installed.
**Diagnostics Settings**
The diagnostics section is contained by the <system.diagnostics> node. It is defined by the following stanza:
```xml
<system.diagnostics>
<trace autoflush="true"/>
<sources>
<source name="traceSrc" switchValue="ActivityTracing, Verbose">
<listeners>
<add
initializeData="r:\projects\sudowin\trunk\sudowin\server\bin\Debug\service.log"
type="System.Diagnostics.DelimitedListTraceListener"
name="traceListener"
traceOutputOptions="DateTime, ProcessId, ThreadId, Timestamp"
/>
</listeners>
</source>
</sources>
</system.diagnostics>
```
- **autoflush** Setting this value to true means that all trace data is immediately written to the trace listeners, be they files or event logs. Setting this value to false will result in said data being queued until it is manually flushed or the sudowin service is shut down.
- **switchValue** Upon installation this value is set to Error instead of ActivityTracing, Verbose. This means that only trace data classified as Error or greater, for example, Critical, is actually written to the trace listeners.
Tracing is not of much use without a place to send its data, hence the trace listener. To add a trace listener simply add the correct stanza in the listeners section. For example, in the following stanza a System.Diagnostics.DelimitedListTraceListener has been added:
```xml
<add
initializeData="r:\projects\sudowin\trunk\sudowin\server\bin\Debug\service.log"
type="System.Diagnostics.DelimitedListTraceListener"
name="traceListener"
traceOutputOptions="DateTime, ProcessId, ThreadId, Timestamp"
/>
```
- **initializeData** This is the fully qualified path to the file that will have the trace data written to it.
- **traceOutputOptions** These are essentially columns that get written to the trace listener every time data is written. This alleviates the need to manually write the current date or process ID with every log entry – .NET handles it for you.
**Client**
Sudowin ships with two clients, the command line client and the Graphical User Interface (GUI) client. However, anyone can design a client for sudowin; all that is required is to download the source code and follow the examples.
**Command Line Client**
The sudowin command line client binary is named Sudowin.Clients.Console.exe but is renamed to sudo.exe upon installation. The path to this file is added to the system's PATH environment variable, so, from any command line, the only thing a user must do in order to invoke sudo is to type sudo. For example, to open a new command prompt with sudowin, type:
```
sudo cmd
```
Sudowin will prompt the user for a passphrase and launch a new command prompt with elevated privileges, assuming the user is authorized to invoke sudo on the command shell.
This command also accepts a passphrase from the command line. If a user’s passphrase is foobar, the user could type:
```
sudo -password foobar cmd
```
Sudowin will not prompt the user for a passphrase, but instead use the passphrase specified on the command line and will launch a new command prompt with elevated privileges, assuming the user is authorized to invoke sudo on the command shell.
**Configuration**
The command line client has a configuration file located side-by-side with its binary. The default path for this is INSTALLDIR\Clients\Console\sudo.exe.config.
There are only two sections that might need to be modified. The first section defines the sudowin server client with the following stanza:
```
<wellknown
type="Sudowin.Common.ISudoServer, Sudowin.Common"
url="ipc://sudowin/sudowinserver.rem"
/>
```
- **url** If the port or URI of the sudowin server object is modified then this value needs to be updated as well.
The other section is as follows:
```
<channel
type="System.Runtime.Remoting.Channels.Ipc.IpcClientChannel,
System.Runtime.Remoting, Version=2.0.0.0, Culture=neutral,
PublicKeyToken=b77a5c561934e089"
portName="sudowin"
secure="True"
useDefaultCredentials="True">
...
</channel>
```
- **portName** This should be the same as the port name defined for the server channel in the sudowin server configuration file.
**GUI Client**
Sudowin also ships with a GUI client. The GUI client is enabled by default for the following types:
- `.bat`
- `.cmd`
- `.exe`
- `.msi`
- A folder
Right clicking on any of these file types will bring up the default Windows context menu and Sudo… will be one of the options. Clicking Sudo… will bring up a prompt to enter a passphrase (if it is not cached) and then the intended program or folder will be opened with elevated privileges.
**Configuration**
The GUI client’s configuration file is by default located at \INSTALLDIR\Clients\GUI\Sudowin.Clients.Gui.exe.config. It follows the same format as the console client’s configuration file.
**Plugins**
What sets sudowin apart from all other projects that attempt to provide sudo functionality to Windows is its extensible design. Almost everything in sudowin is a plugin, from its authorization handlers, to authentication and credential caching.
The sudowin project defines a master interface that all plugins must implement called IPlugin. However, because sudowin is distributed through remoting, the interface alone is not enough. All remoted objects must derive from a class called MarshalByRefObject. Since an interface cannot derive from a class, sudowin provides a base class called Plugin, which both implements IPlugin and derives from MarshalByRefObject. All plugins should therefore derive from Plugin.
**Configuration**
Plugins for sudowin are configured in the plugin configuration file. This file is by default located at INSTALLDIR\Server\pluginConfiguration.xml. The plugin configuration file follows a very strict schema, by default located at INSTALLDIR\Server\PluginConfigurationSchema.xsd. The path to both of these files can be changed in the sudowin server's configuration file - for example, it would be possible to centrally configure sudowin plugins by placing the plugin configuration file on a network share or making it accessible via HTTP.
The following is an example of a plugin configuration file:
```xml
<?xml version="1.0" encoding="utf-8" ?>
<pluginConfiguration
  xmlns="http://sudo.win.sourceforge.net/schemas/PluginConfiguration/"
  >
  <plugins>
    <plugin
      pluginType="authenticationPlugin"
      assemblyString="SudoWin.Plugins.Authentication.NT.NTAuthenticationPlugin, SudoWin.Plugins.Authentication.NT"
    />
    <plugin
      pluginType="authorizationPlugin"
      assemblyString="SudoWin.Plugins.Authorization.Xml.XmlAuthorizationPlugin, SudoWin.Plugins.Authorization.Xml"
      dataSourceConnectionString="r:\projects\sudo.win\trunk\sudo.win\plugins.authorization.xml\sudoers.xml"
      dataSourceSchemaUri="r:\projects\sudo.win\trunk\sudo.win\plugins.authorization.xml\XmlAuthorizationPluginSchema.xsd"
      dataSourceCacheFilePath="r:\projects\sudo.win\trunk\sudo.win\server\sudoers.xml.cache"
      dataSourceCacheUpdateFrequency="00:05"
      dataSourceCacheEnabled="true"
    />
    <plugin
      pluginType="credentialsCachePlugin"
      assemblyString="SudoWin.Plugins.CredentialsCache.LocalServer.LocalServerCredentialsCachePlugin, SudoWin.Plugins.CredentialsCache.LocalServer"
      serverType="Singleton"
    />
  </plugins>
</pluginConfiguration>
```
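Outside of .NET, a plugin configuration file of this shape can be inspected with a short script. The following sketch is illustrative only (it is not part of sudowin) and parses an in-memory copy of a configuration to list each pluginType, honoring the enabled attribute's default of true:

```python
import xml.etree.ElementTree as ET

# Namespace used by the plugin configuration file (from the example above).
NS = "http://sudo.win.sourceforge.net/schemas/PluginConfiguration/"

def list_plugin_types(xml_text: str) -> list[str]:
    """Return the pluginType attribute of every enabled <plugin> entry."""
    root = ET.fromstring(xml_text)
    plugins = root.findall(f"{{{NS}}}plugins/{{{NS}}}plugin")
    # Plugins default to enabled="true" when the attribute is omitted.
    return [p.get("pluginType") for p in plugins
            if p.get("enabled", "true") == "true"]

# The assemblyString values are shortened placeholders for this example.
config = """\
<pluginConfiguration xmlns="http://sudo.win.sourceforge.net/schemas/PluginConfiguration/">
  <plugins>
    <plugin pluginType="authenticationPlugin" assemblyString="..." />
    <plugin pluginType="authorizationPlugin" assemblyString="..." />
    <plugin pluginType="credentialsCachePlugin" assemblyString="..." serverType="Singleton" />
  </plugins>
</pluginConfiguration>
"""

print(list_plugin_types(config))
```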
Instead of explaining the plugin configuration file line by line, the plugin configuration schema will be examined.
**Plugin Configuration Schema**
The plugin configuration schema is an XML schema definition (XSD). Employing a schema ensures that a plugin configuration file will always be parsed correctly. When it is loaded it is checked against the schema file and is rejected if it does not conform to the schema.
The current version of the plugin configuration schema is available at http://sudowin.svn.sourceforge.net/viewvc/sudowin/trunk/sudowin/Plugins/PluginConfigurationSchema.xsd?view=markup.
A basic outline of the schema looks like this:
```xml
<?xml version="1.0" encoding="utf-8" ?>
<pluginConfiguration>
  <plugins>
    <plugin attribute1="value" attribute2="value" ... />
  </plugins>
</pluginConfiguration>
```
Missing from the basic outline above are the attributes for the plugin node. What follows is a list of all valid plugin node attributes and their definitions:
- **assemblyString**
  - **type** string
  - **use** required
  - **description** This is the fully qualified assembly type.
- **serverType**
  - **type** string
  - **use** optional
  - **default** SingleCall
  - **description** This defines whether a plugin is loaded as a SingleCall object or a Singleton. If the plugin takes a very long time to load or it needs to persist its state between calls then choose Singleton, otherwise SingleCall is a safe bet. The credentials cache plugin that ships with sudowin, Sudowin.Plugins.CredentialsCache.LocalServer, must be configured as a Singleton object or it will not cache credentials between sudo invocations.
- **serverLifetime**
  - **type** int
  - **use** optional
  - **default** 0
  - **description** This defines the lifetime of a plugin configured as a Singleton before the object is released and a new one is created (this is its lease). This value has no effect on plugins configured as SingleCall.
- **enabled**
  - **type** bool
  - **use** optional
  - **default** true
  - **description** You can set this value to false to disable the plugin without removing its entry from the plugin configuration file.
- **activationData**
  - **type** string
  - **use** optional
  - **description** If a custom plugin is designed and upon activation requires additional configuration information not specified by the normal plugin configuration parameters, this attribute may be used to pass the plugin its required data.
- **pluginType**
  - **type** string
  - **use** required
  - **description** Valid values are currently authorizationPlugin, authenticationPlugin, and credentialsCachePlugin.
- **dataSourceConnectionString**
  - **type** string
  - **use** optional
  - **description** If a plugin uses a data source this attribute can be used to specify its connection string.
- **dataSourceSchemaUri**
  - **type** string
  - **use** optional
  - **description** If a plugin uses a data source and uses a schema to validate it, this attribute can be used to specify the schema URI.
- **dataSourceCacheFilePath**
  - **type** string
  - **use** optional
  - **description** If a plugin enables data source caching this attribute can be used to specify a fully-qualified file path for the cache file.
- **dataSourceCacheUpdateFrequency**
  - **type** TimeSpan
  - **use** optional
  - **default** 00:05
  - **description** If a plugin enables data source caching this attribute can be used to specify how often the cache should be updated.
- **dataSourceCacheEnabled**
  - **type** bool
  - **use** optional
  - **default** true
  - **description** Set this to true to enable data source caching.
- **dataSourceCacheUseAsPrimary**
  - **type** bool
  - **use** optional
  - **default** false
  - **description** Set this to true to treat the data source cache as the primary source of data. In this scenario the original source is only contacted for updates at the interval defined by dataSourceCacheUpdateFrequency.
- **dataSourceCacheUseStaleCache**
  - **type** bool
  - **use** optional
  - **default** false
  - **description** Set this to true to let a plugin know that it is okay to use a data source cache file that has not been updated after the given interval defined by dataSourceCacheUpdateFrequency. Enable this if sudowin should remain functional even if the plugin cannot connect to the remote data source.
**Plugin Types**
There are currently three supported plugin types. These are:
- **authentication** Used to authenticate users' credentials.
- **authorization** Used to authorize sudo requests.
- **credentialsCache** Used to cache credentials.
**Authentication**
Authentication plugins are used to authenticate a user’s credentials upon invoking a sudowin client. Currently sudowin ships with one authentication plugin, Sudowin.Plugins.Authentication.NT.
However, an authentication plugin could be developed that would authenticate users from an Oracle database, a text file, or even simply always return true. The crux is that the credentials the user supplied must be able to be used to launch a process on the computer where they invoked sudo.
**NT**
The NT authentication plugin uses the native win32 Application Programming Interface (API) function, LogonUser, to authenticate the user’s credentials. This plugin authenticates local and domain accounts.
**Authorization**
**XML**
The XML authorization plugin uses an XML file as its data source. This XML file must adhere to the schema file that is provided with the plugin. Both of these values must be configured in the plugin's configuration section in the plugin configuration file. By default the XML authorization plugin is configured as a SingleCall object so that it will read the XML file every time it is called – the effect of this is that changes made to the XML file are reflected the next time a sudo invocation occurs. The alternative is to configure the XML authorization plugin as a Singleton, the effect of which is that the XML file only gets read one time, when the sudowin service starts.
The XML authorization schema appears a little complicated, but it is in fact quite straightforward. Here is an outline of the parent-child node relationship of the schema:
```
sudoers
├── users
│   └── userGroup
│       ├── users
│       │   └── user
│       │       ├── commands
│       │       │   └── command
│       │       └── commandGroupRefs
│       │           └── commandGroupRef
│       ├── commands
│       │   └── command
│       └── commandGroupRefs
│           └── commandGroupRef
└── commands
    └── commandGroup
        └── command
```
As seen above, the root element of the schema is the `<sudoers>` node. The `<sudoers>` node can contain two types of child nodes, `<users>` and `<commands>`. The `<commands>` node can contain `<commandGroup>` nodes, which in turn can contain `<command>` nodes. The `<users>` node may contain one type of child node, `<userGroup>`. `<userGroup>` in turn can contain three types of child nodes: `<users>`, `<commands>`, and `<commandGroupRefs>`. The `<commands>` and `<commandGroupRefs>` nodes can each contain one child node type: `<command>` and `<commandGroupRef>`, respectively. The `<users>` node can contain one type of child node, `<user>`. The `<user>` node can contain two types of child nodes: `<commands>` and `<commandGroupRefs>`. As before, the `<commands>` and `<commandGroupRefs>` nodes can each contain one child node type: `<command>` and `<commandGroupRef>`, respectively.
Here is an example of a sudoers.xml file that implements the schema:
```xml
<?xml version="1.0" encoding="utf-8"?>
<sudoers xmlns="http://sudowin.sourceforge.net/schemas/XmlAuthorizationPlugin/"
privilegesGroup="Administrators"
invalidLogons="3"
timesExceededInvalidLogons="3"
invalidLogonTimeout="180"
lockoutTimeout="180"
logonTimeout="180"
startTime="00:00:00.00000"
endTime="23:59:59.99999"
loggingLevel="Both"
allowAllCommands="false">
<users>
<userGroup name="standard">
<users>
<user name="poppy\akutz" allowAllCommands="true">
<commands>
<command
path="c:\windows\system32\regedit.exe"/>
</commands>
</user>
<user name="lostcreations\akutz" allowAllCommands="false"/>
</users>
<commandGroupRefs>
<commandGroupRef commandGroupName="standard"/>
</commandGroupRefs>
</userGroup>
</users>
</sudoers>
```
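A quick way to inspect a sudoers.xml file outside of sudowin is to parse it with a short script. The following illustrative sketch (not part of sudowin) lists each user and whether allowAllCommands is set for them:

```python
import xml.etree.ElementTree as ET

# Namespace of the XML authorization plugin's data file (from the example above).
NS = "{http://sudowin.sourceforge.net/schemas/XmlAuthorizationPlugin/}"

def list_users(xml_text: str) -> list[tuple[str, bool]]:
    """Return (name, allowAllCommands) for every <user> node in the file."""
    root = ET.fromstring(xml_text)
    users = []
    # <user> nodes sit under <users>/<userGroup>/<users>, so walk every depth.
    for user in root.iter(f"{NS}user"):
        allow_all = user.get("allowAllCommands", "false") == "true"
        users.append((user.get("name"), allow_all))
    return users

sudoers = """\
<sudoers xmlns="http://sudowin.sourceforge.net/schemas/XmlAuthorizationPlugin/">
  <users>
    <userGroup name="standard">
      <users>
        <user name="poppy\\akutz" allowAllCommands="true"/>
        <user name="lostcreations\\akutz" allowAllCommands="false"/>
      </users>
    </userGroup>
  </users>
</sudoers>
"""

print(list_users(sudoers))
```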
As seen, node relationships and attributes defined on the nodes contain the data that is parsed from the file. The following is a list of nodes that contain attributes and their definitions:
<sudoers>
The <sudoers> node is the root node of a file that implements the XmlAuthorizationSchema. Its attribute list is unique in that all attributes have default values. This is so if an attribute is not defined at a lower level the attribute will always have a value by virtue of having a default value on the root node.
Valid attributes are:
- **privilegesGroup**
  - **type** string
  - **use** optional
  - **default** Administrators
  - **description** The group that a user's privileges will be escalated to when they use sudo to execute an application.
- **startTime**
  - **type** TimeSpan
  - **use** optional
  - **default** 00:00:00.00000
  - **description** The earliest time of day that sudo can be invoked.
- **endTime**
  - **type** TimeSpan
  - **use** optional
  - **default** 23:59:59.99999
  - **description** The latest time of day that sudo can be invoked.
- **invalidLogons**
  - **type** integer
  - **use** optional
  - **default** 3
  - **description** The number of invalid logon attempts a user is allowed before they accrue one strike towards being locked out.
- **timesExceededInvalidLogons**
  - **type** integer
  - **use** optional
  - **default** 3
  - **description** The number of times a user is allowed to exceed the invalid logon attempt value before they are locked out.
- **logonTimeout**
Sudo for Windows (sudowin)
- **type** integer
- **use** optional
- **default** 180
- **description** The number of seconds that the user’s credentials will be cached upon a successful authentication.
- **lockoutTimeout**
- **type** integer
- **use** optional
- **default** 180
- **description** The number of seconds a user is locked out once they have exceeded the value specified by `timesExceededInvalidLogons`.
- **invalidLogonTimeout**
- **type** integer
- **use** optional
- **default** 180
- **description** The number of seconds sudowin keeps track of a user exceeding the invalid logon limit. For example, by default a user could exceed the invalid logon limit twice in 3 minutes and once more 4 seconds later, but the counter would have already reset and that user would not be locked out.
- **loggingLevel**
- **type** loggingLevelType
- **use** optional
- **default** Failure
- **description** The level of logging to adhere to. Valid values are Success, Failure, Both, and None. Success will log all successful attempts to invoke sudo. Failure will log all failed attempts to invoke sudo. Both will log both all successful and failed attempts to invoke sudo. None will log neither successful nor failed attempts to invoke sudo.
Sudo for Windows (sudowin)
- **allowAllCommands**
- **type** bool
- **use** optional
- **default** false
- **description** Set to true to allow the execution of all commands or set to false to restrict commands to those explicitly allowed.
- **allowedNetworks**
- **type** string
- **use** optional
- **default** *
- **description** The networks that users are allowed to execute sudo from. This value can be set to * to match all networks, a host name, an IP address, or a Perl-compliant regular expression that matches a network syntax, such as 192.168.0.[0-255]. Multiple values are allowed by separating them with commas, for example spindy,poppy,marmer,192.168.9.0,192.\d{1,3}\.0.[5-30]. If no value is specified for this attribute then all networks are considered valid.
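Taken together, a hypothetical `<sudoers>` node using the attributes above might look like the following sketch. All values shown are illustrative, and the child sections hinted at in the comment are assumptions based on the node descriptions that follow:

```xml
<sudoers invalidLogons="3"
         timesExceededInvalidLogons="3"
         logonTimeout="180"
         lockoutTimeout="180"
         invalidLogonTimeout="180"
         loggingLevel="Failure"
         allowAllCommands="false"
         allowedNetworks="192.168.0.[0-255]">
  <!-- user, userGroup, command, and commandGroup nodes, described
       below, would be defined inside this node -->
</sudoers>
```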
**<userGroup>**
The `<userGroup>` node is used for grouping distinct users into a singular group. This is useful because it allows an administrator to set attribute values on a group of users instead of individual users. Valid attributes are:
- **name**
- **type** string
- **use** required
- **description** The name of the user group. This value does not have to be unique; it is simply for an administrator to be able to easily distinguish between groups of users.
The remaining attributes of the `<userGroup>` node are the same as the `<sudoers>` node except they have no default value.
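As a sketch, a `<userGroup>` node might look like the following. The group name and attribute values are hypothetical, and how individual users are associated with a group is determined by the authorization plugin's schema and is not shown here:

```xml
<userGroup name="Help Desk Staff"
           loggingLevel="Both"
           allowedNetworks="192.168.9.0" />
```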
Sudo for Windows (sudowin)
**<user>**
The `<user>` node defines a Windows user by their host or domain name plus their user name (the `samAccountName`). Valid attributes are:
- **name**
- **type** string
- **use** required
- **description** The name of the user. This value must adhere to the following format: `HOST_OR_DOMAIN_NAME\USER_NAME`. For example, `lostcreations\akutz`.
The remaining attributes of the <user> node are the same as the <sudoers> node except they have no default value.
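For example, using the name format above, a minimal `<user>` node might look like this sketch (the extra attributes are illustrative):

```xml
<user name="lostcreations\akutz"
      invalidLogons="3"
      loggingLevel="Both" />
```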
**<commandGroup>**
The `<commandGroup>` node is used for grouping distinct commands into a singular group. This is useful because it allows an administrator to set attribute values on a group of commands instead of individual commands. The name of this node, specified by the `name` attribute, is important because it is what the `<commandGroupRef>` node references. Valid attributes are:
- **name**
- **type** string
- **use** required
- **description** The name of this command group. This value should be unique as it is used by the `<commandGroupRef>` node.
- **enabled** See the `<command>` node.
- **startTime** See the `<sudoers>` node.
- **endTime** See the `<sudoers>` node.
- **loggingLevel** See the `<sudoers>` node.
- **allowedNetworks** See the `<sudoers>` node.
**<command>**
The `<command>` node defines a command by its fully-qualified path, in addition to other possible command characteristics. Valid attributes are:
- **path**
- **type** string
- **use** required
- **description** The fully qualified path of the command. For example, `c:\windows\system32\cmd.exe`.
- **argumentString**
- **type** string
- **use** optional
- **description** The arguments that are valid with the given command. For example, specify `/K` so that the command shell must always be executed with the `/K` switch. This value also supports Perl-compliant regular expressions by beginning and ending the value with a `/`. For example, `/^/K echo.*$/` will restrict this command so that its argument list must always begin with `/K echo`. If no value for this attribute is specified then all arguments will be allowed.
- **enabled**
- **type** bool
- **use** optional
- **default** true
- **description** Specify true to enable this command or false to disable it without removing its entry from the file.
- **md5Checksum**
- **type** string
- **use** optional
- **description** The MD5 checksum of the given command. If specified, the server will not allow the command to execute if, at execution time, the command's MD5 checksum does not match the value in this file. Keep in mind that checksums can differ across OS versions and service pack levels.
- **startTime** See the `<sudoers>` node.
- **endTime** See the `<sudoers>` node.
- **loggingLevel** See the `<sudoers>` node.
- **allowedNetworks** See the `<sudoers>` node.
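Putting these attributes together, a hypothetical `<command>` node for the command shell might look like the following sketch (the `path` and `argumentString` values are drawn from the examples above):

```xml
<command path="c:\windows\system32\cmd.exe"
         argumentString="/^/K echo.*$/"
         enabled="true"
         loggingLevel="Both" />
```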
**<commandGroupRef>**
The `<commandGroupRef>` node references a `<commandGroup>` node by its name. For example, an administrator could reference a command group named tools with:
```xml
<commandGroupRef commandGroupName="tools" />
```
Valid attributes are:
- **commandGroupName**
- **type** string
- **use** required
- **description** The name of the `<commandGroup>` node that this command group reference points to.
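As an illustrative sketch, a command group definition and a reference to it might pair up as follows. Whether `<command>` nodes nest directly inside a `<commandGroup>`, and where the reference lives, are assumptions about the XML authorization plugin's schema:

```xml
<commandGroup name="tools" enabled="true" loggingLevel="Both">
  <command path="c:\windows\system32\cmd.exe" argumentString="/K" />
</commandGroup>

<!-- elsewhere, for example inside a user's entry -->
<commandGroupRef commandGroupName="tools" />
```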
**CredentialsCache**
The credentials cache plugin persists a user’s credentials and information about their authentication attempts between sudo invocations.
**LocalServer**
The local server plugin caches credentials in local memory. To ensure the protection of those credentials, it uses the .NET System.Security.SecureString class when storing any type of private information. The local server plugin must be configured as a Singleton object so that it is instantiated once and stays resident in memory; otherwise it cannot persist information between sudo invocations.
**CallbackApplication**
The sudowin callback application is extremely important. The sudowin Windows service must be able both to launch a process that is attached to the logged-in user's desktop and to launch that process with a new logon token for the user so that the user's temporarily
assigned group membership is respected. The win32 API function CreateProcessAsUser enables developers to launch processes with a user's logon token, and such a process is attached to the same desktop that the user's logon token came from. Using the win32 Windows Terminal Services API (wtsapi32), it is possible to get the logon token for any user currently logged onto the computer. So sudowin uses wtsapi32 to get the logon token for a given user and uses the CreateProcessAsUser function to create a process that is attached to the user's desktop.
However, creating a new process with an existing logon token is not enough. The existing logon token does not respect the given user’s temporarily assigned group memberships. This is where the callback application comes into play. The process that the sudowin service creates is always the callback application with several arguments. The first argument to the callback application is the given user’s passphrase, the second is the fully-qualified path of the command the user has invoked sudo on, and any additional arguments are arguments that go with the command the given user has invoked sudo on.
The callback application then launches a new process for the command the user invoked sudo on, but does so with the user’s passphrase, creating a new logon token and a process that respects the user’s temporarily assigned group memberships.
7. **Walk Through**
Below is a step-by-step walk through of the events that occur when the sudowin service is started and during a sudowin client invocation. There are a few things to keep in mind when reading the walk through. The sudowin server object and some plugin objects are configured as SingleCall objects. This means that every time one of their methods is invoked, the object is created anew. Therefore any code that the object runs upon creation, for example its constructor, gets called every time one of its methods is invoked.
**Service Startup**
1. The sudowin service is started by the Service Control Manager (SCM) either by code, the GUI interface, or the command `net start sudowin`.
2. The sudowin service controller callback method OnStart is triggered. The method checks its arguments to see if they were set. If so, the method will sleep the current thread for the number of seconds specified by the first argument passed to the method. This is so developers can easily pause the startup routine of the sudowin service in order to attach debuggers to it before things really get moving.
3. The OnStart method then loads the sudowin service’s application configuration file and configures its application domain’s remoting environment by parsing the system.runtime.remoting section from the configuration file.
4. The OnStart method then verifies the plugin configuration file, as specified by the pluginConfigurationUri value in the server's configuration file, against the plugin configuration schema, as specified by the pluginConfigurationSchemaUri value in the server's configuration file. If the plugin configuration file cannot be verified, an exception is thrown that crashes the sudowin service. Administrators can check the event log to determine exactly what caused the exception.
5. If the plugin configuration file was loaded successfully, the sudowin service will register the configured plugins as remoting objects and then obtain references to the objects it just registered. After a plugin's reference is obtained, the plugin has its Activate method invoked. This method invocation causes the plugin object to be instantiated. Even though no plugins are used at this stage, the point of forcing plugin instantiation here is to see whether instantiating any of the configured plugins causes an exception. If so, it is better that it happen now, while the service is starting, than later, when a user first invokes a sudowin client. If a plugin activation throws an exception, that exception will crash the sudowin service. Administrators can check the event log to determine exactly what caused the exception.
6. The sudowin service has been started and is ready to accept connections from clients.
**Client Invocation**
1. Mandy invokes a sudowin client.
2. The client verifies that the sudowin service is running. If it is not, the client will inform the user that the sudowin service is not running.
3. The client attempts to establish a secure connection with the sudowin server. The client must then determine whether or not it has established a successful connection to the sudowin server. It does this by invoking the server method IsConnectionOpen. This method always returns True; the real test is whether or not invoking this method throws an exception. If this method invocation does not throw an exception then the client knows a successful connection has been established. Because the sudowin server is a SingleCall object, this method invocation also instantiates the sudowin server object.
4. When the sudowin server object is instantiated it verifies the plugin configuration file as specified by the
Schley Andrew Kutz
pluginConfigurationUri value in the server's configuration file against the plugin configuration schema as specified by the pluginConfigurationSchemaUri value in the server's configuration file. If the plugin configuration file cannot be verified, an exception is thrown that halts the current client invocation. The user can report the error to an administrator, who can check the event log to determine exactly what caused the exception.
5. If the plugin configuration file was loaded successfully, the sudowin server object will obtain references to the plugin objects that were registered when the service was started. After a plugin's reference is obtained, the plugin has its Activate method invoked. If a plugin activation throws an exception, that exception will halt the current client invocation. The user can report the error to an administrator, who can check the event log to determine exactly what caused the exception.
6. If Mandy is not a member of the configured Sudoers group then an exception is thrown that the client should catch and inform Mandy that she is not allowed to continue. If Mandy is a member of the configured Sudoers group then the process continues.
7. The client then asks the server if Mandy has exceeded her invalid logon limit, i.e., whether she is locked out. This action instantiates the sudowin server object. Please see Steps 4 and 5 for a description of what occurs when the sudowin server object is instantiated. If Mandy is locked out then the server will inform the client of this. The client should inform Mandy she is locked out and quit gracefully. If the client did attempt to proceed, the sudo invocation would fail because the server knows Mandy is locked out. If Mandy is not locked out then the process continues unabated.
8. The server checks to see if Mandy has cached credentials. This action instantiates the sudowin server object. Please see Steps 4 and 5 for a description of what occurs when the sudowin server object is instantiated. If Mandy has cached credentials then the client invokes the server’s Sudo method. If Mandy does not have cached credentials the client should prompt Mandy for her credentials and once obtained the client will invoke the server’s Sudo method. Invoking the server’s
Sudo method instantiates the sudowin server object. Please see Steps 4 and 5 for a description of what occurs when the sudowin server object is instantiated.
9. The sudowin server uses its authorization plugins to verify that Mandy exists in one of the authorization plugins’ data sources. If she does not, the sudowin server returns the status code CommandNotAllowed to the client. If Mandy does exist in one of the authorization data sources the process continues.
10. The sudowin server checks to see if Mandy is locked out. If she is, the sudowin server returns the appropriate status code to the client. If Mandy is not locked out the process continues.
11. The sudowin server checks to see if Mandy has cached credentials. If she does then the sudowin server will proceed using the cached credentials set. If Mandy does not have cached credentials the sudowin server will attempt to validate the credentials that the client passed to the sudowin server earlier using the authentication plugins. If the credentials are not validated then the status code InvalidLogon will be returned to the client. If the credentials are validated then they will be cached for the amount of time specified for Mandy in the authorization plugin data source.
12. The sudowin server then authorizes the command that the sudowin client was invoked on, using the authorization plugins. If the command is not authorized, the status code CommandNotAllowed is returned to the client. If the command is authorized then the process continues.
13. Next, the sudowin server verifies that the server binary and the callback application binary, as specified by the callbackApplicationPath value in the server’s configuration file, are both signed by the same strong name key. This is to ensure that the callback application has not been tampered with. If the signatures do not match then the status code CommandNotAllowed is returned to the client. If the signatures do match then the process continues.
14. The sudowin server uses wtsapi32 to query Mandy’s user token from her active desktop session.
15. The sudowin server checks to see if Mandy is already a member of the privileges group specified by her entry in the authorization data source. If Mandy is already a member then a note is made of this, otherwise she is added to the group.
16. The sudowin server uses Mandy’s token to create a process for the callback application. Because Mandy’s token from her desktop session was used, the callback application’s process will be created bound to Mandy’s desktop. The server then waits for the callback application process to conclude before continuing.
17. The callback application launches the intended command with Mandy’s credentials, thereby creating a new user token for Mandy and associating that token with the process the intended command was launched in. Because Mandy was added to the privileges group by the server before this occurred, the new process respects Mandy’s new group membership and thereby has elevated privileges.
18. Once the callback application process has concluded the sudowin server checks to see if Mandy was a member of the privileges group prior to this sudowin invocation. If she was then she is left in the group. If Mandy was added to the privileges group for the sole purpose of this sudowin invocation she is then removed from the privileges group.
19. This concludes a sudowin client invocation.
8. **Implementation**
Sudowin is a complex product that provides a simple solution to an intricate problem. Luckily for the end-user, implementing sudowin on a desktop is quite easy.
**Requirements**
Sudowin requires the following:
- Operating System
- Microsoft Windows XP Professional
- Microsoft Windows Server 2003
- Microsoft Windows Vista Professional+
- Microsoft Windows Server Longhorn
- Microsoft .NET 2.0 Framework
- The Terminal Services service
**Installing**
1. Download either the EXE or MSI installer for the latest release from [http://sourceforge.net/project/showfiles.php?group_id=143653](http://sourceforge.net/project/showfiles.php?group_id=143653).
2. Install sudowin by double-clicking on the installer.
a. The installation will prompt the user to decide whether or not to allow anyone to use sudowin, or only the installing user. The default choice is everyone.
b. The installation will prompt for an installation location. The default installation location is %ProgramFiles%\Sudowin.
**Upgrading**
There have already been several releases of sudowin. For this reason, it is impractical to discuss all of the upgrade scenarios in this document. Please see the online documentation regarding how to upgrade sudowin from a previous version at http://www.lostcreations.com/sudowin/documentation#upgrading.
**Configuring**
If sudowin has been upgraded from version 0.1.1-r95 or later to a more recent version, it should be ready for use immediately after the upgrade. However, an upgrade from an earlier version or a fresh installation will require some configuration before sudowin can be used.
**The Sudoers Group**
1. Click “Start” → “Run” and type “compmgmt.msc”. This will bring up the “Computer Management” application.
2. Under “System Tools” find “Local Users and Groups” and expand it so that it is possible to click on “Groups”.
3. There should be a group called “Sudoers”. Double-click on it.
4. Add the user accounts to this group that should be able to use sudowin on this computer.
5. Any user that is added to the “Sudoers” group will need to log out and back in to this computer before they can use sudowin. This is because Windows only loads a user’s group memberships when the user logs on.
**The Sudoers File**
1. Use notepad to open the “sudoers.xml” file the installer places in INSTALLDIR\Sudowin\Server.
2. Find the section called <users> and add a user node for the user that should be enabled to use sudowin. Please see the Sudowin.Plugins.Authorization.Xml plugin documentation for information on how to configure a user and what commands they are allowed to invoke sudo on.
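As a sketch, the added entry might look something like the following. The host and user names are hypothetical, and whether a `<commandGroupRef>` nests inside a `<user>` node is an assumption; the exact schema is defined by the Sudowin.Plugins.Authorization.Xml plugin documentation:

```xml
<users>
  <user name="MYCOMPUTER\mandy">
    <commandGroupRef commandGroupName="tools" />
  </user>
</users>
```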
**Uninstalling**
1. Use “Add/Remove Programs” or the original installer to remove sudowin.
2. The uninstaller will ask if the “Sudoers” group and the “Sudoers” file should be removed. Only remove these if sudowin is not being uninstalled in preparation for an upgrade.
**Locations**
Some users are very obsessive about uninstalling every last bit of information about a program when that program is being removed. To that end, the following is all of the data that sudowin creates on a computer’s hard drive upon installation.
**Files**
```
INSTALLDIR\
|-- Callback\
| |-- SudoWin.CallbackApplication.exe
| |-- SudoWin.Common.dll
|-- Clients\
| |-- Console\
| | |-- sudo.exe
| | |-- sudo.exe.config
| | |-- Sudowin.Common.dll
| |-- Gui\
| | |-- Sudowin.Clients.Gui.exe
| | |-- Sudowin.Clients.Gui.exe.config
| | |-- Sudowin.Common.dll
|-- Server\
| |-- pluginConfiguration.xml
| |-- PluginConfigurationSchema.xsd
| |-- sudoers.xml
| |-- Sudowin.Common.dll
| |-- Sudowin.Plugins.Authentication.dll
| |-- Sudowin.Plugins.Authentication.NT.dll
| |-- Sudowin.Plugins.Authorization.dll
| |-- Sudowin.Plugins.Authorization.Xml.dll
| |-- Sudowin.Plugins.CredentialsCache.dll
| |-- Sudowin.Plugins.CredentialsCache.LocalServer.dll
| |-- Sudowin.Plugins.dll
| |-- Sudowin.Server.exe
| |-- Sudowin.Server.exe.config
| |-- Sudowin.Server.InstallState
| |-- WtsApi32.NET.dll
| |-- XmlAuthorizationPluginSchema.xsd
|-- Setup\
| |-- CustomActions\
| | |-- Sudowin.Setup.CustomActions.dll
| | |-- Sudowin.Setup.CustomActions.InstallState
|-- CHANGLOG.txt
|-- LICENSE.txt
|-- README.rtf
|-- README.txt

HOMEDIR (or ALLUSERSHOMEDIR depending on the installation step that asks if this program is to be used by only you or everyone)
|-- Start Menu\
| |-- Programs\
| | |-- Sudo for Windows\
| | | |-- Sudo for Windows.url
```
**Registry**
```
HKEY_CLASSES_ROOT
|-- batfile
| |-- shell
| | |-- sudo
| | | |-- (Default) = "Sudo..."
| | | |-- command
| | | | |-- (Default) = ""[INSTALLDIR]Clients\Gui\Sudowin.Clients.Gui.exe" "%1" %*
|-- cmdfile
| |-- shell
| | |-- sudo
| | | |-- (Default) = "Sudo..."
| | | |-- command
| | | | |-- (Default) = ""[INSTALLDIR]Clients\Gui\Sudowin.Clients.Gui.exe" "%1" %*
|-- exefile
| |-- shell
| | |-- sudo
| | | |-- (Default) = "Sudo..."
| | | |-- command
| | | | |-- (Default) = ""[INSTALLDIR]Clients\Gui\Sudowin.Clients.Gui.exe" "%1" %*
|-- Folder
| |-- shell
| | |-- sudo
| | | |-- (Default) = "Sudo..."
| | | |-- command
| | | | |-- (Default) = ""[INSTALLDIR]Clients\Gui\Sudowin.Clients.Gui.exe" %SystemRoot%\Explorer.exe /e,/idlist,%I,%L"
|-- Msi.Package
| |-- shell
| | |-- sudo
| | | |-- (Default) = "Sudo..."
| | | |-- command
| | | | |-- (Default) = ""[INSTALLDIR]Clients\Gui\Sudowin.Clients.Gui.exe" "%SystemRoot%\system32\msiexec.exe" /package "%1" %*"
```
**Groups**
The installer creates a local group called “Sudoers.” The uninstaller will prompt for its removal, and if it is not removed then the group will remain until it is manually removed.
**Active Directory**
Even if an organization possesses hundreds of servers and thousands of desktops, it is incredibly easy to leverage sudowin on each and every one of them. Sudowin is available for download as a Microsoft Software Installer (MSI) file. This means that sudowin can be deployed to any number of desktops using Group Policy. For more information about deploying software packages via Group Policy please see http://support.microsoft.com/kb/314934.
**Known Issues**
There are some known issues with using sudowin:
1. The XML Authorization plugin does not currently support adding built-in user groups to the data source. This feature is intended for a future release.
2. Sudowin does not currently support escalating privileges to the level of Active Directory domain groups. This feature is intended for a future release when the domain controller component of sudowin is completed.
3. There is currently no GUI editor for the XML Authorization
plugin file. This is a low priority and the suggested method for editing this file is with a text editor such as notepad.
4. Sudowin does not work with Windows XP Home edition. This is because this version of Windows is stripped of the terminal services component – a service that sudowin relies upon in order to query a user’s logon token.
9. **Conclusion**
Sudowin was developed with the purpose of promoting a more traditional security model like the one in use by UNIX and Linux systems. Because of the conscious effort put into the sudowin project from the beginning regarding high security and end-users, sudowin enables users of Microsoft Windows to practice the principle of least privilege without sacrificing ease of use.
A lot of people ask why there is still a need for a separate Sudo for Windows project when Windows Vista includes sudo functionality. Sadly, these people are misinformed: despite the propagated misinformation, there is no sudo in Windows Vista.
Windows Vista implements a feature called User Account Control (UAC). UAC includes support for Over-the-Shoulder (OTS) Credentials: a user is prompted for an administrative passphrase when an administrative task needs to be accomplished. If the user does not know the passphrase then they are not allowed to perform the task. This is obviously not sudo, because the user must know a passphrase that is not her own to accomplish a task.
UAC also introduces Admin Approval Mode. This is what is confused for sudo. Essentially, administrators are prompted for their credentials or their consent whenever they need to perform a sensitive task. Because the administrators are being prompted for their own passphrase this may seem a lot like sudo, but there is one very important thing to remember – the administrator is not being
granted any privileges that she does not already have. There is no privilege escalation occurring. There is no sudo in Microsoft Vista.
Microsoft continues its journey into the 21st century with operating systems designed for high security, but these operating systems still lack one of the most basic pieces of security functionality that UNIX and Linux have enjoyed for more than 20 years. While it would be a fantastic step in the right direction towards a more secure tomorrow if millions of Windows users started using sudowin, the Sudo for Windows project also exists for another reason. The sudowin software is only a byproduct of a larger goal: to inform those millions of users that there are always ways to make themselves, and their neighbors connected to them all over the world by the internet, more secure.
Users may follow the progress of the sudowin project by visiting its SourceForge project page at http://sourceforge.net/projects/sudowin or its homepage at http://sudowin.sourceforge.net (redirects to http://www.lostcreations.com/sudowin). This practical, although static, will see updates via the documentation on the sudowin site at http://www.lostcreations.com/sudowin/documentation. Finally, this practical leaves the reader with a question.
Do you sudo?
56689, 46], [56689, 57999, 47], [57999, 59052, 48], [59052, 60247, 49], [60247, 60799, 50], [60799, 61443, 51], [61443, 62620, 52], [62620, 62990, 53], [62990, 64646, 54], [64646, 65966, 55], [65966, 70251, 56]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 70251, 0.04822]]}
|
olmocr_science_pdfs
|
2024-12-09
|
2024-12-09
|
e2f7e40a7df2de57d1c31a21a4015155651f539e
|
[REMOVED]
|
{"len_cl100k_base": 13404, "olmocr-version": "0.1.53", "pdf-total-pages": 10, "total-fallback-pages": 0, "total-input-tokens": 39265, "total-output-tokens": 14250, "length": "2e13", "weborganizer": {"__label__adult": 0.000850677490234375, "__label__art_design": 0.0014801025390625, "__label__crime_law": 0.0012712478637695312, "__label__education_jobs": 0.0049591064453125, "__label__entertainment": 0.0008111000061035156, "__label__fashion_beauty": 0.0005102157592773438, "__label__finance_business": 0.0262603759765625, "__label__food_dining": 0.0008931159973144531, "__label__games": 0.0064239501953125, "__label__hardware": 0.0020160675048828125, "__label__health": 0.0019168853759765625, "__label__history": 0.0009336471557617188, "__label__home_hobbies": 0.00026154518127441406, "__label__industrial": 0.0010766983032226562, "__label__literature": 0.0013713836669921875, "__label__politics": 0.0013856887817382812, "__label__religion": 0.0007605552673339844, "__label__science_tech": 0.449462890625, "__label__social_life": 0.0002267360687255859, "__label__software": 0.0330810546875, "__label__software_dev": 0.4619140625, "__label__sports_fitness": 0.0005245208740234375, "__label__transportation": 0.0012798309326171875, "__label__travel": 0.00048279762268066406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 46513, 0.04902]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 46513, 0.19731]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 46513, 0.83444]], "google_gemma-3-12b-it_contains_pii": [[0, 3281, false], [3281, 7957, null], [7957, 13021, null], [13021, 17659, null], [17659, 23139, null], [23139, 27576, null], [27576, 31370, null], [31370, 36782, null], [36782, 42105, null], [42105, 46513, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3281, true], [3281, 7957, null], [7957, 13021, null], [13021, 17659, null], [17659, 23139, 
null], [23139, 27576, null], [27576, 31370, null], [31370, 36782, null], [36782, 42105, null], [42105, 46513, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 46513, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 46513, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 46513, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 46513, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 46513, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 46513, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 46513, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 46513, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 46513, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 46513, null]], "pdf_page_numbers": [[0, 3281, 1], [3281, 7957, 2], [7957, 13021, 3], [13021, 17659, 4], [17659, 23139, 5], [23139, 27576, 6], [27576, 31370, 7], [31370, 36782, 8], [36782, 42105, 9], [42105, 46513, 10]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 46513, 0.23003]]}
|
olmocr_science_pdfs
|
2024-12-07
|
2024-12-07
|
f36f1dd506d07a154477eee85d1ab0d39144994e
|
[REMOVED]
|
{"Source-Url": "https://dspace.jaist.ac.jp/dspace/bitstream/10119/9948/1/ogata2010icfem.pdf", "len_cl100k_base": 14120, "olmocr-version": "0.1.53", "pdf-total-pages": 17, "total-fallback-pages": 0, "total-input-tokens": 54792, "total-output-tokens": 16223, "length": "2e13", "weborganizer": {"__label__adult": 0.0003933906555175781, "__label__art_design": 0.0005593299865722656, "__label__crime_law": 0.0005474090576171875, "__label__education_jobs": 0.001720428466796875, "__label__entertainment": 0.00013434886932373047, "__label__fashion_beauty": 0.00019729137420654297, "__label__finance_business": 0.0004286766052246094, "__label__food_dining": 0.0004470348358154297, "__label__games": 0.00083160400390625, "__label__hardware": 0.0018062591552734375, "__label__health": 0.0006237030029296875, "__label__history": 0.0004379749298095703, "__label__home_hobbies": 0.0001767873764038086, "__label__industrial": 0.0009002685546875, "__label__literature": 0.0005674362182617188, "__label__politics": 0.0004055500030517578, "__label__religion": 0.0006766319274902344, "__label__science_tech": 0.270751953125, "__label__social_life": 0.0001475811004638672, "__label__software": 0.01161956787109375, "__label__software_dev": 0.705078125, "__label__sports_fitness": 0.0003037452697753906, "__label__transportation": 0.0008592605590820312, "__label__travel": 0.0002218484878540039}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 50904, 0.02374]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 50904, 0.62066]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 50904, 0.81408]], "google_gemma-3-12b-it_contains_pii": [[0, 1063, false], [1063, 3581, null], [3581, 6685, null], [6685, 10052, null], [10052, 13618, null], [13618, 16650, null], [16650, 19498, null], [19498, 23547, null], [23547, 25362, null], [25362, 28917, null], [28917, 31768, null], [31768, 35668, null], 
[35668, 39188, null], [39188, 41948, null], [41948, 44653, null], [44653, 47919, null], [47919, 50904, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1063, true], [1063, 3581, null], [3581, 6685, null], [6685, 10052, null], [10052, 13618, null], [13618, 16650, null], [16650, 19498, null], [19498, 23547, null], [23547, 25362, null], [25362, 28917, null], [28917, 31768, null], [31768, 35668, null], [35668, 39188, null], [39188, 41948, null], [41948, 44653, null], [44653, 47919, null], [47919, 50904, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 50904, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 50904, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 50904, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 50904, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 50904, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 50904, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 50904, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 50904, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 50904, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 50904, null]], "pdf_page_numbers": [[0, 1063, 1], [1063, 3581, 2], [3581, 6685, 3], [6685, 10052, 4], [10052, 13618, 5], [13618, 16650, 6], [16650, 19498, 7], [19498, 23547, 8], [23547, 25362, 9], [25362, 28917, 10], [28917, 31768, 11], [31768, 35668, 12], [35668, 39188, 13], [39188, 41948, 14], [41948, 44653, 15], [44653, 47919, 16], [47919, 50904, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 50904, 0.04396]]}
|
olmocr_science_pdfs
|
2024-12-10
|
2024-12-10
|
b5c23815a56f0502249f0382f4a9a5913bb83221
|
Eager Evaluation Isn’t Eager Enough
A Transformation Based Approach to Semantics-Directed Code Generation
Arthur Nunes-Harwitt
Rochester Institute of Technology
102 Lomb Memorial Drive
Rochester, New York 14623
anhn@cs.rit.edu
ABSTRACT
An interpreter is a concise definition of the semantics of a programming language and is easily implemented. A compiler is more difficult to construct, but the code that it generates runs faster than interpreted code. This paper introduces rules for staging an interpreter so that it generates a compiler. An extended example suggests the utility of the technique. The rules are described formally and correctness is discussed. Finally, this technique is compared to staging and partial evaluation.
Categories and Subject Descriptors
D.3.4 [Programming Languages]: Processors—Compilers; D.2.2 [Software Engineering]: Miscellaneous—Rapid Prototyping
General Terms
Verification, Performance
Keywords
meta-programming, compilers, interpreters, partial evaluation, staging
1. INTRODUCTION
A semantics-directed code generator is a code generator that has been derived from a semantic specification such as an interpreter. Semantics-based approaches to code generation have a number of benefits: correctness, ease of implementation, maintainability, and rational justification. Two common techniques for semantics-based code generation are partial evaluation and staging.
Partial evaluation [11] is a transformation technique for specializing programs. Program specialization can mean simply replacing some of a function’s parameters with values; however, specialization is usually understood to involve binding-time improvements to achieve good specialization.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org.
ILC ’14, August 14–17, 2014, Montreal, QC, Canada
Copyright is held by the owner/author(s). Publication rights licensed to ACM.
ACM 978-1-4503-2931-6/14/08 $15.00.
http://dx.doi.org/10.1145/2635648.2635652
A staged computation is a computation organized so that part of the computation occurs at one stage, or time, and the rest occurs at another. Partial evaluation is a technique for staging, but the notion has broader scope. For example, it includes manual techniques such as Marc Feeley’s closure-based approach to code generation [7] and related techniques that generate text. Although William L. Scherlis developed a form of equational reasoning similar to that of Burstall and Darlington, subsequently he and Ulrik Jørring [12] identified staging as a way to produce a code generator. No rules for staging have been established; instead, the emphasis more recently has been on creating type systems for statically typed programming languages with quotation [15, 14, 17].
This paper is concerned with an alternative approach to semantics-directed code generation that lies between staging and partial evaluation. It makes the following contributions. It identifies a new technique for staging an interpreter and deriving a code generator in the form of four essential transformations. The technique provides more guidance than traditional staging and can be used to derive a compiler, but is not fully automated like partial evaluation. Motivation is provided for this technique. An extended example suggests the utility of the technique. The transformations are presented formally, and correctness is discussed. Finally, this technique is compared to staging and partial evaluation.
2. TRANSFORMATIONS
The motivation for this transformation technique comes
from denotational semantics [16, 20]. A denotational definition can be understood as an interpreter. It is less commonly recognized that the denotational definition can also be understood as a compiler: given a term, we are free to evaluate the recursive calls and derive a $\lambda$-term. Yet the eager evaluation strategy prevents reducing the applications inside abstractions. The rules described below concern answering the question: how can a denotational-style interpreter be modified so that it too generates a $\lambda$-term when using an eager evaluation strategy?
2.1 Currying Dynamic Variables
Currying is a mathematical trick to make all functions take one argument; it transforms a function of two arguments into a function of one argument that returns a function. For example, the multiplication function $m(x, y) = x \times y$ becomes $m(x) = \lambda y.x \times y$. If we have in mind that $x$ is known statically, but $y$ is known dynamically, then applying the curried form to a statically known value specializes the multiplication function. For example, applying $m$ to 2 results in the following term: $m(2) = \lambda y.2 \times y$. Thus the application of a curried function is a weak form of code generation.
Many programming languages, especially today, allow for first-class functions. In Scheme[13], the multiplication example looks as follows.
```
(define (m x y) (* x y))
```
When curried, it becomes the following.
```
(define (m x) (lambda (y) (* x y)))
```
However, applying $m$ to 2 yields an opaque result rather than the desired term. Something more is needed to see the text of the resulting procedure.
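The same opacity appears in any language with first-class functions. The following Python sketch (my analog, not the paper's code) shows currying as a weak form of specialization:

```python
# Currying as weak specialization: applying the curried multiplier to a
# static argument x yields a residual closure over x.
def m(x):
    return lambda y: x * y

double = m(2)      # specialize multiplication on x = 2
print(double(21))  # prints 42

# The specialized result is an opaque function object; its text is not
# available for inspection, which is the problem quotation addresses.
```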
2.2 Code Via Quoting
To fix the problem in section 2.1, we want to see the text of the function rather than the function itself (which may not be displayable). To return text, rather than a function, we can use Scheme’s quotation and un-quotation mechanisms: backquote and comma. Upon making this change, the term comes out as expected, although now `eval` is needed to actually apply this function.
```
(define (m x) `(lambda (y) (* ,x y)))
```
But consider the following more complicated example of raising $b$ to the $n$th power and what happens when applying these currying and quoting transformations.
```
(define (p n b) ; original
  (if (= n 0)
      1
      (* b (p (- n 1) b))))
```
```
(define (p n) ; curried
  (lambda (b)
    (if (= n 0)
        1
        (* b ((p (- n 1)) b)))))
```
```
(define (p n) ; quoted
  `(lambda (b)
     (if (= ,n 0)
         1
         (* b ((p ,(- n 1)) b)))))
```
> (p 3)
```
(lambda (b) (if (= 3 0) 1 (* b ((p 2) b))))
```
The result this time is inadequate because a substantial amount of static computation remains. In particular, the conditional does not depend on the parameter $b$ and should not be there. The code generated also assumes a run-time environment in which the curried form of $p$ is defined. Of course, the goal is to eliminate the need for such a run-time function.
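In Python terms (an analog I am supplying, not the paper's code), naive quoting corresponds to returning source text with the static parameter spliced in; the residual program still contains the conditional on n and the reference to p:

```python
# Naive "quoting": return program text instead of a closure. The splice
# of n mirrors Scheme's comma; the conditional and the recursive
# reference to p survive in the generated code.
def p(n):
    return f"lambda b: 1 if {n} == 0 else b * p({n - 1})(b)"

src = p(3)
print(src)  # prints: lambda b: 1 if 3 == 0 else b * p(2)(b)
```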
2.3 Lambda Lowering
To fix the problem in section 2.2, we need to evaluate the test in the conditional. A way to do that is to move the function with the formal parameter $b$ inside the conditional after currying. Upon making this sequence of transformations, applying the code generating function does yield a simpler term.
```
(define (p n) ; lowered
  (if (= n 0)
      (lambda (b) 1)
      (lambda (b) (* b ((p (- n 1)) b)))))
```
```
(define (p n) ; quoted
  (if (= n 0)
      '(lambda (b) 1)
      `(lambda (b) (* b ((p ,(- n 1)) b)))))
```
> (p 3)
```
(lambda (b) (* b ((p 2) b)))
```
While the result here is better, it is still inadequate because we have not yet eliminated the reference to the function $p$.
2.4 Expression Lifting
To fix the problem in section 2.3, we need to evaluate the recursive call. Since it resides in a $\lambda$-expression, the only way to evaluate the expression is to lift it out. Upon making this sequence of transformations, applying the code generating function yields an ungainly but fully simplified term.
```
(define (p n) ; lifted
  (if (= n 0)
      '(lambda (b) 1)
      (let ((f (p (- n 1))))
        `(lambda (b) (* b (,f b))))))
```
> (p 3)
```
(lambda (b)
  (* b ((lambda (b)
          (* b ((lambda (b)
                  (* b ((lambda (b) 1) b)))
                b)))
        b)))
```
Although ideally the generated code would be more readable, we can make it more pleasant looking by post-processing.
\( \lambda b.\, b \times ((\lambda x.\, x \times ((\lambda y.\, y \times ((\lambda z.\, 1)\, y))\, x))\, b) \)
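For comparison, the complete pipeline for the power example can be mimicked in Python (a sketch of mine, not the paper's code): the conditional is decided at generation time (lambda lowering) and the recursive call is evaluated before splicing (expression lifting), so the generated text is fully static.

```python
# Lowered and lifted code generator for the power function: the test on n
# runs at generation time, and the recursive call is lifted out of the
# generated text before splicing.
def p(n):
    if n == 0:                 # static conditional (lambda lowering)
        return "lambda b: 1"
    f = p(n - 1)               # evaluated recursive call (expression lifting)
    return f"lambda b: b * ({f})(b)"

cube = eval(p(3))
print(cube(2))  # prints 8
```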
2.5 Rule Ordering
The rules are performed in the following order. When applying the rules above, first the function is curried. Then the lambda lowering and expression lifting rules are applied repeatedly until those rules can no longer be applied. Finally the quoting rule is applied to all abstractions derived from the curried function.
2.6 Beyond Denotational Interpreters
These transformations will work on a denotational-style interpreter, but what about other kinds of interpreters? Indeed, for non-denotational interpreters these transformations are often insufficient. For example, consider an operational-style interpreter. In that case, it is quite reasonable to say that, just as a number evaluates to a number, an abstraction evaluates to an abstraction (or a closure):
\( \epsilon(\lambda x. E,\ env) = \text{closure}(x,\ \lambda env'.\, \epsilon(E, env'),\ env) \).
Another way operational style interpreters can be different from denotational ones is that recursion in an operational style interpreter need not be on a smaller term. For example, iteration constructs are often defined in terms of themselves. The transformation technique will lead to infinite loops on this sort of expansive recursion. Following Gunter[9], we repair this problem in the interpreter by explicitly identifying the fixed-point and eliminating the expansive recursion.
For example, consider the following interpreter snippet for a while-loop construct. If the test expression \( B \) evaluates to \( False \) then the command \( C \) is not executed. If the test expression \( B \) evaluates to \( True \) then the command \( C \) is executed at least once. Then iteration is achieved by invoking the interpreter on the entire while command.
\[ \mathcal{I}(\text{while } B \text{ do } C,\ s) = \text{if } \epsilon(B, s) = \mathit{False} \text{ then } s \text{ else } \mathcal{I}(\text{while } B \text{ do } C,\ \mathcal{I}(C, s)) \]
After currying and lambda-lowering that snippet becomes the following:
\[ \mathcal{I}(\text{while } B \text{ do } C) = \lambda s.\ \text{if } \epsilon(B)(s) = \mathit{False} \text{ then } s \text{ else } \mathcal{I}(\text{while } B \text{ do } C)(\mathcal{I}(C)(s)) \]
But the expression-lifting rule does not apply, because attempting to lift \( \mathcal{I}(\text{while } B \text{ do } C) \) will lead to non-termination. However, if we let \( g = \mathcal{I}(\text{while } B \text{ do } C) \), it becomes apparent that this function can be computed; \( g \) is the fixed-point function.
\[ g(s) = \text{if } \epsilon(B)(s) = \mathit{False} \text{ then } s \text{ else } g(\mathcal{I}(C)(s)) \]
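A Python rendering of this repair (the names compile_while, test, body, and g are mine, not the paper's) makes the fixed point explicit: the staged while-compiler returns g directly rather than recursing on the while term.

```python
# Staged while-loop: 'test' and 'body' stand in for the already-staged
# pieces eps(B) and I(C); g is the fixed point of
#   g(s) = s if test(s) is false, else g(body(s)).
def compile_while(test, body):
    def g(s):
        return s if not test(s) else g(body(s))
    return g

count_to_ten = compile_while(lambda s: s < 10, lambda s: s + 1)
print(count_to_ten(0))  # prints 10
```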
3. EXTENDED EXAMPLE
To illustrate this technique, consider the application of regular expression matching. A regular expression matching interpreter takes a regular expression and a string, and determines if the string is in the language denoted by the regular expression. Often, the regular expression is fixed, and we would like the code that answers whether a string is in the language denoted by that fixed regular expression.
**Definition 1.** A regular expression is one of the following, where the predicate testing each option is in parentheses.
- The empty string. \( \text{null?} \)
- A character in the alphabet. \( \text{char?} \)
- The union of two regular expressions. \( \text{or?} \)
- The concatenation of two regular expressions. \( \text{cat?} \)
- The Kleene star of a regular expression. \( \text{star?} \)
The matching algorithm is expressed in Scheme using continuation passing style; the continuation \( k \) is the property that must be satisfied by the remainder of the string. In the code below, a string is represented as a list of characters \( cl \).
```
(define (match regexp cl k)
  (cond ((null? regexp) (k cl))
        ((char? regexp)
         (if (null? cl)
             #f
             (and (eq? (car cl) regexp) (k (cdr cl)))))
        ((or? regexp)
         (or (match (exp1<-or regexp) cl k)
             (match (exp2<-or regexp) cl k)))
        ((cat? regexp)
         (match (exp1<-cat regexp) cl
                (lambda (cl2)
                  (match (exp2<-cat regexp) cl2 k))))
        ((star? regexp)
         (let loop ((cl2 cl))
           (or (k cl2)
               (match (exp<-star regexp) cl2
                      (lambda (cl3)
                        (if (eq? cl2 cl3) #f (loop cl3)))))))
        (else (error 'match "match's input is bad"))))
```
When regexp is the empty string, match invokes the continuation on the character list. Note that the initial continuation verifies that the character list is empty. When regexp is a character, match checks that the first character
in the character list is that character, and invokes the continuation on the tail of the character list. When regexp is a union, match merely tries both possibilities. When regexp is a concatenation, match recursively matches the first component and adds a check for the second to the continuation. When regexp is a Kleene star, match loops checking if either the continuation is satisfied or the pattern to be repeated is matched.
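The same dispatch structure can be mirrored in Python (a sketch of mine, with tuples for regular expressions and lists of characters for strings; the star case uses a length comparison where the Scheme code uses an eq? progress check):

```python
# CPS regular-expression matcher: k is the property the rest of the
# string must satisfy. Regexps: ("null",), ("char", c), ("or", r1, r2),
# ("cat", r1, r2), ("star", r).
def match(regexp, cl, k):
    tag = regexp[0]
    if tag == "null":
        return k(cl)
    if tag == "char":
        return bool(cl) and cl[0] == regexp[1] and k(cl[1:])
    if tag == "or":
        return match(regexp[1], cl, k) or match(regexp[2], cl, k)
    if tag == "cat":
        return match(regexp[1], cl, lambda cl2: match(regexp[2], cl2, k))
    if tag == "star":
        def loop(cl2):
            # stop if the repeated pattern made no progress
            return k(cl2) or match(regexp[1], cl2,
                                   lambda cl3: False if len(cl3) == len(cl2)
                                   else loop(cl3))
        return loop(cl)
    raise ValueError("match's input is bad")

done = lambda cl: cl == []   # initial continuation: string fully consumed
a_star_b = ("cat", ("star", ("char", "a")), ("char", "b"))
print(match(a_star_b, list("aab"), done))  # prints True
```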
In the following sections, we will now apply the technique to this interpreter and derive a code generator.
3.1 Currying
A compiler for regular expressions must be a function that takes a regular expression; hence the dynamic parameters are cl and k. They are removed from the top-level parameter list and put into the parameter list of the λ-expression. The recursive calls are modified to account for this new protocol.
```
(define (match1 regexp)
(lambda (cl k)
(cond ((null? regexp) (k cl))
((char? regexp)
(if (null? cl)
#f
(and (eq? (car cl) regexp) (k (cdr cl)))))
((or? regexp)
(or ((match1 (exp1<-or regexp)) cl k)
((match1 (exp2<-or regexp)) cl k)))
((cat? regexp)
((match1 (exp1<-cat regexp)) cl
(lambda (cl2)
((match1 (exp2<-cat regexp)) cl2 k))))
((star? regexp)
(let loop ((cl2 cl))
(or (k cl2)
((match1 (exp<-star regexp)) cl2
(lambda (cl3)
(if (eq? cl2 cl3) #f (loop cl3)))))))
(else (error 'match1 "match1's input is bad")))))
```
3.2 Lambda lowering
Since (cond (c1 c2) ... ) ≡ (if c1 c2 (cond ... )), it is possible to apply the conditional form of the lambda lowering rule several times. The lambda just below the definition in match1 is lowered into each branch of the cond-expression3.
```
(define (match2 regexp)
  (cond ((null? regexp) (lambda (cl k) (k cl)))
        ((char? regexp)
         (lambda (cl k)
           (if (null? cl)
               #f
               (and (eq? (car cl) regexp) (k (cdr cl))))))
        ((or? regexp)
         (lambda (cl k)
           (or ((match2 (exp1<-or regexp)) cl k)
               ((match2 (exp2<-or regexp)) cl k))))
        ((cat? regexp)
         (lambda (cl k)
           ((match2 (exp1<-cat regexp)) cl
            (lambda (cl2)
              ((match2 (exp2<-cat regexp)) cl2 k)))))
        ((star? regexp)
         (lambda (cl k)
           (let loop ((cl2 cl))
             (or (k cl2)
                 ((match2 (exp<-star regexp)) cl2
                  (lambda (cl3)
                    (if (eq? cl2 cl3) #f (loop cl3))))))))
        (else (error 'match2 "match2's input is bad"))))
```
3.3 Expression lifting
Since the recursive calls have been curried and do not depend on the dynamic variables, it is possible to lift them out of the lowered lambdas. In this example, it is clear that the calls will halt since the recursive calls are always on smaller structures.
```
(define (match3 regexp)
  (cond ((null? regexp) (lambda (cl k) (k cl)))
        ((char? regexp)
         (lambda (cl k)
           (if (null? cl)
               #f
               (and (eq? (car cl) regexp) (k (cdr cl))))))
        ((or? regexp)
         (let ((f1 (match3 (exp1<-or regexp)))
               (f2 (match3 (exp2<-or regexp))))
           (lambda (cl k) (or (f1 cl k) (f2 cl k)))))
        ((cat? regexp)
         (let ((f1 (match3 (exp1<-cat regexp)))
               (f2 (match3 (exp2<-cat regexp))))
           (lambda (cl k)
             (f1 cl (lambda (cl2) (f2 cl2 k))))))
        ((star? regexp)
         (let ((f (match3 (exp<-star regexp))))
           (lambda (cl k)
             (let loop ((cl2 cl))
               (or (k cl2)
                   (f cl2 (lambda (cl3)
                            (if (eq? cl2 cl3) #f (loop cl3)))))))))
        (else (error 'match3 "match3's input is bad"))))
```
3.4 Quoting
Now each λ-expression is quoted. The Scheme backquote syntax is used to allow some sub-expressions to be evaluated. In particular, non-global free variables are quoted in the text4.
```
(define (match4 regexp)
  (cond ((null? regexp) '(lambda (cl k) (k cl)))
        ((char? regexp)
         `(lambda (cl k)
            (if (null? cl)
                #f
                (and (eq? (car cl) ,regexp) (k (cdr cl))))))
        ((or? regexp)
         (let ((f1 (match4 (exp1<-or regexp)))
               (f2 (match4 (exp2<-or regexp))))
           `(lambda (cl k) (or (,f1 cl k) (,f2 cl k)))))
        ((cat? regexp)
         (let ((f1 (match4 (exp1<-cat regexp)))
               (f2 (match4 (exp2<-cat regexp))))
           `(lambda (cl k)
              (,f1 cl (lambda (cl2) (,f2 cl2 k))))))
        ((star? regexp)
         (let ((f (match4 (exp<-star regexp))))
           `(lambda (cl k)
              (let loop ((cl2 cl))
                (or (k cl2)
                    (,f cl2 (lambda (cl3)
                              (if (eq? cl2 cl3) #f (loop cl3)))))))))
        (else (error 'match4 "match4's input is bad"))))
```
3 An exception to the rule is made in the error case; the lambda is not lowered. The motivation is practical: it is preferable to find out right away that the input is invalid.
4 The formal rule for quoting in section 4 requires that the variables that are unquoted are all in a surrounding let binding. This restriction forces more locality without a loss of expressiveness; however, in practice we feel free to unquote other free variables. In the example, regexp is unquoted in the character case.
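As a cross-check, the quoting stage can be mimicked in Python by splicing previously generated source strings, just as the Scheme code splices ,f1 and ,f2 (this is my analog for three of the cases, not the paper's code):

```python
# Text-generating matcher for the null, char, and cat regexp forms:
# generated fragments are spliced into the surrounding template.
def gen(regexp):
    tag = regexp[0]
    if tag == "null":
        return "lambda cl, k: k(cl)"
    if tag == "char":
        return (f"lambda cl, k: bool(cl) and cl[0] == {regexp[1]!r} "
                f"and k(cl[1:])")
    if tag == "cat":
        f1, f2 = gen(regexp[1]), gen(regexp[2])
        return f"lambda cl, k: ({f1})(cl, lambda cl2: ({f2})(cl2, k))"
    raise ValueError("gen's input is bad")

ab = ("cat", ("char", "a"), ("char", "b"))
matcher = eval(gen(ab))
print(matcher(list("ab"), lambda cl: cl == []))  # prints True
```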
3.5 Output
When the regular expression is \(a^*b\), the simplified output becomes the following.
```
(lambda (cl k)
  (let loop ((cl2 cl))
    (or (if (null? cl2)
            #f
            (and (eq? (car cl2) #\b) (k (cdr cl2))))
        (if (null? cl2)
            #f
            (and (eq? (car cl2) #\a)
                 ((lambda (cl3)
                    (if (eq? cl2 cl3) #f (loop cl3)))
                  (cdr cl2)))))))
```
3.6 A Non-Denotational Variation
Suppose we had naively defined the Kleene star operation non-denotationally in the expression interpreter. We start by modifying the interpreter in the following fashion that makes staging and partial evaluation more challenging.
\[
\begin{align*}
\lambda (cl\ k) \\
& \begin{cases}
& (or (if (null? cl2) #\a)
& (k (cdr cl2))) \\
& (if (null? cl2)
#f)
\end{cases} \\
& (let loop (((c1 cl2))
(if (null? cl)
#f)
\begin{cases}
& (and (eq? (car cl2) #\b)
& (k (cdr cl2))))
\end{cases})
\end{align*}
\]
Let \(f_2 = (\text{match} \ \text{regexp})\), then \((\text{match} \ \text{regexp}) = \lambda (cl, k) \ldots (\text{match} \ \text{regexp})\) becomes \(f_2 = \lambda (cl, k) \ldots f_2 \ldots\), at which point we consider the fixed point solution of \(f_2\). To implement that idea, the code is modified as follows.
```
((star? regexp)
 (letrec ((f2 (lambda (cl k)
                (or (k cl)
                    ((match (exp<-star regexp))
                     cl
                     (lambda (cl2)
                       (if (eq? cl cl2)
                           #f
                           (f2 cl2 k))))))))
   f2))
```
Now the quoting rule can be applied. When performed, we get a code generator for regular expressions that includes Kleene star forms even though the interpreter was not written in a denotational style.
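The staged matcher's shape — compile the regular expression once into nested closures, then run those closures over many inputs — can be sketched outside Scheme. The following Python sketch is our own illustration (the names `compile_char`, `compile_star`, and `matches` are ours, not the paper's):

```python
# Hypothetical Python port of the closure-generating matcher style used
# in the paper's Scheme code (names and structure are ours).
def compile_char(ch):
    # Matcher closure: consume one `ch`, then call continuation k.
    def m(cl, k):
        return bool(cl) and cl[0] == ch and k(cl[1:])
    return m

def compile_star(inner):
    # Kleene star: try the continuation first; otherwise consume one more
    # `inner` match and loop, stopping if no progress was made.
    f = compile_char(inner) if isinstance(inner, str) else inner
    def m(cl, k):
        def loop(cl2):
            if k(cl2):
                return True
            return f(cl2, lambda cl3: cl2 != cl3 and loop(cl3))
        return loop(cl)
    return m

def matches(matcher, s):
    # Accept when the matcher consumes the whole input.
    return matcher(list(s), lambda rest: not rest)
```

As in the Scheme version, compilation walks the regular expression once; matching only invokes the pre-built closures.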
4. SUMMARY AND FORMALIZATION
In this section, we formalize the interpreter language so that the transformation rules can be stated more formally. This formality allows for a discussion of correctness and other properties.
4.1 Modeling Scheme
The language the interpreter is written in is assumed to be Scheme-like. We model Scheme via a call-by-value \(\lambda\)-calculus with constants, conditionals, and quotation (see figure 1). A new kind of variable, the comma variable, is used to model Scheme's unquote. A \(\text{let}\) not involving a comma variable is understood in the usual way to abbreviate the application of an abstraction. The \(\text{eval}\) operator can be defined in terms of the \(\text{let}\) with comma variables. The semantics is similar to the \(\lambda\)-calculus with quotation found in [15].
In Scheme, both programs and data are parenthesized expressions. Data are distinguished from programs by putting a quotation mark in front. Thus (+ 2 3) performs addition, but '(+ 2 3) is a list. In the interpreter language above, +(2, 3) performs addition and [+(2, 3)] is data. The notation here differs somewhat from Scheme in that Scheme allows an arbitrary form to be quoted; thus '(1 2 3) is simply a list of numbers. While it is technically possible to write [1(2, 3)], the syntax makes it an application and so the expression does not make sense. Scheme also makes it possible to plug values into a quoted form. The comma operator is used to unquote an expression. Thus the Scheme expression (let ((y 2)) `(+ ,y 3)) evaluates to a list whose second component is 2. In the interpreter language above, the comma is not an operator; rather a second kind of variable is introduced, the comma-variable, which is intended to resemble Scheme's application of the comma operator to a variable. The let-form for comma-variables is used to plug into a quoted expression. In the interpreter language, the comma example is written let ,y = 2 in [+(,y, 3)]. Further, it is natural to use this let-form to define the operator eval.
Figure 1: Interpreter language syntax. Expressions include variables, comma-variables ,y, constants, abstractions, applications, conditionals if e₀ then e₁ else e₂, quoted terms, and the binding form let ,y = e in e′; the forms let x = e in e′ and eval(e) are syntactic sugar.
The meaning of this interpreter language λ-calculus is mostly standard (see appendix A). In Scheme, we have that (let ((y '(+ 3 4))) `(* 2 ,y)) evaluates to the list (* 2 (+ 3 4)). This can be understood as removing the quotation and replacing the comma-variable with the unquoted term. In Scheme, the body of the let-form cannot merely be a comma-variable; in Scheme the comma operator must appear inside a quasi-quote form. However, in the interpreter language it is possible. Observe that when the body of the let-form is merely the comma-variable, the quoted term is unquoted, thereby implementing the eval operator.
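The quote/unquote/eval interplay has a rough analogue in Python, where code can be held as text, spliced into a template, and then evaluated (a loose illustration of ours only; Python's `eval` over strings is not a model of the interpreter language):

```python
# Rough Python analogue of the Scheme example
#   (let ((y '(+ 3 4))) `(* 2 ,y))  followed by eval:
# hold a sub-expression as text, splice it into a template, evaluate.
y = "(3 + 4)"                 # the "quoted" sub-expression, held as text
code = f"2 * {y}"             # quasiquote: template with y unquoted into it
assert code == "2 * (3 + 4)"  # the spliced text, before evaluation
assert eval(code) == 14       # eval removes the quotation
```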
Reduction can be extended to an equivalence relation by making sure the relation is reflexive, symmetric, and transitive (see appendix B). In addition, when making arguments it is useful to be able to say that sub-structural equality implies equality. Hence those rules are added.
4.2 Formal Transformation Rules
The examples in sections 2 and 3 illustrate how a procedure can be modified so that it generates a λ-term. The key idea is the following: An expression within an abstraction cannot be evaluated, and so the code is restructured so that the expression is no longer within the abstraction.
The transformations are in figure 2. Rules (1) and (2) are about currying. The equivalence of functions and their curried counterparts is well known. Although the rules are expressed as local changes, rule (2) must be applied completely using non-local assumptions and information.
Rules (3) and (4) are about lambda lowering\(^5\). These rules involve moving an expression that is just inside an abstraction and does not depend on the parameters of the abstraction out of the abstraction. In particular, if the abstraction body is a conditional, but the conditional does not depend on the abstraction’s parameters, we may regard the conditional as specifying one of two abstractions. Or, if the abstraction body defines an intermediate value that does not depend on the parameters, we may regard the definition as occurring outside the body of the abstraction.
For rule (3), concerning a conditional, if e reduces to a value v, then the body of the abstraction depends on v. When false, the body is e₂; otherwise the body is e₁. And that is what the right-hand-side says. For rule (4), concerning a let-binding, if e reduces to a value v, then the let on the left-hand-side substitutes v for z in e₀. The let on the right-hand-side substitutes v for z in the abstraction, but it passes right through and becomes a substitution in e₀ since z is distinct from the formal parameters.
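Rule (3) can be observed concretely in Python (an illustration of ours, not the paper's language): when the test does not depend on the abstraction's parameter, the conditional can be decided once, outside the lambda, selecting one of two abstractions.

```python
# Rule (3), lambda lowering, sketched in Python. `flag` does not depend
# on the lambda's parameter, so the conditional can be decided outside.
def make_scaler_naive(flag):
    # the test is re-evaluated on every call of the returned function
    return lambda x: x * 2 if flag else x * 3

def make_scaler_lowered(flag):
    # the test is evaluated once; one of two abstractions is returned
    if flag:
        return lambda x: x * 2
    return lambda x: x * 3
```

Both versions compute the same results; the lowered form simply fixes the branch before the abstraction is built.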
Rule (5) is expression lifting\(^6\). This rule is similar to lambda lowering insofar as both involve moving an expression out of an abstraction. However, with expression lifting, the entire expression is moved completely out of the abstraction if it does not depend on the parameters of the abstraction. Typically, the expression being lifted is an application. If e reduces to a value v, then the body of the abstraction on the left-hand-side will replace u with v. The let on the right-hand-side also ultimately replaces u with v since the substitution for z passes right through the abstraction.
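Expression lifting can likewise be sketched in Python (again our illustration, not the paper's code): the invariant computation is bound once, outside the abstraction, instead of being re-evaluated on every call.

```python
# Rule (5), expression lifting: `expensive(n)` does not depend on `x`,
# so it can be computed once and bound outside the abstraction.
calls = []

def expensive(n):
    calls.append(n)          # record evaluations to observe the lifting
    return n * n

def make_adder_naive(n):
    return lambda x: x + expensive(n)     # recomputed on every call

def make_adder_lifted(n):
    u = expensive(n)                      # let u = expensive(n) in λx. x + u
    return lambda x: x + u
```

The `calls` list makes the difference visible: the lifted version evaluates `expensive` exactly once, at closure-creation time.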
\(^5\)Lambda lowering sounds similar to ‘lambda dropping’. Lambda dropping is very different involving, among other things, moving an entire abstraction rather than the lambda.
\(^6\)Extracting an expression is reminiscent of how the ANF transform works. Here the expression is moved out of an abstraction; with the ANF transform the expression is moved out of an evaluation context.
4.3 Correctness
The correctness of rules (3), (4), and (5) relies only on local reasoning (see appendix C). Note that they all assume that the evaluation of $e$ terminates. If that is not the case, looping outside of an abstraction is always observed, but looping inside an abstraction is observed only if the abstraction is called. In practice, it is clear for rules (3) and (4) whether or not $e$ terminates: typically it is a call to a structure predicate and it does not loop. The termination of $e$ in rule (5) is more subtle. If it is a recursive call on substructure it will terminate. If it is a recursive call on the same structure it will not terminate. Otherwise, termination is not obvious.
Rule (6) is about quotation. It transforms an expression that returns an abstraction into an expression that returns the text that represents that abstraction. With this rule, the transformed expression reduces to a different value from the original, and so here the notion of correctness is different. Correctness means that applying the eval operator to the text that results from reducing the transformed expression results in the same value as the original expression. This rule also requires non-local information and assumptions; it must be applied to all branches of a conditional. Intuitively, we argue that since the text looks like the expression we would have evaluated right away, and name capture is avoided, then it must be that the same value is computed. (See appendix D for a global correctness result involving an abstractly specified interpreter.)
4.4 Additional Properties
Here we briefly mention two properties concerning whether the set of rules in figure 2 is the right size.
One interesting property is that it is possible to use only rule (5) to achieve a transformation very similar to rule (4). Nevertheless, it is convenient to use rule (4). Further, rule (4) avoids the introduction of an extra variable; an additional rule would be needed to eliminate the extra variable.
Another interesting property concerns whether there are enough rules. For the interpreter language in figure 1, we argue informally that no additional rules are necessary. The currying rules introduce an abstraction from which sub-expressions can be extracted, the quoting rule turns that abstraction into text, and the remaining rules extract sub-expressions from that abstraction. Assuming the body of the abstraction has no sub-expressions involving quotation, there are only five cases. For variables and constants, there is nothing to extract. If the body is also an abstraction, we assume the rules are enough for that abstraction, and use them again. If the body is an application, use rule (5) on either the sub-expressions or the entire expression; that is all that can be done. If the body is a conditional, use rule (5) on sub-expressions or use rule (3) if only the first sub-expression is independent of the parameters; that is all that can be done.
5. COMPARISON OF TECHNIQUES
The transformation technique presented in this paper is a manual technique. Another manual technique is staging. The work in staging assumes the programmer guesses a staged form of an algorithm, and then provides a type-checking algorithm that verifies the staging has been done correctly. The transformation technique here is complementary since it helps the programmer perform the staging.
While the ideas underlying partial evaluation can often be used effectively to manually derive a sophisticated algorithm from a naive one, that is not the case when attempting to derive a code generator. Manually partially evaluating an interpreter on a particular input may yield code for that input, but deriving a code generator traditionally requires at least the second Futamura projection[8]: applying the partial evaluator to itself. Manually partially evaluating the partial evaluator with an interpreter is unwieldy. In contrast, the transformation technique in this paper can often be used to manually derive a code generator from an interpreter.
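To make the contrast concrete, here is a toy Python sketch (ours, not the paper's Scheme) of a direct interpreter for a tiny arithmetic language together with a code generator derived from it by hand; evaluating the generated text yields a function that agrees with the interpreter:

```python
# Toy illustration in the spirit of the transformation technique:
# terms are numbers, the variable "d", or ("add", t1, t2).
def interp(t, d):
    # direct interpreter: walks the term on every call
    if t == "d":
        return d
    if isinstance(t, tuple):
        _, t1, t2 = t
        return interp(t1, d) + interp(t2, d)
    return t

def gen(t):
    # hand-derived generator: the recursion over the term happens at
    # generation time; the result is the text of a function of d.
    def body(t):
        if t == "d":
            return "d"
        if isinstance(t, tuple):
            _, t1, t2 = t
            return f"({body(t1)} + {body(t2)})"
        return str(t)
    return f"lambda d: {body(t)}"
```

Applying `eval` to `gen(t)` plays the role of the eval operator in the paper: the generated function and the interpreter compute the same values.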
The cogen approach[1, 18] is an alternative to traditional partial evaluation. Like the technique presented here, the emphasis is on generating a code generator. The cogen approach borrows from the ideas involved in off-line partial evaluation. To create a code generator, a binding time analysis is performed and the input program is annotated. Instead of using the annotated program for partial evaluation, the annotations are reified to generate the generator. Then the generator can be used for partial evaluation, if desired. However, if a derivation is desired, a separate binding time analysis is less direct than the transformation technique discussed in this paper.
Partial evaluators are fully automatic. This may make partial evaluation more attractive for some applications; yet it seems possible to at least partially automate the application of the transformations. Implementing currying appears straightforward, but would involve a control flow analysis. Depending on the level of automation desired, one difficulty when implementing rules (3), (4), and (5) involves verifying that particular terms terminate. Another difficulty is if some parameters, such as the store, are implicit. Finally, quoting may be the trickiest to implement, because one would like to relax the restriction that the free variables are all locally let-bound, and the semantics of Scheme unquoting is more complicated than the model of unquoting in this paper.
There are interpreters for which the transformation technique does not succeed. That is also the case for partial evaluation algorithms. Early partial evaluators had trouble with assignment and/or higher-order functions[4, 3]. Coming up with the right binding time improvements to help a partial evaluator can be challenging because partial evaluation algorithms are quite complicated[6]. In contrast, because the individual transformations that form the transformation technique are so simple, it is easier to identify the necessary changes in an interpreter so that a compiler can be generated than to guess binding time improvements.
6. CONCLUSION
This paper has presented a new transformation technique for deriving a code generator from an interpreter. The transformations were presented formally, and proved correct. We have provided an example that illustrates the ideas. Finally, we argue that this technique is a worthwhile alternative to partial evaluation and staging.
A number of questions remain that deserve investigation. For example, while the transformation techniques from section 4 can be effectively applied to any denotational-style interpreter, it is not yet clear to what extent that class of interpreters can be extended. We anticipate investigating whether it is possible to formulate rules that transform an
operational semantics into a denotational one.
The longest example considered in this paper is still fairly short. A practical test for this technique would involve applying it to large examples. We have been experimenting with Prolog implementations of intermediate size and anticipate reporting on the results in another paper.
Finally, manual transformation is a double-edged sword. It is more flexible than automatic transformation, yet it allows for the introduction of human error. It may be worthwhile to create software tools to help perform some of the suggested transformations.
7. ACKNOWLEDGEMENTS
I would like to thank Axel Schreiner and Melissa Nunes-Harwitt for carefully reading previous drafts of this paper. I would like to thank Matthew Fluet for reading a draft and for looking carefully at my proofs. I would also like to thank the anonymous referees for their valuable comments.
8. REFERENCES
APPENDIX
A. OPERATIONAL SEMANTICS
Definition 2. The form let x = e in e′ is syntactic sugar for (λx.e′)(e).
Definition 3. The form eval(e) is syntactic sugar for let , y = e in , y.
\[
\frac{\delta(c_{op}, v) = v'}{c_{op}(v) \rightarrow v'}
\]
\[
\begin{align*}
(\lambda x.e)(v) &\rightarrow e[x := v] \\
\text{if } v \text{ then } e_1 \text{ else } e_2 &\rightarrow e_1 \quad (v \neq \text{False}) \\
\text{if } v \text{ then } e_1 \text{ else } e_2 &\rightarrow e_2 \quad (v = \text{False})
\end{align*}
\]
B. TERM EQUALITY
\[
\begin{align*}
& e = e && \text{(reflexive)} \\
& e = e' \implies e' = e && \text{(symmetric)} \\
& e = e' \wedge e' = e'' \implies e = e'' && \text{(transitive)}
\end{align*}
\]
C. LOCAL CORRECTNESS THEOREMS
Note that equality below is term equality.
Lemma 1. If \( e \rightarrow^* v \) then (if \( e \) then \( e_1 \) else \( e_2 \)) \( \rightarrow^* \) (if \( v \) then \( e_1 \) else \( e_2 \)).
Theorem 1. If \( e \rightarrow^* v \) and \( x_i \notin \text{FV}(e) \) then \( \lambda \bar{x}.\,\text{if } e \text{ then } e_1 \text{ else } e_2 = \text{if } e \text{ then } \lambda \bar{x}. e_1 \text{ else } \lambda \bar{x}. e_2 \).
Proof. By case analysis on \( v \).
- Suppose \( v \neq \text{False} \).
\[
\begin{align*}
\lambda \bar{x}.\,\text{if } e \text{ then } e_1 \text{ else } e_2
&= \lambda \bar{x}.\,\text{if } v \text{ then } e_1 \text{ else } e_2 \\
&= \lambda \bar{x}. e_1 \\
&= \text{if } v \text{ then } \lambda \bar{x}. e_1 \text{ else } \lambda \bar{x}. e_2 \\
&= \text{if } e \text{ then } \lambda \bar{x}. e_1 \text{ else } \lambda \bar{x}. e_2
\end{align*}
\]
- Suppose \( v = \text{False} \).
The argument is similar.
Lemma 2. If \( e \rightarrow^* v \) then (let \( z = e \) in \( e_b \)) \( \rightarrow^* \) (let \( z = v \) in \( e_b \)).
Theorem 2. If \( e \rightarrow^* v \), \( z \neq x_i \), and \( x_i \notin \text{FV}(e) \) then let \( z = e \) in \( \lambda x. e_b = \lambda x. \text{let } z = e \) in \( e_b \).
Proof.
\[
\begin{align*}
\text{let } z = e \text{ in } \lambda \bar{x}. e_b
&= \text{let } z = v \text{ in } \lambda \bar{x}. e_b \\
&= (\lambda \bar{x}. e_b)[z := v] \\
&= \lambda \bar{x}.\, e_b[z := v] \\
&= \lambda \bar{x}.\,\text{let } z = v \text{ in } e_b \\
&= \lambda \bar{x}.\,\text{let } z = e \text{ in } e_b
\end{align*}
\]
Theorem 3. If \( e' \rightarrow^* v \), \( z \) is fresh, and \( x_i \notin \text{FV}(e') \) then let \( z = e' \) in \( \lambda \bar{x}. e[u := z] = \lambda \bar{x}. e[u := e'] \).
Proof.
\[
\begin{align*}
\text{let } z = e' \text{ in } \lambda \bar{x}.\, e[u := z]
&= \text{let } z = v \text{ in } \lambda \bar{x}.\, e[u := z] \\
&= (\lambda \bar{x}.\, e[u := z])[z := v] \\
&= \lambda \bar{x}.\, e[u := z][z := v] \\
&= \lambda \bar{x}.\, e[u := v] \\
&= \lambda \bar{x}.\, e[u := e']
\end{align*}
\]
D. GLOBAL CORRECTNESS THEOREM
Let \( \tilde{X} \) be a collection of sets such that each \( X_i \) is a subset of the set of constants in the interpreter language. Further, \( L(\tilde{X}) \) is also a subset of the set of constants in the interpreter language.
Definition 4. Given a finite collection of sets \( \tilde{X} \), \( L(\tilde{X}) \)
We can then abstractly write an interpreter for \( L(\tilde{X}) \) using a sugared version of the interpreter language as follows.
\[ f : L(\tilde{X}) \times \bar{S} \times \bar{D} \rightarrow A \]
\[ f(x_i, \bar{s}, \bar{d}) = g_i(h_i^s(\bar{s}, x_i), h_i^d(\bar{d}, x_i)) \]
\[ f(c_j(\bar{x}_j, \bar{t}_j), \bar{s}, \bar{d}) = g_j(h_j^s(\bar{s}, \bar{x}_j), h_j^d(\bar{d}, \bar{x}_j), f(t_1, \bar{s}, \bar{d}), \ldots, f(t_c, \bar{s}, \bar{d})) \]
Applying currying, lambda lowering, expression lifting, and quoting to \( f \) yields the sugared term below.
\[ f'(x_i, \bar{s}) = \text{let } y = h_i^s(\bar{s}, x_i) \text{ in } [\lambda \bar{d}.\, g_i(y, h_i^d(\bar{d}, x_i))] \]
\[ f'(c_j(\bar{x}_j, \bar{t}_j), \bar{s}) = \text{let } y = h_j^s(\bar{s}, \bar{x}_j),\ u_1 = f'(t_1, \bar{s}),\ \ldots,\ u_c = f'(t_c, \bar{s}) \text{ in } [\lambda \bar{d}.\, g_j(y, h_j^d(\bar{d}, \bar{x}_j), u_1(\bar{d}), \ldots, u_c(\bar{d}))] \]
Theorem 4. For any \( t \in L(\tilde{X}) \), for any \( \bar{s} \in \bar{S} \), if \( h_i^s \) and \( h_i^d \) are total and quote free, then \( f'(t, \bar{s}) = [\lambda \bar{d}.\, f(t, \bar{s}, \bar{d})] \).
Proof. By structural induction on \( t \).
- Suppose \( t = x_i \). Since \( h_i^s \) is total and quote free, \( h_i^s(\bar{s}, x_i) \rightarrow^* v \) and \( v \neq [c] \).
\[
\begin{align*}
f'(x_i, \bar{s}) &= \text{let } y = h_i^s(\bar{s}, x_i) \text{ in } [\lambda \bar{d}.\, g_i(y, h_i^d(\bar{d}, x_i))] \\
&= \text{let } y = v \text{ in } [\lambda \bar{d}.\, g_i(y, h_i^d(\bar{d}, x_i))] \\
&= [\lambda \bar{d}.\, g_i(v, h_i^d(\bar{d}, x_i))] \\
&= [\lambda \bar{d}.\, g_i(h_i^s(\bar{s}, x_i), h_i^d(\bar{d}, x_i))] \\
&= [\lambda \bar{d}.\, f(x_i, \bar{s}, \bar{d})]
\end{align*}
\]
- Suppose \( t = c_j(\bar{x}_j, \bar{t}_j) \). Since \( h_j^s \) is total and quote free, \( h_j^s(\bar{s}, \bar{x}_j) \rightarrow^* v \) and \( v \neq [c] \). The second step below uses the induction hypothesis \( f'(t_k, \bar{s}) = [\lambda \bar{d}.\, f(t_k, \bar{s}, \bar{d})] \) for each subterm \( t_k \).
\[
\begin{align*}
f'(c_j(\bar{x}_j, \bar{t}_j), \bar{s})
&= \text{let } y = h_j^s(\bar{s}, \bar{x}_j),\ u_1 = f'(t_1, \bar{s}),\ \ldots \\
&\quad \text{in } [\lambda \bar{d}.\, g_j(y, h_j^d(\bar{d}, \bar{x}_j), u_1(\bar{d}), \ldots, u_c(\bar{d}))] \\
&= \text{let } y = v,\ u_1 = [\lambda \bar{d}.\, f(t_1, \bar{s}, \bar{d})],\ \ldots \\
&\quad \text{in } [\lambda \bar{d}.\, g_j(y, h_j^d(\bar{d}, \bar{x}_j), u_1(\bar{d}), \ldots, u_c(\bar{d}))] \\
&= [\lambda \bar{d}.\, g_j(v, h_j^d(\bar{d}, \bar{x}_j), (\lambda \bar{d}.\, f(t_1, \bar{s}, \bar{d}))(\bar{d}), \ldots)] \\
&= [\lambda \bar{d}.\, g_j(v, h_j^d(\bar{d}, \bar{x}_j), f(t_1, \bar{s}, \bar{d}), \ldots, f(t_c, \bar{s}, \bar{d}))] \\
&= [\lambda \bar{d}.\, g_j(h_j^s(\bar{s}, \bar{x}_j), h_j^d(\bar{d}, \bar{x}_j), f(t_1, \bar{s}, \bar{d}), \ldots, f(t_c, \bar{s}, \bar{d}))] \\
&= [\lambda \bar{d}.\, f(c_j(\bar{x}_j, \bar{t}_j), \bar{s}, \bar{d})]
\end{align*}
\]
\( \square \)
Corollary 1. For any \( t \in L(\tilde{X}) \), for any \( \bar{s} \in \bar{S} \), for any \( \bar{d} \in \bar{D} \), if \( h_i^s \) and \( h_i^d \) are total and quote free, then \( \text{eval}(f'(t, \bar{s}))(\bar{d}) = f(t, \bar{s}, \bar{d}) \).
Package ‘raer’
May 1, 2024
Type Package
Title RNA editing tools in R
Version 1.2.0
Description Toolkit for identification and statistical testing of RNA editing signals from within R. Provides support for identifying sites from bulk-RNA and single cell RNA-seq datasets, and general methods for extraction of allelic read counts from alignment files. Facilitates annotation and exploratory analysis of editing signals using Bioconductor packages and resources.
License MIT + file LICENSE
Imports stats, methods, GenomicRanges, IRanges, Rsamtools, BSgenome, Biostrings, SummarizedExperiment, SingleCellExperiment, S4Vectors, GenomeInfoDb, GenomicAlignments, GenomicFeatures, BiocGenerics, BiocParallel, rtracklayer, Matrix, cli
Suggests testthat (>= 3.0.0), knitr, DESeq2, edgeR, limma, rmarkdown, BiocStyle, ComplexHeatmap, TxDb.Hsapiens.UCSC.hg38.knownGene, SNPlocs.Hsapiens.dbSNP144.GRCh38, BSgenome.Hsapiens.NCBI.GRCh38, scater, scran, scuttle, AnnotationHub, covr, raerdata, txdbmaker
LinkingTo Rhtslib
SystemRequirements GNU make
VignetteBuilder knitr
Encoding UTF-8
Roxygen list(markdown = TRUE)
RoxygenNote 7.3.1
BugReports https://github.com/rnabioco/raer/issues
biocViews MultipleComparison, RNASeq, SingleCell, Sequencing, Coverage, Epitranscriptomics, FeatureExtraction, Annotation, Alignment
Config/Needs/website pkgdown, rnabioco/rbitemplate
Config/testthat/edition 3
git_url https://git.bioconductor.org/packages/raer
git_branch RELEASE_3_19
git_last_commit c2ce20e
git_last_commit_date 2024-04-30
Repository Bioconductor 3.19
Date/Publication 2024-04-30
Author Kent Riemondy [aut, cre] (<https://orcid.org/0000-0003-0750-1273>),
Kristen Wells-Wrasman [aut] (<https://orcid.org/0000-0002-7466-8164>),
Ryan Sheridan [ctb] (<https://orcid.org/0000-0003-4012-3147>),
Jay Hesselberth [ctb] (<https://orcid.org/0000-0002-6299-179X>),
RNA Bioscience Initiative [cph, fnd]
Maintainer Kent Riemondy <kent.riemondy@gmail.com>
Contents
annot_from_gr .................................................. 3
annot_snps ....................................................... 4
calc_AEI .......................................................... 5
calc_confidence .................................................. 7
calc_edit_frequency ............................................. 8
calc_scAEI ....................................................... 9
correct_strand ................................................... 11
filter_clustered_variants ...................................... 12
filter_multiallelic .............................................. 13
filter_splice_variants .......................................... 14
find_de_sites .................................................... 15
find_mispriming_sites .......................................... 17
find_scde_sites .................................................. 18
get_overlapping_snps ........................................... 19
get_splice_sites ................................................ 20
make_de_object .................................................. 21
mock_rse ......................................................... 22
pileup_cells ..................................................... 22
pileup_sites ..................................................... 25
raer ............................................................... 29
raer_example ..................................................... 30
read_sparray ..................................................... 30
Index 32
annot_from_gr Annotate sites using GRanges object
Description
Utility function to map annotations from GRanges to rowData of SummarizedExperiment or to mcols of GRanges object. If multiple features overlap then they will be concatenated with the specified separator string.
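The overlap-and-concatenate behavior can be sketched without the Bioconductor machinery; the following Python sketch (a hypothetical helper of ours, not raer code) mirrors the idea of joining the labels of all overlapping features with a separator:

```python
# Hypothetical sketch of annot_from_gr's core idea: for each query
# interval, concatenate the annotation values of every overlapping
# feature with a separator (GRanges/SummarizedExperiment omitted).
def annotate(queries, features, sep=","):
    # queries: list of (start, end); features: list of (start, end, label)
    out = []
    for qs, qe in queries:
        labels = [lab for fs, fe, lab in features if fs < qe and qs < fe]
        out.append(sep.join(labels) if labels else None)
    return out
```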
Usage
```r
annot_from_gr(obj, gr, cols_to_map, RLE = TRUE, sep = ",", ...)
```
Arguments
- `obj`: RangedSummarizedExperiment or GRanges object
- `gr`: GRanges with annotations to map to obj
- `cols_to_map`: character vector of columns from GRanges to map to SummarizedExperiment. If the vector has names, the names will be the column names in the output.
- `RLE`: If TRUE, columns added will be returned as `S4Vectors::Rle()` vectors to reduce memory usage.
- `sep`: separator string, defaults to comma.
- `\ldots`: additional arguments to pass to `GenomicRanges::findOverlaps()`
Value
Either a SummarizedExperiment or GRanges object with additional annotations provided by the supplied GRanges object.
Examples
```r
library(SummarizedExperiment)
rse_adar_ifn <- mock_rse()
gr <- GRanges(rep(c("SSR3", "SPCS3"), c(5, 15)),
    IRanges(seq(1, 500, by = 25), width = 50),
    strand = "+"
)
gr$feature <- sample(1:100, size = 20)
gr$id <- sample(LETTERS, size = 20)
rse <- annot_from_gr(rse_adar_ifn, gr, c(feature_set = "feature", "id"))
rowData(rse)
```
annot_snps Annotate known SNP positions
**Description**
This function will annotate a GRanges or the rowRanges of a SummarizedExperiment with SNPs from a SNP package.
**Usage**
annot_snps(obj, ...)
```r
## S3 method for class 'GRanges'
annot_snps(
obj,
dbsnp,
chrom = NULL,
col_to_aggr = "RefSNP_id",
drop = FALSE,
genome = NULL,
RLE = TRUE,
...
)
## S3 method for class 'SummarizedExperiment'
annot_snps(obj, ...)
```
**Arguments**
- **obj**: GRanges or SummarizedExperiment object
- **...**: For the generic, further arguments to pass to specific methods. Unused for now.
- **dbsnp**: SNPlocs package, see available packages from BSgenome::available.SNPs()
- **chrom**: only operate on a specified chromosome
- **col_to_aggr**: column from SNPlocs package to add to input. If multiple SNPs overlap these values will be concatenated as comma separated values.
- **drop**: If TRUE, remove sites overlapping SNPs
- **genome**: A BSgenome object, which if supplied, will be used to provide additional snp_ref_allele and snp_alt_alleles columns containing the reference and alternate allele sequences, with respect to the positive strand. Additionally the SNP sequences will be checked against the allele at the site if a column named ALT is present in object. The strand of the site will be used to determine if the ALT allele needs to be complemented prior to comparing against the SNP db (which always returns sequences w.r.t the plus strand).
- **RLE**: If TRUE, columns added will be returned as S4Vectors::Rle() vectors to reduce memory usage.
Value
Either a GRanges or SummarizedExperiment object with a new column added with information from `col_to_aggr` and optionally `snp_ref_allele`, `snp_alt_alleles`, and `snp_matches_site` annotations.
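The strand handling described for the `genome` argument can be sketched abstractly (a Python illustration of ours, not raer code): alleles from a SNP database are reported on the plus strand, so an ALT allele observed on the minus strand must be complemented before comparison.

```python
# Hypothetical sketch of the strand logic described above: SNP
# databases report alleles w.r.t. the plus strand, so a minus-strand
# ALT allele is complemented before checking for a match.
COMP = str.maketrans("ACGT", "TGCA")

def alt_matches_snp(alt, strand, snp_alt_alleles):
    plus_alt = alt.translate(COMP) if strand == "-" else alt
    return plus_alt in snp_alt_alleles
```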
See Also
SNPlocs.Hsapiens.dbSNP144.GRCh38
Examples
```r
if (require(SNPlocs.Hsapiens.dbSNP144.GRCh38)) {
gr <- GRanges(rep("22", 10),
    IRanges(
        seq(10510077, 10610077, by = 1000)
    ),
    strand = "+"
)
genome(gr) <- "GRCh38.p2"
annot_snps(gr, SNPlocs.Hsapiens.dbSNP144.GRCh38)
}
```
calc_AEI Calculate the Adenosine Editing Index (AEI)
Description
The Adenosine Editing Index describes the magnitude of A-to-I editing in a sample. The index is a weighted average of editing events (G bases) observed at A positions. The vast majority of A-to-I editing occurs in ALU elements in the human genome, and these regions have a high A-to-I editing signal compared to other regions such as coding exons. This function will perform pileup at specified repeat regions and return a summary AEI metric.
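Stripped of strand correction, SNP filtering, and pileup details, the index reduces to a weighted average over sites; the following Python sketch is our simplification of that reduction, not raer's implementation:

```python
# Minimal sketch of an AEI-style index: total G reads observed at
# reference-A positions divided by total A+G reads across all sites,
# expressed as a percentage. Strand and SNP handling are omitted.
def aei(site_counts):
    # site_counts: list of (a_reads, g_reads) at reference-A positions
    a = sum(a for a, g in site_counts)
    g = sum(g for a, g in site_counts)
    return 100.0 * g / (a + g) if (a + g) else 0.0
```

Summing counts before dividing is what makes the index a weighted (coverage-weighted) average rather than a mean of per-site frequencies.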
Usage
```r
calc_AEI(
bamfiles,
fasta,
alu_ranges = NULL,
txdb = NULL,
snp_db = NULL,
param = FilterParam(),
BPPARAM = SerialParam(),
verbose = FALSE
)
```
Arguments
- **bamfiles**: character vector of paths to indexed bam files. If a named character vector is supplied the names will be used in the output.
- **fasta**: fasta filename
- **alu_ranges**: GRanges with regions to query for calculating the AEI, typically ALU repeats.
- **txdb**: A TxDb object, if supplied, will be used to subset the alu_ranges to those found overlapping genes. Alternatively a GRanges object with gene coordinates. If the library_type, specified by FilterParam, is unstranded then the TxDb will be used to correct the strandness relative to the reference and is a required parameter.
- **snp_db**: either a SNPlocs, GPos, or GRanges object. If supplied, will be used to exclude polymorphic positions prior to calculating the AEI. If calc_AEI() will be used many times, one will save time by first identifying SNPs that overlap the supplied alu_ranges, and passing these as a GRanges to snp_db rather than supplying all known SNPs (see get_overlapping_snps()).
- **param**: object of class FilterParam() which specify various filters to apply to reads and sites during pileup.
- **BPPARAM**: A BiocParallelParam object for specifying parallel options for operating over chromosomes.
- **verbose**: report progress on each chromosome?
Value
A named list containing:
- **AEI**: a matrix of AEI index values computed for all allelic combinations, one row for each supplied bam file.
- **AEI_per_chrom**: a data.frame containing values computed for each chromosome
Examples
```r
suppressPackageStartupMessages(library(Rsamtools))
bamfn <- raer_example("SRR5564269_Aligned.sortedByCoord.out.md.bam")
bam2fn <- raer_example("SRR5564277_Aligned.sortedByCoord.out.md.bam")
bams <- c(bamfn, bam2fn)
names(bams) <- c("ADAR1KO", "WT")
fafn <- raer_example("human.fasta")
mock_alu_ranges <- scanFaIndex(fafn)
res <- calc_AEI(bams, fafn, mock_alu_ranges)
res$AEI
```
calc_confidence Calculate confidence score for observing editing
Description
Calculate a confidence score based on a Bayesian inverse probability model as described by Washburn et al. Cell Reports. 2015, and implemented in the SAILOR pipeline.
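A rough sketch of an inverse-probability score of this flavor (an illustrative assumption, not necessarily the exact published formula): the posterior probability, under a Beta model with pseudo-counts, that the true editing rate exceeds the expected error rate:

```r
# Hedged sketch (illustrative only; may differ from the SAILOR formula):
# score = posterior probability that the editing rate exceeds
# exp_fraction, given edited/non-edited counts plus pseudo-counts.
conf_score <- function(n_edited, n_ref, exp_fraction = 0.01,
                       alpha = 0, beta = 0) {
    1 - pbeta(exp_fraction, alpha + n_edited + 1, beta + n_ref + 1)
}
conf_score(5, 95)  # many edited reads: strong support for real editing
conf_score(0, 100) # no edited reads: weak support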
Usage
```r
calc_confidence(
  se,
  edit_to = "G",
  edit_from = "A",
  per_sample = FALSE,
  exp_fraction = 0.01,
  alpha = 0L,
  beta = 0L
)
```
Arguments
- **se**: SummarizedExperiment::SummarizedExperiment containing editing sites
- **edit_to**: edited base
- **edit_from**: non-edited base
- **per_sample**: if TRUE, calculate confidence per sample; otherwise edited and non-edited counts will be summed across all samples.
- **exp_fraction**: Numeric value between 0 and 1, specifying the expected error rate
- **alpha**: Pseudo-count to add to non-edited base counts
- **beta**: Pseudo-count to add to edited base counts
Value
SummarizedExperiment::SummarizedExperiment with either a new assay or rowData column named "confidence" depending on whether confidence is calculated per_sample.
References
SAILOR pipeline: https://github.com/YeoLab/sailor
Examples
```r
rse_adar_ifn <- mock_rse()
calc_confidence(rse_adar_ifn)
calc_confidence(rse_adar_ifn, per_sample = TRUE)
```
## calc_edit_frequency
Adds editing frequencies
### Description
Adds editing frequencies to an existing `RangedSummarizedExperiment` object (created by `pileup_sites()`). Returns a `RangedSummarizedExperiment` with a new assay for the editing frequency at each site (`edit_freq`), the depth of coverage computed using the indicated edited nucleotides (`depth`), and new `colData` columns with the number of edited sites (`n_sites`) and the fraction of edits (`edit_idx`).
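The per-site frequency computed here can be sketched in base R (hypothetical counts for illustration):

```r
# Hedged sketch of the per-site editing frequency: edited counts over
# the combined edited + non-edited counts, with NaN (0/0 sites)
# coerced to 0 as with replace_na = TRUE. Counts are made up.
n_a <- c(90, 0, 30) # non-edited (A) counts at three sites
n_g <- c(10, 0, 10) # edited (G) counts at the same sites
edit_freq <- n_g / (n_a + n_g)
edit_freq[is.nan(edit_freq)] <- 0
edit_freq
```

The second site has no coverage, so its 0/0 frequency is replaced with 0 rather than propagating NaN into downstream filters.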
### Usage
```r
calc_edit_frequency(
rse,
edit_from = "A",
edit_to = "G",
drop = FALSE,
replace_na = TRUE,
edit_frequency = 0,
min_count = 1
)
```
### Arguments
- **rse**: A `RangedSummarizedExperiment` object created by `pileup_sites()`
- **edit_from**: This should correspond to a nucleotide or assay (A, C, G, T, Ref, or Alt) you expect in the reference. Ex. for A to I editing events, this would be A.
- **edit_to**: This should correspond to a nucleotide or assay (A, C, G, T, Ref, or Alt) you expect in the editing site. Ex. for A to I editing events, this would be G.
- **drop**: If TRUE, the `RangedSummarizedExperiment` returned will only retain sites matching the specified `edit_from` and `edit_to` bases.
- **replace_na**: If TRUE, NA and NaN editing frequencies will be coerced to 0.
- **edit_frequency**: The edit frequency cutoff used when calculating the number of sites. Set to 0 to require any non-zero editing frequency. The number of sites is stored as `n_sites` in the `colData`.
- **min_count**: The minimum number of reads required when enumerating number of editing sites detected.
**Value**
*RangedSummarizedExperiment* supplemented with *edit_freq* and *depth* assays.
**Examples**
```r
library(SummarizedExperiment)
calc_scAEI(bamfiles, sites, cell_barcodes, param = FilterParam(),
edit_from = "A",
edit_to = "G",
output_dir = NULL,
return_sce = FALSE,
...
)
```
**Description**
The Adenosine Editing Index describes the magnitude of A-to-I editing in a sample. The index is a weighted average of editing events (G bases) observed at A positions. The vast majority of A-to-I editing occurs in ALU elements in the human genome, and these regions have a high A-to-I editing signal compared to other regions such as coding exons. This function will enumerate edited and non-edited base counts at the supplied sites and return a summary AEI metric per cell. Potential editing sites within repeat regions can be generated using `get_scAEI_sites()`.
**Usage**
```r
calc_scAEI(bamfiles,
sites,
cell_barcodes,
param = FilterParam(),
edit_from = "A",
edit_to = "G",
output_dir = NULL,
return_sce = FALSE,
...)
```
```r
get_scAEI_sites(fasta, genes, alus, edit_from = "A", edit_to = "G")
```
**Arguments**
- **bamfiles**: A path to a BAM file (for 10x libraries), or a vector of paths to BAM files (Smart-seq2). Can be supplied as a character vector, BamFile, or BamFileList.
- **sites**: A GRanges object produced by `get_scAEI_sites()` containing sites to process.
- **cell_barcodes**: A character vector of single cell barcodes to process. If processing multiple BAM files (e.g. Smart-seq2), provide a character vector of unique identifiers for each input BAM, to name each BAM file in the output files.
- **param**: object of class FilterParam() which specifies various filters to apply to reads and sites during pileup.
- **edit_from**: This should correspond to the base (A, C, G, T) you expect in the reference. Ex. for A to I editing events, this would be A.
- **edit_to**: This should correspond to the base (A, C, G, T) you expect in an edited site. Ex. for A to I editing events, this would be G.
- **output_dir**: Output directory for nRef and nAlt sparseMatrix files. If NULL, a temporary directory will be used.
- **return_sce**: if TRUE, data is returned as a SingleCellExperiment; if FALSE, a DataFrame containing computed AEI values will be returned.
- **...**: additional arguments to pileup_cells()
- **fasta**: Path to a genome fasta file
- **genes**: A GRanges object with gene coordinates. Alternatively a TxDb object, which, if supplied, will be used to extract gene coordinates.
- **alus**: GRanges with repeat regions to query for calculating the AEI, typically ALU repeats. The strand of the supplied intervals will be ignored for defining repeat regions.
Value
A DataFrame containing computed AEI values, count of editing events (n_alt), and count of reference events (n_ref) per cell. If return_sce is TRUE, then a SingleCellExperiment is returned with the AEI values stored in the colData.
Examples
```r
suppressPackageStartupMessages(library(Rsamtools))
library(GenomicRanges)
bam_fn <- raer_example("5k_neuron_mouse_possort.bam")
bai <- indexBam(bam_fn)
# cell barcodes to query
cbs <- c("TGTTGTCTCCATCGT-1", "CAACCAACATAATCGC-1", "TGGAACTCAAGCTGTT-1")
# genes used to infer transcribed strand
genes_gr <- GRanges(c(
    "2:100-400:-",
    "2:500-605:-",
    "2:600-680:+"
))
# alu intervals
alus_gr <- GRanges(c(
    "2:110-380",
    "2:510-600",
    "2:610-670"
))
# genome fasta file, used to find A bases
fa_fn <- raer_example("mouse_tiny.fasta")
# get positions of potential A -> G changes in alus
sites <- get_scAEI_sites(fa_fn, genes_gr, alus_gr)
fp <- FilterParam(
    library_type = "fr-second-strand",
    min_mapq = 255
)
calc_scAEI(bam_fn, sites, cbs, fp)
```
---
**correct_strand**
**Apply strand correction using gene annotations**
**Description**
Gene annotations are used to infer the likely strand of editing sites. This function will operate on unstranded datasets which have been processed using "unstranded" library type which reports variants with respect to the + strand for all sites. The strand of the editing site will be assigned the strand of overlapping features in the genes_gr object. Sites with no-overlap, or overlapping features with conflicting strands (+ and -) will be removed.
**Usage**
```r
correct_strand(rse, genes_gr)
```
**Arguments**
- **rse**: RangedSummarizedExperiment object containing editing sites processed with "unstranded" setting
- **genes_gr**: GRanges object containing reference features to annotate the strand of the editing sites.
**Value**
RangedSummarizedExperiment object containing pileup assays, with strand corrected based on supplied genomic intervals.
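The core of the correction can be sketched in base R (illustrative only; the function itself operates on the pileup assays): a variant reported with respect to the + strand is complemented when the overlapping gene places the site on the - strand.

```r
# Minimal sketch: complement a base reported w.r.t. the + strand when
# gene annotation assigns the site to the - strand.
complement <- function(b) chartr("ACGT", "TGCA", b)
# a T->C variant on the + strand corresponds to A->G on the - strand
complement("T")
complement("C")
```

This is why apparent T-to-C signals in unstranded data often turn out to be A-to-I editing on the opposite strand.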
Examples

```r
suppressPackageStartupMessages(library("GenomicRanges"))
bamfn <- raer_example("SRR5564269_Aligned.sortedByCoord.out.md.bam")
fafn <- raer_example("human.fasta")
fp <- FilterParam(library_type = "unstranded")
rse <- pileup_sites(bamfn, fafn, param = fp)
genes <- GRanges(c(
    "DHFR:200-400:+",
    "SPCS3:100-200:-",
    "SSR3:3-10:-",
    "SSR3:6-12:+"
))
correct_strand(rse, genes)
```

filter_clustered_variants

Filter out clustered sequence variants

Description

Sequence variants of multiple allele types (e.g., A -> G, A -> C) proximal to a putative editing site can be indicative of a region prone to mis-alignment artifacts. Sites will be removed if variants of multiple allele types are present within a given distance in genomic or transcriptomic coordinate space.

Usage

```r
filter_clustered_variants(
  rse,
  txdb,
  regions = c("transcript", "genome"),
  variant_dist = 100
)
```

Arguments

- **rse**: SummarizedExperiment::SummarizedExperiment containing editing sites
- **txdb**: GenomicFeatures::TxDb
- **regions**: One of transcript or genome, specifying the coordinate system for calculating distances between variants.
- **variant_dist**: distance in nucleotides for determining clustered variants

Value

SummarizedExperiment::SummarizedExperiment with sites removed from the object depending on the filtering applied.

See Also

Other se-filters: filter_multiallelic(), filter_splice_variants()

Examples

```r
if (require("txdbmaker")) {
    rse_adar_ifn <- mock_rse()
    rse <- rse_adar_ifn[seqnames(rse_adar_ifn) == "SPCS3"]
    # mock up a txdb with genes
    gr <- GRanges(c(
        "SPCS3:100-120:-",
        "SPCS3:325-350:-"
    ))
    gr$source <- "raer"
    gr$type <- "exon"
    gr$score <- NA
    gr$phase <- NA_integer_
    gr$gene_id <- c(1, 2)
    gr$transcript_id <- c("1.1", "2.1")
    txdb <- txdbmaker::makeTxDbFromGRanges(gr)
    rse <- filter_multiallelic(rse)
    filter_clustered_variants(rse, txdb, variant_dist = 10)
}
```

**filter_multiallelic**

Filter out multi-allelic sites

**Description**

Remove sites with multiple variant bases from a SummarizedExperiment. rowData() gains a new column, ALT, that contains the variant allele detected at each site.

**Usage**

`filter_multiallelic(se)`

**Arguments**

- `se`: SummarizedExperiment::SummarizedExperiment

**Value**

SummarizedExperiment::SummarizedExperiment with multiallelic sites removed. A new column, ALT, will be added to rowData() indicating the single allele present at the site.

**See Also**

Other se-filters: filter_clustered_variants(), filter_splice_variants()

**Examples**

```r
rse_adar_ifn <- mock_rse()
filter_multiallelic(rse_adar_ifn)
```

filter_splice_variants

Filter out sites near splice sites

Description

Remove editing sites found in regions proximal to annotated splice junctions.

Usage

```r
filter_splice_variants(rse, txdb, splice_site_dist = 4, ignore.strand = FALSE)
```

Arguments

- **rse**: SummarizedExperiment::SummarizedExperiment with editing sites
- **txdb**: GenomicFeatures::TxDb
- **splice_site_dist**: distance to splice site
- **ignore.strand**: if TRUE, ignore strand when comparing editing sites to splice sites

Value

SummarizedExperiment::SummarizedExperiment with sites adjacent to splice sites removed.

See Also

Other se-filters: filter_clustered_variants(), filter_multiallelic()
Examples
```r
if(require("txdbmaker")) {
rse_adar_ifn <- mock_rse()
# mock up a txdb with genes
gr <- GRanges(c(
"DHFR:310-330:-",
"DHFR:410-415:-",
"SSR3:100-155:-",
"SSR3:180-190:-"
))
gr$source <- "raer"
gr$type <- "exon"
gr$score <- NA
gr$phase <- NA_integer_
gr$gene_id <- c(1, 1, 2, 2)
gr$transcript_id <- rep(c("1.1", "2.1"), each = 2)
txdb <- txdbmaker::makeTxDbFromGRanges(gr)
filter_splice_variants(rse_adar_ifn, txdb)
}
```
find_de_sites
Perform differential editing
Description
Use edgeR or DESeq2 to perform differential editing analysis. This will work for designs that have 1 treatment and 1 control group. For more complex designs, we suggest you perform your own modeling.
Usage
```r
find_de_sites(
deobj,
test = c("edgeR", "DESeq2"),
sample_col = "sample",
condition_col = "condition",
condition_control = NULL,
condition_treatment = NULL
)
```
Arguments
- `deobj` A `RangedSummarizedExperiment` object prepared for differential editing analysis by `make_de_object()`
- **test**: Indicate if edgeR or DESeq2 should be run.
- **sample_col**: The name of the column from colData(deobj) that contains your sample information. Default is sample. If you do not have a column named "sample", you must provide the appropriate sample column.
- **condition_col**: The name of the column from colData(deobj) that contains your treatment information. Default is condition. If you do not have a column named "condition", you must provide the appropriate condition column.
- **condition_control**: The name of the control condition. This must be a variable in your condition_col of colData(deobj). No default provided.
- **condition_treatment**: The name of the treatment condition. This must be a variable in your condition_col of colData(deobj).
Value
A named list:
- de_obj: The edgeR or DESeq2 object used for differential editing analysis
- results_full: Unfiltered differential editing results
- sig_results: Filtered differential editing (FDR < 0.05)
- model_matrix: The model matrix used for generating DE results
Examples
```r
library(SummarizedExperiment)
bamfn <- raer_example("SRR5564269_Aligned.sortedByCoord.out.md.bam")
bam2fn <- raer_example("SRR5564277_Aligned.sortedByCoord.out.md.bam")
fafn <- raer_example("human.fasta")
bams <- rep(c(bamfn, bam2fn), each = 3)
sample_ids <- paste0(rep(c("KO", "WT"), each = 3), 1:3)
names(bams) <- sample_ids
fp <- FilterParam(only_keep_variants = TRUE)
rse <- pileup_sites(bams, fafn, param = fp)
rse$condition <- substr(rse$sample, 1, 2)
rse <- calc_edit_frequency(rse)
dse <- make_de_object(rse)
res <- find_de_sites(dse,
condition_control = "WT",
condition_treatment = "KO"
)
res$sig_results[1:3, ]
```
find_mispriming_sites
Find regions with oligodT mispriming
Description
OligodT will prime at A-rich regions in an RNA. Reverse transcription from these internal priming sites will install an oligodT sequence at the 3’ end of the cDNA. Sequence variants within these internal priming sites are enriched for variants converting the genomic sequence to the A encoded by the oligodT primer. Trimming poly(A) from the 3’ ends of reads reduces but does not eliminate these signals.
This function will identify regions that are enriched for mispriming events. Reads that were trimmed to remove poly(A) (encoded in the pa tag by 10x Genomics) are identified. The aligned 3’ positions of these reads are counted, and sites passing thresholds (at least 2 reads) are retained as possible sites of mispriming. By default regions 5 bases upstream and 20 bases downstream of these putative mispriming sites are returned.
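The A-richness signal reported for candidate regions (the A_freq column in the output) can be sketched directly (the sequence below is made up):

```r
# Hedged sketch of the A_freq signal: the fraction of A bases within a
# candidate mispriming window. An A-rich window is a plausible internal
# oligodT priming site. The sequence is hypothetical.
a_freq <- function(seq) {
    bases <- strsplit(seq, "")[[1]]
    mean(bases == "A")
}
a_freq("AAATAAAGAAAA")
```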
Usage
```r
find_mispriming_sites(
bamfile,
fasta,
pos_5p = 5,
pos_3p = 20,
min_reads = 2,
tag = "pa",
tag_values = 3:300,
n_reads_per_chunk = 1e+06,
verbose = TRUE
)
```
Arguments
- **bamfile**: path to bamfile
- **fasta**: path to fasta file
- **pos_5p**: distance 5’ of mispriming site to define mispriming region
- **pos_3p**: distance 3’ of mispriming site to define mispriming region
- **min_reads**: minimum required number of reads at a mispriming site
- **tag**: bam tag containing number of poly(A) bases trimmed
- **tag_values**: range of values required for read to be considered
- **n_reads_per_chunk**: number of reads to process in memory, see `Rsamtools::BamFile()`
- **verbose**: if true report progress
Value
A GRanges object containing regions enriched for putative mispriming events. The n_reads column specifies the number of polyA trimmed reads overlapping the mispriming region. mean_pal indicates the mean length of polyA sequence trimmed from reads overlapping the region. The n_regions column specifies the number of overlapping independent regions found in each chunk (dictated by n_reads_per_chunk). The A_freq column indicates the frequency of A bases within the region.
Examples
```r
bam_fn <- raer_example("5k_neuron_mouse_possort.bam")
fa_fn <- raer_example("mouse_tiny.fasta")
find_mispriming_sites(bam_fn, fa_fn)
```
find_scde_sites Identify sites with differential editing between cells in single cell datasets
Description
Compare editing frequencies between clusters or celltypes. REF and ALT counts from each cluster are pooled to create pseudobulk estimates. Each pair of clusters are compared using fisher exact tests. Statistics are aggregated across each pairwise comparison using scran::combineMarkers.
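The pairwise comparison at one site can be sketched with base R (toy counts; the function pools counts per cluster from the nRef/nAlt assays and aggregates statistics with scran::combineMarkers):

```r
# Hedged sketch: pseudobulk edited/unedited counts pooled per cluster,
# compared with a Fisher exact test. Counts are hypothetical.
tbl <- matrix(c(
    30, 170, # cluster A: edited, unedited
    10, 190  # cluster B: edited, unedited
), nrow = 2, byrow = TRUE)
fisher.test(tbl)$p.value
# difference in editing frequency between the clusters (reported as dEF)
(30 / 200) - (10 / 200)
```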
Usage
```r
find_scde_sites(sce, group, rowData = FALSE, BPPARAM = SerialParam(), ...)
```
Arguments
- **sce**: SingleCellExperiment object with nRef and nAlt assays.
- **group**: column name from colData used to define groups to compare.
- **rowData**: if TRUE, rowData from the input SingleCellExperiment will be included in the output DataFrames
- **BPPARAM**: BiocParallel backend for controlling how parallel computations are performed.
- **...**: Additional arguments passed to scran::combineMarkers
Value
A named list of DataFrames containing results for each cluster specified by group. The difference in editing frequencies between cluster pairs are denoted as dEF. See scran::combineMarkers for a description of additional output fields.
Examples
```r
### generate example data ###
library(Rsamtools)
library(GenomicRanges)
bam_fn <- raer_example("5k_neuron_mouse_possort.bam")
gr <- GRanges(c("2:579:-", "2:625:-", "2:645:-", "2:589:-", "2:601:-"))
gr$REF <- c(rep("A", 4), "T")
gr$ALT <- c(rep("G", 4), "C")
cbs <- unique(scanBam(bam_fn, param = ScanBamParam(tag = "CB"))[[1]]$tag$CB)
cbs <- na.omit(cbs)
outdir <- tempdir()
bai <- indexBam(bam_fn)
fp <- FilterParam(library_type = "fr-second-strand")
sce <- pileup_cells(bam_fn, gr, cbs, outdir, param = fp)
# mock some clusters
set.seed(42)
sce$clusters <- paste0("cluster_", sample(1:3, ncol(sce), replace = TRUE))
res <- find_scde_sites(sce, "clusters")
res[[1]]
```
get_overlapping_snps Retrieve SNPs overlapping intervals
Description
This function will find SNPs overlapping supplied intervals using a SNPlocs package. The SNPs can be returned in memory (as GPos objects) or written to disk as a bed-file (optionally compressed).
Usage
```r
get_overlapping_snps(gr, snpDb, output_file = NULL)
```
Arguments
- **gr**: Intervals to query
- **snpDb**: A reference to a SNPlocs database
- **output_file**: A path to an output file. If supplied, the file can optionally be compressed by including a ".gz" suffix. If not supplied, SNPs will be returned as a GenomicRanges::GPos object.
Value
GPos object containing SNPs overlapping supplied genomic intervals
Examples
```r
if (require(SNPlocs.Hsapiens.dbSNP144.GRCh38)) {
gr <- GRanges(rep("22", 10),
IRanges(seq(10510077, 10610077, by = 1000)[1:10], width = 250),
strand = "+")
get_overlapping_snps(gr, SNPlocs.Hsapiens.dbSNP144.GRCh38)
}
```
get_splice_sites
Extract regions surrounding splice sites
Description
Extract intervals at splice sites and their adjacent regions.
Usage
```r
get_splice_sites(txdb, slop = 4)
```
Arguments
- `txdb` GenomicFeatures::TxDb
- `slop` The number of bases upstream and downstream of splice site to extract
Value
GenomicRanges::GRanges containing positions of splice sites, with flanking bases.
Examples
```r
if (require(TxDb.Hsapiens.UCSC.hg38.knownGene)) {
txdb <- TxDb.Hsapiens.UCSC.hg38.knownGene
res <- get_splice_sites(txdb)
res[1:5]
}
```
make_de_object
Make summarized experiment object for differential editing analysis
Description
Generates a RangedSummarizedExperiment object for use with edgeR or DESeq2. Will generate a counts assay with a matrix formatted with 2 columns per sample, representing the reference and editing allele counts.
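The two-columns-per-sample layout can be sketched in base R (toy counts; names are illustrative):

```r
# Hedged sketch of the counts layout: one reference-count column and one
# alt-count column per sample, as consumed by edgeR/DESeq2. Numbers and
# sample names are made up.
ref <- matrix(c(90, 80, 45, 40),
    nrow = 2, dimnames = list(c("site1", "site2"), c("WT", "KO"))
)
alt <- matrix(c(10, 20, 5, 10),
    nrow = 2, dimnames = list(c("site1", "site2"), c("WT", "KO"))
)
counts <- cbind(ref, alt)
colnames(counts) <- c(
    paste0(colnames(ref), "_ref"),
    paste0(colnames(alt), "_alt")
)
counts
```

Pairing ref and alt columns per sample lets a count-based model test whether the alt fraction shifts between conditions.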
Usage
```r
make_de_object(
  rse,
  edit_from = "A",
  edit_to = "G",
  min_prop = 0,
  max_prop = 1,
  min_samples = 1
)
```
Arguments
- **rse**: A RangedSummarizedExperiment object
- **edit_from**: This should correspond to a nucleotide or assay (A, C, G, T, Ref, or Alt) you expect in the reference. Ex. for A to I editing events, this would be A.
- **edit_to**: This should correspond to a nucleotide or assay (A, C, G, T, Ref, or Alt) you expect in the editing site. Ex. for A to I editing events, this would be G.
- **min_prop**: The minimum required proportion of reads edited at a site. At least min_samples need to pass this to keep the site.
- **max_prop**: The maximum allowable proportion of reads edited at a site. At least min_samples need to pass this to keep the site.
- **min_samples**: The minimum number of samples passing the min_prop and max_prop cutoffs to keep a site.
Value
RangedSummarizedExperiment for use with edgeR or DESeq2. Contains a counts assay with a matrix formatted with 2 columns per sample (ref and alt counts).
Examples
```r
library(SummarizedExperiment)
rse_adar_ifn <- mock_rse()
rse <- calc_edit_frequency(rse_adar_ifn)
dse <- make_de_object(rse, min_samples = 1)
assay(dse, "counts")[1:5, ]
dse
```
mock_rse
Generate a small RangedSummarizedExperiment object for tests and examples
Description
A RangedSummarizedExperiment containing a subset of data from an RNA-seq experiment to measure the effects of IFN treatment of cell lines with wild-type or ADAR1-KO.
Usage
mock_rse()
Value
RangedSummarizedExperiment populated with pileup data
Examples
mock_rse()
pileup_cells
Generate base counts per cell
Description
This function processes scRNA-seq library to enumerate base counts for Reference (unedited) or Alternate (edited) bases at specified sites in single cells. pileup_cells can process droplet scRNA-seq libraries, from a BAM file containing a cell-barcode and UMI, or well-based libraries that do not contain cell-barcodes.
The sites parameter specifies sites to quantify. This must be a GRanges object with 1 base intervals, a strand (+ or -), and supplemented with metadata columns named REF and ALT containing the reference and alternate base to query. See examples for the required format.
At each site, bases from overlapping reads will be examined, and counts of each ref and alt base enumerated for each cell-barcode present. A single base will be counted once for each UMI sequence present in each cell.
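The UMI collapsing described above can be sketched with a toy table of reads (hypothetical barcodes and UMIs):

```r
# Minimal sketch of UMI collapsing: identical (cell barcode, UMI) pairs
# contribute a single count for a base at a site. Reads are made up.
reads <- data.frame(
    cb   = c("CB1", "CB1", "CB1", "CB2"),
    umi  = c("U1", "U1", "U2", "U1"),
    base = c("G", "G", "A", "G")
)
dedup <- unique(reads) # duplicate (cb, umi, base) rows collapse to one
table(dedup$cb, dedup$base)
```

Here CB1's two U1 reads count once, so CB1 tallies one G and one A rather than two Gs and one A.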
Usage
```r
pileup_cells(
  bamfiles,
  sites,
  cell_barcodes,
  output_directory,
  chroms = NULL,
  umi_tag = "UB",
  cb_tag = "CB",
  param = FilterParam(),
  BPPARAM = SerialParam(),
  return_sce = TRUE,
  verbose = FALSE
)
```
Arguments
- **bamfiles**: a path to a BAM file (for droplet scRNA-seq), or a vector of paths to BAM files (Smart-seq2). Can be supplied as a character vector, BamFile, or BamFileList.
- **sites**: a GRanges object containing sites to process. See examples for valid formatting.
- **cell_barcodes**: A character vector of single cell barcodes to process. If processing multiple BAM files (e.g. Smart-seq2), provide a character vector of unique identifiers for each input BAM, to name each BAM file in the output files.
- **output_directory**: Output directory for output matrix files. The directory will be generated if it doesn’t exist.
- **chroms**: A character vector of chromosomes to process. If supplied, only sites present in the listed chromosomes will be processed.
- **umi_tag**: tag in BAM containing the UMI sequence
- **cb_tag**: tag in BAM containing the cell-barcode sequence
- **param**: object of class FilterParam() which specifies various filters to apply to reads and sites during pileup. Note that the min_depth and min_variant_reads parameters, if set > 0, specify the number of reads from any cell required in order to report a site. E.g. if min_variant_reads is set to 2, then at least 2 reads (from any cell) must have a variant in order to report the site. Setting min_depth and min_variant_reads to 0 reports all sites present in the sites object. The following options are not enabled for pileup_cells(): max_mismatch_type, homopolymer_len, and min_allelic_freq.
- **BPPARAM**: BiocParallel instance. Parallel computation occurs across chromosomes.
- **return_sce**: if TRUE, data is returned as a SingleCellExperiment; if FALSE, a character vector of the output files, specified by outfile_prefix, will be returned.
- **verbose**: Display messages
Value

Returns either a SingleCellExperiment or character vector of paths to the sparseMatrix files produced. The SingleCellExperiment object is populated with two assays, nRef and nAlt, which represent base counts for the reference and alternate alleles. The rowRanges() will contain the genomic interval for each site, along with REF and ALT columns. The rownames will be populated with the format site_[seqnames]_[position(1-based)]_[strand]_[allele], with strand being encoded as 1 = +, 2 = -, and 3 = *, and allele being REF + ALT.
If return_sce is FALSE then a character vector of paths to the sparseMatrix files (barcodes.txt.gz, sites.txt.gz, counts.mtx.gz), will be returned. These files can be imported using read_sparray().
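The rowname encoding described above can be illustrated directly (site values are hypothetical):

```r
# Sketch of the rowname encoding: strand is encoded as 1 = +, 2 = -,
# and 3 = *; the allele is REF followed by ALT. Values are made up.
strand_code <- c("+" = 1L, "-" = 2L, "*" = 3L)
site_id <- sprintf("site_%s_%d_%d_%s", "2", 579L, strand_code[["-"]], "AG")
site_id
```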
See Also
Other pileup: pileup_sites()
Examples
```r
library(Rsamtools)
library(GenomicRanges)
bam_fn <- raer_example("5k_neuron_mouse_possort.bam")
gr <- GRanges(c("2:579:-", "2:625:-", "2:645:-", "2:589:-", "2:601:-"))
gr$REF <- c(rep("A", 4), "T")
gr$ALT <- c(rep("G", 4), "C")
cbs <- unique(scanBam(bam_fn, param = ScanBamParam(tag = "CB"))[[1]]$tag$CB)
cbs <- na.omit(cbs)
outdir <- tempdir()
bai <- indexBam(bam_fn)
fp <- FilterParam(library_type = "fr-second-strand")
sce <- pileup_cells(bam_fn, gr, cbs, outdir, param = fp)
sce

# example of processing multiple Smart-seq2 style libraries
many_small_bams <- rep(bam_fn, 10)
bam_ids <- LETTERS[1:10]
# for unstranded libraries, sites and alleles should be provided on + strand
gr$REF <- c(rep("T", 4), "A")
gr$ALT <- c(rep("C", 4), "G")
fp <- FilterParam(
    library_type = "unstranded",
    remove_overlaps = TRUE
)
sce <- pileup_cells(many_small_bams,
    sites = gr,
    cell_barcodes = bam_ids,
    output_directory = outdir,
    umi_tag = NULL,
    cb_tag = NULL,
    param = fp
)
sce
```

---

pileup_sites

Generate base counts per site
**Description**
This function uses a pileup routine to examine numeric base counts from alignments at specified sites, regions, or across all read alignments, from one or more BAM files. Alignment and site filtering options are controlled by the `FilterParam` class. A `RangedSummarizedExperiment` object is returned, populated with base count statistics for each supplied BAM file.
**Usage**
```r
pileup_sites(
bamfiles, # Required
fasta, # Required
sites = NULL, # Required
region = NULL, # Optional
chroms = NULL, # Optional
param = FilterParam(), # Optional
BPPARAM = SerialParam(), # Optional
umi_tag = NULL, # Optional
verbose = FALSE # Optional
)
```
```r
FilterParam(
max_depth = 10000, # Optional
min_depth = 1L, # Optional
min_base_quality = 20L, # Optional
min_mapq = 0L, # Optional
library_type = "fr-first-strand", # Optional
bam_flags = NULL, # Optional
only_keep_variants = FALSE, # Optional
trim_5p = 0L, # Optional
trim_3p = 0L, # Optional
ftrim_5p = 0, # Optional
ftrim_3p = 0, # Optional
  indel_dist = 0L, # Optional
  splice_dist = 0L, # Optional
  min_splice_overhang = 0L, # Optional
  homopolymer_len = 0L, # Optional
  max_mismatch_type = c(0L, 0L), # Optional
  read_bqual = c(0, 0), # Optional
  min_variant_reads = 0L, # Optional
  min_allelic_freq = 0, # Optional
  report_multiallelic = TRUE, # Optional
  remove_overlaps = TRUE # Optional
)
```
Arguments
bamfiles a character vector, BamFile or BamFileList indicating 1 or more BAM files to process. If named, the names will be included in the colData of the RangedSummarizedExperiment as a sample column, otherwise the names will be taken from the basename of the BAM file.
fasta path to genome fasta file used for read alignment. Can be provided in compressed gzip or bgzip format.
sites a GRanges object containing regions or sites to process.
region samtools region query string (i.e. chr1:100-1000). Can be combined with sites, in which case sites will be filtered to keep only sites within the region.
chroms chromosomes to process, provided as a character vector. Not to be used with the region parameter.
param object of class FilterParam() which specify various filters to apply to reads and sites during pileup.
BPPARAM A BiocParallel class to control parallel execution. Parallel processing occurs per chromosome and is disabled when run on a single region.
umi_tag The BAM tag containing a UMI sequence. If supplied, multiple reads with the same UMI sequence will only be counted once per position.
verbose if TRUE, then report progress and warnings.
max_depth maximum read depth considered at each site
min_depth min read depth needed to report site
min_base_quality min base quality score to consider read for pileup
min_mapq minimum required MAPQ score. Values for each input BAM file can be provided as a vector.
library_type read orientation, one of fr-first-strand, fr-second-strand, and unstranded. Unstranded library type will be reported with variants w.r.t the + strand. Values for each input BAM file can be provided as a vector.
bam_flags bam flags to filter or keep, use Rsamtools::scanBamFlag() to generate.
only_keep_variants if TRUE, then only variant sites will be reported (FALSE by default). Values for each input BAM file can be provided as a vector.
trim_5p Bases to trim from 5’ end of read alignments
trim_3p Bases to trim from 3’ end of read alignments
ftrim_5p Fraction of bases to trim from 5’ end of read alignments
ftrim_3p Fraction of bases to trim from 3’ end of read alignments
indel_dist Exclude read if site occurs within given distance from indel event in the read
splice_dist Exclude read if site occurs within given distance from splicing event in the read
min_splice_overhang Exclude read if site is located adjacent to splice site with an overhang less than given length.
homopolymer_len Exclude site if occurs within homopolymer of given length
max_mismatch_type Exclude read if it has X different mismatch types (e.g. A-to-G, G-to-C, C-to-G is 3 mismatch types) or Y # of mismatches; must be supplied as an integer vector of length 2, e.g. c(X, Y).
read_bqual Exclude read if more than X percent of the bases have base qualities less than Y. Numeric vector of length 2. e.g. c(0.25, 20)
min_variant_reads Required number of reads containing a variant for a site to be reported. Calculated per bam file, such that if 1 bam file has >= min_variant_reads, then the site will be reported.
min_allelic_freq minimum allelic frequency required for a variant to be reported in ALT assay.
report_multiallelic if TRUE, report sites with multiple variants passing filters. If FALSE, site will not be reported.
remove_overlaps if TRUE, enable read pair overlap detection, which will count only 1 read in regions where read pairs overlap using the htslib algorithm. In brief for each overlapping base pair the base quality of the base with the lower quality is set to 0, which discards it from being counted.
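The read_bqual filter, for example, can be sketched with toy qualities: with read_bqual = c(0.25, 20), a read is dropped when more than 25% of its base qualities fall below 20.

```r
# Sketch of the read_bqual filter with read_bqual = c(0.25, 20).
# Per-base qualities below are hypothetical.
quals <- c(37, 12, 40, 15, 11, 38)
exclude <- mean(quals < 20) > 0.25
exclude # TRUE here: half the bases are below quality 20
```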
Value
A RangedSummarizedExperiment object populated with multiple assays:
- ALT: Alternate base(s) found at each position
- nRef: # of reads supporting the reference base
- nAlt: # of reads supporting an alternate base
- nA: # of reads with A
- nT: # of reads with T
- nC: # of reads with C
- nG: # of reads with G
The `rowRanges()` contains the genomic interval for each site, along with:
- **REF**: The reference base
- **rpbz**: Mann-Whitney U test of Read Position Bias from bcftools, extreme negative or positive values indicate more bias.
- **vdb**: Variant Distance Bias for filtering splice-site artifacts from bcftools, lower values indicate more bias.
- **sor**: Strand Odds Ratio Score, strand bias estimated by the Symmetric Odds Ratio test, based on GATK code. Higher values indicate more bias.
The rownames will be populated with the format `site_[seqnames]_[position(1-based)]_[strand]`, with `strand` being encoded as 1 = +, 2 = -, and 3 = *.
**See Also**
Other pileup: `pileup_cells()`
**Examples**
```r
library(SummarizedExperiment)
bamfn <- raer_example("SRR5564269_Aligned.sortedByCoord.out.md.bam")
bam2fn <- raer_example("SRR5564277_Aligned.sortedByCoord.out.md.bam")
fafn <- raer_example("human.fasta")
rse <- pileup_sites(bamfn, fafn)
fp <- FilterParam(only_keep_variants = TRUE, min_depth = 55)
pileup_sites(bamfn, fafn, param = fp)
# using multiple bam files
bams <- rep(c(bamfn, bam2fn), each = 3)
sample_ids <- paste0(rep(c("KO", "WT"), each = 3), 1:3)
names(bams) <- sample_ids
fp <- FilterParam(only_keep_variants = TRUE)
rse <- pileup_sites(bams, fafn, param = fp)
rse
rse$condition <- substr(rse$sample, 1, 2)
assays(rse)
colData(rse)
rowRanges(rse)
# specifying regions to query using GRanges object
sites <- rowRanges(rse)
rse <- pileup_sites(bams, fafn, sites = sites)
```
```r
rse <- pileup_sites(bams, fafn, chroms = c("SPCS3", "DHFR"))
rse <- pileup_sites(bams, fafn, region = "DHFR:100-101")
rse
```
---
### raer
**raer: RNA editing tools in R**
**Description**
Toolkit for identification and statistical testing of RNA editing signals from within R. Provides support for identifying sites from bulk-RNA and single cell RNA-seq datasets, and general methods for extraction of allelic read counts from alignment files. Facilitates annotation and exploratory analysis of editing signals using Bioconductor packages and resources.
Author(s)
Maintainer: Kent Riemondy <kent.riemondy@gmail.com> (ORCID)
Authors:
- Kristen Wells-Wrasman <kristen.wells-wrasman@cuanschutz.edu> (ORCID)
Other contributors:
- Ryan Sheridan <ryan.sheridan@cuanschutz.edu> (ORCID) [contributor]
- Jay Hesselberth <jay.hesselberth@gmail.com> (ORCID) [contributor]
- RNA Bioscience Initiative [copyright holder, funder]
See Also
Useful links:
- https://rnabioco.github.io/raer
- https://github.com/rnabioco/raer
- Report bugs at https://github.com/rnabioco/raer/issues
### raer_example
**Provide working directory for raer example files.**
**Description**
Provide working directory for raer example files.
**Usage**
```r
raer_example(path)
```
**Arguments**
- `path`
path to file
**Value**
Character vector with the path to an internal package file.
**Examples**
```r
raer_example("human.fasta")
```
### read_sparray
**Read sparseMatrix produced by pileup_cells()**
**Description**
Read in tables produced by `pileup_cells()` which are an extension of the matrixMarket sparse matrix format to store values for more than 1 matrix.
The `.mtx.gz` files are formatted with columns:
1. row index (0 based)
2. column index (0 based)
3. values for sparseMatrix #1 (nRef)
4. values for sparseMatrix #2 (nAlt)
**Usage**
```r
read_sparray(mtx_fn, sites_fn, bc_fn, site_format = c("coordinate", "index"))
```
**Arguments**
- `mtx_fn`
.mtx.gz file path
- `sites_fn`
sites.txt.gz file path
- `bc_fn`
bcs.txt.gz file path
- `site_format`
one of `coordinate` or `index`; `coordinate` will populate a SingleCellExperiment with rowRanges and rownames corresponding to genomic intervals, whereas `index` will only add row indices to the rownames.
**Value**
a SingleCellExperiment object populated with nRef and nAlt assays.
**Examples**
```r
library(Rsamtools)
library(GenomicRanges)
bam_fn <- raer_example("5k_neuron_mouse_possort.bam")
gr <- GRanges(c("2:579:--", "2:625:--", "2:645:--", "2:589:--", "2:601:--"))
gr$REF <- c(rep("A", 4), "T")
gr$ALT <- c(rep("G", 4), "C")
cbs <- unique(scanBam(bam_fn, param = ScanBamParam(tag = "CB"))[[1]]$tag$CB)
cbs <- na.omit(cbs)
outdir <- tempdir()
bai <- indexBam(bam_fn)
fp <- FilterParam(library_type = "fr-second-strand")
mtx_fns <- pileup_cells(bam_fn, gr, cbs, outdir, return_sce = FALSE)
sce <- read_sparray(mtx_fns[1], mtx_fns[2], mtx_fns[3])
sce
unlink(bai)
```
---
Programming Languages and Compilers (CS 421)
Sasa Misailovic
4110 SC, UIUC
https://courses.engr.illinois.edu/cs421/fa2017/CS421A
Based in part on slides by Mattox Beckman, as updated by Vikram Adve, Gul Agha, and Elsa L Gunter
Booleans (aka Truth Values)
# true;;
- : bool = true
# false;;
- : bool = false
// ρ₇ = {c → 4, test → 3.7, a → 1, b → 5}
# if b > a then 25 else 0;;
- : int = 25
Booleans and Short-Circuit Evaluation
# 3 > 1 && 4 > 6;;
- : bool = false
# 3 > 1 || 4 > 6;;
- : bool = true
# not (4 > 6);;
- : bool = true
# (print_string "Hi\n"; 3 > 1) || 4 > 6;;
Hi
- : bool = true
# 3 > 1 || (print_string "Bye\n"; 4 > 6);;
- : bool = true
Tuples as Values
// ρ₀ = {c → 4, a → 1, b → 5}
# let s = (5,"hi",3.2);;
val s : int * string * float = (5, "hi", 3.2)
// ρ = {s → (5, "hi", 3.2), c → 4, a → 1, b → 5}
Pattern Matching with Tuples
// ρ = {s → (5, "hi", 3.2), a → 1, b → 5, c → 4}
# let (a,b,c) = s;; (* (a,b,c) is a pattern *)
val a : int = 5
val b : string = "hi"
val c : float = 3.2
# let (a, _, _) = s;;
val a : int = 5
# let x = 2, 9.3;; (* tuples don't require parens in Ocaml *)
val x : int * float = (2, 9.3)
Nested Tuples
# (*Tuples can be nested *)
# let d = ((1,4,62),("bye",15),73.95);;
val d : (int * int * int) * (string * int) * float =
((1, 4, 62), ("bye", 15), 73.95)
# (*Patterns can be nested *)
# let (p, (st,_), _) = d;;
(* _ matches all, binds nothing *)
val p : int * int * int = (1, 4, 62)
val st : string = "bye"
Functions on tuples
```ocaml
# let plus_pair (n,m) = n + m;;
val plus_pair : int * int -> int = <fun>
# plus_pair (3,4);;
- : int = 7
# let twice x = (x,x);;
val twice : 'a -> 'a * 'a = <fun>
# twice 3;;
- : int * int = (3, 3)
# twice "hi";;
- : string * string = ("hi", "hi")
```
Save the Environment!
- A **closure** is a pair of an environment and an association of a sequence of variables (the input variables) with an expression (the function body), written:
\[
\langle (v_1, \ldots, v_n) \rightarrow \text{exp}, \rho \rangle
\]
- Where \( \rho \) is the environment in effect when the function is defined (for a simple function)
Closure for `plus_pair`
- Assume $\rho_{plus\_pair}$ was the environment just before `plus_pair` defined and recall
- let `plus_pair (n,m) = n + m;;`
- Closure for `fun (n,m) -> n + m:`
\[
\langle (n,m) \rightarrow n + m, \rho_{plus\_pair} \rangle
\]
- Environment just after `plus_pair` defined:
\[
\{plus\_pair \rightarrow \langle (n,m) \rightarrow n + m, \rho_{plus\_pair} \rangle\} + \rho_{plus\_pair}
\]
Functions with more than one argument
# let add_three x y z = x + y + z;;
val add_three : int -> int -> int -> int = <fun>
# let t = add_three 6 3 2;;
val t : int = 11
# let add_three =
fun x -> (fun y -> (fun z -> x + y + z));;
val add_three : int -> int -> int -> int = <fun>
Again, first syntactic sugar for second
Curried vs Uncurried
- **Recall**
```ocaml
# let add_three u v w = u + v + w;;
val add_three : int -> int -> int -> int = <fun>
```
- **How does it differ from**
```ocaml
# let add_triple (u,v,w) = u + v + w;;
val add_triple : int * int * int -> int = <fun>
```
- **add_three is** **curried**;
- **add_triple is** **uncurried**
Curried vs Uncurried
# add_three 6 3 2;;
- : int = 11
# add_triple (6,3,2);;
- : int = 11
# add_triple 5 4;;
Characters 0-10:
  add_triple 5 4;;
  ^^^^^^^^^^
This function is applied to too many arguments, maybe you forgot a `;`
# fun x -> add_triple (5,4,x);;
- : int -> int = <fun>
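The two forms are interconvertible in general; a small sketch (the names `curry3` and `uncurry3` are ours, not from the slides):

```ocaml
(* A sketch: converting between curried and uncurried three-argument
   forms. curry3/uncurry3 are illustrative names, not from the lecture. *)
let curry3 f x y z = f (x, y, z)
let uncurry3 f (x, y, z) = f x y z

let add_three x y z = x + y + z
let add_triple (u, v, w) = u + v + w

let r1 = curry3 add_triple 6 3 2        (* 11 *)
let r2 = uncurry3 add_three (6, 3, 2)   (* 11 *)
```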
Partial application of functions
let add_three x y z = x + y + z;;
# let h = add_three 5 4;;
val h : int -> int = <fun>
# h 3;;
- : int = 12
# h 7;;
- : int = 16
Partial application also called sectioning
Recall:
let x = 12
let plus_x = fun y -> y + x
let x = 7
Closure for plus_x
- When plus_x was defined, had environment:
\[ \rho_{\text{plus}_x} = \{\ldots, x \to 12, \ldots\} \]
- Recall: \texttt{let plus}_x y = y + x
is really \texttt{let plus}_x = \texttt{fun} y \to y + x
- Closure for \texttt{fun} y \to y + x:
\[ <y \to y + x, \rho_{\text{plus}_x}> \]
- Environment just after plus_x defined:
\[ \{\text{plus}_x \to <y \to y + x, \rho_{\text{plus}_x}>\} + \rho_{\text{plus}_x} \]
Evaluation
Running OCaml source:
- Parse the program to detect each expression
- Keep an internal environment at each time step
- For each expression, interpret the program using the function Eval
- Nice property of Ocaml: everything is a declaration or an expression!
How does Eval (expression, environment) work:
- Evaluation uses a starting environment $\rho$
- Define the rules for evaluating declarations, constants, arithmetic expressions, function applications…
Evaluating Declarations
- Evaluation uses a starting environment $\rho$
- To evaluate a (simple) declaration $\text{let } x = e$
- **Evaluate** expression $e$ in $\rho$ to value $v$
- **Update** $\rho$ with the mapping from $x$ to $v$: $\{x \rightarrow v\} + \rho$
**Definition of $+$ on environments!**
- **Update**: $\rho_1 + \rho_2$ has all the bindings in $\rho_1$ and all those in $\rho_2$ that are not rebound in $\rho_1$
$\{x \rightarrow 2, \ y \rightarrow 3, \ a \rightarrow "hi"\} + \{y \rightarrow 100, \ b \rightarrow 6\} = \{x \rightarrow 2, \ y \rightarrow 3, \ a \rightarrow "hi", \ b \rightarrow 6\}$
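A small sketch of this update operation, modeling environments as association lists (our own representation, not the course's code):

```ocaml
(* A sketch: environments as association lists, and the + (update)
   operation from the definition above. *)
type value = Int of int | Str of string
type env = (string * value) list

(* update rho1 rho2: all bindings in rho1, plus those bindings of rho2
   whose names are not rebound in rho1 *)
let update (rho1 : env) (rho2 : env) : env =
  rho1 @ List.filter (fun (x, _) -> not (List.mem_assoc x rho1)) rho2

(* The example above: y -> 100 is dropped (rebound), b -> 6 is kept. *)
let rho =
  update
    [ ("x", Int 2); ("y", Int 3); ("a", Str "hi") ]
    [ ("y", Int 100); ("b", Int 6) ]
```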
9/10/2017
Evaluating Declarations
- Evaluation uses a starting environment $\rho$.
- To evaluate a (simple) declaration $\text{let } x = e$
- **Evaluate** expression $e$ in $\rho$ to value $v$
- **Update** $\rho$ with the mapping from $x$ to $v$: $\{x \rightarrow v\} + \rho$
Warm-up: we evaluate this case:
\[\rho = \{ x \rightarrow 2 \}\]
\[
\text{let } y = 2*x+1;;
\]
\[\rho' = \{ x \rightarrow 2; \ y \rightarrow 5 \}\]
Evaluating Expressions
- Evaluation uses an environment $\rho$
- A constant evaluates to itself
- To evaluate a variable, look it up in $\rho$, i.e., use $\rho(v)$
- To evaluate uses of $+$, $-$, etc, first eval the arguments, then do operation
- To evaluate a local declaration: $\text{let } x = e_1 \text{ in } e_2$
- Evaluate $e_1$ to $v$, evaluate $e_2$ using $\{x \rightarrow v\} + \rho$
- Function application $(f \, x)$ -- see next slide
Evaluation of Application with Closures
- **Function** defined as: \( \text{let } f(x_1, \ldots, x_n) = \text{body} \)
- **Function application**: \( f(e_1, \ldots, e_n) \)
- Evaluation uses the function \( \text{App(Closure, Value)} \):
- In environment \( \rho \), evaluate the left term to a closure \( c = \langle (x_1, \ldots, x_n) \rightarrow \text{body}, \rho \rangle \)
- Evaluate the arguments in the application \( e_1 \ldots e_n \) to their values \( v_1, \ldots, v_n \) in the environment \( \rho \)
- Update the environment \( \rho \) to
\[ \rho' = \{ x_1 \rightarrow v_1, \ldots, x_n \rightarrow v_n \} + \rho \]
- **Evaluate** the function body (\( \text{body} \)) in environment \( \rho' \)
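These rules can be sketched as a tiny interpreter (our own simplified one-argument version, not the course's code):

```ocaml
(* A sketch of Eval/App for a tiny language with closures. *)
type exp =
  | Const of int
  | Var of string
  | Plus of exp * exp
  | Fun of string * exp   (* fun x -> body *)
  | App of exp * exp      (* function application f e *)

type value =
  | IntV of int
  | Closure of string * exp * env   (* <x -> body, rho> *)
and env = (string * value) list

let rec eval (e : exp) (rho : env) : value =
  match e with
  | Const n -> IntV n
  | Var x -> List.assoc x rho                 (* look up in rho *)
  | Plus (e1, e2) ->
      (match eval e1 rho, eval e2 rho with
       | IntV a, IntV b -> IntV (a + b)
       | _ -> failwith "type error")
  | Fun (x, body) -> Closure (x, body, rho)   (* capture rho at definition *)
  | App (f, arg) ->
      (match eval f rho with
       | Closure (x, body, rho') ->
           (* evaluate body in {x -> v} + the defining environment *)
           eval body ((x, eval arg rho) :: rho')
       | _ -> failwith "not a function")

(* plus_x applied to 3 where x -> 12, as on the following slide *)
let v = eval (App (Fun ("y", Plus (Var "y", Var "x")), Const 3))
             [ ("x", IntV 12) ]   (* IntV 15 *)
```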
Evaluation of Application of plus_x;;
- Have environment:
\[ \rho = \{ \text{plus}_x \rightarrow \langle y \rightarrow y + x, \rho_{\text{plus}_x} \rangle, \ldots, y \rightarrow 3, \ldots \} \]
where \( \rho_{\text{plus}_x} = \{ x \rightarrow 12, \ldots, y \rightarrow 24, \ldots \} \)
- \( \text{Eval}(plus\_x\ y,\ \rho) \) rewrites to
- \( \text{App}(\text{Eval}(plus\_x,\ \rho),\ \text{Eval}(y,\ \rho)) \) rewrites to
- \( \text{App}(\langle y \rightarrow y + x,\ \rho_{plus\_x} \rangle,\ 3) \) rewrites to
- \( \text{Eval}(y + x,\ \{ y \rightarrow 3 \} + \rho_{plus\_x}) \) rewrites to
- \( \text{Eval}(3 + 12,\ \rho_{plus\_x}) = 15 \)
Evaluation of Application of `plus_pair`
- **Assume environment**
\[ \rho = \{ x \rightarrow 3, \ldots,\ plus\_pair \rightarrow \langle (n,m) \rightarrow n + m,\ \rho_{plus\_pair} \rangle \} + \rho_{plus\_pair} \]
- **Eval** \( (plus\_pair\ (4, x),\ \rho) \) =
- **App** \( (\text{Eval}(plus\_pair,\ \rho),\ \text{Eval}((4, x),\ \rho)) \) =
- **App** \( (\langle (n,m) \rightarrow n + m,\ \rho_{plus\_pair} \rangle,\ (4, 3)) \) =
- **Eval** \( (n + m,\ \{ n \rightarrow 4,\ m \rightarrow 3 \} + \rho_{plus\_pair}) \) =
- **Eval** \( (4 + 3,\ \{ n \rightarrow 4,\ m \rightarrow 3 \} + \rho_{plus\_pair}) = 7 \)
If we start in an empty environment, and we execute:
```ocaml
let f = fun n -> n + 5;;
(* 0 *)
let pair_map g (n,m) = (g n, g m);;
let f = pair_map f;;
let a = f (4,6);;
```
What is the environment at (* 0 *)?
Answer
\[ \rho_{\text{start}} = \{ \} \]
let f = fun n -> n + 5;;
\[ \rho_0 = \{ f \rightarrow \langle n \rightarrow n + 5, \{ \} \rangle \} \]
Closure question
If we start in an empty environment, and we execute:
```ocaml
let f = fun n -> n + 5;;
let pair_map g (n,m) = (g n, g m);;
(* 1 *)
let f = pair_map f;;
let a = f (4,6);;
```
What is the environment at (* 1 *)?
Answer
\[ \rho_0 = \{ f \rightarrow \langle n \rightarrow n + 5, \{ \} \rangle \} \]
let pair_map g (n,m) = (g n, g m);;
\[ \rho_1 = \{ f \rightarrow \langle n \rightarrow n + 5, \{ \} \rangle,\ pair\_map \rightarrow \langle g \rightarrow (\text{fun } (n,m) \rightarrow (g\ n,\ g\ m)),\ \{ f \rightarrow \langle n \rightarrow n + 5, \{ \} \rangle \} \rangle \} \]
If we start in an empty environment, and we execute:
```ml
let f = fun n -> n + 5;;
let pair_map g (n,m) = (g n, g m);;
let f = pair_map f;;
(* 2 *)
let a = f (4,6);;
```
What is the environment at (* 2 *)?
Evaluate `pair_map f`
\[ \rho_0 = \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \]
\[ \rho_1 = \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ pair\_map \mapsto \langle g \mapsto (\text{fun } (n,m) \rightarrow (g\ n,\ g\ m)),\ \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle \} \]
let f = pair_map f;;
Evaluate `pair_map f`
\[ \rho_0 = \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \]
\[ \rho_1 = \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ pair\_map \mapsto \langle g \mapsto (\text{fun } (n,m) \rightarrow (g\ n,\ g\ m)),\ \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle \} \]
let f = pair_map f;;
\[ \text{Eval}(pair\_map\ f,\ \rho_1) = \]
Evaluate `pair_map f`
\[ \rho_0 = \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \]
\[ \rho_1 = \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ pair\_map \mapsto \langle g \mapsto (\text{fun } (n,m) \rightarrow (g\ n,\ g\ m)),\ \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle \} \]
let f = pair_map f;;
\[ \text{Eval}(pair\_map\ f,\ \rho_1) = \]
\[ \text{App}(\langle g \mapsto \text{fun } (n,m) \rightarrow (g\ n,\ g\ m),\ \rho_0 \rangle,\ \langle n \mapsto n + 5, \{ \} \rangle) = \]
Evaluate `pair_map f`
\[ \rho_0 = \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \]
\[ \rho_1 = \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ pair\_map \mapsto \langle g \mapsto (\text{fun } (n,m) \rightarrow (g\ n,\ g\ m)),\ \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle \} \]
let f = pair_map f;;
\[ \text{Eval}(pair\_map\ f,\ \rho_1) = \text{App}(\langle g \mapsto \text{fun } (n,m) \rightarrow (g\ n,\ g\ m),\ \rho_0 \rangle,\ \langle n \mapsto n + 5, \{ \} \rangle) = \]
\[ \text{Eval}(\text{fun } (n,m) \rightarrow (g\ n,\ g\ m),\ \{ g \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} + \rho_0) = \]
\[ \langle (n,m) \rightarrow (g\ n,\ g\ m),\ \{ g \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} + \rho_0 \rangle = \]
\[ \langle (n,m) \rightarrow (g\ n,\ g\ m),\ \{ g \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle \]
\[ \rho_0 = \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \]
\[ \rho_1 = \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ pair\_map \mapsto \langle g \mapsto (\text{fun } (n,m) \rightarrow (g\ n,\ g\ m)),\ \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle \} \]
let f = pair_map f;;
\[ \rho_2 = \{ f \mapsto \langle (n,m) \rightarrow (g\ n,\ g\ m),\ \{ g \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle,\ pair\_map \mapsto \langle g \mapsto \text{fun } (n,m) \rightarrow (g\ n,\ g\ m),\ \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle \} \]
Closure question
If we start in an empty environment, and we execute:
```ml
let f = fun n -> n + 5;;
let pair_map g (n,m) = (g n, g m);;
let f = pair_map f;;
let a = f (4,6);;
(* 3 *)
```
What is the environment at (* 3 *)?
\[ \rho_2 = \{ f \mapsto \langle (n,m) \rightarrow (g\ n,\ g\ m),\ \{ g \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle,\ pair\_map \mapsto \langle g \mapsto \text{fun } (n,m) \rightarrow (g\ n,\ g\ m),\ \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle \} \]
let a = f (4,6);;
Evaluate \( f(4,6) \);;
\[ \rho_2 = \{ f \mapsto \langle (n,m) \rightarrow (g\ n,\ g\ m),\ \{ g \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle,\ pair\_map \mapsto \langle g \mapsto \text{fun } (n,m) \rightarrow (g\ n,\ g\ m),\ \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle \} \]
let a = f (4,6);;
\[ \text{Eval}(f\ (4,6),\ \rho_2) = \]
Evaluate $f(4,6);$
\[ \rho_2 = \{ f \mapsto \langle (n,m) \rightarrow (g\ n,\ g\ m),\ \{ g \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle,\ pair\_map \mapsto \langle g \mapsto \text{fun } (n,m) \rightarrow (g\ n,\ g\ m),\ \{ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle \} \]
let a = f (4,6);;
\[ \text{Eval}(f\ (4,6),\ \rho_2) = \text{App}(\langle (n,m) \rightarrow (g\ n,\ g\ m),\ \{ g \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle,\ (4,6)) = \]
Evaluate $f(4,6)$;
\[ \text{App}(\langle (n,m) \rightarrow (g\ n,\ g\ m),\ \{ g \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \} \rangle,\ (4,6)) = \]
\[ \text{Eval}((g\ n,\ g\ m),\ \{ n \mapsto 4,\ m \mapsto 6 \} + \{ g \mapsto \langle n \mapsto n + 5, \{ \} \rangle,\ f \mapsto \langle n \mapsto n + 5, \{ \} \rangle \}) = \]
\[ (\text{App}(\langle n \mapsto n + 5, \{ \} \rangle,\ 4),\ \text{App}(\langle n \mapsto n + 5, \{ \} \rangle,\ 6)) = \]
Evaluate $f(4, 6)$;
\[ (\text{App}(\langle n \mapsto n + 5, \{ \} \rangle,\ 4),\ \text{App}(\langle n \mapsto n + 5, \{ \} \rangle,\ 6)) = \]
\[ (\text{Eval}(n + 5,\ \{ n \mapsto 4 \} + \{ \}),\ \text{Eval}(n + 5,\ \{ n \mapsto 6 \} + \{ \})) = \]
\[ (\text{Eval}(4 + 5,\ \{ n \mapsto 4 \} + \{ \}),\ \text{Eval}(6 + 5,\ \{ n \mapsto 6 \} + \{ \})) = (9, 11) \]
Functions as arguments
# let thrice f x = f (f (f x));;
val thrice : ('a -> 'a) -> 'a -> 'a = <fun>
# let g = thrice plus_two;; (* plus_two x is x+2 *)
val g : int -> int = <fun>
# g 4;;
- : int = 10
# thrice (fun s -> "Hi! " ^ s) "Good-bye!";;
- : string = "Hi! Hi! Hi! Good-bye!"
Higher Order Functions
- A function is *higher-order* if it takes a function as an argument or returns one as a result.
**Example:**
```ocaml
# let compose f g = fun x -> f (g x);;
val compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b = <fun>
```
- The type `('a -> 'b) -> ('c -> 'a) -> 'c -> 'b` is a higher-order type because of the function arguments `('a -> 'b)` and `('c -> 'a)`
Thrice
- **Recall:**
```ocaml
# let thrice f x = f (f (f x));;
val thrice : ('a -> 'a) -> 'a -> 'a = <fun>
```
- **How do you write thrice with compose?**
```ocaml
# let thrice f = compose f (compose f f);;
val thrice : ('a -> 'a) -> 'a -> 'a = <fun>
```
Lambda Lifting
- You must remember the rules for evaluation when you use partial application
```ocaml
# let add_two = (+) (print_string "test\n"; 2);;
val add_two : int -> int = <fun>
# let add2 = (* lambda lifted *)
fun x -> (+) (print_string "test\n"; 2) x;;
val add2 : int -> int = <fun>
```
Lambda Lifting
```ocaml
# thrice add_two 5;;
- : int = 11
# thrice add2 5;;
test
test
test
- : int = 11
```
- Lambda lifting delayed the evaluation of the argument to (+) until the second argument was supplied
Match Expressions
# let triple_to_pair triple =
match triple with
(0, x, y) -> (x, y)
| (x, 0, y) -> (x, y)
| (x, y, _) -> (x, y);;
val triple_to_pair : int * int * int -> int * int
= <fun>
Recursive Functions
```ml
# let rec factorial n =
if n = 0 then 1
else n * factorial (n - 1);;
val factorial : int -> int = <fun>
# factorial 5;;
- : int = 120
# (* rec is needed for recursive function declarations *)
```
Recursion Example
Compute $n^2$ recursively using:
\[ n^2 = (2 \times n - 1) + (n - 1)^2 \]
```ml
# let rec nthsq n =        (* rec for recursion *)
  match n with             (* pattern matching for cases *)
  | 0 -> 0                 (* base case *)
  | n -> (2 * n - 1)       (* recursive case *)
         + nthsq (n - 1);; (* recursive call *)
val nthsq : int -> int = <fun>
# nthsq 3;;
- : int = 9
```
Structure of recursion similar to inductive proof
Recursion and Induction
# let rec nthsq n =
match n with
0 -> 0 (*Base case!*)
| n -> (2 * n - 1) + nthsq (n - 1) ;;
- Base case is the last case; it stops the computation
- Recursive call must be to arguments that are somehow smaller - must progress to base case
- **if or match must contain base case (!!!)**
- Failure of selecting base case **will cause non-termination**
- But the program will crash because it exhausts the stack!
Lists
- First example of a recursive datatype (aka algebraic datatype)
- Unlike tuples, lists are homogeneous in type (all elements same type)
Lists
List can take one of two forms:
- **Empty list**, written \([ \ ]\)
- **Non-empty list**, written \(x :: xs\)
- \(x\) is head element,
- \(xs\) is tail list, \(::\) called “cons”
How we typically write them (syntactic sugar):
- \([x] == x :: [ ]\)
- \([ x_1; x_2; \ldots; x_n ] == x_1 :: x_2 :: \ldots :: x_n :: [ ]\)
Lists
# let fib5 = [8;5;3;2;1;1];;
val fib5 : int list = [8; 5; 3; 2; 1; 1]
# let fib6 = 13 :: fib5;;
val fib6 : int list = [13; 8; 5; 3; 2; 1; 1]
# (8::5::3::2::1::1::[ ]) = fib5;;
- : bool = true
# fib5 @ fib6;;
- : int list =
[8; 5; 3; 2; 1; 1; 13; 8; 5; 3; 2; 1; 1]
Lists are Homogeneous
```ocaml
# let bad_list = [1; 3.2; 7];;
Characters 19-22:
let bad_list = [1; 3.2; 7];;
^^^
```
This expression has type float but is here used with type int
Question
- Which one of these lists is invalid?
1. [2; 3; 4; 6]
2. [2, 3; 4, 5; 6, 7]
3. [(2.3, 4); (3.2, 5); (6, 7.2)]
4. [["hi"; "there"]; ["wahcha"]; [ ]; ["doin"]]
3 is invalid because of last pair.
Functions Over Lists
# let rec double_up list =
match list with
[ ] -> [ ] (* pattern before ->, expression after *)
| (x :: xs) -> (x :: x :: double_up xs);;
val double_up : 'a list -> 'a list = <fun>
(* fib5 = [8;5;3;2;1;1] *)
# let fib5_2 = double_up fib5;;
val fib5_2 : int list = [8; 8; 5; 5; 3; 3; 2; 2; 1; 1; 1; 1]
Functions Over Lists
# let silly = double_up ["hi"; "there"];;
val silly : string list = ["hi"; "hi"; "there"; "there"]
# let rec poor_rev list =
match list
with [] -> []
| (x::xs) -> poor_rev xs @ [x];;
val poor_rev : 'a list -> 'a list = <fun>
# poor_rev silly;;
- : string list = ["there"; "there"; "hi"; "hi"]
Question: Length of list
- Problem: write code for the length of the list
- How to start?
```ocaml
let length l =
```
Question: Length of list
- Problem: write code for the length of the list
- How to start?
let rec length l =
match l with
Question: Length of list
- Problem: write code for the length of the list
- What patterns should we match against?
```ml
let rec length l =
match l with
```
Question: Length of list
Problem: write code for the length of the list
What patterns should we match against?
```ocaml
let rec length l =
match l with
| [] ->
| (a :: bs) ->
```
Question: Length of list
- Problem: write code for the length of the list
- What result do we give when $l$ is empty?
```ocaml
let rec length l =
match l with
| [] -> 0
| (a :: bs) ->
```
Question: Length of list
- Problem: write code for the length of the list
- What result do we give when $l$ is not empty?
```ocaml
let rec length l =
match l with
| [] -> 0
| (a :: bs) ->
```
Question: Length of list
- Problem: write code for the length of the list
- What result do we give when \( l \) is not empty?
```ocaml
let rec length l =
match l with
| [] -> 0
| (a :: bs) -> 1 + length bs
```
How can we efficiently answer if two lists have the same length?
How can we efficiently answer if two lists have the same length?
```ocaml
let rec same_length list1 list2 =
match list1 with
[] -> (match list2 with [] -> true |
(y::ys) -> false)
| (x::xs) -> (match list2 with [] -> false |
(y::ys) -> same_length xs ys)
```
Functions Over Lists
# let rec map f list =
match list with
[] -> []
| (h::t) -> (f h) :: (map f t);;
val map : ('a -> 'b) -> 'a list -> 'b list = <fun>
# map plus_two fib5;;
- : int list = [10; 7; 5; 4; 3; 3]
# map (fun x -> x - 1) fib6;;
- : int list = [12; 7; 4; 2; 1; 0; 0]
Iterating over lists
# let rec fold_left f a list =
match list with
[] -> a
| (x :: xs) -> fold_left f (f a x) xs;;
val fold_left : ('a -> 'b -> 'a) -> 'a -> 'b list -> 'a = <fun>
# fold_left
(fun () -> print_string)
()
["hi"; "there"];;
hithere- : unit = ()
Iterating over lists
```ocaml
# let rec fold_right f list b =
match list with
| [] -> b
| (x :: xs) -> f x (fold_right f xs b);;
val fold_right : ('a -> 'b -> 'b) -> 'a list -> 'b -> 'b = <fun>
# fold_right
    (fun s -> fun () -> print_string s)
    ["hi"; "there"]
    ();;
therehi- : unit = ()
```
Structural Recursion
- Functions on recursive datatypes (eg lists) tend to be recursive
- Recursion over recursive datatypes generally by structural recursion
- Recursive calls made to components of structure of the same recursive type
- Base cases of recursive types stop the recursion of the function
Structural Recursion : List Example
```ocaml
# let rec length list =
match list with
| [] -> 0 (* Nil case *)
| x :: xs -> 1 + length xs;; (* Cons case *)
val length : 'a list -> int = <fun>
# length [5; 4; 3; 2];;
- : int = 4
```
- Nil case [] is base case
- Cons case recurses on component list xs
Forward Recursion
- In **Structural Recursion**, split input into components and (eventually) recurse
- **Forward Recursion** is a form of Structural Recursion
- In forward recursion, first call the function recursively on all recursive components, and then build final result from partial results
- Wait until whole structure has been traversed to start building answer
Forward Recursion: Examples
```ocaml
# let rec double_up list =
match list
with [ ] -> [ ]
| (x :: xs) -> (x :: x :: double_up xs);;
val double_up : 'a list -> 'a list = <fun>
# let rec poor_rev list =
match list
with [] -> []
| (x::xs) -> poor_rev xs @ [x];;
val poor_rev : 'a list -> 'a list = <fun>
```
```ocaml
# let rec append list1 list2 = match list1 with
    [] -> list2 | x :: xs -> x :: append xs list2;;
val append : 'a list -> 'a list -> 'a list = <fun>
# append [1;2;3] [4;5;6];;
- : int list = [1; 2; 3; 4; 5; 6]
# let append_alt list1 list2 =
    fold_right (fun x y -> x :: y) list1 list2;;
val append_alt : 'a list -> 'a list -> 'a list = <fun>
```
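As a quick sanity check (a sketch of ours, restating the definitions above as a plain source file rather than a toplevel session), append_alt built from fold_right agrees with the hand-written append:

```ocaml
(* fold_right and append_alt as defined above *)
let rec fold_right f list b =
  match list with
  | [] -> b
  | x :: xs -> f x (fold_right f xs b)

let append_alt list1 list2 =
  fold_right (fun x y -> x :: y) list1 list2

let () =
  (* cons-ing each element of list1 onto list2 reproduces append *)
  assert (append_alt [1; 2; 3] [4; 5; 6] = [1; 2; 3; 4; 5; 6])
```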
One common form of structural recursion applies a function to each element in the structure.
```ocaml
# let rec doubleList list = match list
with [ ] -> [ ]
| x::xs -> 2 * x :: doubleList xs;;
val doubleList : int list -> int list = <fun>
# doubleList [2;3;4];;
- : int list = [4; 6; 8]
```
Mapping Recursion
- Can use the higher-order recursive map function instead of direct recursion
```ocaml
# let doubleList list =
List.map (fun x -> 2 * x) list;;
val doubleList : int list -> int list = <fun>
# doubleList [2;3;4];;
- : int list = [4; 6; 8]
```
- Same function, but no recursion
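The connection between mapping recursion and folding recursion can be made explicit: map itself is definable with fold_right. This is a sketch of ours (the local fold_right and map shadow the library versions):

```ocaml
(* fold_right as defined earlier in the notes *)
let rec fold_right f list b =
  match list with
  | [] -> b
  | x :: xs -> f x (fold_right f xs b)

(* map rebuilds the list, consing f x onto the mapped tail *)
let map f list = fold_right (fun x ys -> f x :: ys) list []

let doubleList list = map (fun x -> 2 * x) list

let () = assert (doubleList [2; 3; 4] = [4; 6; 8])
```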
Folding Recursion
- Another common form “folds” an operation over the elements of the structure
```ocaml
# let rec multList list = match list
with [ ] -> 1
| x::xs -> x * multList xs;;
val multList : int list -> int = <fun>
# multList [2;4;6];;
- : int = 48
```
- Computes $(2 \times (4 \times (6 \times 1)))$
Folding Recursion
- multList folds to the right
- Same as:
```ocaml
# let multList list =
List.fold_right
(fun x -> fun p -> x * p)
list 1;;
val multList : int list -> int = <fun>
# multList [2;4;6];;
- : int = 48
```
How long will it take?
Common big-O times:
- Constant time $O(1)$
- input size doesn’t matter
- Linear time $O(n)$
- 2x input size $\Rightarrow$ 2x time
- Quadratic time $O(n^2)$
- 3x input size $\Rightarrow$ 9x time
- Exponential time $O(2^n)$
- Input size $n+1$ $\Rightarrow$ 2x time
Linear Time
- Expect most list operations to take linear time $O(n)$
- Each step of the recursion can be done in constant time
- Each step makes only one recursive call
- List example: `multList`, `append`
- Integer example: `factorial`
Quadratic Time
- Each step of the recursion takes time proportional to input
- Each step of the recursion makes only one recursive call.
List example:
```ocaml
# let rec poor_rev list =
match list
with [] -> []
| (x::xs) -> poor_rev xs @ [x];;
val poor_rev : 'a list -> 'a list = <fun>
```
Exponential running time
- Hideous running times on input of any size
- Each step of recursion takes constant time
- Each recursion makes two recursive calls
- Easy to write naïve code that is exponential for functions that can be linear
Exponential running time
```ocaml
# let rec naiveFib n = match n
    with 0 -> 0
    | 1 -> 1
    | _ -> naiveFib (n-1) + naiveFib (n-2);;
val naiveFib : int -> int = <fun>
```
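The exponential blow-up comes from recomputing the same subproblems twice at every level. A tail-recursive sketch (ours, not from the slides; the names fibAux and fib are assumptions) carries the last two Fibonacci numbers as accumulators, so the same function becomes linear:

```ocaml
(* fibAux n a b: a and b are the current and next Fibonacci numbers;
   each step makes exactly one recursive call, so the cost is O(n). *)
let rec fibAux n a b =
  match n with
  | 0 -> a
  | _ -> fibAux (n - 1) b (a + b)

let fib n = fibAux n 0 1

let () = assert (fib 10 = 55)  (* naiveFib 10 also gives 55, but in ~2^10 calls *)
```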
An Important Optimization
- When a function call is made, the return address needs to be saved to the stack so we know where to return when the call is finished.
- What if $f$ calls $g$ and $g$ calls $h$, but calling $h$ is the last thing $g$ does (a tail call)?
- Then $h$ can return directly to $f$ instead of $g$.
Tail Recursion
- A recursive program is tail recursive if all recursive calls are tail calls.
- Tail recursive programs may be optimized to be implemented as loops, thus removing the function call overhead for the recursive calls.
- Tail recursion generally requires extra “accumulator” arguments to pass partial results.
- May require an auxiliary function.
Tail Recursion - Example
```ocaml
# let rec rev_aux list revlist =
    match list with [] -> revlist
    | x :: xs -> rev_aux xs (x :: revlist);;
val rev_aux : 'a list -> 'a list -> 'a list = <fun>
# let rev list = rev_aux list [];;
val rev : 'a list -> 'a list = <fun>
```
What is its running time?
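Answer: linear. Each step of rev_aux does one cons and makes one recursive call, so rev is $O(n)$, in contrast to the quadratic poor_rev. A quick behavioral check (restating the definitions above as plain source):

```ocaml
let rec rev_aux list revlist =
  match list with
  | [] -> revlist
  | x :: xs -> rev_aux xs (x :: revlist)

let rev list = rev_aux list []

(* one cons per element: O(n) overall *)
let () = assert (rev [1; 2; 3] = [3; 2; 1])
```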
Folding Functions over Lists
How are the following functions similar?
```ocaml
# let rec sumlist list = match list with
| [] -> 0 | x::xs -> x + sumlist xs;;
val sumlist : int list -> int = <fun>
# sumlist [2;3;4];;
- : int = 9
# let rec prodlist list = match list with
| [] -> 1 | x::xs -> x * prodlist xs;;
val prodlist : int list -> int = <fun>
# prodlist [2;3;4];;
- : int = 24
```
Folding
```ocaml
# let rec fold_left f a list = match list
    with [] -> a | (x :: xs) -> fold_left f (f a x) xs;;
val fold_left : ('a -> 'b -> 'a) -> 'a -> 'b list -> 'a = <fun>
```
fold_left f a [x_1; x_2; ...; x_n] = f (... (f (f a x_1) x_2) ...) x_n
```ocaml
# let rec fold_right f list b = match list
    with [] -> b | (x :: xs) -> f x (fold_right f xs b);;
val fold_right : ('a -> 'b -> 'b) -> 'a list -> 'b -> 'b = <fun>
```
fold_right f [x_1; x_2; ...; x_n] b = f x_1 (f x_2 (... (f x_n b) ...))
Folding - Forward Recursion
```ocaml
# let sumlist list = fold_right (+) list 0;;
val sumlist : int list -> int = <fun>
# sumlist [2;3;4];;
- : int = 9
# let prodlist list = fold_right ( * ) list 1;;
val prodlist : int list -> int = <fun>
# prodlist [2;3;4];;
- : int = 24
```
Folding - Tail Recursion
```ocaml
# let rev list =
    fold_left
      (fun l -> fun x -> x :: l)  (* comb op *)
      []                          (* accumulator cell *)
      list;;
val rev : 'a list -> 'a list = <fun>
```
Folding
- Can replace recursion by fold_right in any forward primitive recursive definition
- Primitive recursive means it only recurses on immediate subcomponents of recursive data structure
- Can replace recursion by fold_left in any tail primitive recursive definition
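Both claims can be illustrated with length (a sketch; the function names length_fr and length_tl are ours): the forward-recursive definition becomes a fold_right, and the accumulator version becomes a fold_left:

```ocaml
(* forward primitive recursion -> fold_right *)
let length_fr list = List.fold_right (fun _ n -> 1 + n) list 0

(* tail primitive recursion (accumulator) -> fold_left *)
let length_tl list = List.fold_left (fun n _ -> n + 1) 0 list

let () =
  assert (length_fr [5; 4; 3; 2] = 4);
  assert (length_tl [5; 4; 3; 2] = 4)
```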
RDB2XSD: AUTOMATIC SCHEMA MAPPING FROM RDB INTO XML
1LARBI ALAOUI, 2OUSSAMA EL HAJJAMY, 3MOHAMED BAHAJ
1International University of Rabat, 11100 Sala Al Jadida, Morocco
2,3University Hassan I, FSTS Settat, Morocco
E-mail: 1larbi.alaoui@hotmail.de, 2elhajjamyoussama@gmail.com, 3mohamedbahaj@gmail.com
ABSTRACT
Extensible Markup Language (XML) is nowadays one of the most important standard media used for exchanging data on the internet. Massive amounts of data are however still treated, transferred and stored using relational database systems (RDBs). Therefore, there is a need for an integrated method that deals with database migration from an RDB schema to an XML schema. In this paper we provide and develop a new solution called RDB2XSD that migrates the conceptual schema of an RDB into XSD through an MA (multidimensional array) model. This solution takes an existing RDB as input and extracts its metadata with as many constraints as possible, creates the MA model to capture the semantics of the relational database and applies our mapping algorithm to generate the hierarchical XSD schema. For the implementation of our approach we developed a tool based on Java and tested it using Oracle and MySQL databases. Our experimental results based on this tool show that our mapping strategy is feasible and efficient.
Keywords: XML, XSD, Relational Database RDB, Schema Conversion, MA Model
1. INTRODUCTION
The use of XML schema is rapidly increasing to represent structured and unstructured data on the Internet. However, very large volumes of data are still stored in relational databases. So, in order to exchange data between relational databases (RDBs) and XML, a translation algorithm is necessary.
Currently, there are two options recommended by the W3C for defining an XML schema. One is the Document Type Definition (DTD) and the other is the XML Schema (XSD). We choose XML Schema because:
• it has a powerful set of types and constraints which leads to a better translation;
• it provides us with a more flexible and powerful mechanism through “key” and “keyref” constructs;
• and with XSD we are able to model composite, multi-valued attributes and complex cardinality constraints.
Our aim in this paper is to tackle the problem of translation of relational database schema models to XML schema models. As detailed in section 2, the existing works in this sense do not provide a complete solution, and so far there are still no effective proposals that could be considered a standard method preserving the whole original structure and constraints of the relational database.
For a complete and efficient translation we provide a mapping strategy that takes into account several issues in order to preserve all details related to the relational structure of a relational database so that all its static and semantic information will be reflected by the resulting XML schema. In order to achieve such a complete mapping, our approach first extracts the metadata of the considered database, generates the multidimensional array model (MA model) to capture the semantics of the source RDB and applies our algorithm to build the XML structure. Our mapping algorithm uses a set of transformation rules that we give according to a categorization of the types of relations and the types of the constraints we are dealing with in a relational database, following some ideas we gave in our previous works [1-2] that are related to mapping RDB to OWL (Ontology Web Language). To validate our solution we have developed a prototype that implements this algorithm and tested its effectiveness using concrete examples.
The rest of the paper is organized as follows. In section 2 we review the existing RDB to XML transformation works. The necessary terminology and several rules to convert a relational database
schema into XML schema along with a corresponding categorization of the RDB relations and attributes are given in Section 3. Section 4 discusses the methods for extracting semantics using the MA model and provides our mapping algorithm based on the list of rules. It also presents a result of the performance test of our developed mapping tool. Section 5 concludes this paper.
2. RELATED WORK
The conversion from relational database to XML has recently received significant attention and become an active research domain. Various algorithms have been developed to reflect information about relational database using transformations into XML documents.
The first works associating RDB with XML were either XML views based or DTDs based. The XML based methods consisted in presenting XML views of RDB data without providing any schema for the structure of such views. Users should therefore have a better knowledge of what the obtained views represent, in order to be able to query such views. Among these works we cite Silkroute in [6] that aims at publishing relational data as XML views using a transformation language RXL. Users can then issue queries against these views. In this sense we can also mention the works in [3-4], [15], [18-19] and [20].
The DTDs based methods dealt with the mapping of RDB schemas into DTD schemas providing the users with conceptual structures of the considered RDB relations. These were the starting point for the upcoming transformations that map RDB schemas into XSD schemas. It is however preferable for the reasons mentioned in the previous section to have an XSD schema representation of the RDB schema rather than a DTD one.
Among the RDB to DTD transformations we cite the work in [11] that mainly considers the static constraints on attributes and does not handle the functional dependencies between relations. Another RDB to DTD mapping technique is given in [8] and consists in joining normalized relations into tables that are mapped into DOMs that are then integrated into user-specified XML document trees which are converted into XML DTDs. As DTD-based translation algorithms we also mention the Nesting-based Translation (NeT) and Constraints-based Translation (CoT) algorithms [12-14]. However, NeT does not support any referential integrity constraint. CoT considers the structural part of the RDB schema such as cardinality and only a restrained semantic part such as the foreign key constraint. In [17] an algorithm NeT-FD is also proposed for an RDB to DTD mapping that takes into account the functional dependencies and keys.
To come up with solutions to the limitations of DTD some mapping techniques have appeared to transform RDB schemas into XSD (XML Schema). The work in [7] gave an approach in this sense that does not handle semantic details and uses a transformation into an Extended Entity Relationship model and an XSD graph as intermediary steps. In [10] the VP-T (Values Pattern-based translation) and QP-T (Query Pattern-based translation) algorithms have been proposed to resolve the problem of CoT. However, both have a critical restriction in that they cannot extract a semantic relationship between column titles. Another approach is given in [5] but uses an intermediary adjacency matrix and an oriented graph. In the same sense, transformation approaches were proposed in [16] and [21] but they do not handle all details and they respectively use a reference graph and an ER model as intermediary steps. Also, a so-called holistic transformation algorithm is proposed in [22] to transform a relational database into a nested XML schema without building a reference graph. This solution classifies relations into three categories (base relation, single related relation and multi related relation, according to the number of foreign keys in the relation tables) and gives transformation rules to map these categories. However, marking dominant relations for circular relations and dominant participant relations for multi-related relations based on queried data can produce different XML schema results when an update of the data is performed on the source relational database. Therefore, this solution cannot guarantee an exact XML document creation. Another technique was presented in [9] where the authors consider the case where referential integrity constraints are not included in the RDB schema due to the designer's fault or old and poor documentation, and extract such constraints from users' queries.
All the aforementioned XSD based transformations present limitations in treating various important RDB elements related to either relations or attributes, such as composite keys, foreign keys, self-referenced relations and cyclic relations. In the following section we give a more concise and complete categorization of RDB relations that reflects all associated static and semantic details. This categorization will be the basis of the mapping algorithm we present in section 4. We
assume that all relations in the RDB schema are at least in 3NF.
3. RDB TO XML SCHEMA MAPPING RULES
In this section we give a complete list of rules for building the XML schema from the RDB source. To this end we consider relevant categorizations related to the various constraints in a relational database. The first categorization aims at classifying the relations in the database into four categories based on different types of the foreign keys. This classification is as follows:
- NormalRel(R): R is a relation with no foreign keys;
- PKAndFKRel(R): the primary key of R also acts as a foreign key;
- OneFKRel(R): R is a relation with one foreign key;
- MoreThanOneFKRel(R): R is a relation with more than one foreign key.
Then, to preserve the semantics contained in the database source we take into account all the integrity constraints, such as primary keys, foreign keys, not null and unique characteristics:
- NormalAttr(A, R): A is an attribute in relation R that is not part of a primary or foreign key and that is not declared as unique or as not null;
- PK(x, R): x is a single or composite primary key of the relation R;
- FK(x, R, y, S): x is a single or composite foreign key in relation R that references y in relation S;
- Unique(x, R): x is declared as a unique attribute;
- NotNull(x, R): x is declared as not null attribute.
Finally we capture all circular relations in the database source and find a way to convert them to a hierarchical XML schema. Circular relationships are divided into two categories:
- SelfRefRelation(R): the relation R has a foreign key x referencing itself and we denote it by SelfRefAttribute(R, x);
- CyclicRel: A cyclic relation is defined as a set of relations R₁, ..., Rₙ (n > 1), where Rᵢ is referenced by Rᵢ₋₁ (1 < i ≤ n) and R₁ is referenced by Rₙ.
With all these categorizations we are now ready to give the associated mapping rules.
3.1. Mapping relations
Based on the categorization of relations we gave in the previous section with respect to their types and to the various related constraints we are now able to list in the following the associated mapping rules for the transformation of RDB schema into XML schema.
Rule 1. XML schema root element: To ensure the XML Schema has a single root, we need to prepare our XSD schema by creating a root element. The root element is created using the name of the database source.
```xml
<xsd:schema
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    targetNamespace="targetNamespaceURI"
    xmlns="targetNamespaceURI"
    elementFormDefault="qualified">
  <xsd:element name="DatabaseName">
    <xsd:complexType>
      <xsd:sequence>
        <!-- mapped relational schema is here -->
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>
```
Rule 2. NormalRel(R): For every normal relation R, we create an element named R as a child of the root element.
```xml
<xsd:element name="R">
<xsd:complexType>
<!-- details of attributes in R -->
</xsd:complexType>
</xsd:element>
```
Rule 3. OneFKRel(R): If a relation S with one foreign key references another relation R, then the generated element from S must be a sub-element of the generated element from R. In this case we have a 0:n relationship. So we add the minOccurs="0" and the maxOccurs="unbounded" constraints to the relation S (maxOccurs = "unbounded" indicates the element S may appear more than once).
```xml
<xsd:element name="R">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="S" minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```
Rule 4. PKAndFKRel(R): If the primary key of a relation S is at the same time a foreign key that is
referencing a field in another relation R, then the generated element from S must be a sub-element of the generated element from R. In this case we have a 0:1 relationship, so we add the minOccurs="0" and the maxOccurs="1" to relation S.
```xml
<xsd:element name="R">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="S" minOccurs="0" maxOccurs="1"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
```
**Rule 5.** MoreThanOneFKRel(R): If a relation R has more than one foreign key and references relations R1…Rn (n>1), then, to preserve the integrity constraints, we add a "keyRef" element for each foreign key attribute in R as follows:
```xml
<xsd:element name="R">
<xsd:complexType>
<xsd:attribute name="FK1" type="xsd:TypeOfFK1" />
<xsd:attribute name="FKn" type="xsd:TypeOfFKn" />
</xsd:complexType>
</xsd:element>
```
```
```xml
<xsd:keyref name="R_Ref_R1" refer="R1_y1">
  <xsd:selector xpath="R"/>
  <xsd:field xpath="FK1"/>
</xsd:keyref>
```
```xml
<xsd:keyref name="R_Ref_Rn" refer="Rn_yn">
  <xsd:selector xpath="R"/>
  <xsd:field xpath="FKn"/>
</xsd:keyref>
```
### 3.2. Mapping attributes
**Rule 6.** NormalAttr(x, R): For each normal attribute x in relation R, we create an "attribute" element with the XSD type corresponding to the type of the field in the RDB.
```xml
<xsd:element name="x" type="xsd:TypeOfx"/>
```
**Rule 7.** PK(x, R): A primary key is transformed into a "key" element with a selector to select the XPath of its relation.
To ensure the uniqueness of the key element name, we propose to give each of them a name obtained by concatenating the name of the relation and the name of its primary key attribute.
```xml
<xsd:key name="R_x">
  <xsd:selector xpath="R"/>
  <xsd:field xpath="x"/>
</xsd:key>
```
**Rule 8.** FK(x, R, y, S): To capture the reference relationship between two relations, a foreign key is converted to a "keyRef" element. Note that the foreign key of a OneFKRel(R) is not converted to a "keyRef" element.
```xml
<xsd:keyref name="R_Ref_S" refer="S_y">
  <xsd:selector xpath="R"/>
  <xsd:field xpath="x"/>
</xsd:keyref>
```
### 3.3. Mapping Constraints
**Rule 9.** Unique(x, R): For each attribute declared as UNIQUE we create a "unique" element with a selector to select the XPath of the element and a field to specify the attribute that must be unique.
```xml
<xsd:unique name="UniqueR">
<xsd:selector xpath="R"/>
<xsd:field xpath="x"/>
</xsd:unique>
```
**Rule 10.** NotNull(x, R): If the attribute is declared as NOT NULL, we add use="required" into the mapped attribute element.
```xml
<xsd:attribute name="x" type="xsd:TypeOfx" use="required"/>
```
**Rule 11.** For attributes with the special constraints Length, CHECK VALUES or CHECK IN we treat them as follows:
- To limit the length of a value in an attribute we can use the xsd:maxLength.
```xml
<xsd:element name="CheckValue">
<xsd:simpleType>
<xsd:restriction base="xsd:string">
<xsd:maxLength value="100"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:element>
```
- CHECK VALUE x: denotes all values that x can take. In this case we use the facets xsd:minInclusive, xsd:maxInclusive, xsd:minExclusive or xsd:maxExclusive.
```xml
<xsd:element name="CheckValue">
  <xsd:simpleType>
    <xsd:restriction base="xsd:integer">
      <xsd:minInclusive value="0"/>
      <xsd:maxInclusive value="120"/>
    </xsd:restriction>
  </xsd:simpleType>
</xsd:element>
```
3.4. Mapping Circular Relations
Rule 12. SelfRefRelation(R): In this case we consider the SelfRefAttribute(R, x) as a normal attribute, we apply the relation mapping rules to convert R (rules 1-5) and we add the following element:
```xml
<xsd:element name="R" type="typeR"/>
```
For example, consider the following relation "Author" with "NameChefProj" as a foreign key referencing "NameAuthor" in the same relation "Author":
```
Author(NameAuthor, TitlePaper, #NameChefProj)
```
The corresponding transformation is:
```xml
<xsd:element name="Author" type="typeAuthor"/>
<xsd:complexType name="typeAuthor">
  <xsd:sequence>
    <xsd:element name="Author" type="xsd:string"/>
  </xsd:sequence>
</xsd:complexType>
```
Rule 13. CyclicRel: In this case it is important to make at least one element (or a reference to an element) in the cycle optional. Otherwise an infinite loop will occur and an error will be thrown while validating the XML schema. For example, for a cyclic relationship (R1→R2→R3→R1) we get the following transformation:
```xml
<xsd:element name="R1">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element ref="R2" minOccurs="0"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
<xsd:element name="R2">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element ref="R3" minOccurs="0"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
<xsd:element name="R3">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element ref="R1" minOccurs="0"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```
4. OUR METHODOLOGY FOR MAPPING
Our approach aims to define a correspondence between the RDB and XML schema using a multidimensional array model (MA) to build the XML structure. Our approach consists of three separate phases, as shown in figure 1. The first phase extracts tables, fields, relationships and metadata (MTRDB) from the relational database using java database connectivity (JDBC) components. In the second phase a multidimensional array model is generated to facilitate the migration process. Once the MA model is created we apply our algorithm based on the list of rules to create the equivalent XML schema.
4.1. Extraction of RDB Schema Metadata
Our process starts by extracting the metadata from the relational database including fields and relations, by using Java Database Connectivity (JDBC) components.
\[ MTRDB = \{R_N, R_{Ref}, R_{Refby}, Nbr_{FK}, Type, F\} \]
- **R\_N**: The relation name
- **R\_Ref**: All relation referenced by R\_N
- **R\_Refby**: All relations that reference R\_N
- **Nbr\_FK**: Number of foreign key in R\_N
- **Type**: (PFK) if the primary key of R also acts as a foreign key, (SelfR) if R is a SelfRefRel and (Simple) else
- **F**: List of the fields of the relation R\_N
\[ F = \{F_N, F_T, F_{Key}, F_U, F_{Null}\} \]
- **F\_N**: The field name
- **F\_T**: The field type
- **F\_Key**: (PK) if the field is a primary key, (FK) foreign key, (PFK) if act as both PK and FK or (CFK) if foreign key in a cyclic relation that references another field in the same cyclic relation
- **F\_U**: (Uq) for unique constraint
- **F\_Null**: (N) for Not null constraints
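The extraction step above can be sketched against a live database. The snippet below uses Python's built-in sqlite3 module (the paper's tool uses Java and JDBC against MySQL) and builds an MTRDB-like dictionary holding the R_Ref, R_RefBy and Nbr_FK entries defined above; the function name and dictionary layout are our own illustration, not the paper's code.

```python
import sqlite3

def extract_metadata(conn):
    """Build an MTRDB-like structure {R_N: {R_Ref, R_RefBy, Nbr_FK, F}}
    from a live SQLite database (illustrative; the paper uses JDBC/MySQL)."""
    cur = conn.cursor()
    tables = [r[0] for r in cur.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    meta = {t: {"R_Ref": set(), "R_RefBy": set(), "Nbr_FK": 0, "F": []}
            for t in tables}
    for t in tables:
        # PRAGMA table_info rows: (cid, name, type, notnull, default, pk)
        for cid, name, ctype, notnull, dflt, pk in cur.execute(
                f"PRAGMA table_info({t})"):
            meta[t]["F"].append((name, ctype, bool(pk), bool(notnull)))
        # PRAGMA foreign_key_list rows: one per FK column; row[2] is the
        # referenced table
        for row in cur.execute(f"PRAGMA foreign_key_list({t})"):
            referenced = row[2]
            meta[t]["R_Ref"].add(referenced)
            meta[t]["Nbr_FK"] += 1
            meta[referenced]["R_RefBy"].add(t)
    return meta
```

On the Country/City fragment of the running example, City would get R_Ref = {Country} and Country would get R_RefBy = {City}.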
4.2. The multidimensional Array model of the RDB schema
The next step of our mapping approach consists of generating the MA model based on a classification of the elements extracted from MTRDB, to facilitate the migration process.
MA model is a set of array elements that defines the list of relations taken from RDB schema with the necessary metadata for our mapping algorithm.
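One small piece of that classification, the Type column, can be derived from per-field key information. The sketch below assumes fields are encoded as (name, key) pairs with key one of "PK", "FK", "PFK", "CFK" or None (an encoding of our own), and that self-reference has been detected separately.

```python
def relation_type(fields, self_ref=False):
    """Derive the MA model's Type column for one relation.

    `fields` is a list of (name, key) pairs; key is "PK", "FK", "PFK",
    "CFK", or None -- an encoding we assume for this sketch.
    """
    if self_ref:                      # SelfRefRel is detected elsewhere
        return "SelfR"
    if any(key == "PFK" for _, key in fields):
        return "PFK"                  # primary key also acts as foreign key
    return "Simple"
```

For instance, Student(#idStudent, Age, #idAddress) is classified PFK because idStudent is both primary and foreign key.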
To illustrate the MA model we will use the following examples of relational database schema. Underlined attributes represent primary keys. Attributes endowed with a "#" represent foreign keys. The first example is given without any circular relation (SelfRefRel or CyclicRel) and its equivalent MA model is represented in Table 1:
- Communication(idCom, CName)
- Country(idCountry, CountName)
- City(idCity, CityName, #idCountry)
- Company(idCompany, CompName, #idCity)
- University(idUniversity, UnivName)
- Author(idAuthor, name, #idUniversity, #idCity)
- Professor(#idProfessor, Grade)
- Address(idAddress, Address)
- PrivateAddress(#idPAddress, Type)
- Student(#idStudent, Age, #idAddress)
- Department(#idDept, DeptName, #idChefDept)
- Paper(idPaper, PaperTitle, Year)
- WritePaper(#idPaper, #idAuthor)
Table 1: Representation Of MA Model For Example 1
<table>
<thead>
<tr>
<th>R_N</th>
<th>R_Ref</th>
<th>R_RefBy</th>
<th>NbrFK</th>
<th>Type</th>
<th>Fields</th>
</tr>
</thead>
<tbody>
<tr>
<td>Communication</td>
<td></td>
<td></td>
<td>0</td>
<td>Simple</td>
<td>idCom Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>CName VarChar N</td>
</tr>
<tr>
<td>Country</td>
<td></td>
<td>City</td>
<td>0</td>
<td>Simple</td>
<td>idCountry Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>CountName VarChar U N</td>
</tr>
<tr>
<td>City</td>
<td>Country</td>
<td>Author</td>
<td>1</td>
<td>Simple</td>
<td>idCity Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Company</td>
<td></td>
<td></td>
<td>CityName VarChar N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>idCountry Int FK N</td>
</tr>
<tr>
<td>Company</td>
<td>City</td>
<td></td>
<td>1</td>
<td>Simple</td>
<td>idCompany Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>CompName VarChar N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>idCity Int FK N</td>
</tr>
<tr>
<td>University</td>
<td></td>
<td>Author</td>
<td>0</td>
<td>Simple</td>
<td>idUniversity Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>UnivName VarChar N</td>
</tr>
<tr>
<td>Author</td>
<td>City</td>
<td>Department</td>
<td>2</td>
<td>Simple</td>
<td>idAuthor Int PK U N</td>
</tr>
<tr>
<td></td>
<td>University</td>
<td>Professor</td>
<td></td>
<td></td>
<td>Name VarChar N</td>
</tr>
<tr>
<td></td>
<td></td>
<td>Student</td>
<td></td>
<td></td>
<td>idUniversity Int FK</td>
</tr>
<tr>
<td></td>
<td></td>
<td>WritePaper</td>
<td></td>
<td></td>
<td>idCity Int FK N</td>
</tr>
<tr>
<td>Professor</td>
<td>Author</td>
<td></td>
<td>0</td>
<td>PFK</td>
<td>idProfessor Int PFK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Grade VarChar N</td>
</tr>
<tr>
<td>Address</td>
<td></td>
<td>Student</td>
<td>0</td>
<td>Simple</td>
<td>idAddress Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td>PrivateAddress</td>
<td></td>
<td></td>
<td>Address VarChar U N</td>
</tr>
<tr>
<td>PrivateAddress</td>
<td>Address</td>
<td></td>
<td>0</td>
<td>PFK</td>
<td>idPAddress Int PFK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Type VarChar N</td>
</tr>
<tr>
<td>Student</td>
<td>Address</td>
<td></td>
<td>2</td>
<td>PFK</td>
<td>idStudent Int PFK U N</td>
</tr>
<tr>
<td></td>
<td>Author</td>
<td></td>
<td></td>
<td></td>
<td>Age Int N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>idAddress Int FK N</td>
</tr>
<tr>
<td>Department</td>
<td>Author</td>
<td></td>
<td>1</td>
<td>Simple</td>
<td>idDept Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>DeptName VarChar N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>idChefDept Int FK N</td>
</tr>
<tr>
<td>Paper</td>
<td></td>
<td>WritePaper</td>
<td>0</td>
<td>Simple</td>
<td>idPaper Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>PaperTitle VarChar</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Year Date</td>
</tr>
<tr>
<td>WritePaper</td>
<td>Author</td>
<td></td>
<td>2</td>
<td>PFK</td>
<td>idPaper Int PFK N</td>
</tr>
<tr>
<td></td>
<td>Paper</td>
<td></td>
<td></td>
<td></td>
<td>idAuthor Int PFK N</td>
</tr>
</tbody>
</table>
In the second example we illustrate how to represent a SelfRefRel in the MA model.
Employee(idEmp, nameEmp, Job, #Chef, #idDept)
The attribute "Chef" is a foreign key that references idEmp in the same table. In this case we treat "Chef" as a normal attribute and put "SelfR" in the Type column of our MA model (Table 2).
Table 2: Representation Of MA Model For Example 2
<table>
<thead>
<tr>
<th>R_N</th>
<th>R_Ref</th>
<th>R_RefBy</th>
<th>NbrFK</th>
<th>Type</th>
<th>Fields</th>
</tr>
</thead>
<tbody>
<tr>
<td>Employee</td>
<td>Department</td>
<td>Employee</td>
<td>1</td>
<td>SelfR</td>
<td>idEmp Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>nameEmp VarChar N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Job VarChar</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>Chef Int</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>idDept Int FK</td>
</tr>
</tbody>
</table>
Finally, we illustrate the MA model representation for a CyclicRel example:
City(idCity, NameCity, #idCountry)
Country(idCountry, NameCountry, #idUniversity)
University(idUniversity, NameUniversity, #idCity)
To resolve and extract all cyclic relations in the source database we use the MappingCircularRelation() algorithm from our previous work [1]. This algorithm uses a recursive function to detect whether there is any cyclic relation in the RDB schema and produces a list of cyclic relations as output.
We put "CFK" in FKey column for every foreign key in a cyclic relation that references another field in the same cyclic relation.
Table 3: Representation Of MA Model For Cyclic Relationship Example
<table>
<thead>
<tr>
<th>R_N</th>
<th>R_Ref</th>
<th>R_RefBy</th>
<th>NbrFK</th>
<th>Type</th>
<th>Fields</th>
</tr>
</thead>
<tbody>
<tr>
<td>City</td>
<td>Country</td>
<td>University</td>
<td>1</td>
<td>Simple</td>
<td>idCity: Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>NameCity: VarChar</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>idCountry: Int CFK</td>
</tr>
<tr>
<td>Country</td>
<td>University</td>
<td>City</td>
<td>1</td>
<td>Simple</td>
<td>idCountry: Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>NameCountry: VarChar</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>idUniversity: Int CFK</td>
</tr>
<tr>
<td>University</td>
<td>City</td>
<td>Country</td>
<td>1</td>
<td>Simple</td>
<td>idUniversity: Int PK U N</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>NameUniversity: VarChar</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>idCity: Int CFK</td>
</tr>
</tbody>
</table>
4.3. Mapping Algorithm
In this section, we present our algorithm for the automatic construction of XML schema from a relational database. This algorithm takes into consideration all the aforementioned conversion rules.
Given a MA model as input, the algorithm captures all relations types in order to assemble the mapped XML schema into a reasonable tree pattern.
The algorithm used by "MapRelation()" for mapping attributes is as follows:
Apply rule 3: create element \( R_i' \) as a child of \( R \) with minOccurs = 0 and maxOccurs = unbounded
add \( R_i' \) to MapRelationList
MapAttribute of \( R_i' \)
If \( R_{\text{RefBy}}(R_i') \neq \text{null} \) then
Apply step 3-1 for all \( R_{\text{RefBy}}(R_i') \)
End If
Else If \( \text{NbrFK}(R_i') = 1 \) and \( \text{Type}(R_i') = \text{PFK} \) then
Apply rule 4: create element \( R_i' \) as a child of \( R \) with minOccurs = 0 and maxOccurs = 1
add \( R_i' \) to MapRelationList
MapAttribute of \( R_i' \)
If \( R_{\text{RefBy}}(R_i') \neq \text{null} \) then
Apply step 3-1 for all \( R_{\text{RefBy}}(R_i') \)
End If
Else If \( \text{NbrFK}(R_i') = 1 \) and \( \text{Type}(R_i') = \text{SelfR} \) then
Apply rule 12 for self referenced relation
MapAttribute of \( R_i' \)
If \( R_{\text{RefBy}}(R_i') \neq \text{null} \) then
Apply step 3-1 for all \( R_{\text{RefBy}}(R_i') \)
End If
Else If \( \text{NbrFK}(R_i') > 1 \) then
Add \( R_i' \) to \( \text{FKMoreThanOneList} \)
End if
End If
End loop
End If
End loop
Step 4: For each \( R \) in \( \text{FKMoreThanOneList} \) loop
create element \( R \) as a child of the root element
add \( R \) to MapRelationList
MapAttribute of \( R \)
For each \( S_i \) in \( R_{\text{Ref}}(R) \) loop // \( S_i \) is a relation referenced by \( R \)
If \( \text{Type}(S_i) = \text{PFK} \) then
Apply rule 5: create element \( R\_S_i \) as a child of \( R \) with minOccurs = 1 and maxOccurs = 1
Else
Apply rule 5: create element \( R\_S_i \) as a child of \( R \) with minOccurs = 0 and maxOccurs = unbounded
End If
End loop
For each \( R_i' \) in \( R_{\text{RefBy}}(R) \) loop // \( R_i' \) is a relation that references \( R \)
Apply step 3-1 for all \( R_{\text{RefBy}}(R_i') \)
End loop
End
4.4. Implementation and validation
To demonstrate the effectiveness and validity of our approach, a tool has been developed (Figures 2 and 3). This tool takes an RDB as input, extracts its MTRDB, creates the corresponding MA model, and applies our algorithm to create the resulting XML schema document.
To develop our prototype, we used Java as the programming language, and to store the data and metadata we used the MySQL DBMS, which contains system tables that define the structure of the database (including names of tables, columns, constraints, ...). Our implementation can, however, also work with any other relational database system. We used the JDBC API to establish the connection with the database. This API allows full access to relational database metadata and quickly retrieves a description of the tables and constraints of the database from data dictionaries.
For the example relational database schema considered above, Fig. 2 and Fig. 3 at the end of the paper show the obtained XML schema and the conversion of the cyclic relationship of Table 3.
Figure 2: Mapping result of RDB schema
Figure 3: Mapping result of cyclic relation
5. CONCLUSION
In this paper, we have presented a new mapping process for converting a relational database schema into an XML schema. This process handles the mapping of static and semantic constraints based on a well-chosen categorization of the relations in the source relational database. This categorization takes into account various aspects of the referential integrity properties and of the various constraints on attributes. The mapping process first extracts the metadata from the RDB source; then a multidimensional array model (MA model) is generated automatically to capture the categorization's structural designs, and the process produces a complete and well-structured hierarchical XML schema that reflects all details of the initial relational database schema. The results obtained from our implementation demonstrate the accuracy and performance of our mapping strategy.
REFERENCES:
**PReach: A Heuristic for Probabilistic Reachability to Identify Hard to Reach Statements**
Seemanta Saha
University of California, Santa Barbara
Santa Barbara, CA, USA
seemantasaha@cs.ucsb.edu
Tegan Brennan
University of California, Santa Barbara
Santa Barbara, CA, USA
tegan@cs.ucsb.edu
Mara Downing
University of California, Santa Barbara
Santa Barbara, CA, USA
maradowning@cs.ucsb.edu
Tevfik Bultan
University of California, Santa Barbara
Santa Barbara, CA, USA
bultan@cs.ucsb.edu
**ABSTRACT**
We present a heuristic for approximating the likelihood of reaching a given program statement using 1) branch selectivity (representing the percentage of values that satisfy a branch condition), which we compute using model counting, 2) dependency analysis, which we use to identify input-dependent branch conditions that influence statement reachability, 3) abstract interpretation, which we use to identify the set of values that reach a branch condition, and 4) a discrete-time Markov chain model, which we construct to capture the control flow structure of the program together with the selectivity of each branch. Our experiments indicate that our heuristic-based probabilistic reachability analysis tool PReach can identify hard to reach statements with high precision and accuracy in benchmarks from software verification and testing competitions, Apache Commons Lang, and the DARPA STAC program. We provide a detailed comparison with probabilistic symbolic execution and statistical symbolic execution for the purpose of identifying hard to reach statements. PReach achieves comparable precision and accuracy to both probabilistic and statistical symbolic execution for bounded execution depth and better precision and accuracy when execution depth is unbounded and the number of program paths grows exponentially. Moreover, PReach is more scalable than both probabilistic and statistical symbolic execution.
**CCS CONCEPTS**
- *Software and its engineering → Software testing and debugging; Software verification; Automated static analysis;*
- *Theory of computation → Program analysis.*
---
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
ICSE ’22, May 21–29, 2022, Pittsburgh, PA, USA
© 2022 Copyright held by the owner/author(s).
ACM ISBN 978-1-4503-9221-1/22/05
https://doi.org/10.1145/3510003.3510227
---
1 INTRODUCTION
Software quality assurance is one of the most fundamental problems in computing. The most common software quality assurance technique is software testing. Although there has been a surge of progress in automated software testing techniques such as random testing, fuzzing and symbolic execution in recent years, there are remaining challenges. On one hand, fuzzing and random testing techniques are comparatively scalable, but have difficulty in exploring hard to reach program paths. On the other hand, symbolic execution based techniques can explore hard to reach program paths by solving path constraints, but are not as scalable.
Hybrid testing techniques [16, 26, 38, 39, 42] combine concrete (e.g., random testing, fuzzing) and symbolic techniques in order to improve testing effectiveness. Typically, a strategy function for hybrid testing decides when to apply concrete techniques and when to apply symbolic techniques to achieve scalable and effective exploration of the program behaviors. In order to choose between concrete and symbolic approaches, most existing strategies assess the difficulty of concrete testing based on the saturation of random testing [26, 38] or probabilistic program analysis [39, 42]. Determining the likelihood (or, conversely, difficulty) of reaching a program statement is critical for assessing the difficulty of concrete testing, and hence developing an effective hybrid testing strategy. There are two existing approaches that address this problem: probabilistic and statistical symbolic execution.
Probabilistic symbolic execution [20] is an extension of symbolic execution that computes probabilities of program paths. However, probabilistic symbolic execution suffers from the same limitations as symbolic execution: 1) It can only analyze program behaviors up to a certain fixed execution depth, hence it cannot analyze behaviors of arbitrarily large program paths. 2) Due to exponential increase in number of paths with increasing execution depth (path explosion problem), the cost of symbolic execution increases exponentially with increasing execution depth. 3) Although the sizes of path constraints generated by symbolic execution increase linearly with the execution depth, since the worst case complexity of constraint solvers is exponential, the linear increase in path constraint sizes can lead to exponential increase in analysis cost. Hence, path explosion combined with increasing sizes of path constraints can lead to double exponential blow up in the cost of symbolic execution, limiting its practical applicability.
Statistical symbolic execution [18] is more efficient and scalable compared to probabilistic symbolic execution [20]. However, it cannot compute precise reachability probabilities, rather provides approximate reachability probabilities with statistical guarantee. Statistical symbolic execution suffers from similar issues as probabilistic symbolic execution. There are two variants of statistical symbolic execution: 1) statistical analysis based on Monte Carlo sampling of symbolic paths, and 2) hybrid analysis combining both statistical and exact analysis based on informed sampling. One of the drawbacks of pure statistical sampling is that it needs to sample a large number of paths to achieve high statistical confidence. Informed sampling obtains more precise results and converges faster than a purely statistical analysis, but its effectiveness suffers when the number of program paths grows exponentially.
In this paper, we present a heuristic for probabilistic reachability analysis to identify hard to reach program statements that addresses the above shortcomings of probabilistic symbolic execution and statistical symbolic execution. In particular, 1) our approach can model behaviors of arbitrarily long paths, 2) it does not suffer from path explosion, i.e., the cost of our analysis increases polynomially with the size of the program (and does not depend on the execution depth) [23], and finally, 3) it solves constraints arising from branch conditions rather than path constraints which reduces the cost of constraint solving.
Our approach, which we implemented in our tool PReach, works as follows (Figure 1). In order to compute reachability probability of statements, we introduce a concept called branch selectivity that determines the proportion of values satisfying a given branch condition. A branch is very selective if only a few values satisfy the branch condition. On the other hand, if a lot of values satisfy the branch condition, then the branch is not very selective. Given a target statement in a program, PReach identifies the input dependent branch conditions that influence the reachability probability of that statement using dependency analysis. Then, PReach constructs a discrete-time Markov chain model from the control flow graph of the program by computing branch selectivity of each branch condition that influences the reachability probability of the target statement. PReach uses abstract interpretation to determine the set of values that reach each branch condition and model counting to compute the branch selectivity value for each branch in the program that influences statement reachability. Finally, PReach uses a probabilistic model checker to compute the reachability probability of the target statement based on the constructed discrete-time Markov chain model.
One shortcoming of our approach is that it is not a sound program analysis technique and hence, it does not provide guarantees in terms of the precision or accuracy of the reachability probabilities it reports. On the other hand, though, bounded symbolic execution is theoretically sound up to the execution bound, and probabilistic symbolic execution can quantify how much of the execution space is not explored due to the execution bound [18], for unbounded executions, both probabilistic symbolic execution and statistical symbolic execution are not sound either.
We experimentally evaluate PReach on programs from the SV-COMP benchmark set used in Competition on Software Verification [8] and Competition on Software Testing [9]. Each program in this benchmark set contains an assert statement. We use these assert statements as the target of our probabilistic reachability analysis. We evaluate the effectiveness of our technique in separating hard to reach assert statements (i.e., assert statements with low reachability probability) from easy to reach assert statements (i.e., assert statements with high reachability probability) using a probability threshold (i.e., if the reachability probability of a statement is below the given threshold we classify it as hard to reach).
In order to determine the ground truth, we use a generator based random fuzzer that is based on JQF [28] and ZEST [29]. We set a time limit for the random fuzzer, and the assert statements that are not reached within the given timeout are marked as the hard to reach assert statements. Of the 142 programs we used in our experiments, the random fuzzer times out on 51 programs. PReach classifies the programs that the random fuzzer times out on as hard-to-reach, with 95.8% precision and 95.1% accuracy. In particular, our technique correctly classifies 135 out of 142 programs and generates only 2 false positives (reports hard to reach although the fuzzer does not time out) and 5 false negatives (reports easy to reach although the fuzzer times out).
In order to further evaluate the effectiveness of our probabilistic reachability analysis, we provide a detailed experimental comparison with the probabilistic symbolic execution (PSE) [20] and statistical symbolic execution (SSE) [18] extensions to Symbolic PathFinder (SPF) [30] tool. Experimental results show that for programs with bounded execution depth, PSE achieves very high precision and accuracy to identify hard to reach cases. However, PReach outperforms PSE for programs with unbounded execution depth in terms of precision, accuracy and average analysis time. For large search depths PSE is unable to analyze 38% of the target programs demonstrating its limitations in terms of applicability and scalability, whereas PReach can analyze 100%. We compare PReach with SSE on the set of programs that PSE performs poorly. SSE was unable to analyze 27% of these programs and PReach outperforms SSE in terms of precision, accuracy, and average analysis time.
Finally, we analyze 24 target statements in 18 methods from Apache Commons Lang [1] and DARPA STAC Benchmarks [4]. PReach can classify 19 of the 24 target statements correctly demonstrating its effectiveness on real world programs, whereas PSE and SSE were able to successfully analyze and classify only one.
2 OVERVIEW
We formalize probabilistic reachability analysis as follows. Given a program $p$, let $i$ denote the input for the program, and $I$ denote the domain of inputs (i.e., $i \in I$). Note that $i$ can be a scalar value, a tuple, or a list of values. Given a target statement $t$ in program $p$, the goal of probabilistic reachability analysis is to determine how likely it is to reach target statement $t$. We do this by determining how likely it would be to pick inputs that result in an execution that reaches $t$. In order to determine how likely it would be to pick such inputs, we determine the probability of picking such inputs if inputs are chosen randomly. We define $P(p, t)$ as:
$$P(p, t) \text{ denotes the probability of reaching statement } t \text{ during the execution of program } p \text{ on input } i \text{ if } i \text{ is selected randomly from the input domain } I.$$
We assume uniform distribution of inputs in our current implementation. However, our technique can be easily extended to support any input distribution by integrating usage profiles [18] used in other probabilistic analysis techniques.
It is well-known that determining reachability of a statement in a program is an uncomputable problem. Hence, determining $P(p, t)$ precisely is also an uncomputable problem. In this paper we present a heuristic approach that approximates $P(p, t)$. We report the reachability probability as a real number between 0 and 1.
**Branch Selectivity.** Our heuristic approximation of $P(p, t)$ relies on a concept we call branch selectivity. Given a branch $b$, branch selectivity $S(b)$ is proportional to the ratio of the number of values that satisfy the condition for branch $b$ to the total number of values in the domain of condition for branch $b$. Formally, given a branch $b$, let $D_b$ denote the Cartesian product of the domains of the variables that appear in $b$, and let $T_b \subseteq D_b$ denote the set of values for which branch $b$ evaluates to true. Let $|D_b|$ and $|T_b|$ denote the number of elements in these sets, respectively. Then, $S(b) = \frac{|T_b|}{|D_b|}$.
So, the selectivity of a branch gets closer to 0 as the number of values that satisfy the branch condition decreases, and it gets closer to 1 as the number of values that satisfy the branch condition increases. If we think of branch as a sieve, when $S(b) = 0$ branch $b$ does not allow any value to pass, and when $S(b) = 1$ branch $b$ allows all values to pass. Note that, if we pick values from the domain $D$ randomly with a uniform distribution, then $|T_b|/|D_b|$ corresponds to the probability of picking a value that satisfies the branch condition. The branch becomes more selective as the probability of picking a value decreases.
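On a small finite domain, the definition of $S(b)$ can be evaluated directly by enumeration, which is a useful mental model even though PReach computes $|T_b|$ with a model counter rather than by enumerating $D_b$. The function name below is our own:

```python
def selectivity(branch, domain):
    """S(b) = |T_b| / |D_b|, computed by exhaustive enumeration over a
    small finite domain (PReach uses model counting instead)."""
    satisfying = sum(1 for value in domain if branch(value))
    return satisfying / len(domain)
```

For instance, over the integers in [-100, 100) the branch `x < 0` has selectivity 0.5, while `x == 0` is highly selective.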
**An Example.** Consider the integer-manipulating program in Figure 2. This program is a modified version of an example from the jpf-regression directory of the SV-COMP benchmark used for software verification and testing competitions [8]. The target statement is the assertion statement in line 19. The $arg$ variable’s value is a randomly generated integer value and it denotes the input to this program. The question we want to answer for this program is: how likely it is to reach the assertion statement at line 19 if we randomly generate values for the $arg$ variable?
The first conditional statement at line 4 ignores all the negative values. At line 15, possible values for $z$ can be any randomly generated positive value, divided by 5, minus 7. Now, the assertion at line 19 is reachable when value of $z$ is equal to 0. The likelihood of the value of $z$ being equal to 0 is low if the input is a random number generated from a uniform distribution. Therefore, the probability of reaching the assert statement in this program is low.
Our analysis uses branch selectivity based on model counting to determine the reachability probability of the assert statement in this program. We inspect each branch condition leading to the assertion to determine how selective the branch is (i.e., what ratio of input values satisfies the branch). If we assume a domain of integer values, then for the conditional statement $arg < 0$, branch selectivity is 1/2, so the set of possible values reaching the assertion is reduced to half of the domain. For the next conditional statement, $z \neq 0$, branch selectivity is close to 1: most values satisfy this constraint and, conversely, only one value of $z$ satisfies its negation. The assertion lies on the else branch of this condition, making it reachable for only one value of $z$.
Using the branch selectivity values computed at these branches, we convert the control flow graph of the program to a discrete time Markov chain as shown in Figure 4c. We use a probabilistic model checker to analyze the Markov chain and obtain a probabilistic reachability value for the target statement.
We approximate $P(p, t)$ using a combination of control flow analysis, dependency analysis, abstract interpretation, model counting and probabilistic model checking. First, we discuss how model-counting constraint solvers and abstract domains can be used to compute branch selectivity. Then, we use control flow and dependency analysis together with branch selectivity to transform the program's control flow graph into a Markov chain. We form queries on this Markov chain, solvable by probabilistic model checking, whose solutions approximate $P(p, t)$. If $P(p, t)$ is less than a given threshold $T_H$, the target statement is predicted as hard to reach. We discuss these steps below.
### 3.1 Branch Selectivity
The enabling technology for computing branch selectivity is model counting. Model counting is the problem of determining the number of satisfying solutions to a set of constraints. A model counting constraint solver is a tool which, given a constraint and a bound, returns the number of satisfying solutions to the constraint within the bound. For a branch condition $b$, recall that $S(b) = \frac{|T_b|}{|D_b|}$, where $D_b$ is the Cartesian product of the domains of the variables that appear in $b$ and $T_b$ is the set of values in $D_b$ for which $b$ evaluates to true. For a given $b$ and $D_b$, a model-counting constraint solver computes $|T_b|$. Then, using $|T_b|$ we compute $S(b)$.
We use the Automata-Based Model Counter (ABC) tool, which is a constraint solver for string and numeric constraints with model counting capabilities [2]. The constraint language of ABC supports linear arithmetic constraints as well as typical string operations. In order to compute $S(b)$, we first extract the branch condition from the program and then generate a formula in the SMT-LIB format that corresponds to the branch condition. Then, we send the formula to ABC as a model counting query.
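As an illustration, a simple relational branch condition such as $arg < 0$ can be rendered as a bounded SMT-LIB query of roughly the following shape. This is a sketch with helper names of our own; ABC's exact input conventions may differ:

```python
def branch_to_smtlib(var, op, const, bits=32):
    """Render a relational branch condition (e.g., arg < 0) as an SMT-LIB
    formula over a bounded integer, in the style of a model counting query."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return "\n".join([
        f"(declare-fun {var} () Int)",
        f"(assert (>= {var} {lo}))",   # bound the domain so counting is finite
        f"(assert (<= {var} {hi}))",
        f"(assert ({op} {var} {const}))",
        "(check-sat)",
    ])

query = branch_to_smtlib("arg", "<", 0)
```

The model counter returns $|T_b|$ for such a query, from which $S(b) = |T_b|/|D_b|$ follows directly.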
### 3.2 Refined Branch Selectivity
Abstract interpretation techniques overapproximate program behaviors by interpreting programs over abstract domains. Our key insight here is that it is possible to use abstract interpretation to refine and restrict the set of values that variables can take at each branch in order to better approximate the branch selectivity. Given a branch $b$, using abstract interpretation we generate a refinement condition $R_b$ to overapproximate the set of values that the variables can take at that branch. $R_b$ is then joined with $T_b$ and $D_b$ to compute refined branch selectivity $RS(b)$. For a branch condition $b$, refined branch selectivity is defined as $RS(b) = \frac{|T_b \land R_b|}{|D_b \land R_b|}$.
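By enumeration over a small domain, refined selectivity can be sketched as follows. This is illustrative only: in PReach, $R_b$ comes from abstract interpretation and the counts come from a model counter.

```python
def refined_selectivity(branch_cond, refinement, domain):
    """RS(b) = |T_b and R_b| / |D_b and R_b|, computed by enumeration.
    `refinement` stands in for the condition R_b derived by abstract
    interpretation. Returns None if R_b excludes the whole domain."""
    refined = [v for v in domain if refinement(v)]
    if not refined:
        return None
    return sum(1 for v in refined if branch_cond(v)) / len(refined)

# Mirrors the Fig. 3a scenario: the analysis learns y < 0, so the branch
# condition y > 0 has refined selectivity 0, i.e., the branch is unreachable.
rs = refined_selectivity(lambda y: y > 0, lambda y: y < 0, range(-100, 100))
```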
To implement refined branch selectivity, we use the state-of-the-art Java numeric analysis tool JANA [41], which supports two abstract domains, intervals [21] and polyhedra [37]; the polyhedra domain leads to more precise results but is less scalable. We experimented with both of these domains to extract the refinement conditions $R_b$ for each branch, using interval analysis and relational (polyhedra-based) analysis. We call these implementations PReach-I and PReach-P, respectively.
Consider the two code snippets from Fig. 3a and 3b. At line 4 in Fig. 3a, $T_b$ and $D_b$ are $y > 0$ and $True$, respectively. The $S(b)$ computed by PReach is 0.25, incorrectly predicting that the assertion is reachable. Applying either interval or relational analysis, $R_b$ is extracted as $y < 0$ (at line 4, the possible reachable values of $x$ are greater than 0, and hence the possible reachable values of $y$ are less than 0 due to the update of variable $y$ at line 3). Using $R_b$, $T_b$ and $D_b$ are updated to $y > 0 \land y < 0$ and $y < 0$, respectively, and the $RS(b)$ computed by PReach is 0, correctly predicting that the assertion is not reachable. Similarly, at line 6 in Fig. 3b, $T_b$ and $D_b$ are $x < z$ and $True$, respectively. $S(b)$ is computed as 0.5, incorrectly predicting that the assertion is reachable. Applying an interval analysis, there is no refinement condition, as the relation between the variables $x$ and $z$ cannot be captured in the interval domain. However, applying relational analysis using the polyhedra domain, $R_b$ is extracted as $x > z$ (the possible reachable values of $z$ equal $x - 7$). $T_b$ and $D_b$ are then updated to $x < z \land x > z$ and $x > z$, respectively, and $RS(b)$ is computed as 0, correctly predicting that the assertion is not reachable.
Note that, for general function invocations, including recursion, it may be expensive to obtain a precise interprocedural analysis, which reduces the effectiveness of the refinement.
### 3.3 Target Statement Subgraph Extraction

The control flow graph of a program is a representation of all paths that may be traversed during execution. Given a program \( p \), a target statement \( t \) in \( p \) and the input domain \( I \), we extract the control flow graph of \( p \), \( G(p) \), and mark the node of the control flow graph containing the target statement \( t \) as the node \( n^t \).
We expedite our analysis by extracting the target statement subgraph, \( G(p, t) \) of \( G(p) \). \( G(p, t) \) contains all the control flow graph information needed to perform our analysis. We define this subgraph using standard concepts from control flow analysis. We define a branch node \( b \) in a control flow graph to be any node with more than one outgoing edge. The corresponding merge node \( m \) of a branch node \( b \) is its immediate post-dominator. The component \( C \) defined by \( b \) is the union of branch node \( b \), its merge node \( m \) and all nodes of the control flow graph reachable from \( b \) without going through \( m \). The maximal component of a node is the largest component containing that node. Any non-maximal component containing this node will be contained in this maximal component.
To extract \( G(p, t) \), we first find the maximal component of \( n^t \). If \( n^t \) is not contained in any component, then \( n^t \) must lie on every path through \( G(p) \). Therefore, it is reached with certainty, \( P(p, t) = 1 \), and our analysis can be terminated. Otherwise, the maximal component of \( n^t \) is the maximal statement subgraph.
\( G(p, t) \) is a subgraph of the maximal statement subgraph. To obtain \( G(p, t) \), we remove any component of the maximal statement subgraph that does not contain the statement node \( n^t \). The branch and merge nodes of these components remain in the subgraph with one outgoing edge from the branch node to the merge node. \( G(p, t) \) results from this procedure.
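A simplified version of this pruning can be sketched as plain graph reachability: keep only the nodes that lie on some path from the entry to the target. The actual extraction works component-wise using post-dominators, as described above; the code below is an illustrative approximation with names of our own choosing:

```python
def prune_to_target(cfg, entry, target):
    """Keep only nodes lying on some entry-to-target path in a CFG given as
    a successor map. A simplified stand-in for target statement subgraph
    extraction: pruned branch arms simply disappear rather than being
    replaced by a branch-to-merge edge."""
    def reach(start, edges):
        seen, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(edges.get(n, []))
        return seen

    fwd = reach(entry, cfg)          # nodes reachable from the entry
    rev = {}                         # reversed edge map
    for n, succs in cfg.items():
        for s in succs:
            rev.setdefault(s, []).append(n)
    bwd = reach(target, rev)         # nodes that can reach the target
    keep = fwd & bwd
    return {n: [s for s in cfg.get(n, []) if s in keep] for n in keep}

# The 'dead' arm of the branch at 'b' never reaches the target and is pruned.
cfg = {"entry": ["b"], "b": ["dead", "t"], "dead": ["m"], "t": ["m"], "m": []}
sub = prune_to_target(cfg, "entry", "t")
```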
Figure 4 shows the process of the target statement subgraph extraction on the running example from Figure 2. Figure 4a gives the control flow graph \( G(p) \) with the statement node \( n^t \) highlighted in red. Figure 4b shows the target statement subgraph \( G(p, t) \) extracted from \( G(p) \). In this example, the branch corresponding to \( y \neq 0 \) is removed from the control flow graph structure. The decision made at this branch does not impact the probability of reaching the target statement node.
Note that the target statement subgraph extraction phase is a heuristic to speed up our analysis. The subsequent stages can be performed on the entire control flow graph but this would result in unnecessary work including extra model counting queries which would slow down the analysis.
### 3.4 Markov Chain Construction
We define a weight for each edge of \( G(p, t) \). These weights transform \( G(p, t) \) into a Discrete Time Markov Chain (DTMC), \( M(p, t) \). A DTMC is a tuple \((S, \delta, P, L)\) where \( S \) is a finite set of states, \( \delta \in S \) is the initial state, \( P: S \times S \rightarrow [0, 1] \) is the transition probability matrix with \( \sum_{s' \in S} P(s, s') = 1 \) for all \( s \in S \), and \( L \) is a labeling function mapping each state to the set of atomic propositions that hold in it. Each element \( P(s, s') \) of the transition probability matrix gives the probability of making a transition from state \( s \) to state \( s' \).
We use dependency analysis in the construction of the Markov Chain as we want to identify the branches dependent on input to set the weights of the edges accordingly.
**Dependency Analysis.** A branch condition is input dependent if the evaluation of the condition depends on the value of the program input. Given a program and its marked inputs, we use static dependency analysis to identify the input dependent branches. Dependency analysis provides an overapproximation of the set of branch conditions whose evaluation depends on the inputs. We use Janalyzer [5], an existing static analysis tool, to perform the dependency analysis. Janalyzer is implemented on top of the WALA [40] program analysis framework.
Then, we construct the Markov chain by assigning weights to each edge of \( G(p, t) \). \( G(p, t) \) is a directed graph: each edge begins at a source node \( s \) and ends at a destination node \( d \). Given an edge \( e : s \rightarrow d \): If \( e \) is the only edge beginning at \( s \), the weight of \( e \) is 1. Else, \( s \) is a branch node by definition. To determine its weight we use a combination of dependency analysis and branch selectivity.
Since \( s \) is a branch node, there is a branch condition \( b \) associated with it:
- **If the branch condition is independent from the program input,** we weigh edge \( e \) as follows. Let \( E \) be the number of edges originating at \( s \) and \( E^f \leq E \) be the number of edges originating at \( s \) that lie on a path to the target statement node \( n^t \). If \( E^f = 0 \), then the weight of \( e \) is \( 1/E \). Otherwise, if \( e \) lies on a path to \( n^t \), the weight of \( e \) is \( 1/E^f \); if \( e \) does not lie on a path to \( n^t \), the weight of \( e \) is 0.
- **If the branch condition is dependent on the program input,** we compute the weight of the edge \( e \) as follows. We use a model-counting constraint solver to determine the branch selectivity of \( b \), \( S(b) \). If \( e \) is the edge corresponding to the if branch, the weight of \( e \) is \( S(b) \); otherwise, it is \( 1 - S(b) \).
At the end of this phase, \( G(p, t) \) has been transformed into Markov chain \( M(p, t) \) where the probability of transitioning from one state to the next is given by the edge weight.
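The two weighting rules can be sketched for a single two-way branch node as follows (an illustrative sketch; the names are ours, not PReach's API):

```python
def edge_weights(succs, on_path_to_target, input_dependent, sel=None):
    """Out-edge weights for one two-way branch node. Input-dependent
    branches use the selectivity S(b); input-independent ones split the
    probability uniformly over the E^f edges lying on a path to the target
    (or over all E edges when E^f = 0)."""
    true_succ, false_succ = succs
    if input_dependent:
        return {true_succ: sel, false_succ: 1 - sel}
    on_path = [s for s in succs if on_path_to_target(s)]
    if not on_path:                          # E^f = 0: uniform over all edges
        return {s: 1 / len(succs) for s in succs}
    weights = {s: 0.0 for s in succs}
    for s in on_path:                        # edges toward the target share 1
        weights[s] = 1 / len(on_path)
    return weights

# Input-dependent branch with S(b) = 0.25: the if edge gets weight 0.25.
w = edge_weights(("then", "else"), lambda s: s == "then",
                 input_dependent=True, sel=0.25)
```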
Figure 4c shows \( M(p, t) \) for the running example. The transition probabilities are given as edge weights. The two branch conditions yield the only edge weights different from 1 in the graph. Both of these branch conditions are input dependent, as determined by the dependency analysis. For each branch condition, the model-counting constraint solver ABC was used to find its branch selectivity. This selectivity was used to compute the weight of the edge corresponding to the if branch, and its complement was used to compute the weight of the edge corresponding to the else branch.
Note that first-order Markov chains do not encode any context sensitivity; thus branch probabilities (e.g., for loop conditions) always result in the same selectivity measure regardless of the call site or iteration number.
### 3.5 PCTL Query Formulation
We automatically synthesize queries over $M(p, t)$, whose solutions yield an approximation of $P(p, t)$. The query we synthesize is:
- What is the probability that the target node $n^t$ is reached at least once?
The answer to this query approximates $P(p, t)$. We use the probabilistic model checker PRISM [22], a tool that analyzes systems exhibiting probabilistic behavior, to answer it. We generate a discrete time Markov chain (DTMC) model in the syntax supported by the PRISM tool and synthesize queries such as: what is the probability of eventually reaching a given state in the Markov chain?
In PRISM, a PCTL formula is interpreted over the DTMC model. Two types of formulas are supported: state formulas and path formulas, where path formulas occur only when a probabilistic measure needs to be included in the specification. For our analysis, the queries we synthesize are path formulas of the form $P_{=?}[\phi]$, the probabilistic analogue of the path quantifiers of CTL. For example, the PCTL formula $P_{=?}[F\,\phi]$ asks for the probability of eventually reaching a state satisfying $\phi$.
The complexity of PCTL query verification for a DTMC is polynomial in the number of states [23]. Since the number of states of the DTMC is linear in the size of the program, the overall complexity of PCTL query verification is polynomial in the program size.
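The quantity computed for this query is the least solution of a linear equation system over the DTMC. A minimal value-iteration sketch (our own illustrative code, not PRISM's implementation) makes the computation concrete:

```python
def reach_probability(P, target, iters=10000, tol=1e-12):
    """Probability of eventually reaching `target` in a DTMC, via value
    iteration on x_s = sum over s' of P(s, s') * x_s', with x_target = 1.
    `P` maps each state to a dict of successor probabilities."""
    x = {s: 0.0 for s in P}
    x[target] = 1.0
    for _ in range(iters):
        delta = 0.0
        for s in P:
            if s == target:
                continue
            new = sum(p * x[t] for t, p in P[s].items())
            delta = max(delta, abs(new - x[s]))
            x[s] = new
        if delta < tol:
            break
    return x

# A chain shaped like the running example: a selectivity-0.5 branch followed
# by a branch that only one of 2^31 values satisfies.
P = {"s0": {"s1": 0.5, "sink": 0.5},
     "s1": {"target": 2 ** -31, "sink": 1 - 2 ** -31},
     "target": {"target": 1.0},
     "sink": {"sink": 1.0}}
prob = reach_probability(P, "target")["s0"]  # 0.5 * 2**-31, well below T_H
```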
**Loop Analysis.** In analyzing programs that contain back edges (either from loops or from recursion), we consider two different queries.
- What is the probability that the target node $n^t$ is reached at least once within a given loop bound?
- What is the probability that the target node $n^t$ is reached at least once?

The first query enables us to model bounded loop executions. To answer it, we fix a loop bound and unroll any loops in the Markov chain. If the target node $n^t$ is duplicated during this unrolling process, then the query becomes

- What is the probability that any copy of the target node $n^t$ is reached at least once?

Once the loops in the Markov chain are unrolled, the first query becomes the initial query on the unrolled Markov chain, except that there might be multiple instances of the target node.
In answering the second query, we leave the Markov chain as is, including any back edges, and generate the DTMC model for PRISM directly. PRISM calculates a steady-state probability for the unbounded loop scenario. Bounding the loop and asking the bounded version of the reachability query under-approximates the unbounded case. As the loop bound increases, the solution for the bounded case approaches that of the unbounded case, and in some cases it reaches the steady-state probability, i.e., a fixpoint. Note that, since PRISM can compute the steady-state probability directly, it is not necessary to compute the fixpoint by increasing loop bounds. This is one of the advantages of our approach over probabilistic symbolic execution.
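As a toy illustration of this under-approximation, suppose each loop iteration independently takes the branch toward the target with probability $q$; then the bounded query with loop bound $k$ yields $1 - (1-q)^k$, which grows toward the unbounded (steady-state) answer as $k$ increases. The sketch below is our own and not part of PReach:

```python
def bounded_reach(q, k):
    """Probability of reaching the target within k unrolled loop iterations
    when each iteration takes the target branch with probability q."""
    return 1 - (1 - q) ** k

# Bounded answers under-approximate the unbounded case (here 1.0) and
# approach it as the loop bound grows.
probs = [bounded_reach(0.1, k) for k in (1, 10, 100)]
```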
4 IMPLEMENTATION
We have implemented our technique in a tool called PReach (Probabilistic Reachability Analyzer), targeting programs written in the Java programming language.
Using the static analysis tool Janalyzer [5], we first extract the control flow graph from the given program. After marking the inputs for which we want to calculate the reachability probability, we use dependency analysis on the marked inputs to identify all input-dependent branches. We then identify the target statement node and perform dominator and post-dominator analysis in order to extract the target statement subgraph.
For calculating the branch selectivity of input-dependent branches, we first translate the branch conditions to SMT-LIB format constraints using Spoon [31] and then use ABC [2] for model counting. To compute refined branch selectivity, we apply two abstract domains, intervals and polyhedra, using JANA [41], a numeric analysis tool for Java. We call these implementations PReach-I and PReach-P, respectively. We define the domain size for integers as signed 31 bit, for strings as length 16 with all printable ASCII characters, and for chars as unsigned 8 bit integers. Once we get the model count from ABC, we calculate the branch selectivity. To compute bounded reachability of a target statement, we look for back edges and, if there is one, we unroll the loop to a certain bound. For unbounded cases, we compute the steady-state probability.
Once we have all the branch selectivity values, we construct the discrete time Markov chain (DTMC). Using the target statement node, we formulate the queries to calculate the reachability probability. We use the probabilistic model checker PRISM [22] for computing the target statement reachability probability. We convert the Markov chain to a DTMC model in PRISM syntax and synthesize queries. Then, we execute PRISM to compute the probability. We use PRISM as it provides features to reduce the reachability checking of a statement in a program with unbounded loops to reachability checking of a state in DTMC. Our current implementation determines reachability probability for each target statement separately. We can extend our approach to handle reachability of multiple statements by synthesizing slightly more complex queries.
For collecting ground truth values for hard to reach statements, we run a generator-based random fuzzer on all the programs. We use the JQF [28] tool, which is a feedback-directed fuzz testing platform for Java. JQF incorporates the coverage-guided fuzz testing technique ZEST [29]; we use the generator-based random fuzzing option provided by ZEST. We set a timeout of one hour, and if the fuzzer fails to generate inputs that reach the target statement, we determine that the target statement is hard to reach.
Note that, the PREACH approach can be extended to support alternative concrete testing techniques and the definition of hard to reach statements can be adapted accordingly. For example, for a random testing tool like JDoop [27] (used in the hybrid testing tool JDoop [16]), the definition of hard to reach can be changed by considering an input distribution that is different from uniform distribution, by using different usage profiles [18].
5 EXPERIMENTAL EVALUATION
To evaluate PREACH, we experimented on benchmark programs from the Competition on Software Verification (SV-COMP) [8] and
the Competition on Software Testing (Test-Comp) [9], which we call the SV-COMP benchmark. So far, Test-Comp has only used C programs from the SV-COMP benchmark. Among the benchmarks used for Java in SV-COMP 2021, we use 4 modules (jayhorn-recursive, jbmc-regression, jpf-regression, algorithms) for evaluation. We mark all the non-deterministic inputs in the SV-COMP benchmarks as inputs for reachability analysis. We use the assertion statements in these programs as target statements. We use two criteria to select the programs from these directories for our experiments. We exclude programs if one of the following two conditions holds:
(1) Target statement reachability does not depend on the inputs: PReach is not applicable for these programs as it assesses reachability probability with respect to inputs.
(2) Verification tasks are specific to floating point arithmetic: The model-counting constraint solver we use does not support constraints generated from such programs.
Based on the above criteria, our final dataset consists of a total of 142 programs. We modify these programs in order to allow us to run both our analysis and the generator based random fuzzer while keeping the program semantics unchanged. These modified programs are available at [35].
We run experiments on a virtual machine equipped with an Intel Core i7-8750H CPU at 2.20 GHz and 16 GB of RAM, running Ubuntu Linux 18.04.3 LTS and the Java 8 Platform Standard Edition, version 1.8.0_232, from OpenJDK 64-Bit Server VM.
5.1 Results for the SV-COMP benchmark
The reachability probability computed by PReach is a value between 0 and 1. In order to assess how good PReach is at identifying hard to reach statements, we classify program statements into two groups: hard to reach and easy to reach. As ground truth, we classify the programs for which the random fuzzer is unable to reach the target statement within the given time bound as hard to reach. We list the number of true positives (TP: ground truth is hard to reach and PReach predicts hard to reach); false positives (FP: ground truth is easy to reach and PReach predicts hard to reach); true negatives (TN: ground truth is easy to reach and PReach predicts easy to reach); and false negatives (FN: ground truth is hard to reach and PReach predicts easy to reach). A hard to reach threshold ($T_H$) value of 0.05 means that statements with reachability probability less than 0.05 are classified as hard to reach. We then evaluate PReach with respect to the ground truth.
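From these counts, the reported metrics follow directly. The snippet below recomputes the overall PReach-P scores from the confusion counts implied by the totals reported in the text (TP = 46, FP = 2, TN = 89, FN = 5):

```python
def scores(tp, fp, tn, fn):
    """Precision, recall and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy

# Overall PReach-P counts at T_H = 0.05, derived from the reported totals.
p, r, a = scores(46, 2, 89, 5)  # approx. 0.958, 0.902, 0.951
```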
Table 1 shows the overall precision, recall and accuracy results of PReach-P. Precision, recall and accuracy for the different implementations of PReach are shown in Table 4. We present results for multiple values of $T_H$ to analyze changes in precision, recall and accuracy across the benchmarks. Reducing $T_H$ from 0.05 to 0.01 does not change the results at all. Increasing $T_H$ to 0.1 leads to interesting changes: some of the true negative cases turn into false positives, reducing precision and accuracy. Increasing $T_H$ to 0.25 changes the results further: the number of false positive cases increases and the number of true negative cases decreases. Increasing the value of $T_H$ changes the prediction of more cases from easy to reach to hard to reach, and hence the overall precision is reduced from 95.8% to 79.3% and the overall accuracy is reduced from 95.1% to 88.0%. The ability to use different threshold values demonstrates the quantitative nature of our analysis, rather than a fixed binary classification.
With $T_H$ set to 0.05 or 0.01, the accuracy of PReach-P is 95.1%. Across all the benchmarks, accuracy is greater than or equal to 87.0%, reflecting the effectiveness of our heuristic. PReach-P fails to identify 5 of the hard to reach program statements, yielding a recall of 90.2%, but it is very precise in identifying hard to reach program statements, with a precision of 95.8%.
Among the 142 cases, only 2 cases are false positives and 5 cases are false negatives. The remaining 135 cases are correctly classified by PReach. The reasons behind the 2 false positive cases and the 5 false negative cases are: 1) most of the input values generated by the fuzzer lead to exceptions, so the fuzzer cannot generate enough valid inputs, and 2) the numeric analysis tool cannot handle complex operations, such as multiplication, division and modulus involving more than one variable, using the abstract domains.
Experimental results show that among the 3 variations of the tool, PReach-P performs the best, with a precision, recall and accuracy of 95.8%, 90.2% and 95.1%, respectively. Without refined branch selectivity, PReach cannot catch two scenarios: 1) two dependent branch conditions cancel each other out, and 2) input values are updated in a way that makes the branch condition always true or always false. Hence, the number of false negatives increases from 5 to 13. PReach-I uses the interval domain for refinement analysis, which is not as precise as the polyhedra domain used by PReach-P. As a result, 2 extra false negatives are introduced by PReach-I.
5.2 Probabilistic Symbolic Execution (PSE)
We provide an experimental comparison of PReach with probabilistic symbolic execution (PSE) [20]. We use SPF [30] as the symbolic execution engine for PSE. PSE is unable to analyze some of the target programs due to unsupported constraints, such as non-linear path constraints; PReach largely avoids this issue since it only considers branch conditions. The rest of the programs are marked as analyzable by PSE, as shown in Table 2. For programs where the number of recursive calls or loop iterations depends on the input, PSE cannot explore all possible paths, since it only searches program behaviors up to a bounded execution depth (search depth) and the number of program paths grows exponentially. Therefore, we set a timeout of 1 hour for PSE and evaluate it for different search depths. Since PSE is unable to cover all program paths, the probabilistic measurement computed by PSE is not exact. Increasing the search depth allows PSE to obtain more accurate results but also increases the number of program paths exponentially. This leads PSE to time out for some programs, as shown in Table 2. This is not the case for the jpf-regression and jbmc-regression benchmarks, as they contain no input dependent recursive calls or loops.
We show the comparison of reachability probabilities computed by PReach and PSE in Table 3. As we do not have any ground truth for the probability measurement, we calculate the probability differences between the two techniques and analyze the differences in cases of agreement and disagreement on the hard to reach statement assessment. PReach and PSE agree if their predictions match, and disagree otherwise. Based on agreement and disagreement, we divide all the cases into 3 groups: 1) agreement, 2) disagreement
Table 1: Effectiveness of PReach-P in terms of precision, recall and accuracy scores for sv-comp benchmarks
<table>
<thead>
<tr>
<th rowspan="2">Benchmarks</th>
<th colspan="7">$T_H$ = 0.25</th>
<th colspan="7">$T_H$ = 0.1</th>
</tr>
<tr>
<th>TP</th>
<th>FP</th>
<th>TN</th>
<th>FN</th>
<th>Precision</th>
<th>Recall</th>
<th>Accuracy</th>
<th>TP</th>
<th>FP</th>
<th>TN</th>
<th>FN</th>
<th>Precision</th>
<th>Recall</th>
<th>Accuracy</th>
</tr>
</thead>
<tbody>
<tr>
<td>jayhorn-recursive</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td>90.0</td>
<td>75.0</td>
<td>82.4</td>
<td>9</td>
<td>0</td>
<td>11</td>
<td>3</td>
<td>100.0</td>
<td>75.0</td>
<td>87.0</td>
</tr>
<tr>
<td>jpf-regression</td>
<td>25</td>
<td>7</td>
<td>43</td>
<td>2</td>
<td>78.1</td>
<td>92.6</td>
<td>88.3</td>
<td>25</td>
<td>2</td>
<td>48</td>
<td>2</td>
<td>92.6</td>
<td>92.6</td>
<td>94.8</td>
</tr>
<tr>
<td>jbmc-regression</td>
<td>8</td>
<td>1</td>
<td>12</td>
<td>0</td>
<td>88.8</td>
<td>100.0</td>
<td>100.0</td>
<td>8</td>
<td>0</td>
<td>13</td>
<td>0</td>
<td>100.0</td>
<td>100.0</td>
<td>100.0</td>
</tr>
<tr>
<td>algorithms</td>
<td>4</td>
<td>3</td>
<td>14</td>
<td>0</td>
<td>57.1</td>
<td>100.0</td>
<td>85.7</td>
<td>4</td>
<td>3</td>
<td>14</td>
<td>4</td>
<td>57.1</td>
<td>100.0</td>
<td>100.0</td>
</tr>
<tr>
<td>Total</td>
<td>46</td>
<td>12</td>
<td>79</td>
<td>5</td>
<td>79.3</td>
<td>90.2</td>
<td>88.0</td>
<td>46</td>
<td>8</td>
<td>86</td>
<td>5</td>
<td>90.2</td>
<td>90.2</td>
<td>91.1</td>
</tr>
</tbody>
</table>
Table 2: Number of programs analyzed by PReach and Probabilistic Symbolic Execution within 1 hour timeout
<table>
<thead>
<tr>
<th rowspan="2">Benchmarks</th>
<th rowspan="2">PReach</th>
<th rowspan="2">Analyzable by PSE</th>
<th colspan="3">PSE with Search Depth</th>
</tr>
<tr>
<th>10</th>
<th>100</th>
<th>1000</th>
</tr>
</thead>
<tbody>
<tr>
<td>jayhorn-recursive</td>
<td>25</td>
<td>21</td>
<td>15</td>
<td>6</td>
<td>3</td>
</tr>
<tr>
<td>jpf-regression</td>
<td>77</td>
<td>69</td>
<td>69</td>
<td>69</td>
<td>69</td>
</tr>
<tr>
<td>jbmc-regression</td>
<td>21</td>
<td>16</td>
<td>16</td>
<td>16</td>
<td>16</td>
</tr>
<tr>
<td>algorithms</td>
<td>21</td>
<td>9</td>
<td>9</td>
<td>9</td>
<td>9</td>
</tr>
<tr>
<td>Total</td>
<td>142</td>
<td>115</td>
<td>111</td>
<td>100</td>
<td>98</td>
</tr>
</tbody>
</table>
Table 3: Probabilistic measurement differences and hard to reach statement prediction disagreements between PReach (PR) and PSE
<table>
<thead>
<tr>
<th>Benchmarks</th>
<th>Search Depth</th>
<th>#Cases Analyzable</th>
<th>Tool</th>
<th>Agreement Diff</th>
<th>Disagreement Diff</th>
<th>All Cases Diff</th>
</tr>
</thead>
<tbody>
<tr>
<td>jayhorn-recursive</td>
<td>10</td>
<td>21</td>
<td>PR-I</td>
<td>0.00</td>
<td>0.00</td>
<td>0.00</td>
</tr>
<tr>
<td></td>
<td>100</td>
<td></td>
<td>PR-P</td>
<td>0.00</td>
<td>0.00</td>
<td>0.00</td>
</tr>
<tr>
<td>jpf-regression</td>
<td>69</td>
<td></td>
<td>PR-I</td>
<td>0.04</td>
<td>0.02</td>
<td>0.06</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>PR-P</td>
<td>0.04</td>
<td>0.02</td>
<td>0.06</td>
</tr>
<tr>
<td>jbmc-regression</td>
<td>16</td>
<td>9</td>
<td>PR-I</td>
<td>0.01</td>
<td>0.00</td>
<td>0.01</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>PR-P</td>
<td>0.01</td>
<td>0.00</td>
<td>0.01</td>
</tr>
<tr>
<td>algorithms</td>
<td>9</td>
<td></td>
<td>PR-I</td>
<td>0.08</td>
<td>0.00</td>
<td>0.08</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>PR-P</td>
<td>0.08</td>
<td>0.00</td>
<td>0.08</td>
</tr>
</tbody>
</table>
and PSE is correct, and 3) disagreement and PReach is correct. The average difference in probability is low for the cases of agreement. The difference is even lower for the jpf-regression and jbmc-regression benchmarks, as PSE achieves very high precision and accuracy there (see Table 5) and PReach agrees with its predictions. For the cases of disagreement, the difference is very high for most of the cases in which PSE predicts correctly but PReach does not. One of the main reasons is variable updates making some of the program paths infeasible: PSE can catch the infeasible paths, whereas PReach gives an approximate result for these cases using branch selectivity. Both PReach-I and PReach-P can address this issue. Using refined branch selectivity, the number of agreement cases increases and the average probability difference is reduced for the jpf-regression and jbmc-regression benchmarks. Another reason is PReach predicting a program statement as easy to reach while the ground truth is hard to reach, because the fuzzer cannot reach the target statement due to a recursion stack overflow error. The average difference is also high for the jayhorn-recursive and algorithms benchmarks when PReach predicts correctly but PSE does not, as there is an exponential increase in the number of paths and PSE poorly approximates the probability.
We now compare the two techniques in terms of hard to reach statement prediction accuracy and precision. To compare PReach and PSE, we set the hard to reach threshold to 0.05. Table 4 shows precision, recall and accuracy for PReach and PSE with search depths 10 and 1000. We evaluate all 142 programs analyzable by PReach. The programs for which PSE times out are marked as easy to reach, as our target is to identify the hard to reach program statements. Different search depths do not change the results for the jpf-regression and jbmc-regression benchmarks, as these programs are free of recursive calls and loops that depend on inputs. The precision and accuracy values for PReach are comparable to PSE for these benchmarks. The prediction results improve considerably with PReach-I and PReach-P: for the jpf-regression and jbmc-regression benchmarks, precision, recall and accuracy all increase. For the jbmc-regression benchmarks, both PReach-I and PReach-P perform better than PSE, and for the jpf-regression benchmarks, the overall scores achieved by PReach-P are better than those of PReach-I and very close to the scores achieved by PSE. For the jayhorn-recursive and algorithms benchmarks, PSE cannot achieve results as good as PReach, PReach-I or PReach-P, since these programs involve input dependent recursive calls and loops. For the lower search depth (10), PSE cannot explore all the program paths, and as a result the computed probability is an under-approximation (worse than the heuristics-based approach used in PReach). For the higher search depth (1000), most of the programs time out and hence are marked as easy to reach. As a result, there are no true positive cases, making the precision and recall values 0, as well as no false positive cases, keeping the total precision high (96.9).
For the algorithms benchmark, even with search depth 10, the precision and recall are 0, as PSE cannot support most of the programs (which are therefore marked as easy to reach): the array size is input dependent and marked as symbolic, which SPF cannot analyze. Although PSE performs better than PReach for programs with bounded execution depth due to the absence of loops and recursion (the jpf-regression and jbmc-regression benchmarks), PReach-P is as good as, or in some cases even better than, PSE.
We show precision and accuracy for the 85 programs in these two benchmarks that are analyzable by PSE in Table 5. The scores for PSE are not 100% due to situations like integer arithmetic overflow that are not caught by symbolic execution. The precision (95.7) and accuracy (87.1) for PReach is comparable to PSE and is impressive given that it is a scalable heuristic approach. The precision (96.8) and accuracy (96.5) by PReach-P is very close to the scores achieved by PSE. Moreover, PSE performs very poorly on programs with unbounded execution depth (jayhorn-recurisve and algorithms
Average analysis time for PReach and PSE is presented in Table 6. For the jayhorn-recursive and algorithms benchmarks, the average analysis time increases and the percentage of cases analyzed within the time bound decreases as the search depth is increased. For the jayhorn-recursive benchmark, even for a search depth of 30 the average analysis time increases by an order of magnitude, because the number of recursive function calls is input dependent. The average analysis time shown in the table is less than or equal to 3600 seconds, since we set the timeout to 1 hour (i.e., 3600 seconds is the maximum analysis time); without this timeout, the time for the jayhorn-recursive benchmarks with search depth greater than or equal to 30 would be very high. Average analysis time also increases for the algorithms benchmarks when the search depth is increased, because the number of loop iterations depends on the inputs. These results show that PSE is not scalable for unbounded execution depth whereas PReach is.
PReach-I and PReach-P require more analysis time than PSE for the jpf-regression and jbmc-regression benchmarks. As the programs in these benchmarks are loop and recursion free, PSE runs fast, whereas PReach-I and PReach-P perform abstract interpretation for branch selectivity refinement. However, as the search depth of the programs increases, the branch selectivity refinement time becomes insignificant compared to the exponential increase in path constraint solving time incurred by PSE, as reflected in the jayhorn-recursive and algorithms benchmarks. For these benchmarks, as the search depth increases to 100, the analysis time of PSE is orders of magnitude higher than that of PReach-I or PReach-P. These results clearly indicate that PReach, PReach-I and PReach-P maintain a balanced trade-off between precision and scalability for probabilistic reachability analysis, and among the three implementations, PReach-P performs the best considering its high precision and accuracy.
### 5.3 Statistical Symbolic Execution (SSE)
In this section, we provide an experimental comparison of PReach-P with statistical symbolic execution (SSE). Prior work has demonstrated that SSE is more precise and faster than PSE when large execution bounds are necessary, preventing PSE from terminating [18]. SSE uses SPF [30] as the symbolic execution engine, like PSE. We compare PReach-P and SSE only on the jayhorn-recursive and algorithms benchmarks from SV-COMP, since PSE already achieves very high precision and accuracy for the jpf-regression and jbmc-regression benchmarks, and we have already compared the performance of PReach-P and PSE on those benchmarks.
SSE is unable to analyze 12 out of 44 target programs due to inability to handle non-linear path constraints or symbolic array indexing during symbolic execution. As before, we set a timeout of 1 hour for SSE and evaluate for different search depths. Like PSE, SSE is also unable to explore all program paths within an hour, but it can provide statistical guarantees for the computed probabilities with respect to accuracy ($\epsilon$) and confidence ($\delta$) parameters [18]. SSE has two different sampling approaches: 1) Monte Carlo and 2) Informed sampling. We compare PReach-P to both of these sampling techniques in SSE. In both cases, we set $\epsilon$ to be $10^{-5}$ and target $\delta$ to be 0.99 following the experimental setup in [18]. For Monte Carlo sampling, we set the maximum sample size ($N_1$) as 100,000 and for informed sampling, we set $N_1$ as 100 and maximum number of iterations as 100.
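To illustrate the Monte Carlo side of SSE (a toy sketch, not the SPF-based implementation): reachability of a target statement can be estimated by running the program on uniformly sampled inputs and counting hits. The function name, the toy reachability predicate, and the sample sizes below are assumptions for illustration only.

```python
import random

def reach_probability_mc(reaches_target, input_domain, n_samples, seed=0):
    # reaches_target(x) -> True when executing the program on input x
    # reaches the target statement; the estimate is hits / samples
    rng = random.Random(seed)
    hits = sum(reaches_target(rng.choice(input_domain))
               for _ in range(n_samples))
    return hits / n_samples

# toy program: the target is guarded by `if x == 7`, inputs in [0, 255],
# so the true reachability probability is 1/256
estimate = reach_probability_mc(lambda x: x == 7, range(256), n_samples=50_000)
```

Unlike this plain estimator, SSE additionally reports statistical guarantees on the estimate in terms of the accuracy ($\epsilon$) and confidence ($\delta$) parameters, and its informed sampling variant biases sampling toward unexplored paths.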
Precision, recall and accuracy for SSE are presented in Table 7. SSE has better precision, recall and accuracy than PSE, but not than PReach-P. Recall and accuracy for SSE drop with increasing search depth. For algorithms, precision and recall are 0.0 (marked with a *), as there were no true positive cases among the programs analyzable by SSE. As in the experimental setup for the comparison to PSE, we mark a program statement as easy to reach if the analysis times out.
We do not take the reported statistical confidence into account to determine which program statements should be marked as hard or easy to reach by SSE. One could use a threshold value for the statistical confidence and accept only the predictions achieving a certain confidence; in that case, the precision and accuracy of SSE would drop further. Instead, we present the average confidence achieved by SSE separately in Table 8. The statistical confidence achieved by SSE drops as the search depth for symbolic execution is increased and more programs time out. Even though we set a large maximum number of samples (100,000) for Monte Carlo sampling, SSE cannot achieve high confidence. On the other hand, informed sampling can achieve high confidence with search depths 10 or 100 in some cases. But with an infinite search depth, neither sampling technique can achieve high confidence.
Average analysis time for SSE is presented in Table 8. In general, PReach-P is orders of magnitude faster than SSE. Monte Carlo sampling is consistently slower than PReach-P for all programs. Informed sampling performs much better than Monte Carlo sampling; its analysis time is close to PReach-P for some programs when a short search depth is used. But irrespective of search depth, for many programs informed sampling is also orders of magnitude slower than PReach-P, and hence its average analysis time is significantly higher than PReach-P's.
These results demonstrate that PReach-P is more scalable than SSE and achieves better precision and accuracy, especially for programs containing a large number of paths.
### 5.4 Case Studies
In this section, we evaluate the effectiveness of PReach to detect hard to reach program statements in larger projects. We are particularly interested in program points where inputs need to pass through numerous branches to reach. We selected a set of methods from Apache Commons Lang [1] and DARPA STAC Benchmarks [4] and identified target program statements. We have analyzed 24 program statements in 12 methods from Apache Commons Lang project and 12 program statements from 6 methods across 5 projects from DARPA STAC Benchmarks.
Table 9 shows PReach results for the selected 24 cases. First, we run PSE to compute reachability probability on all these cases.
Table 4: Precision, Recall and Accuracy of PReach (PR) and PSE, computed for 142 programs, program is marked easy to reach if analysis times out
<table>
<thead>
<tr>
<th rowspan="2">Benchmarks</th>
<th colspan="3">Precision</th>
<th colspan="3">Recall</th>
</tr>
<tr>
<th>PR</th>
<th>PR-I</th>
<th>PR-P</th>
<th>PR</th>
<th>PR-I</th>
<th>PR-P</th>
</tr>
</thead>
<tbody>
<tr>
<td>jayhorn-recursive</td>
<td>100.0</td>
<td>100.0</td>
<td>100.0</td>
<td>79.9</td>
<td>90.2</td>
<td>75.0</td>
</tr>
<tr>
<td>jpf-regression</td>
<td>90.5</td>
<td>92.0</td>
<td>92.6</td>
<td>96.2</td>
<td>96.2</td>
<td>70.4</td>
</tr>
<tr>
<td>jbmc-regression</td>
<td>100.0</td>
<td>100.0</td>
<td>100.0</td>
<td>100.0</td>
<td>100.0</td>
<td>75.0</td>
</tr>
<tr>
<td>Total</td>
<td>95.8</td>
<td>95.7</td>
<td>95.8</td>
<td>90.4</td>
<td>93.9</td>
<td>74.5</td>
</tr>
</tbody>
</table>
Table 5: Precision, Recall and Accuracy of PSE and PReach (PR), out of 85 programs computed within 1 hour for the jpf- and jbmc-regression benchmarks
<table>
<thead>
<tr>
<th rowspan="2">Benchmarks</th>
<th colspan="3">Precision</th>
</tr>
<tr>
<th>PR</th>
<th>PR-I</th>
<th>PR-P</th>
</tr>
</thead>
<tbody>
<tr>
<td>jpf-regression</td>
<td>94.7</td>
<td>95.7</td>
<td>96.0</td>
</tr>
<tr>
<td>jbmc-regression</td>
<td>100.0</td>
<td>100.0</td>
<td>100.0</td>
</tr>
<tr>
<td>Total</td>
<td>95.7</td>
<td>96.8</td>
<td>96.8</td>
</tr>
</tbody>
</table>
Table 6: Average Analysis Time for PReach (PR) and PSE, maximum average analysis time is limited to 3600 seconds, cases with timeout are included
<table>
<thead>
<tr>
<th rowspan="2">Benchmarks</th>
<th colspan="2">Average Analysis Time in Seconds (% Cases Analyzed in 1 Hour)</th>
</tr>
<tr>
<th>PR</th>
<th>PR-I</th>
</tr>
</thead>
<tbody>
<tr>
<td>jayhorn-recursive</td>
<td>4.43</td>
<td>4.35</td>
</tr>
<tr>
<td>jpf-regression</td>
<td>0.81</td>
<td>3.11</td>
</tr>
<tr>
<td>jbmc-regression</td>
<td>6.69</td>
<td>4.90</td>
</tr>
<tr>
<td>Total</td>
<td>4.99</td>
<td>4.38</td>
</tr>
</tbody>
</table>
## 6 RELATED WORK
There has been an increasing amount of research on quantitative program analysis techniques based on model counting constraint solvers, and there has been a surge of progress in model counting constraint solvers [2, 11, 13, 14, 24, 25]. Model counting constraint solvers have been used in a variety of quantitative program analysis tasks such as probabilistic analysis [10, 18, 20], reliability analysis [17], estimating performance distribution [15], quantitative information flow [3, 6, 19, 33, 34], and side-channel attack synthesis [7, 32, 36]. The branch selectivity and probabilistic reachability heuristics we introduce in this paper are fundamental quantitative program analysis techniques and rely on these recent developments in model counting constraint solvers.
PReach can predict 19 out of 24 cases correctly, with an accuracy of 79.2%, when setting \( T_H \) to 0.001. We used the same value of \( T_H \) across all domains. Using different values of \( T_H \) for the integer/mixed domain (0.01) and the string domain (0.001) increases the accuracy to 83.33%, supporting the quantitative nature of our analysis. The 5 cases that PReach cannot predict correctly are due to the same reasons as for the SV-COMP benchmarks: the value of the input is updated inside the program, and as a result the following branches no longer depend on the initial input value.
Among the 18 methods we analyze, we find that PSE is not able to handle 9 methods due to either variable type conversion or lack of support for some String library functions. PSE fails on 2 other methods due to its inability to model count non-linear path constraints, and on another 4 methods due to lack of support for translating expressions to string path constraints. PReach does not have any of these issues: the underlying technique is simpler than symbolically executing a program, and it avoids dealing with non-linear path constraints and complex string path constraints since it only needs to consider individual branch conditions. Finally, PSE successfully runs on 3 methods, but for 2 of them it times out, predicting only 1 case correctly as hard to reach. These results demonstrate the limitations and poor scalability of probabilistic symbolic execution on realistic programs. We also cannot analyze these cases using PReach-I and PReach-P, as the programs perform string operations and the abstract interpretation tool [41] we use for computing refined branch selectivity is limited to numeric analysis. Even without refining the branch selectivity, our results for these case studies demonstrate that the base technique (PReach) using branch selectivity is capable of predicting hard to reach program statements efficiently for sizable programs.
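The contrast with symbolic execution can be made concrete with a small sketch of the branch-selectivity idea. Under a uniform input distribution, the selectivity of a branch is the model count of inputs satisfying its condition divided by the domain size, and reachability is estimated from the selectivities of the individual branch conditions along the path. The interval representation, the independence assumption behind the product, and the function names are illustrative simplifications, not PReach's implementation (which also performs dependency analysis and probabilistic model checking).

```python
def selectivity(lo, hi, domain_size):
    # model count of integers x with lo <= x < hi over a uniform
    # domain of `domain_size` values, normalized to a probability
    return max(0, hi - lo) / domain_size

def reach_estimate(path_branches, domain_size=256):
    # multiply the selectivity of each branch condition on the path;
    # a product below a threshold T_H flags the target as hard to reach
    est = 1.0
    for lo, hi in path_branches:
        est *= selectivity(lo, hi, domain_size)
    return est

# path guarded by `x < 10` and then `y >= 250` over 8-bit inputs
p = reach_estimate([(0, 10), (250, 256)])
```

Because only individual branch conditions are counted, no path constraint over the conjunction of conditions is ever solved, which is why non-linear or string-heavy constraints that defeat PSE do not arise here.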
## 7 CONCLUSIONS
We presented a novel heuristic for probabilistic reachability analysis to identify hard to reach program statements that uses dependency analysis, model counting, abstract interpretation, and probabilistic model checking to compute probability of reaching a program statement given random inputs. We experimentally evaluated our approach on a set of benchmark programs and demonstrated that our approach can identify statements that are hard to reach with reasonable precision and accuracy. We provided detailed comparison of our approach against probabilistic symbolic execution and statistical symbolic execution, demonstrating that our approach is more efficient and scalable.
## ACKNOWLEDGEMENT
We would like to thank all the reviewers for their useful technical comments and insightful suggestions towards improving this paper.
Wei, Linfeng; Luo, Weiqi; Weng, Jian; Zhong, Yanjun; Zhang, Xiaqian; Yan, Zheng
Machine Learning-based Malicious Application Detection of Android
Published in: IEEE Access
DOI: 10.1109/ACCESS.2017.2771470
Published: 05/12/2017
ABSTRACT In this paper, we propose a machine learning-based approach to detect malicious applications on Android. Our approach is able to capture instantaneous attacks that could not be effectively detected by past work. Based on the proposed approach, we implemented a malicious app detection tool named Androidetect. First, we analyze the relationships between system functions, sensitive permissions, and sensitive application programming interfaces. Combinations of system functions are used to describe application behaviors and construct eigenvectors. Subsequently, based on the eigenvectors, we compare the naive Bayesian, J48 decision tree, and application function decision algorithms regarding effective detection of malicious Android applications. Androidetect is then applied to test sample programs and real-world applications. The experimental results prove that Androidetect can better detect malicious Android applications by using combinations of system functions, compared with previous work.
INDEX TERMS Malicious applications of Android, machine learning, system function.
A machine learning-based malicious application detection method for Android is presented in this paper. In prior work, the system call sequence has been used to construct a dynamic model, which achieves high classification precision but a high false positive rate. In [8], a malicious software detection method based on system calls is proposed: feature vectors are generated by collecting system invocation information and the corresponding frequency information in the Android environment, and the kNN classification algorithm is used to distinguish normal from malicious behavior. The method can model the dynamic running characteristics of the application, but suffers from a high false positive rate. In [9], a malware detection method based on kernel behavior analysis is proposed. A rule base is created by extracting the names and parameters of malicious system calls, and the application's invocations are matched against the rule base to detect unknown malicious samples. The method is relatively simple, and therefore the test results are imprecise. The shortcoming of these two analytical methods is that they cannot simulate the dynamic operation of the application, or the constructed eigenvectors cannot effectively detect instantaneous attacks. Therefore, in [10], the MNDAM system is proposed, which can detect abnormalities of the Android system status and analyze the existence of malicious applications. In [11], the DroidRanger tool is developed to detect variants of malware. Li et al. [13] proposed a dynamic taint detection method based on control dependency analysis, in which malicious applications can be detected by analyzing the dependence between sensitive operations and polluted data. These methods can detect anomalies in the system, but cannot determine which application has caused the problem [15], [16].
This paper presents an Android malicious application detection method based on machine learning, which identifies instantaneous attacks with low false positive rates. Our method constructs a feature vector based on system call functions and classifies Android applications based on their source, where the feature vector is used as training data for the classifier. Compared with the above methods, our dynamic, fine-grained behavior description solves the problems static detection has with code obfuscation and encryption, and can also identify instantaneous attacks and better describe application behavior in the eigenvector construction stage. In application testing, Android applications are classified according to functional type to increase the detection success rate and reduce false positives by improving the quality of the data.
The structure of this paper is as follows. The Androidetect system is described in Sec. II. Key technologies and algorithms are presented in Sec. III. The experimental design and analysis are shown in Sec. IV. Finally, conclusions are provided in Sec. V.
## II. ANDROIDETECT SYSTEM
This paper presents a malicious application detection method based on machine learning, which is implemented as the Androidetect tool that can automatically detect malicious applications. The system uses process injection technology, Hook technology, and inter-process communication to construct eigenvectors by extracting the characteristics of Android applications. An application function class judgment algorithm is designed to establish the classification of normal and malicious applications. Two machine learning algorithms, naive Bayesian and decision tree, are used to train and test classifiers.
The structure of the Androidetect detection system is shown in Fig. 1. The system consists of a log access module and a log analysis module. The log access module is used to obtain the behavior log corresponding to each sample and transform it into eigenvectors. The log analysis module uses the transformed eigenvectors to train a classifier that detects malicious Android applications.
### A. LOG ACCESS MODULE
1) **Code injection.** We adopt code injection technology and Hook technology to complete the interception of calling system functions during the operation of malicious samples. In particular, a .so file is injected into the system_server process to replace the function ioctl and complete the interception.
2) **Function analysis.** We employ the Binder communication mechanism in Android processes to complete the analysis of system calls. Inter-process communication using the Binder mechanism is the channel through which system functions are called to perform sensitive behaviors. According to the Binder protocol, by analyzing the command code BINDER_WRITE_READ, we can obtain two kinds of parameters of the intercepted system function, the ioctl command code and its data, to realize the analysis of the ioctl function.
3) **Behavior logging.** The log protocol is used to output the analysis information by extracting and combining the underlying information. In the application layer, a specific program obtains permission to open the service, monitor the log information, and record log operations.
4) **Log analysis.** Since the output information is spliced according to customized rules, we can import it using Java string processing methods and analyze the data line by line according to the same customized rules.
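A minimal sketch of the line-by-line parsing step, assuming a hypothetical delimiter-joined record layout (`pid|function|params`); the actual customized splicing rules used by Androidetect are not specified here.

```python
def parse_log_line(line, sep="|"):
    # split one spliced log record back into its fields; the layout
    # pid|function|params is an assumption for illustration
    pid, func, params = line.strip().split(sep, 2)
    return {"pid": int(pid), "function": func, "params": params}

def parse_log(text, sep="|"):
    # analyze the captured log line by line, skipping blank lines
    return [parse_log_line(line, sep) for line in text.splitlines()
            if line.strip()]
```

The paper performs this step with Java string processing; the Python version is only meant to show the record-splitting logic.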
### B. LOG ANALYSIS MODULE
1) **Vector construction.** In the feature description stage, combinations of system functions are used to describe application behavior. Given the relationships among system functions, permissions, sensitive permissions, and sensitive APIs, combinations of sensitive behaviors can also be used to describe application behavior. The combinations of sensitive behaviors are obtained dynamically from the log information to construct eigenvectors over 34 different items.
2) **Classifier training.** We use two simple algorithms, naive Bayesian and decision tree, to train on the eigenvectors of the samples. The combinations of sensitive behaviors in the log information are counted and used to construct the eigenvectors, and these eigenvectors serve as the training input.
3) **Application detection.** We use the trained classifier to detect a large number of applications. Applications are first classified into 13 functional types using the \( k \) nearest neighbor algorithm, and then passed to the trained classifier, which detects and determines security threats.
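The vector-construction step can be sketched as counting occurrences of each sensitive behavior (or combination) in the captured log; the short behavior list below stands in for the 34 items of TABLE 2, and all names are illustrative.

```python
# abbreviated stand-in for the 34 single and combined sensitive
# behaviors listed in TABLE 2
SENSITIVE_BEHAVIORS = [
    "getAccounts", "getLocation", "sendtext",
    "getLocation+Internet", "getDeviceId+Internet",
]

def build_eigenvector(logged_behaviors, behaviors=SENSITIVE_BEHAVIORS):
    # one counter per sensitive behavior; the resulting fixed-length
    # vector is the input representation for the classifier
    counts = dict.fromkeys(behaviors, 0)
    for b in logged_behaviors:
        if b in counts:
            counts[b] += 1
    return [counts[b] for b in behaviors]
```

Behaviors outside the tracked list (such as ordinary file operations) are simply ignored, so every application maps to a vector of the same dimension regardless of log length.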
## III. KEY TECHNOLOGIES AND ALGORITHMS
### A. FEATURE EXTRACTION TECHNOLOGY
The application behavior description method, process injection technology and Hook technology are combined to extract features from different types of Android applications including instantaneous attack behavior.
1) DESCRIPTION OF ANDROID APPLICATION BEHAVIOR
The method chosen to describe Android application behavior affects the accuracy of the behavior characteristics. For example, malicious behavior that sends text messages must obtain the SEND_SMS and RECEIVE_SMS permission rights before calling the functions sendTextMessage() and sendDataMessage() in the API layer. To prevent users from noticing text messages sent in the background, malicious applications receive messages first via a high-priority registered SMS receiver for SMS_RECEIVED and call the abortBroadcast() system function so that the messages stay hidden. The implementation principles of this malicious behavior are shown in Fig. 2.
The schematic diagram of Fig. 2 shows that Android application behavior can be described by permission rights, APIs, and system functions. The research in [14] has shown the correspondence between system functions and permissions, and defined 137 permissions that may be applied for. The work of [17] and [18] further classifies Android application permissions with respect to sensitivity. These sensitive permissions can better describe Android malicious behavior; some of them are listed in TABLE 1.
System functions and permissions have corresponding relations, and sensitive permissions and sensitive APIs also have a mapping relation [19]. The relationship can be understood as follows: the implementation of malicious behavior must first apply for sensitive permissions, and the granted permissions are used to call application layer APIs, which in turn invoke system functions. Therefore, the sensitive behavior of an application can be characterized by its system function calls. As a combination of sensitive behaviors can be more dangerous than a single sensitive behavior, we select a set of single system behaviors and combined system behaviors, which leads to 34 items that need to be recorded, as shown in TABLE 2.
2) INTERCEPTION OF ANDROID APPLICATION BEHAVIOR
The interception of Android application behavior is, in essence, the replacement of system calls in an application. During an application run, any request to call a system function is sent to the system_server system process. For an important sensitive operation API, permission rights are needed to pass through the system core, so it is useful to intercept calls to all sensitive APIs. During the operation of the code, the `ioctl` system function needs to be called, and process injection and Hook technology can be used to complete the behavior interception. At the same time, the Binder communication mechanism in Android processes is used to complete the analysis of the system call function. This is implemented in the log capture module as shown in Fig. 3.
i) Injection of the `.so` library and interception of behavior code. In the system, the log capture module employs process injection technology, Hook technology, and behavior interception code. The process is shown in Fig. 4.
(a1) Call `ptrace()` to track and debug the target process `system_server`.
(a2) Call the `MMAP()` function to open up a large enough memory space in the target process space.
(a3) Copy shellcode into memory space.
(a4) Load the customized `.so` library. Call the Property function to write `ioctl`'s real address and replace `ioctl` with the new function address. Call the `get_module_base` function to load `libbinder` and complete the injection of the `.so` library.
(a5) Analysis of ELF files and interception of behavior code. Open the `/system/lib/libbinder.so` file to get the
---
### TABLE 1. Some classical sensitive permissions.
<table>
<thead>
<tr>
<th>Name</th>
<th>Permission description</th>
<th>Module</th>
<th>Classification</th>
</tr>
</thead>
<tbody>
<tr>
<td>Make a call</td>
<td>Android.permission.CALL_PHONE allows the program to enter a phone number from a non system dialer</td>
<td>Telephone</td>
<td>Security</td>
</tr>
<tr>
<td>Access networks</td>
<td>Android.permission.INTERNET access network connections, which may generate GPRS traffic</td>
<td>Networking</td>
<td>Security</td>
</tr>
<tr>
<td>Send text messages</td>
<td>Android.permission.SEND_SMS sends text messages without user confirmation and consumes costs</td>
<td>Send</td>
<td>Security</td>
</tr>
<tr>
<td>Read schedule</td>
<td>Android.permission.READ_CALENDAR allows the program to read the user’s schedule information</td>
<td>Schedule</td>
<td>Privacy</td>
</tr>
</tbody>
</table>
### TABLE 2. Sensitive Behaviors and their combination.
<table>
<thead>
<tr>
<th>Single system function</th>
<th>Combination of system functions</th>
</tr>
</thead>
<tbody>
<tr>
<td>getAccounts</td>
<td>setAudioSource+Internet</td>
</tr>
<tr>
<td>getAuthenticatorTypes</td>
<td>getContentResolver(telephony)+Internet</td>
</tr>
<tr>
<td>getLocation</td>
<td>getRunningAppProcesses+Internet</td>
</tr>
<tr>
<td>getCompleteVoiceMailNumber</td>
<td>getIccSerialNumber+Internet</td>
</tr>
<tr>
<td>android.bluetooth.Bluetooth.getEnable</td>
<td>getMsisdn+Internet</td>
</tr>
<tr>
<td>android.bluetooth.BluetoothManager.enable</td>
<td>getDeviceSn+Internet</td>
</tr>
<tr>
<td>android.bluetooth.BluetoothManager.disable</td>
<td>getCompleteVoiceMailNumber+Internet</td>
</tr>
<tr>
<td>getDataActivity</td>
<td>getContentResolver(settings)+Internet</td>
</tr>
<tr>
<td>sendmultiparttext</td>
<td>getContentResolver(sms)+Internet</td>
</tr>
<tr>
<td>setAudioSource</td>
<td>getContentResolver(media)+Internet</td>
</tr>
<tr>
<td>isAdminActive</td>
<td>getContentResolver(calendar)+Internet</td>
</tr>
<tr>
<td>Telephony.call</td>
<td>getInstalledPackages+Internet</td>
</tr>
<tr>
<td>sendtext</td>
<td>getAccounts+Internet</td>
</tr>
<tr>
<td>setWifiEnabled</td>
<td>getContentResolver(contact)+Internet</td>
</tr>
<tr>
<td></td>
<td>getContentResolver(browser)+Internet</td>
</tr>
<tr>
<td></td>
<td>getContentResolver(call-log)+Internet</td>
</tr>
<tr>
<td></td>
<td>getLocation+Internet</td>
</tr>
<tr>
<td></td>
<td>getDeviceId+Internet</td>
</tr>
</tbody>
</table>
---
**FIGURE 3.** Schematic diagram of application system function interception.
The detection of Android applications is divided into three stages as follows.
(b1) Preparation. The eigenvectors are constructed by extracting the feature information of the training samples.
(b2) Classifier training. The eigenvectors constructed by the samples are used as input data, which can be trained by the classifier to obtain the classifier.
(b3) Application security detection. The trained classifier is used to classify the application and detect the security threats.
1) TRAINING ALGORITHM OF CLASSIFIER
The classifier training algorithm is used to train on the eigenvectors of the samples to obtain a classifier. In order to validate the efficiency of the application behavior description method, we adopt the naive Bayesian algorithm and the decision tree algorithm to train classifiers, respectively. The algorithm with the best classification performance is selected as the classifier training algorithm of the system.
a: NAIVE BAYESIAN ALGORITHM
Naive Bayesian classification is based on Bayes' principle, which calculates the posterior probability of each category given the observed attributes. The category with the maximum posterior probability is taken as the category of the item to be classified.
Assume a set of items to be classified \(X = \{x_i | i = 1, 2, \ldots, l\}\). Every item \(x_i\) has \(n\) attributes \(A_1, A_2, \ldots, A_n\), expressed as \(x_i = \{a_1, a_2, \ldots, a_n\}\). Let the set of categories be \(Y = \{y_h | h = 1, 2, \ldots, m\}\). The category of each \(x_i\) is then determined by
\[
p(y_h|x_i) = \max\{P(y_1|x_i), \ldots, P(y_m|x_i)\} \quad (1)
\]
According to Bayes’ rule, we get
\[
p(y_h|x_i) = \frac{p(x_i|y_h) \cdot p(y_h)}{p(x_i)} \quad (2)
\]
Considering the independence of the attributes in the item \(x_i\), the conditional probability of each characteristic attribute in the class \(y_h\) is obtained from
\[
p(x_i|y_h) = p(a_1|y_h) \cdot p(a_2|y_h) \cdots p(a_n|y_h) \quad (3)
\]
Considering the high computational performance of the naive Bayesian algorithm, and that deciding Android application security requires detecting a large number of applications, this algorithm is well suited to training the classifier. The steps are as follows.
**Step 1:** By statistical analysis of the training samples, we get the set of category samples \(\{y_1, y_2, \ldots, y_m\}\) and the corresponding prior probability set \(\{p(y_1), p(y_2), \ldots, p(y_m)\}\).
**Step 2:** The eigenvector set \(\{x_1, x_2, \ldots, x_l\}\) is constructed by running the training samples and analyzing their log information, where each eigenvector is \(\{a_1, a_2, \ldots, a_n\}\) with probabilities \(\{p(x_1), p(x_2), \ldots, p(x_l)\}\). We get the conditional probabilities of each feature attribute in every class of \(\{y_1, y_2, \ldots, y_m\}\) as follows.
\[
p(a_1|y_1) \cdot p(a_2|y_1) \cdots p(a_n|y_1) \\
p(a_1|y_2) \cdot p(a_2|y_2) \cdots p(a_n|y_2) \\
\vdots \\
p(a_1|y_m) \cdot p(a_2|y_m) \cdots p(a_n|y_m) \quad (4)
\]
**Step 3:** Combining Eqs. (2) and (3), we get the probability
\[ p(y_h|x) = \max \{p(y_1|x), \ldots, p(y_m|x)\} \quad (5) \]
Some behaviors cannot be performed in an Android application process, which causes \( P(a_j|y_h) = 0 \) and impairs the performance of the classifier. Therefore, the Laplace correction is introduced in Step 2: the count of every characteristic attribute in each class is increased by 1, which improves the classification accuracy.
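The training steps above can be sketched in code. The following is only an illustrative snippet, not the system's implementation: the function names and the tiny two-attribute data set are invented purely to show the mechanics of the prior and conditional estimates and the Laplace correction.

```python
from collections import Counter, defaultdict

def train_nb(samples, labels):
    """Train a naive Bayesian classifier with Laplace (add-one) smoothing.
    samples: list of attribute tuples (a_1, ..., a_n); labels: class of each sample."""
    class_counts = Counter(labels)
    total = len(labels)
    priors = {y: c / total for y, c in class_counts.items()}   # p(y_h), Step 1
    cond = defaultdict(Counter)    # counts per (class, attribute index, value)
    values = defaultdict(set)      # distinct values seen for each attribute
    for x, y in zip(samples, labels):
        for j, a in enumerate(x):
            cond[(y, j)][a] += 1
            values[j].add(a)

    def cond_prob(y, j, a):
        # Laplace correction: add 1 so an unseen behavior never yields p = 0
        return (cond[(y, j)][a] + 1) / (class_counts[y] + len(values[j]))

    def classify(x):
        # Eq. (2) with Eq. (3): argmax over classes of p(y) * prod_j p(a_j | y)
        def score(y):
            p = priors[y]
            for j, a in enumerate(x):
                p *= cond_prob(y, j, a)
            return p
        return max(priors, key=score)

    return classify

# Invented toy data: two binary behavior attributes per application.
classify = train_nb(
    [(1, 1), (1, 0), (0, 0), (0, 1)],
    ["malicious", "malicious", "benign", "benign"],
)
```

Because the Laplace correction adds 1 to every count, even an attribute value never observed in a class contributes a small nonzero factor instead of zeroing out the whole product.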
b: DECISION TREE CLASSIFICATION ALGORITHM
ID3 and C4.5 are widely used decision tree classification algorithms. C4.5 selects attributes by the information gain rate, which overcomes the bias of the ID3 algorithm toward multi-valued attributes and yields higher efficiency. Therefore, C4.5 is used as the attribute selection algorithm of the decision tree classifier.
Given the item set \( X \) and the category set \( Y \), \( X \) is divided into \( m \) subsets \( \{X_h|h = 1, 2, \ldots, m\} \) by category. Thus, the average information of \( X \) is
\[
I(X) = -\sum_{h=1}^{m} P_h \cdot \log_2 P_h \] (6)
where \( P_h = |X_h|/|X| \), \( |X_h| \) and \( |X| \) represent the number of elements in \( X_h \) and \( X \), respectively.
In the set \( \{A_1, A_2, \ldots, A_n\} \), suppose that \( A_j \) has \( q \) attribute values. Based on the attribute \( A_j \), the classification item set \( X \) can be divided into \( q \) subsets \( \{X'_1, X'_2, \ldots, X'_q\} \). The average information quantity of \( X \) is
\[
I_{A_j}(X) = \sum_{t=1}^{q} \frac{|X'_t|}{|X|} I(X'_t) \quad (7)
\]
Dividing \( X \) by the attribute \( A_j \), we get the information gain as follows.
\[
G(A_j) = I(X) - I_{A_j}(X) \] (8)
The information gain rate \( R(A_j) \) is obtained by using C4.5 algorithm
\[
R(A_j) = \frac{G(A_j)}{S(A_j)} \] (9)
where \( S(A_j) = -\sum_{t=1}^{q} \frac{|X'_t|}{|X|} \cdot \log_2 \frac{|X'_t|}{|X|} \). The information gain rate \( R(A_j) \) can be used as the basis for attribute selection. The process of training the classifier using the C4.5 algorithm is as follows.
Step 1: By statistical analysis of the samples in the training stage, the set of category samples \( \{y_1, y_2, \ldots, y_m\} \) is obtained.
Step 2: Run the training samples, and analyze the log information of these samples to construct the eigenvector set \( \{x_1, x_2, \ldots, x_l\} \) which corresponds to eigenvector \( \{a_1, a_2, \ldots, a_n\} \).
Step 3: The eigenvector set \( \{x_1, x_2, \ldots, x_l\} \) is divided into \( \{X_1, X_2, \ldots, X_m\} \) according to \( \{y_1, y_2, \ldots, y_m\} \).
Step 4: Calculate the average amount of information \( I(X) = -\sum_{h=1}^{m} P_h \log_2 P_h \) of \( \{x_1, x_2, \ldots, x_l\} \).
Step 5: Combining Eqs. (7)-(9), we calculate the information gain rate of each attribute \( \{R(A_1), R(A_2), \ldots, R(A_n)\} \) in the set \( A = \{A_1, A_2, \ldots, A_n\} \).
Step 6: The maximum information gain rate \( R_{\text{max}}(A^*_j) \) is chosen from \( \{R(A_1), R(A_2), \ldots, R(A_n)\} \), and \( A^*_j \) is used as the split attribute to construct a node of the decision tree.
Step 7: Remove the attribute \( A^*_j \) from the set \( A \).
Step 8: Judge whether the set \( A \) is \( \emptyset \); if it is \( \emptyset \), the decision tree is output. Otherwise, return to Step 5.
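The attribute-selection core of these steps can be sketched as follows. This is an illustrative snippet, not the system's code: `entropy` and `gain_ratio` correspond to Eqs. (6) and (9), and the toy attributes at the bottom are invented to contrast a perfectly discriminating attribute with an uninformative one.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    """I(X) = -sum_h P_h * log2 P_h, Eq. (6)."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def gain_ratio(values, labels):
    """Information gain rate R(A_j) = G(A_j) / S(A_j), Eqs. (7)-(9)."""
    total = len(labels)
    subsets = defaultdict(list)
    for v, y in zip(values, labels):      # partition X by the value of A_j
        subsets[v].append(y)
    weights = [len(s) / total for s in subsets.values()]
    # I_{A_j}(X): weighted average entropy of the subsets, Eq. (7)
    cond = sum(w * entropy(s) for w, s in zip(weights, subsets.values()))
    gain = entropy(labels) - cond                       # G(A_j), Eq. (8)
    split = -sum(w * math.log2(w) for w in weights)     # S(A_j)
    return gain / split if split else 0.0

labels  = ["malicious", "malicious", "benign", "benign"]
perfect = [1, 1, 0, 0]   # attribute that separates the classes exactly
useless = [0, 1, 0, 1]   # attribute carrying no class information
```

C4.5 would choose the attribute with the highest gain ratio (here `perfect`) as the split node, delete it from the attribute set, and recurse, exactly as Steps 5-8 describe.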
2) THE APPLICATION SECURITY DETECTION ALGORITHM
FUNCTIONAL CLASSIFICATION METHODS
The method used for classification of Android applications has a great impact on the classification effect at the classifier training stage, as well as on the detection result at the security detection stage.
In order to reduce false positive rates, the system adopts the application function classification method to divide Android applications into 13 categories, as shown in TABLE 3.
TABLE 3. Functional categories of Android applications.
<table>
<thead>
<tr>
<th>System security</th>
<th>Social communication</th>
<th>Audio and video</th>
</tr>
</thead>
<tbody>
<tr>
<td>News reading</td>
<td>Life and leisure</td>
<td>Theme wallpaper</td>
</tr>
<tr>
<td>Office business</td>
<td>Photography</td>
<td>Shopping discount</td>
</tr>
<tr>
<td>Map travel</td>
<td>Education and learning</td>
<td>Financial management</td>
</tr>
<tr>
<td>Healthy care</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
In the classifier training stage, we need to classify the adopted samples according to the functional classification method, and then distinguish benign from malicious applications. In the security detection stage, we need to classify applications according to the known categories, and a large number of applications need to be detected. Since the sample class domains intersect and overlap, the system automatically discriminates application function categories based on the \( k \) nearest neighbor algorithm.
The \( k \) nearest neighbor classification algorithm relies on the commonality of adjacent samples: if most of the \( k \) samples nearest to a given sample to be classified belong to a certain category, then the sample also belongs to that category. In the Android permission mechanism, this commonality can be found in the similarity of the permissions applied for by similar applications.
Assume that the set of applications to be classified is \( X = \{x_i|i = 1, 2, \ldots, l\} \). Each application \( x_i \) applies for permissions described by an \( n \)-dimensional vector \( x_i = [x_{i1}, x_{i2}, \ldots, x_{in}] \) (where \( n \) is the total number of permissions in the Android system, and in this test \( n = 137 \)). The \( v \)-th element \( x_{iv} \) of the permission vector indicates whether the application applies for the \( v \)-th permission: \( x_{iv} = 1 \) if the permission is applied for, and \( x_{iv} = 0 \) otherwise.
In order to obtain a vector suitable for estimating similarity, the dimension values of the vector are adjusted as follows.
$$x'_v = \frac{p_v}{n} \log_2 \frac{D}{d_v + 1} \quad (10)$$
where $p_v$ indicates whether the application applies for the $v$-th permission, $n$ stands for the total number of Android system permissions, $D$ represents the number of applications in the sample to be tested, and $d_v$ is the number of applications that apply for the $v$-th permission.
Furthermore, the cosine of the angle between the two weighted vectors denotes the similarity between the two applications as follows.
$$S_{x_i,y_j} = \frac{\sum_{v=1}^{n} x'_v y'_v}{\sqrt{\sum_{v=1}^{n} (x'_v)^2 \cdot \sum_{v=1}^{n} (y'_v)^2}} \quad (11)$$
where $S_{x_i,y_j}$ represents the similarity between $x_i$ and $y_j$, whose weighted permission vectors are $(x'_1, x'_2, \ldots, x'_n)$ and $(y'_1, y'_2, \ldots, y'_n)$, respectively. The function classification algorithm based on the $k$ nearest neighbor method is as follows.
Step 1: Analyzing the permissions of the applications to be detected, we get $X = \{x_i|i = 1, 2, \ldots, l\}$ and the set of $n$-dimensional permission vectors $(p_{11}, \ldots, p_{1n}), (p_{21}, \ldots, p_{2n}), \ldots, (p_{l1}, \ldots, p_{ln})$ generated from the corresponding permission applications.
Step 2: Each vector in this set is adjusted by Eq. (10) to obtain the corresponding set of weighted permission vectors used for similarity calculation
$$\{(x'_{11}, \ldots, x'_{1n}), (x'_{21}, \ldots, x'_{2n}), \ldots, (x'_{l1}, \ldots, x'_{ln})\}. \quad (12)$$
Step 3: Set $i = 1$, where $i$ indexes the applications to be classified.
Step 4: We calculate the similarity between the weighted permission vector of $x_i$ and that of every other application according to Eq. (11) to get the similarity set $\{S_{(i,1)}, \ldots, S_{(i,i-1)}, S_{(i,i+1)}, \ldots, S_{(i,l)}\}$.
Step 5: We choose the $k$ applications with the highest similarity to the application $x_i$ to be detected. Suppose these $k$ applications fall into $p$ function types; type $r$ ($r = 1, 2, \ldots, p$) contains $m_r$ applications with $\sum_{r=1}^{p} m_r = k$, and $w$ ($w = 1, 2, \ldots, m_r$) indexes the applications of type $r$, whose similarity to $x_i$ is denoted $S_{(i,r,w)}$.
Step 6: Calculate the average similarity between $x_i$ and each type $r$ among the $k$ applications of Step 5.
$$u_r(x_i) = \frac{1}{m_r}\sum_{w=1}^{m_r} S_{(i,r,w)} \quad (13)$$
Step 7: We choose the type with the highest average similarity $u_r(x_i)$ as the function type of $x_i$.
Step 8: If $i \neq l$ holds, then set $i = i + 1$ and turn to Step 4. Otherwise, the result of the classification is output.
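These steps can be sketched as follows. The snippet is illustrative only: the names `weight`, `cosine`, and `knn_category` and all inputs are invented. `weight` rescales one permission flag as in Eq. (10), `cosine` is the similarity of Eq. (11), and `knn_category` ranks neighbors and picks the function type with the highest average similarity.

```python
import math

def weight(p, D, d, n):
    """Eq. (10): rescale a 0/1 permission flag p. n is the total number of
    permissions, D the number of applications in the sample, and d the number
    of applications that apply for this permission (rarer = more informative)."""
    return (p / n) * math.log2(D / (d + 1))

def cosine(u, v):
    """Eq. (11): cosine similarity of two weighted permission vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_category(target, labelled, k):
    """labelled: list of (weighted_vector, function_type) pairs.
    Rank by similarity to target, keep the k nearest, and return the type
    with the highest average similarity (the vote of Steps 5-7)."""
    nearest = sorted(labelled, key=lambda o: cosine(target, o[0]),
                     reverse=True)[:k]
    by_type = {}
    for vec, cat in nearest:
        by_type.setdefault(cat, []).append(cosine(target, vec))
    return max(by_type, key=lambda c: sum(by_type[c]) / len(by_type[c]))
```

Averaging per type, rather than taking a raw majority vote, means a type represented by a few very similar neighbors can outrank one represented by many weakly similar ones.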
IV. EXPERIMENTAL DESIGN AND ANALYSIS
A. EXPERIMENTAL ENVIRONMENT
In this paper, we use C/C++ and Java to implement the malicious application detection tool Androidetect, where C/C++ is used for the function call interception and Java for the other tasks. Androidetect detects 219 malicious application samples, of which 102 are reading applications and the remaining 117 are of unknown types; all are included in the virus database. The experiment is performed on a Mils mobile phone with baseband version 8660-AAAQBQBYA-g4271bc1, Android version 4.4.4, and kernel version 3.4.0-g1cceb5.
B. EXPERIMENTAL RESULTS AND ANALYSIS
1) ANALYSIS ON THE EFFECT OF BEHAVIOR INTERCEPTION ON MOBILE PERFORMANCE
Behavioral interception may change mobile performance and ultimately affect the application classification and the accuracy of application detection. In this section, we analyze the impact of behavioral interception on mobile performance.
In the experiment, we first install and run malicious applications that can send text messages and obtain private information in the background. The behavior interception is carried out through the inject and librecorder.so base files and the behavior recorder apk, which together complete the process injection, ioctl function interception, parameter analysis, and log recording. The results in Fig. 6 show that the system can successfully intercept the application behavior.
FIGURE 6. Schematic diagram of the application behavioral interception.
In TABLE 4, rows $a$ and $b$ show the CPU usage rate and memory usage before and after the injection of System_server. The change in the CPU usage rate is below 1%, while the memory occupancy does not change noticeably. Row $c$ shows the resource occupation of the recorder process itself, whose CPU and memory usage are also low. Thus, application behavior interception does not affect mobile performance.
2) ALGORITHM ANALYSIS OF CLASSIFIER TRAINING STAGE
TABLE 5 shows the confusion matrix of the classification result distribution: benign applications correctly classified are counted as TN, benign applications wrongly classified as FP, malicious applications wrongly classified as FN, and malicious applications correctly classified as TP.
TABLE 4. Occupation of resources in application behavior interception.
<table>
<thead>
<tr>
<th>Resource usage</th>
<th>PID</th>
<th>The rate of CPU usage</th>
<th>Virtual consumption of memory</th>
<th>Actual usage of physical memory</th>
<th>Process</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>15530</td>
<td>0%</td>
<td>608644K</td>
<td>62448K</td>
<td>System_server</td>
</tr>
<tr>
<td>b</td>
<td>15530</td>
<td>0%</td>
<td>608804K</td>
<td>62588K</td>
<td>System_server</td>
</tr>
<tr>
<td>c</td>
<td>22222</td>
<td>0%</td>
<td>524500K</td>
<td>30876K</td>
<td>Com.zhongyj.behaviorrecorder</td>
</tr>
</tbody>
</table>
TABLE 5. The classification result distribution.
<table>
<thead>
<tr>
<th>Inputs</th>
<th>Benign applications</th>
<th>Malicious applications</th>
</tr>
</thead>
<tbody>
<tr>
<td>Benign applications</td>
<td>TN</td>
<td>FP</td>
</tr>
<tr>
<td>Malicious applications</td>
<td>FN</td>
<td>TP</td>
</tr>
</tbody>
</table>
TABLE 6. The classifier evaluation parameters.
<table>
<thead>
<tr>
<th>Evaluate parameters</th>
<th>Equations</th>
<th>Meanings</th>
</tr>
</thead>
<tbody>
<tr>
<td>TPR</td>
<td>TPR = \frac{TP}{TP + FN}</td>
<td>The proportion of malicious application of samples correctly classified</td>
</tr>
<tr>
<td>FPR</td>
<td>FPR = \frac{FP}{FP + TN}</td>
<td>The proportion of benign application samples of the wrong classification</td>
</tr>
<tr>
<td>ACC</td>
<td>ACC = \frac{TP + TN}{TP + TN + FP + FN}</td>
<td>The proportion of applied samples of the right classification</td>
</tr>
</tbody>
</table>
The classifier evaluation parameters are shown in Table 6.
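As a quick sanity check of the parameters in Table 6, all three can be computed directly from a confusion matrix. The snippet below is illustrative (the function name is ours); fed the J48 confusion matrix, it reproduces the malicious-class TP rate (0.840), FP rate (0.120), and overall accuracy (0.860) reported in Table 7.

```python
def evaluate(tp, tn, fp, fn):
    """TPR, FPR and ACC computed from the confusion matrix of Table 5."""
    tpr = tp / (tp + fn)                       # malicious samples caught
    fpr = fp / (fp + tn)                       # benign samples wrongly flagged
    acc = (tp + tn) / (tp + tn + fp + fn)      # overall correct fraction
    return tpr, fpr, acc

# J48 confusion matrix from Table 7: 84 malicious correctly flagged,
# 16 missed, 12 benign wrongly flagged, 88 benign passed.
tpr, fpr, acc = evaluate(tp=84, tn=88, fp=12, fn=16)
```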
In the experiment, the J48 decision tree classifier and the naive Bayesian classifier are evaluated with 10-fold cross-validation on 200 samples: 100 benign applications and 100 malicious applications. The benign application categories include system security, lifestyle, shopping, and map tourism. The results are shown in Table 7 and Table 8.
Combining Tables 6, 7, and 8, we plot bar graphs of the detection rate, false positive rate, and classification accuracy in Figs. 7-9.
The classification accuracies of the two algorithms reach 86% (J48) and 82.5% (naive Bayesian), respectively, and the J48 decision tree algorithm is superior to the naive Bayesian algorithm in TPR, FPR, and ACC. This shows that describing behavior by system functions can distinguish benign from malicious applications.
3) ALGORITHM ANALYSIS OF IMPLEMENTATION STAGE
In the experiment, we still use 10-fold cross-validation to test the selected 200 news reading applications (100 benign and 100 malicious) and verify the validity of the application function classification algorithm based on the $k$ nearest neighbor algorithm. In addition, we also selected 180 applications of mixed types as the training set (90 benign and 90 malicious) to verify the effect
TABLE 7. The results of J48 decision tree classifier.
<table>
<thead>
<tr>
<th></th>
<th>TP rate</th>
<th>FP rate</th>
<th>Precision</th>
<th>Recall</th>
<th>F-measure</th>
<th>MCC</th>
<th>ROC area</th>
<th>PRC area</th>
<th>Class</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.840</td>
<td>0.120</td>
<td>0.875</td>
<td>0.840</td>
<td>0.857</td>
<td>0.721</td>
<td>0.884</td>
<td>0.836</td>
<td>0.857</td>
<td>malicious</td>
</tr>
<tr>
<td>0.880</td>
<td>0.160</td>
<td>0.846</td>
<td>0.880</td>
<td>0.863</td>
<td>0.721</td>
<td>0.884</td>
<td>0.836</td>
<td>0.847</td>
<td>benign</td>
</tr>
<tr>
<td>Weighted Avg.</td>
<td>0.860</td>
<td>0.140</td>
<td>0.861</td>
<td>0.860</td>
<td>0.860</td>
<td>0.721</td>
<td>0.884</td>
<td>0.847</td>
<td>/</td>
</tr>
</tbody>
</table>
Confusion matrix
<table>
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>→classified as</th>
</tr>
</thead>
<tbody>
<tr>
<td>84</td>
<td>16</td>
<td>a=malicious</td>
</tr>
<tr>
<td>12</td>
<td>88</td>
<td>b=benign</td>
</tr>
</tbody>
</table>
TABLE 8. The results of the naive Bayesian classifier.
<table>
<thead>
<tr>
<th></th>
<th>TP rate</th>
<th>FP rate</th>
<th>Precision</th>
<th>Recall</th>
<th>F-measure</th>
<th>MCC</th>
<th>ROC area</th>
<th>PRC area</th>
<th>Class</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.820</td>
<td>0.170</td>
<td>0.828</td>
<td>0.820</td>
<td>0.824</td>
<td>0.650</td>
<td>0.907</td>
<td>0.908</td>
<td>0.908</td>
<td>malicious</td>
</tr>
<tr>
<td>0.8300</td>
<td>0.180</td>
<td>0.822</td>
<td>0.830</td>
<td>0.826</td>
<td>0.650</td>
<td>0.907</td>
<td>0.915</td>
<td>0.911</td>
<td>benign</td>
</tr>
<tr>
<td>Weighted Avg.</td>
<td>0.825</td>
<td>0.175</td>
<td>0.825</td>
<td>0.825</td>
<td>0.825</td>
<td>0.650</td>
<td>0.907</td>
<td>0.911</td>
<td>/</td>
</tr>
</tbody>
</table>
Confusion matrix
<table>
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>→classified as</th>
</tr>
</thead>
<tbody>
<tr>
<td>82</td>
<td>18</td>
<td>a=malicious</td>
</tr>
<tr>
<td>17</td>
<td>83</td>
<td>b=benign</td>
</tr>
</tbody>
</table>
FIGURE 10. Schematic diagram of classification accuracy of classification algorithm.
The comparison bar graphs of TPR, FPR, and ACC for the two experiments $a$ and $b$ are shown in Figs. 11-13.
The results of TPR, FPR, and ACC in experiment $b$ are better than those in experiment $a$, which proves that it is more effective to further divide applications according to their functional type. At the same time, this classification method can more effectively reduce the probability that benign applications are wrongly judged as malicious.
C. COMPARISON WITH RECENTLY RELATED WORK
In order to further prove the effectiveness of the proposed method, it is compared with some typical detection tools on the same test samples.
Comparisons with recently related work are shown in TABLE 9. The Andromaly system [20] uses a dynamic method to extract API features and compares BN, J48, K-means, and other algorithms to detect malicious applications; its disadvantages are the small number of samples and the lack of validation with real malicious applications. The PUMA system [6] adopts application permissions as features and uses the random forest algorithm to detect malicious applications, but the results show a high false positive rate. Peiravian and Zhu [21] proposed combining permissions and Android API calls as features to improve the detection rate and classification accuracy. Amos et al. [22] chose the system state as the feature and used real malware samples for verification.
To sum up, these systems use a variety of features to better reflect the behavioral characteristics of malicious Android applications, but they cannot determine which application generated a detected abnormal behavior. The Androidetect system accurately describes instantaneous attacks using the unified relationship among system functions, sensitive operation APIs, and permissions, and its application function classification algorithm yields better detection results.
Comparing with the related results, we find that the Androidetect system achieves high classification accuracy and a low false positive rate in the detection of malicious Android applications: it obtains better results in FPR and ACC, and a slightly lower TPR, as shown in TABLE 10.
V. CONCLUSION
In this paper, the dynamic analysis technique is used to extract system function features and construct the eigenvectors. The classification model is established with the naive Bayesian algorithm, the J48 decision tree, and the Android application function type decision algorithm to realize the detection system Androidetect. The advantages of the system are as follows. First, the description method based on system functions can identify instantaneous attacks. Second, the detection method based on the Android application function type algorithm can determine the source of a detected abnormal behavior. Finally, compared with related work, the Androidetect system performs better regarding FPR and ACC. In the future, we will improve the TPR of the Androidetect system.
REFERENCES
**LINFENG WEI** received the M.S. degree from the Department of Computer Science, Jinan University, China, in 2010. He is currently a Lecturer with the Information Science and Technology/Network Space Security Institute, Jinan University. He is also a Deputy Director with the Guangdong Province Network Security Detection and Protection Engineering Technology Research and Development Center. His research interests include cloud computing security, block chain security application, and mobile security.
**WEIQI LUO** received the Ph.D. degree in control theory and control engineering from the South China University of Technology, China, in 2000. He is currently a Professor with the Network Space Security Institute, Jinan University, Guangzhou, China. His research interests include network security, and management science and engineering.
**JIAN WENG** received the Ph.D. degree in computer science from Shanghai Jiaotong University, China, in 2007. He is currently a Professor with the College of Information Science and Technology, Jinan University, Guangzhou, China. His research interests include multimedia forensics and security and image/video intelligent analysis.
**YANJUN ZHONG** received the M.S. degree from the Department of Computer Science, Jinan University, China, in 2016. His main research is mobile internet security.
**XIAOQIAN ZHANG** received the M.S. degree from the Department of Mathematics, Jinan University, China, in 2013, where she is currently pursuing the Ph.D. degree with the Department of Computer Science. Her research interests include quantum secure communication, quantum computing, and quantum information processing.
**ZHENG YAN** received the B.Eng. degree in electrical engineering and the M.Eng. degree in computer science and engineering from Xi’an Jiaotong University in 1994 and 1997, respectively, the M.Eng. degree in information security from the National University of Singapore in 2000, and the Licentiate of Science and Doctor of Science in Technology degrees in electrical engineering from the Helsinki University of Technology in 2005 and 2007, respectively. She is currently a Professor with Xidian University, Xi’an, China, and a Visiting Professor with Aalto University, Espoo, Finland. Her research interests include trust, security, and privacy; mobile applications and services; social networking; cloud computing; pervasive computing; and data mining.
---
Deep learning with Othello
Application and analysis of deep neural networks and tree search on Othello
Sun Peigen (3035084548)
Worked with Nian Xiaodong (3035087112) and Xu Chaoyi (3035084328)
Under supervision of Prof. Kwok-Ping Chan
Department of Computer Science
The University of Hong Kong
Submission Date: Apr 16, 2017
Project Website: i.cs.hku.hk/fyp/2016/fyp16017
Contact Information: sunbacon@hku.hk
Abstract
Recently, deep learning has become prevalent in the AI field. However, most game AIs currently still use manually extracted features. What if we apply deep learning to game AI? This report is inspired by AlphaGo and explores the potential of deep neural networks (DNNs) as evaluation functions for the game Othello. In this report, the design, implementation, and findings of our program are discussed in detail. We used the winning rate against other AIs to measure the strength of the evaluation functions. By comparing DNN-based AIs with AIs based on other methods, the applicability of using a DNN for evaluation has been verified. However, the effectiveness and efficiency of using a DNN are not satisfactory due to the size of the problem. This finding may have an enormous impact on game AI design.
Acknowledgement
We would like to express our special thanks of gratitude to our supervisor Prof. Kwok-Ping Chan, as well as our principal Peter Mathieson, who gave us the golden opportunity to work on this wonderful project on the topic of deep learning. The project also led us to do a great deal of research, through which we came to know many new things, and we are grateful to them.
Table of Contents
Abstract
Acknowledgement
Table of Contents
Abbreviations
Figures and Tables
1 Introduction
1.1 Rules of Othello
1.2 Analysis of Othello
1.3 Deliverables
1.4 Scope
1.5 Contribution to this project
2 Previous Works
3 Theoretical Background
3.1 Problem setting
3.2 Evaluation in Game
3.3 Game Tree Searching
4 Methodology
4.1 Development Environment
4.2 Algorithms
4.2.1 Minimax Search and Alpha-beta Pruning
4.2.2 Weighted Square Strategy
4.2.3 Evaluation Network
4.2.4 Monte Carlo Tree Search
4.2.5 Policy Networks
4.2.6 Value Networks
5 Results
5.1 Training Data Set
5.1.1 Overview of Data Set
5.1.2 Symmetry augmentation
5.2 Evaluation Networks
5.2.1 Training
5.2.2 Evaluation on the Playing Strength
5.3 Policy Networks with MCTS
5.3.1 Training
5.3.2 Evaluation on the Playing Strength
5.4 Value Networks with MCTS
5.4.1 Training
5.4.2 Evaluation on the Playing Strength
5.5 Random policy with MCTS
5.6 Discussions
6 Conclusions
References
## Abbreviations
<table>
<thead>
<tr>
<th>Abbreviation</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>AI</td>
<td>Artificial Intelligence</td>
</tr>
<tr>
<td>CNN</td>
<td>Convolutional Neural Network</td>
</tr>
<tr>
<td>CSS</td>
<td>Cascading Style Sheets</td>
</tr>
<tr>
<td>DNN</td>
<td>Deep Neural Network</td>
</tr>
<tr>
<td>GPU</td>
<td>Graphic Processing Unit</td>
</tr>
<tr>
<td>GUI</td>
<td>Graphic User Interface</td>
</tr>
<tr>
<td>HTML</td>
<td>HyperText Markup Language</td>
</tr>
<tr>
<td>JSON</td>
<td>JavaScript Object Notation</td>
</tr>
<tr>
<td>MCTS</td>
<td>Monte Carlo tree search</td>
</tr>
<tr>
<td>PUCT</td>
<td>Polynomial Upper Confidence Trees</td>
</tr>
<tr>
<td>SL</td>
<td>Supervised Learning</td>
</tr>
<tr>
<td>tanh</td>
<td>hyperbolic tangent function</td>
</tr>
<tr>
<td>UCT</td>
<td>Upper Confidence Bounds for Trees</td>
</tr>
</tbody>
</table>
Figures and Tables
Figure 1 | Illustration of Othello rules
Figure 2 | Home page of the project website
Figure 3 | Illustration of how an evaluation function will give score to moves
Figure 4 | Game tree in opening of a round in Othello
Figure 5 | Weighted square strategy in Project Tempo
Figure 6 | Architecture of deep neural networks used in Project Tempo
Figure 7 | Monte Carlo tree search in Project Tempo
Figure 8 | Categorical score distribution
Figure 9 | Symmetry of Othello chess board
Figure 10 | Accuracy of value networks with epochs
Figure 11 | Training result of policy networks
Figure 12 | Training result of value networks
Table 1 | Result of battles \( v_z \) against random choice
Table 2 | Result of battles \( v_z \) against weighted square strategy
Table 3 | Result of battles \( p_{SL} \) against random choice
Table 4 | Result of battles \( p_{SL} \) against weighted square strategy
Table 5 | Result of battles MCTS (policy) against random choice
Table 6 | Result of battles MCTS (policy) against weighted square strategy
Table 7 | Result of battles MCTS (policy + value) against random choice
Table 8 | Result of battles MCTS (policy + value) against weighted square strategy
Table 9 | Result of battles MCTS (random) against random choice
Table 10 | Result of battles MCTS (random) against weighted square strategy
1 Introduction
In the first half of 2016, AlphaGo became rather well-known due to a victory against Mr. Lee Se-dol. It was the first time that, on a full-sized board, a computer Go program defeated a top professional human Go player. The core techniques AlphaGo used are deep neural networks (DNN) and the Monte Carlo tree search (MCTS) algorithm. [1] As a field developing astonishingly fast in recent years, deep learning benefits from the huge improvement in the computational capability of modern processors and has become one of the most popular research topics in artificial intelligence. Motivated by AlphaGo, the objective of this project, Tempo, is to develop a similar game artificial intelligence (AI) program, applying the same technologies of neural networks and tree search algorithms as AlphaGo, to play another board game, Othello (also known as Reversi).
1.1 Rules of Othello
The basic rule of Othello is that players take turns placing discs to bound the opponent's discs and reverse them to their own color. As shown in Figure 1, after a new disc is placed, the opponent's discs bounded in a straight line by the newly placed disc and another disc of the current player are turned into the current player's color. Each move must flip at least one of the opponent's discs; otherwise, the player is skipped until he can make a move that flips at least one disc. Thus, for each turn, there is always a limited number of valid moves to choose from, usually no more than 10. When the board is filled with discs, or either player has no disc left (all discs were flipped by the opponent), or neither player can make a valid move, the current round ends, and the player with more discs wins the game.

Figure 1 | Illustration of Othello rules. The game starts with an initial board of 4 discs in the middle, as shown in the left-most board. This figure shows a simple two-step opening of one round; the opponent's discs bounded by the current player's own discs and the new disc are flipped.
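The flipping rule above can be sketched in a few lines of Python. This is our own illustration of the rule, not the project's game engine; the board encoding and function names are assumptions made for the example.

```python
# Minimal sketch of the Othello flipping rule: a move is legal iff it flips
# at least one opponent disc along some straight line.
EMPTY, BLACK, WHITE = 0, 1, -1
DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
              (0, 1), (1, -1), (1, 0), (1, 1)]

def flips_for_move(board, row, col, player):
    """Return the opponent discs flipped by placing `player` at (row, col)."""
    if board[row][col] != EMPTY:
        return []
    flipped = []
    for dr, dc in DIRECTIONS:
        line = []
        r, c = row + dr, col + dc
        # Walk along a straight run of opponent discs...
        while 0 <= r < 8 and 0 <= c < 8 and board[r][c] == -player:
            line.append((r, c))
            r, c = r + dr, c + dc
        # ...and keep the run only if it is bounded by one of our own discs.
        if line and 0 <= r < 8 and 0 <= c < 8 and board[r][c] == player:
            flipped.extend(line)
    return flipped

def legal_moves(board, player):
    return [(r, c) for r in range(8) for c in range(8)
            if flips_for_move(board, r, c, player)]
```

From the standard starting position, this yields Black's four familiar opening moves, matching the "usually not more than 10" observation above.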
1.2 Analysis of Othello
The board size of Othello is relatively small, only 8×8, and the number of legal moves at each step is also limited. Thus, both the total number of steps and the number of possible moves at each step are much smaller than those of Go, and this is one of the reasons why we chose Othello: a simpler game is easier for us to handle.
However, Othello is estimated to have up to about $10^{28}$ legal positions and a game-tree searching complexity of approximately $10^{58}$. On the other hand, Othello remains mathematically unsolved\footnote{A mathematically “solved game” is a game whose outcome can be correctly predicted from any position, if both players play perfectly.}. [2] Thus, further study toward stronger Othello AI programs is still meaningful for solving the game.
### 1.3 Deliverables
As the outcome, an online Othello battle AI program with an interactive graphic user interface (GUI) is available at [i.cs.hku.hk/fyp/2016/fyp16017/demo.html](http://i.cs.hku.hk/fyp/2016/fyp16017/demo.html), which can play Othello against human players by calling APIs on a cloud computing backend to get the computer's move.
Different AIs were developed in this project, including weighted square strategy with minimax search, DNN with minimax search and MCTS.
### 1.4 Scope
This project mainly focused on the software implementation of the Othello AI program, including the game tree searching algorithm, preprocessing of game data, structural design of the policy networks and value networks, as well as a discussion of the results. Studies and research in these fields were carried out to build a sufficiently strong AI program for this project. Since this is an individual report of a group project, it will mainly focus on my parts in this project but still include related parts.
### 1.5 Contribution to this project
This section specifies my work within the scope of this project. Since all team members studied the topic and made mutual contributions, it is hard to precisely separate the work in this project, so I will include all individual work and cooperative work I participated in.
In the beginning, I built an Othello game engine and implemented Minimax search with Alpha-beta pruning to generate training samples, perform AI battles, and record each move into a game file in a standard recording format. I also developed a JavaScript version of the Othello game engine to enable visitors of our website to play Othello with our AI. The website is at [i.cs.hku.hk/fyp/2016/fyp16017](http://i.cs.hku.hk/fyp/2016/fyp16017). Figure 2 shows the home page of the website. The UI design was developed by another group member, Nian Xiaodong.

Figure 2 | Home page of the project website. Clicking the “Play” button lets the visitor play with our AI.
To generate the categorical and numerical training data for the neural networks, I wrote several Python scripts to read from existing game books and combine the data into an array. Together with Nian Xiaodong, we cleaned the duplicate data out of our set and performed symmetry augmentation to enhance the robustness of our model. During model construction and tuning, we together tried different input features of the game board and distinct neural network architectures.
In the latter half of the project, we found that the original Othello engine was not fast enough to run large-scale testing. Thus, Nian Xiaodong and I learned the algorithms from an open-source game engine, 'paip-python' [3], and reconstructed our Othello engine. We also refined our weighted square strategy to make it more powerful.
Also, during the development of MCTS, Nian Xiaodong and I collaboratively adapted the implementation in the ‘MuGo’ engine [4] to be compatible with our game engine, and developed a battle bot for different AIs to play against each other.
In the following sections, the report introduces previous work in the field of Othello AI in Section 2, more theoretical background about this game in Section 3, the theories and algorithms applied in the project in Section 4, and the overall results and assessment of the project in Section 5.
2 Previous Works
Even though Othello is unsolved, computer scientists have still devoted themselves to developing stronger Othello programs. *Iago*, developed by Paul S. Rosenbloom in 1981, became the first program to beat the human world champion. But later, in 1986, it was defeated “consistently” by *Bill*, developed by Kai-Fu Lee and Sanjoy Mahajan, which adopted the concept of machine learning (quite shallow, though). [6] *Bill*, of course, was also surpassed within a few years. In 1992, Michael Buro started the Othello program *Logistello*, which used human-defined features to abstract useful information from the game board. [7] In 1997, it turned out that *Logistello* could beat the greatest human player, a remarkable success. Similarly, *Logistello* has been far surpassed by later, stronger programs. Nevertheless, the main ideas behind *Iago*, *Bill* and *Logistello* are worth studying, and all of them have been patient teachers and qualified opponents for our program. In our future research and development, they will still be of significant help.
3 Theoretical Background
The objective of our project is to develop a game AI. In the following parts, the game will first be abstracted into a simple problem. Then comes with other definitions and tools used in this project.
3.1 Problem setting
Based on the rules of Othello, it is a game where both players have perfect information about the whole game and can be defined as an alternating Markov game [6]. Thus, the general problem setting for alternating Markov games is also suitable for Othello. Here, we follow the descriptions in the way that AlphaGo used to abstract Go: there is a state space $S$, an action space $A(s)$, and a state transition function $f(s, a)$. The major differences between Go and Othello are the size of $S$ and $A(s)$: the state space and action space of Othello are far smaller than those of Go.
Based on this setting, if moves are chosen with different probabilities, we can define this prior probability as a policy $p(a|s)$, which is a probability distribution over the legal moves $a \in A(s)$. In particular, we can regard the random strategy as a policy with a uniform distribution over the legal moves.
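The random strategy as a uniform policy can be written down directly. This is an illustrative sketch (the function names are our own, not the project's code):

```python
import random

def uniform_policy(legal_moves):
    """p(a|s) for the random strategy: equal mass on every legal move."""
    n = len(legal_moves)
    return {a: 1.0 / n for a in legal_moves}

def sample_move(policy, rng=random.random):
    """Draw a move by inverse-transform sampling over the distribution."""
    x, acc = rng(), 0.0
    for move, p in policy.items():
        acc += p
        if x <= acc:
            return move
    return move  # guard against floating-point round-off
```

Any other policy is then just a different assignment of probabilities to the same set of legal moves.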
3.2 Evaluation in Game
To gain an advantage over the opponent, a player needs clear knowledge of the game and the capability to evaluate the current state and find the most valuable move. For an AI, this means it needs an evaluation function to help make decisions.
Here, we define the evaluation function as a map from board configurations to values. If we define the function as $v$, the board features as $s$ and the outcome score as $G$, the equation is
$$G = v(s)$$
Combined with the problem setting, we can have that
$$G = v(s') = v(f(s, a))$$
where $s' = f(s, a)$. Thus, finding the best move for board $s$ is equivalent to finding $a^*$ such that $v(f(s, a^*)) = \max_{a \in A(s)} v(f(s, a))$.
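This argmax rule translates directly into code. In the sketch below, `f`, `v`, and `best_move` are illustrative placeholders for the transition function, the evaluation function, and the selection rule, not the project's actual implementation:

```python
def best_move(s, legal_moves, f, v):
    """Pick a* maximizing v(f(s, a)) over the legal moves of state s."""
    return max(legal_moves, key=lambda a: v(f(s, a)))
```

Every AI in this report is, at bottom, a different choice of `v`: a hand-tuned weight matrix, a trained neural network, or a score backed up through a search tree.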
Obviously, the strength of an AI is mostly constrained by the accuracy of its evaluation function. A good evaluation function should never be worse than the random strategy, which can be considered a constant function.
Take an example from Othello: if we have three possible moves on the board, marked as A, B, and C in Figure 3, an ideal evaluation function should mostly give the result:
\[ v(s, A) > v(s, B) > v(s, C) \]
This is because in Othello, the “unchangeable discs” are much more valuable than other discs. The corner, A, is a typical unchangeable disc, as the opponent can never regain it according to the rules. The edge, B, is less likely to be changed, as it can only be sandwiched from two directions, while normal discs may be attacked from up to four directions. Thus, in most situations, the corner is the best choice, followed by the edge and then the middle.

**Figure 3** | Illustration of how an evaluation function will give score to moves. If the evaluation function of Othello agrees with the experience that usually corners are more important than edges and edges are more important than the middle, it will evaluate different moves on board \( s \) ordering in:
\[ v(s, A) > v(s, B) > v(s, C) \]
There are different ways to build strong evaluation functions. One is to design the scoring equation based on the experience of human players. *Iago* and *Bill* mentioned above used this approach and achieved astonishing strength. However, it requires the developer to have deep insight and rich experience with the game. Another way is to let the computer derive the score by learning or simulation. In our project, the second method is adopted: neural networks learn from existing samples, and Monte Carlo Tree Search is used for simulation.
### 3.3 Game Tree Searching
In game theory, a game tree is a directed graph whose nodes are positions in a game and whose edges are moves. As shown in Figure 4, by listing all possible moves and their corresponding results, a thorough analysis can be made to find the best move for the current step. If one could write down the whole game tree, he would be able to find a way to maximize his reward at every step, which is the “winning strategy” for some games. However, because game trees usually grow exponentially, it is hard to exhaust all leaves of a game tree.
Figure 4 | Game tree in opening of a round in Othello. A game tree is a directed graph that represents the game theory logic. Each edge denotes a possible move and each node denotes the possible position corresponding to the move of the edge.
To find the best move within a limited portion of a game tree, different strategies can be used: one is to make the evaluation function as precise as possible; another is to discard nodes that have little value to expand and spare the resources for exploiting the useful nodes. A good AI should combine both strategies to maximize its chance of finding the best move.
4 Methodology
In this section, the development environment is first introduced, then the different algorithms implemented for the game AI are discussed, including the weighted square strategy, alpha-beta pruning based on Minimax search, convolutional neural networks (CNN), and Monte Carlo Tree Search (MCTS). The implementation details and other trials are described in the next section.
4.1 Development Environment
In our project, Python is used as the main language to build the game engine that plays Othello, the neural networks, and the search trees. We chose Python for its cost-effectiveness and the wide range of packages supporting deep learning and mathematical computation. Among the deep learning frameworks for Python, Keras was chosen for its rapid development cycle, light weight, and high-level integration, as these features fit the size and duration of our project. Other packages such as scikit-learn were also used to simplify hyper-parameter tuning of the models.
Other languages are also used during the development. For example, HTML, CSS and JavaScript are used in the construction of our website and GUI.
4.2 Algorithms
In this project, different algorithms are used to enhance the performance of the AI. To compute the optimal value function, minimax search can be applied recursively. However, if efficiency is taken into consideration, the performance of a plain tree search drops quickly as the search space grows. In prior work such as Bill, minimax search with alpha-beta pruning was widely used, combined with an elaborately designed value function. In our project, minimax search with alpha-beta pruning will be used as a tester for our other AIs.
Another algorithm used in our project is Monte Carlo Tree Search (MCTS), which can be considered an alternative to minimax search. MCTS has achieved wide success in other board games, including Go.
4.2.1 Minimax Search and Alpha-beta Pruning
Minimax search is a way to select the best move based on a game tree. Its core idea is to predict the counter-strategy of the opponent and avoid the worst situations. Alpha-beta pruning is an effective pruning algorithm built on Minimax search [7], and the two together are widely used in game AI design. By adding alpha-beta pruning to Minimax search, the AI program can prune useless nodes whenever the algorithm finds that the value of the current subtree is already equal to or worse than that of other subtrees, saving the time needed to evaluate those nodes.
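The idea above can be captured in a compact sketch. This is our own illustration over an abstract game tree, not the project's engine; `children` and `evaluate` are placeholder callbacks supplied by the caller:

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax search with alpha-beta pruning over an abstract game tree.

    `children(node)` yields successor positions; `evaluate(node)` scores
    leaves from the maximizing player's point of view.
    """
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    if maximizing:
        value = float('-inf')
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: the opponent will never allow this line
                break
        return value
    value = float('inf')
    for child in succ:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:       # prune symmetrically on the minimizing side
            break
    return value
```

The pruning test `alpha >= beta` is exactly the "equal to or worse than other subtrees" condition described above: once a subtree cannot improve the guaranteed outcome, its remaining children are skipped.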
4.2.2 Weighted Square Strategy
In this project, a simple traditional AI program based on alpha-beta pruning and the weighted square strategy was built to serve as a baseline. The weighted square strategy is one of the widely-used strategies in Othello [8]. This strategy is abstracted from the observation that occupying different squares on the Othello board has distinct influences on the game result. From earlier experience, the outer places, such as the four sides, play much more important roles than those on the inner board. In particular, the corners are the most influential places: once taken, they cannot be re-occupied by the opponent, so they provide unimpeachable stability for the player who occupies them and can help to take the sides and the inner board afterwards. According to this strategy, a scoring matrix storing the different importance of places is needed to evaluate the board. If we denote the scoring matrix as $M$, the evaluation function is
$$v(s, a) = \sum_{i=1}^{n} \sum_{j=1}^{n} M_{ij} \times s'_{ij}$$
where $s$ is the current game board and $s'$ is the board after an action $a$ is taken. Here, a three-way representation is used to encode the game board. $s'_{ij}$ is 1 if the place at $i^{th}$ row $j^{th}$ column is occupied by the current player, and is $-1$ if that place is occupied by the opposite player. If that place is not occupied by either, $s'_{ij}$ is 0. As the game board of Othello has the size of $8\times8$, $n = 8$ in this function.
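The scoring sum above amounts to an element-wise product of the weight matrix and the three-way board encoding. A plain-Python sketch (the matrix values in the usage example are illustrative, not the project's tuned weights):

```python
def weighted_square_score(board, M, n=8):
    """v(s, a) = sum_ij M[i][j] * s'[i][j], where s' uses the three-way
    encoding: +1 for an own disc, -1 for an opponent disc, 0 for empty."""
    return sum(M[i][j] * board[i][j] for i in range(n) for j in range(n))
```

With a matrix that weights corners heavily, an own corner disc dominates the score, which is exactly the behavior the strategy is designed to reward.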



**Figure 5** | Weighted square strategy in Project Tempo. Let $s'$ be the board after black takes move $a$ based on $s$, and $M$ be the weighted squares. An intuitive way to evaluate the board is to superpose $M$ on $s'$, which makes it easy to see the weight of each disc, and sum all discs' weights, positive for the player and negative for the opponent. For this $s'$, $v(s, a) = 40$, which indicates an advantage.
The score matrix of weighted square strategy is usually pre-defined. Thus, it is an evaluation function designed manually and its accuracy depends on the designer’s experience.
4.2.3 Evaluation Network
A CNN (Convolutional Neural Network) is a kind of feed-forward artificial neural network whose artificial neurons respond to patterns in their receptive fields. A CNN model consists of one or more convolutional layers and fully-connected layers, and can also include pooling layers and shared weights. This structure enables a CNN to make use of the 2-dimensional structure of the input data; compared to other deep learning models, a CNN therefore gives better results on 2-dimensional data such as images and game boards. In our project, an evaluation CNN is used as an evaluation function combined with Minimax search to calculate the best move.
CNN evaluation networks were constructed to automatically learn how to evaluate the game board. As shown in Figure 6, the neural network consists of 2 convolutional layers and 2 fully-connected layers with a $tanh$ activation function. It is used to predict the evaluated numerical score of the game board. We did not apply max-pooling layers because the game board is relatively small.

**Figure 6 | Architecture of deep neural networks used in Project Tempo.**
Evaluation network $v_z(s)$ (z for WZebra) was trained by supervised learning. The training data came from the self-playing games of another Othello AI program, WZebra, one of the strongest Othello AIs in the world. This AI provides various levels of search depth and evaluation scores of moves. We generated training games with six search steps, balancing search strength against generating efficiency. Currently, over 4000 self-playing games with evaluation scores for each step have been recorded as the training set.
This value network $v_z(s)$ was used as the evaluation function in the alpha-beta pruning searcher, providing a deep learning AI program. The assessment of this AI is available in the results section.
4.2.4 Monte Carlo Tree Search
MCTS [9] is a heuristic search algorithm, which makes moves based on the results of copious self-play games. AlphaGo implemented an asynchronous policy and value MCTS algorithm, which combined both the policy network and the value network into MCTS. [1] Based on this idea, we constructed a similar MCTS algorithm (as shown in Figure 7) using the policy network $p_{SL}(s)$ and value network $v_p(s)$, whose details will be discussed later.
**Figure 7 | Monte Carlo tree search in Project Tempo.** Each loop in a typical MCTS consists of 4 steps: selection, expansion, simulation, and backpropagation (backup), but the MCTS here is modified slightly.
- **a.** Select the leaf node along the edges with maximum action value $Q + u(P)$ positively correlated to the stored probability $P$ in each edge, which is a kind of variant UCT.
- **b.** Expand the selected leaf node, generate the probabilities for next move on the board fitting by $p_{SL}$ and store the probabilities as the priority $P$ for valid moves.
- **c.** Simulate the game with $p_{SL}$ by self-play to the end of the game, also evaluate the leaf node board by value network $v_p$.
- **d.** Update the action value along the backup path for each node, the action values $Q$ are the mean of simulation winning rate and score given by value networks.
Each node of MCTS has the following fields for $s$ in state space $S$ and $a$ in action space $A(s)$
$$\{P(s,a), N(s,a), W_r(s,a), W_v(s,a), Q(s,a)\}$$
$P(s,a)$ is the prior probability generated by the policy; as stated before, a random policy generates the same prior probability for all children of a node. $N(s,a)$ is the total number of times this node has been visited or simulated to the end. $W_r(s,a)$ is the number of wins when simulations start from this node with the rollout policy. $W_v(s,a)$ is the accumulated value evaluated by the value network. $Q(s,a)$ is the final score of this node, also called the "action value". When the MCTS is asked to give the best move $a^*$ for a board state $s$, it returns
$$a^* = \arg\max_a Q(s,a)$$
Thus, $Q$ can be taken as the evaluated score generated by the MCTS.
**Selection** At the beginning of a simulation, a node should be selected as the starting node. To balance exploration and exploitation, we use the PUCT algorithm [10] to determine which node to select:
$$a^* = \arg\max_a (Q(s,a) + u(s,a))$$
where
\[
u(s, a) = c_{\text{puct}} \, P(s, a) \, \frac{\sqrt{\sum_b N(s, b)}}{1 + N(s, a)}
\]
\( b \) ranges over the possible actions at \( s \), and \( \sum_b N(s, b) = N(s', a') \) where \( f(s', a') = s \) (i.e., \( \sum_b N(s, b) \) equals the \( N \) value of this node’s parent). \( c_{\text{puct}} \) is a constant that adjusts the balance between exploration and exploitation. This rule prefers nodes with high prior probability and low visit counts, and gradually shifts to exploiting nodes with high action values.
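The selection rule can be sketched over a node's edge statistics. This is an illustrative implementation with assumed data layout (each edge stores its $(P, N, Q)$ triple); the constant value is also an assumption for the example:

```python
import math

C_PUCT = 1.0  # exploration constant c_puct (illustrative value)

def puct_select(edges):
    """Pick the action maximizing Q + u over {action: (P, N, Q)} statistics,
    with u = c_puct * P * sqrt(sum_b N) / (1 + N)."""
    total_n = sum(n for (_, n, _) in edges.values())
    def score(stats):
        p, n, q = stats
        return q + C_PUCT * p * math.sqrt(total_n) / (1 + n)
    return max(edges, key=lambda a: score(edges[a]))
```

With equal priors and action values, the rule picks the less-visited edge, showing the exploration bonus at work before visit counts accumulate and $Q$ takes over.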
**Expansion** When a node \((s, a)\) is visited, it will be expanded by all possible moves. All its children \((s', b)\) will be initialized as
\[\{ P(s', b) = p(b|s'), N(s', b) = 0, W_r(s', b) = W_v(s', b) = 0, Q(s', b) = Q(s, a) \}\]
where \( s' = f(s, a) \). \( p(b|s') \) is based on the prior policy that MCTS is used. A good prior policy should be able to inhibit the expansion of useless nodes.
**Evaluation** When a node is to be evaluated, its action score comes from two parts: one directly from the value network \( v_p \), and the other from a quick simulation following the rollout policy \( p_r(a|s) \), where each move is \( a^* = \text{argmax}_a(p_r(a|s)) \). When the game reaches the end, a score \( z_r \) indicating whether the current player of this node wins or loses is returned as the evaluation value from the rollout policy.
\[z_r = \begin{cases}
1 & \text{win} \\
0.5 & \text{draw} \\
0 & \text{lose}
\end{cases}\]
**Backup** When backing up the value from evaluation to the root, we let \( N(s, a) \leftarrow N(s, a) + 1 \), \( W_r(s, a) \leftarrow W_r(s, a) + z_r \), \( W_v(s, a) \leftarrow W_v(s, a) + v_p \) and \( Q(s, a) \leftarrow \frac{(1-\lambda)W_v(s, a) + \lambda W_r(s, a)}{N(s, a)} \).
Thus, the final action score \( Q(s, a) \) of this node is a mixture of the results from the rollout policy and value networks.
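A minimal sketch of the backup pass under these update rules (plain dicts stand in for tree edges; the field names are my own):

```python
def backup(path, z_r, v_p, lam=0.5):
    """Propagate rollout result z_r and value-network output v_p
    from a leaf back to the root, updating each edge's statistics."""
    for node in path:
        node["N"] += 1                 # N(s, a) <- N(s, a) + 1
        node["Wr"] += z_r              # accumulated rollout returns
        node["Wv"] += v_p              # accumulated value-network outputs
        node["Q"] = ((1 - lam) * node["Wv"] + lam * node["Wr"]) / node["N"]
    return path
```

With `lam=0.5`, a leaf visited once with `z_r = 1.0` and `v_p = 0.6` gets `Q = (0.5 * 0.6 + 0.5 * 1.0) / 1 = 0.8`.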
#### 4.2.5 Policy Networks
Policy networks are used to calculate the prior probability of moves on a board \( s \); they serve as the prior policy \( p(a|s) \) in MCTS and can also be used as the rollout policy in simulations.
Policy networks were also constructed as CNNs, but with a different structure. In the policy network \( p_{SL} \), we used six convolutional layers before the output: the first layer adds zero padding to the input and convolves 128 filters of kernel size 5*5 with stride 1, the second to fifth layers add zero padding and convolve 128 filters of kernel size 3*3 with stride 1, and the last layer convolves 1 filter of kernel size 1*1 with stride 1. All layers use the ReLU activation function except the last layer, which uses softmax. The output of the policy network is a 64-length 1-D vector representing the probability of each move on the board.
Since these over 4,000 game transcripts were generated by WZebra with human-style randomness, the training samples have some variety in playing routine, which helps the neural network avoid overfitting.
After processing the initial data, we have 660,000 training samples for our policy network. The loss function is categorical cross-entropy, minimized by stochastic gradient descent with a learning rate of 0.04 and momentum of 0.
#### 4.2.6 Value Networks
Value networks are used to help evaluate the state $s$ together with the simulations; this is the evaluation function mentioned in the theoretical background.

The structure of the value network ($v_p$) is almost the same as the policy network, except that a fully connected layer with 128 hidden units and one output unit using $tanh$ as the activation function are appended after the policy-network layers.
The training data of the value network are generated from 500 self-play games of MCTS using the policy network $p_{SL}$. We stored all nodes searched in MCTS with their $Q(s, a)$ values. After processing the raw data, we have 6,000,000 training samples. The loss function for this model is mean squared error. The optimizer is the same as that of the policy network.
## 5 Results

This section first describes the processing of the training data set, then shows the training tendency of the neural networks, and finally provides the training accuracy and the battle winning rates of Project Tempo as the assessment results.
In evaluating the AIs' strength, we use two baselines: one is the random strategy, and the other is the weighted square strategy with search depth 3 based on Minimax search with Alpha-beta pruning. To standardize the comparison, we let each AI play 100 games against each of these two testers, 50 playing as black and 50 playing as white.
### 5.1 Training Data Set
#### 5.1.1 Overview of Data Set
As mentioned in section 4.2.3 (Page 19), the training data of the evaluation network and the policy network are generated by self-play of WZebra. The games are evaluated using a search depth of 6 with the last 14 perfect moves, and are played with high-level randomness. A search depth of 6 balances time efficiency and evaluation quality. By setting the randomness to medium, the training data can cover more board configurations while still containing reasonable moves. The neural network takes the evaluation score provided by WZebra as the label for supervised training. The training set of Project Tempo has more than 4,000 games in total.
In this project, each board configuration (namely the board after each move) is treated as a data sample for evaluation. Each game generally contains about 60 moves, so each game can be converted into about 60 board-configuration samples. After extension by rotation and flipping, and after deduplication, the total size of the data set is over 660,000.
As training data, each input board configuration is encoded into ten 8*8 matrices, i.e., a 10*8*8 three-dimensional matrix. Each 8*8 matrix represents a certain feature or specific piece of information about the game board. In total, we have 10 layers: 3 layers representing the current discs on the board, 2 constant layers of all ones and all zeros, 1 layer of valid moves for the current player, and 4 layers marking the internal and external discs for the player's own and the opponent's discs. (More details about the input features are in Appendix I.)
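As an illustration only, a NumPy sketch of part of this encoding; the valid-move and internal/external-disc planes are omitted because their exact definitions are not fully specified here:

```python
import numpy as np

def encode_planes(board, player):
    """Encode an 8x8 board (0 empty, 1 black, 2 white) into a subset of
    the 10 feature planes described above: own discs, opponent discs,
    empty squares, plus the two constant planes."""
    board = np.asarray(board)
    opponent = 3 - player
    planes = np.stack([
        (board == player).astype(np.float32),    # own discs
        (board == opponent).astype(np.float32),  # opponent discs
        (board == 0).astype(np.float32),         # empty squares
        np.ones((8, 8), np.float32),             # constant ones
        np.zeros((8, 8), np.float32),            # constant zeros
    ])
    return planes  # shape (5, 8, 8); the full encoding is (10, 8, 8)
```

On the opening position (two discs per side), the own and opponent planes each sum to 2 and the empty plane sums to 60.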
Another source of our training set was the self-play games of MCTS with policy network \( p_{SL} \), which were used to train our value network used in MCTS. The rotation and flipping extension as well as deduplication were also applied to this set, forming a set of more than 6,000,000 training samples.
As for labels, the evaluation network \( v_{e}(s) \) used categorical labels of 17 classes. The original scores provided by WZebra followed a normal distribution. After rescaling, these scores were mapped into 17 classes, denoting values from -8 to 8. The processed data roughly followed a uniform distribution as shown in Figure 8, which decreased the risk of overfitting.

**Figure 8 | Categorical score distribution.** The distribution of the scores given by WZebra roughly follows the uniform distribution.
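The exact rescaling is not specified in the report; as a purely hypothetical illustration, a raw score could be clipped and bucketed into the 17 classes like this:

```python
def score_to_class(score, scale=8.0, max_abs=64.0):
    """Map a raw score to one of 17 classes (-8..8), returned as a class
    index in range(17) where index 8 means value 0. The clip bound
    max_abs=64 is an assumption, not taken from the report."""
    clipped = max(-max_abs, min(max_abs, score))
    value = round(clipped * scale / max_abs)
    return int(value) + 8
```

Any monotone rescaling that roughly flattens the score distribution would serve the same purpose described in the text.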
The labels for the policy network $p_{SL}$ were the moves made by WZebra. As there are 64 squares on the Othello board, labels are represented by 64-dimensional vectors.
The labels for the value network $v_p$ were the $Q(s, a)$ values from MCTS. These float values ranged from 0 to 1.
#### 5.1.2 Symmetry augmentation
In the early stage, every single board configuration was treated as a data sample, and the total size of the training set was over 250,000. However, the neural network trained on this data set was not ideal; the model even predicted every input to be the same class. This problem puzzled our team for quite a long time, and the many changes we tried to the network structure did not help. We then realized that the problem lay in the training data: 1. there were too many identical samples in the data set, especially from the first few moves, and 2. the square board is symmetric (as shown in Figure 9), but symmetric variants of the same position appeared as distinct samples with independent labels. Both the duplicates and the unbalanced data hurt the classification accuracy.
To eliminate these shortcomings, we extended the whole data set by rotating the board by 180° and flipping it over the two diagonals as in Figure 9, making the training set 4 times as large as before. We then removed duplicates in the new data set by merging identical board configurations and taking the average of their scores as the new label. As a result, the size of the final training data set is 660,000, as mentioned earlier in this subsection.
Project Tempo - Deep Learning with Othello
Figure 9 | Symmetry of Othello chess board. a, the basic board $G = \{s[x][y] \mid x, y \in \text{range}(8)\}$. b, the mirror board along the diagonal from top left to bottom right, $G' = \{s[y][x] \mid x, y \in \text{range}(8)\}$. c, reverse board, $G'' = \{s[8-x][8-y] \mid x, y \in \text{range}(8)\}$. d, the mirror board along the diagonal from top right to bottom left $G''' = \{s[8-y][8-x] \mid x, y \in \text{range}(8)\}$.
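A NumPy sketch of the four symmetries of Figure 9 and the label-averaging deduplication (function names are my own):

```python
import numpy as np

def augment(board, label):
    """Generate the four symmetric variants used for augmentation, each
    paired with the same label: G, the main-diagonal mirror G', the
    180-degree rotation G'', and the anti-diagonal mirror G'''."""
    variants = [
        board,                 # G
        board.T,               # G': mirror over the main diagonal
        np.rot90(board, 2),    # G'': 180-degree rotation
        np.rot90(board, 2).T,  # G''': mirror over the anti-diagonal
    ]
    return [(b, label) for b in variants]

def deduplicate(samples):
    """Merge duplicate board configurations, averaging their labels."""
    merged = {}
    for board, label in samples:
        key = board.tobytes()
        if key not in merged:
            merged[key] = [board, 0.0, 0]
        merged[key][1] += label
        merged[key][2] += 1
    return [(b, total / count) for b, total, count in merged.values()]
```

Note that fully symmetric positions (such as the empty board) collapse back to a single sample after deduplication, which is exactly the intended behavior.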
### 5.2 Evaluation Networks
#### 5.2.1 Training
The CNN evaluation network in Project Tempo consists of one input layer, four hidden layers, and one output layer. The hidden layers comprise two convolutional layers and two fully connected layers, together with dropout layers. Because the game board is small, hardly any additional hidden layers are workable, and we do not apply max-pooling layers. The output layer has 17 neurons standing for the 17 classes (-8 to 8) defined by the input data. The training and test accuracy are shown below.
Figure 10 | Accuracy of the evaluation network over epochs. This figure shows how the training and test accuracy changed with the training iterations. Each epoch has 10 iterations. We stopped at 150 iterations, before the model became overfit. The training batch size is 2000.
#### 5.2.2 Evaluation on the Playing Strength
We used the evaluation network with Minimax search and Alpha-beta pruning. To make the duels balanced, we also set its search depth to 3. Each game takes around 4 seconds. The results are shown below.
<table>
<thead>
<tr>
<th></th>
<th>\(v_e\) first</th>
<th>RC first</th>
<th>Sum</th>
<th>Winning rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>\(v_e\) wins</td>
<td>47</td>
<td>42</td>
<td>89</td>
<td>89%</td>
</tr>
<tr>
<td>Random choice wins</td>
<td>1</td>
<td>6</td>
<td>7</td>
<td>7%</td>
</tr>
<tr>
<td>Draw</td>
<td>2</td>
<td>2</td>
<td>4</td>
<td>4%</td>
</tr>
</tbody>
</table>
**Table 1 | Result of battles \(v_e\) against random choice**
<table>
<thead>
<tr>
<th></th>
<th>\(v_e\) first</th>
<th>WS first</th>
<th>Sum</th>
<th>Winning rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>\(v_e\) wins</td>
<td>20</td>
<td>18</td>
<td>38</td>
<td>38%</td>
</tr>
<tr>
<td>Weighted square wins</td>
<td>28</td>
<td>27</td>
<td>55</td>
<td>55%</td>
</tr>
<tr>
<td>Draw</td>
<td>2</td>
<td>5</td>
<td>7</td>
<td>7%</td>
</tr>
</tbody>
</table>
**Table 2 | Result of battles \(v_e\) against weighted square strategy**
From these results, we can conclude that \(v_e\) has certain intelligence and does help in analyzing the game board, but it is still slightly weaker than the carefully designed weighted square strategy. This result pushed us to look for more effective algorithms.
### 5.3 Policy Networks with MCTS
#### 5.3.1 Training
The structure of CNN policy networks was discussed in detail in the methodology part. The training accuracy with iterations is shown in Figure 11.
#### 5.3.2 Evaluation on the Playing Strength
To test the strength of the policy network, we first let it battle directly against our two testers without any search. Each game takes less than 1 second.
<table>
<thead>
<tr>
<th></th>
<th>\(p_{SL}\) first</th>
<th>RC first</th>
<th>Sum</th>
<th>Winning rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>\(p_{SL}\) wins</td>
<td>34</td>
<td>33</td>
<td>67</td>
<td>67%</td>
</tr>
<tr>
<td>Random choice wins</td>
<td>14</td>
<td>12</td>
<td>26</td>
<td>26%</td>
</tr>
<tr>
<td>Draw</td>
<td>2</td>
<td>5</td>
<td>7</td>
<td>7%</td>
</tr>
</tbody>
</table>
Table 3 | Result of battles \( p_{SL} \) against random choice
<table>
<thead>
<tr>
<th></th>
<th>\(p_{SL}\) first</th>
<th>WS first</th>
<th>Sum</th>
<th>Winning rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>\(p_{SL}\) wins</td>
<td>17</td>
<td>8</td>
<td>25</td>
<td>25%</td>
</tr>
<tr>
<td>Weighted square wins</td>
<td>33</td>
<td>42</td>
<td>75</td>
<td>75%</td>
</tr>
<tr>
<td>Draw</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0%</td>
</tr>
</tbody>
</table>
Table 4 | Result of battles \( p_{SL} \) against weighted square strategy
The above results show that the policy network is significantly stronger than the random choice strategy. However, it is still weaker than the weighted square strategy.
Then we combined the policy network with MCTS, using the policy network both as the prior and as the rollout policy. \( c_{\text{puct}} \) is set to 5. The maximum search time of each step is set to 5 seconds, so each game takes more than 120 seconds.
Table 5 | Result of battles MCTS (policy) against random choice.
<table>
<thead>
<tr>
<th></th>
<th>MCTS first</th>
<th>RC first</th>
<th>Sum</th>
<th>Winning rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>MCTS wins</td>
<td>50</td>
<td>49</td>
<td>99</td>
<td>99%</td>
</tr>
<tr>
<td>Random choice wins</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>1%</td>
</tr>
<tr>
<td>Draw</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0%</td>
</tr>
</tbody>
</table>
Table 6 | Result of battles MCTS (policy) against weighted square strategy.
<table>
<thead>
<tr>
<th></th>
<th>MCTS first</th>
<th>WS first</th>
<th>Sum</th>
<th>Winning rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>MCTS wins</td>
<td>24</td>
<td>24</td>
<td>48</td>
<td>48%</td>
</tr>
<tr>
<td>Weighted square wins</td>
<td>19</td>
<td>24</td>
<td>43</td>
<td>43%</td>
</tr>
<tr>
<td>Draw</td>
<td>7</td>
<td>2</td>
<td>9</td>
<td>9%</td>
</tr>
</tbody>
</table>
From the results in Table 6, MCTS and the weighted square strategy are evenly matched. However, the time used for each step is too long; considering efficiency, this algorithm is not as good as the weighted square strategy with Minimax search.
### 5.4 Value Networks with MCTS
#### 5.4.1 Training
The structure of the CNN value network was discussed in detail in the methodology part. The training progress with iterations is shown in Figure 12.
After 20 iterations of training, the MSE dropped to 0.11, where the value range of the targets is [0, 1].

In each epoch, the model is trained for two iterations. Further training has little effect in decreasing the MSE.
#### 5.4.2 Evaluation on the Playing Strength
We used the value network together with MCTS to calculate the \( Q \) value as a mixture of the value-network evaluation and the results of simulations. We set \( c_{puct} \) to 5 and the mixing parameter \( \lambda \) to 0.5. The maximum search time of each step is still 5 seconds.
<table>
<thead>
<tr>
<th></th>
<th>MCTS first</th>
<th>RC first</th>
<th>Sum</th>
<th>Winning rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>MCTS wins</td>
<td>44</td>
<td>46</td>
<td>90</td>
<td>90%</td>
</tr>
<tr>
<td>Random choice wins</td>
<td>6</td>
<td>3</td>
<td>9</td>
<td>9%</td>
</tr>
<tr>
<td>Draw</td>
<td>0</td>
<td>1</td>
<td>1</td>
<td>1%</td>
</tr>
</tbody>
</table>
Table 7 | Result of battles MCTS (policy + value) against random choice.
<table>
<thead>
<tr>
<th></th>
<th>MCTS first</th>
<th>WS first</th>
<th>Sum</th>
<th>Winning rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>MCTS wins</td>
<td>25</td>
<td>20</td>
<td>45</td>
<td>45%</td>
</tr>
<tr>
<td>Weighted square wins</td>
<td>24</td>
<td>25</td>
<td>49</td>
<td>49%</td>
</tr>
<tr>
<td>Draw</td>
<td>1</td>
<td>5</td>
<td>6</td>
<td>6%</td>
</tr>
</tbody>
</table>
Table 8 | Result of battles MCTS (policy + value) against weighted square strategy.
There is no significant improvement in winning rate; it even decreases slightly. We tried to analyze the reasons and suggest that the results from simulations with the rollout policy are more accurate when the game is close to the end, since they are obtained by brute force that can exhaust all possible endings. Based on this assumption, we adjusted the algorithm to update the \( Q \) value as
\[
Q(s, a) \leftarrow \frac{(30 - depth)(1 - \lambda)W_v(s, a) + depth \cdot \lambda W_r(s, a)}{N(s, a)}
\]
where \( depth \) increases from 0 to 30 as the game progresses. However, this adjustment did not influence the winning rates of the AI.
### 5.5 Random policy with MCTS
We also tried the basic MCTS with the random policy as both the prior policy and the rollout policy. As illustrated in the previous section, the random policy is the constant function \( p(a|s) = \frac{1}{k} \), where \( k \) is the number of possible moves in the current state \( s \). We again set the maximum search time to 5 seconds per step and \( c_{puct} \) to 5.
<table>
<thead>
<tr>
<th></th>
<th>MCTS first</th>
<th>RC first</th>
<th>Sum</th>
<th>Winning rate</th>
</tr>
</thead>
<tbody>
<tr>
<td>MCTS wins</td>
<td>50</td>
<td>50</td>
<td>100</td>
<td>100%</td>
</tr>
<tr>
<td>Random choice wins</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0%</td>
</tr>
<tr>
<td>Draw</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0%</td>
</tr>
</tbody>
</table>
Table 9 | Result of battles MCTS (random) against random choice.
Surprisingly, this algorithm overwhelmed the weighted square strategy. It seems that the application of deep learning even held back the performance of MCTS. After analysis, we found several reasons that help explain this phenomenon.
1. From the battle results of the policy network without search against the random choice strategy, we can conclude that the policy network is much stronger than random choice. However, when doing simulations, because the valid moves in Othello are restricted, the policy network may always choose the same move for a given board and follow the same route in the game tree, which biases the simulations.
2. The differences between the prior probabilities of the policy network may hinder exploration of some nodes. Once the simulations give several bad results for a promising node with low prior probability, MCTS may abandon this node and stop exploring it. With the random strategy, all nodes have the same prior probability, so priors will not be a barrier to exploration. Also, due to the randomness, the random strategy rarely repeats the same result, which always happens for the policy network as it is too inflexible.
3. The time to make a random choice is much less than that of a CNN prediction. Thus, within the same amount of time, the random policy can do many more simulations than the policy network. This imbalance in simulation counts may directly cause the huge difference in strength.
### 5.6 Discussions
We used algorithms similar to those implementing AlphaGo; however, the strength of our AI is not as strong as expected. There may be problems in the data set we used for training and in the structures of our models; with larger data sets and more refined models, the AI's strength might improve substantially.
The disappointing performance may also result from the differences between Othello and Go: while the random strategy is extremely bad for Go, as the board is large and a stone can be placed almost anywhere, the rules of Othello guarantee that even a random choice flips at least one of the opponent's discs, and the number of choices per move is much smaller than in Go. Also, the search space of Go is overwhelmingly larger than Othello's, so pruning is especially important for Go but not necessary for Othello.
Based on the analysis above, deep learning is useful for evaluating the game state or guiding simulations from the current state. However, for problems of small size, such a strong tool is too expensive and can even bring drawbacks.
## 6 Conclusions
This report has described the idea and implementation of our project, whose objective is to adopt the technology of deep learning neural networks to play Othello. Our expectation is that with the help of recent technologies, the program developed by us can achieve, if not transcend, the level of traditional algorithms. Regrettably, this aim has not been accomplished yet.
However, it is quite inspiring that our DNN AI using MCTS and our enhanced traditional AI are well matched in strength. It shows that the trained deep neural network is indeed intelligent at the game and, as expected, has real (although not yet tremendous) potential in playing Othello.
More importantly, this project might slightly discourage the hope of using deep learning on small problems: not only does it consume more computation and time, but its accuracy may also not be comparable with manually designed evaluation functions.
This report is not intended to prove that deep learning is unsuitable for Othello. The methods we tried are only a tiny part of the possible ways to apply deep learning to Othello. In the future, we will keep trying other algorithms to explore more effective approaches to using deep learning with Othello.
### APPENDIX I | Input features for neural networks
<table>
<thead>
<tr>
<th>Feature</th>
<th># of planes</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Disc color</td>
<td>3</td>
<td>Player disc / opponent disc / empty</td>
</tr>
<tr>
<td>Ones</td>
<td>1</td>
<td>A constant plane filled with 1</td>
</tr>
<tr>
<td>Zeros</td>
<td>1</td>
<td>A constant plane filled with 0</td>
</tr>
<tr>
<td>Valid moves</td>
<td>1</td>
<td>Valid moves for current player</td>
</tr>
<tr>
<td>Internal discs</td>
<td>2</td>
<td>Internal discs of player and opponent</td>
</tr>
<tr>
<td>External discs</td>
<td>2</td>
<td>External discs of player and opponent</td>
</tr>
</tbody>
</table>
### APPENDIX II | Structure of evaluation network $v_e(s)$ of Project Tempo
<table>
<thead>
<tr>
<th>Layer</th>
<th>Detailed Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Input layer</td>
<td>64 kernels, each size 4*4, with border type = “same”, activation as sigmoid</td>
</tr>
<tr>
<td></td>
<td>128 kernels, each size 3*3, with border type = “same”, activation as sigmoid</td>
</tr>
<tr>
<td></td>
<td>Dropout layer with a dropout rate of 0.3 (optional)</td>
</tr>
<tr>
<td></td>
<td>Fully connected layer with 256 neurons, activation as $tanh$, initialization as uniform</td>
</tr>
<tr>
<td></td>
<td>Fully connected layer with 128 neurons, activation as $tanh$, initialization as uniform</td>
</tr>
<tr>
<td>Output layer</td>
<td>17 neurons, activation as SoftMax, initialization as uniform</td>
</tr>
</tbody>
</table>
### APPENDIX III | Structure of policy network $p_{SL}(a|s)$ of Project Tempo
<table>
<thead>
<tr>
<th>Input layer</th>
</tr>
</thead>
<tbody>
<tr>
<td>↓ 128 kernels, each size 5*5, with border type = “same”, activation as ReLU, stride = 1</td>
</tr>
<tr>
<td>↓ 128 kernels, each size 3*3, with border type = “same”, activation as ReLU, stride = 1</td>
</tr>
<tr>
<td>↓ 128 kernels, each size 3*3, with border type = “same”, activation as ReLU, stride = 1</td>
</tr>
<tr>
<td>↓ 128 kernels, each size 3*3, with border type = “same”, activation as ReLU, stride = 1</td>
</tr>
<tr>
<td>↓ 128 kernels, each size 3*3, with border type = “same”, activation as ReLU, stride = 1</td>
</tr>
<tr>
<td>↓ 1 kernel of size 1*1, with border type = “same”, activation as Softmax, stride = 1</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Output layer</th>
</tr>
</thead>
<tbody>
<tr>
<td>Flatten to 64 linear neurons</td>
</tr>
</tbody>
</table>
### APPENDIX IV | Structure of value network \( v_p(s) \) of Project Tempo
---
**Input layer**
↓
128 kernels, each size 5\*5, with border type = “same”, activation as ReLU, stride = 1
↓
128 kernels, each size 3\*3, with border type = “same”, activation as ReLU, stride = 1
↓
128 kernels, each size 3\*3, with border type = “same”, activation as ReLU, stride = 1
↓
128 kernels, each size 3\*3, with border type = “same”, activation as ReLU, stride = 1
↓
128 kernels, each size 3\*3, with border type = “same”, activation as ReLU, stride = 1
↓
1 kernel of size 1\*1, with border type = “same”, activation as linear, stride = 1
↓
**Fully connected layer with 128 neurons, activation as linear, initialization as uniform**
↓
**Output layer**
**Fully connected layer with 1 neuron, activation as \textit{tanh}, initialization as uniform**
---
*The blue parts in Appendix III and Appendix IV are the same.*
For questions with **circular bubbles**, you may select exactly *one* choice on Gradescope.
- Unselected option
- Only one selected option
For questions with **square checkboxes**, you may select *one* or more choices on Gradescope.
- You can select
- Multiple squares
For questions with a **large box**, you need to write a short answer in the corresponding text box on Gradescope.
You have 170 minutes. There are 10 questions of varying credit (250 points total).
The exam is open note. You can use an unlimited number of handwritten cheat sheets, but you must work alone.
Clarifications will be posted at [https://cs161.org/clarifications](https://cs161.org/clarifications).
**Q1 MANDATORY – Honor Code**
Read the honor code on the Gradescope answer sheet and type your name. *Failure to do so will result in a grade of 0 for this exam.*
Q2 True/false (56 points)
Each true/false is worth 2 points.
Q2.1 True or False: You should always use HMAC instead of any other MAC because HMAC has stronger integrity and authentication guarantees than any other MAC.
○ True ● False
Solution: False. All MACs provide the same integrity and authentication guarantees.
Q2.2 True or False: A MiTM during the Diffie-Hellman Key Exchange can force both parties to derive a shared key (that the MiTM doesn’t necessarily know) that is different than the one they would’ve derived otherwise.
● True ○ False
Solution: True. Mallory can modify $g^a \rightarrow g^{am}$ and $g^b \rightarrow g^{bm}$, causing both parties to derive the key $g^{abm}$.
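The modification described in this solution can be checked numerically; the tiny prime, generator, and exponents below are chosen purely for illustration:

```python
# Toy demo of the Q2.2 attack: Mallory raises both public values to her
# own exponent m, so Alice and Bob agree on g^(abm) -- a key neither
# party would have derived without interference.
p, g = 23, 5           # insecure toy parameters, for illustration only
a, b, m = 6, 15, 9     # Alice's, Bob's, and Mallory's secret exponents

A = pow(g, a, p)       # Alice sends g^a
B = pow(g, b, p)       # Bob sends g^b
A_mod = pow(A, m, p)   # Mallory forwards g^(am) to Bob
B_mod = pow(B, m, p)   # Mallory forwards g^(bm) to Alice

alice_key = pow(B_mod, a, p)   # (g^(bm))^a = g^(abm)
bob_key = pow(A_mod, b, p)     # (g^(am))^b = g^(abm)
assert alice_key == bob_key == pow(g, a * b * m, p)
```

Note that Mallory does not need to know $a$ or $b$: the shared key changes to $g^{abm}$, yet she only chose $m$, which is exactly the distinction drawn in Q2.3 and Q2.4.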
Q2.3 True or False: A MiTM during the Diffie-Hellman Key Exchange can force both parties to unknowingly derive different keys that the MiTM knows.
● True ○ False
Solution: True. This is the standard MiTM attack from lecture.
Q2.4 True or False: A MiTM during the Diffie-Hellman Key Exchange can force both parties to derive a set of pre-determined keys that the MiTM knows.
○ True ● False
Solution: False. The MiTM can force the parties to derive keys that the MiTM knows, but cannot predetermine those keys, since both parties contribute randomness. For example, if Mallory wants Alice to derive the key $y$ and is given $g^a$, she must find $x$ such that $(g^a)^x = y$, which would require breaking discrete log.
Q2.5 True or False: CSRF tokens are an effective defense against CSRF attacks only if clients’ browsers respect the same-origin policy.
● True ○ False
Solution: True. By SOP, websites on another domain are unable to access the content of the website on the target domain. If browsers did not respect SOP, a malicious website could access the CSRF token in another page.
Q2.6 True or False: An XSS vulnerability in a website cannot be exploited to gain control over a user’s session if the session cookie has the HttpOnly flag set.
○ True ● False

Solution: False. While the attacker may not be able to actually learn the value of the cookie, the XSS vulnerability still allows the attacker to violate SOP and make malicious requests under the user’s session.

Q2.7 True or False: https://secure.bank.com is able to set the following cookie using the Set-Cookie header: `session=1234567; Domain=bank.com; HttpOnly`.

Q2.9 True or False: In Bitcoin, once a transaction is successfully added to the blockchain, it can never be lost.

○ True ● False

Solution: False. The blockchain could fork and not include your transaction.

Q2.10 True or False: When you log in to Zoom, you make a POST Request to https://zoom.us/berkeley/signin with an email and password in the form data. The Response contains a session token cookie without the Secure flag set. An on-path attacker could steal your session token by observing only this request.

○ True ● False

Solution: False. The request is an HTTPS request, which indicates that the username and password are encrypted under TLS.

Q2.11 True or False: When you go to https://berkeley.zoom.us/m/stanford, you see an image of Stanford’s lawn. The page source shows that the image is being loaded from http://stanford.zoom.us/i/stanford.png.

○ True ● False

Solution: False. The Same-Origin Policy does not restrict sites from loading third-party images.
Q2.12 You’re using Tor with three intermediate nodes. Assume all nodes are handling a large amount of traffic.
True or False: Even if two of those nodes are compromised, your anonymity is still protected.
[Clarification during exam: This question was thrown out during the exam, and both True and False were accepted as valid answers. See solution for why.]
Solution: The intended answer was true. Since one of the nodes is honest, the malicious nodes won’t be able to link any specific traffic to you.
However, we did not specify if two nodes could collude. If two nodes can collude, they might be able to use timing patterns to link traffic to your identity, depending on how much traffic constitutes "a large amount of traffic."
Because we felt this question was ambiguous, both True and False were accepted as valid answers.
Q2.13 Instead of using Tor, you forward your traffic through three intermediate proxies unencrypted. Using these proxies, you log into https://twitter.com
True or False: Assuming the entry proxy is honest, the middle and exit proxies cannot figure out your identity
Solution: True. This proxy does not see your IP address, and since your communication with Twitter is over TLS, the proxy doesn’t learn your session cookies, content you’re reading/sending, etc.
Q2.14 You decide to use a recursive resolver which uses DNSSEC. Your client uses standard DNS.
True or False: An on-path adversary cannot poison your client’s cached DNS records.
Solution: False. An on-path attacker can still do basic DNS spoofing between the resolver and client.
Q2.15 A recursive resolver supports DNSSEC. The resolver contacts three other nameservers to answer a certain query.
**True or False:** All three nameservers must support DNSSEC in order for DNSSEC to provide any guarantees.
- [ ] True
- [ ] False
**Solution:** True. If any of the nameservers don’t support DNSSEC, then the certificate chain will be broken.
Q2.16 **True or False:** DHCP is secure against an on-path attacker.
- [ ] True
- [ ] False
**Solution:** False. If the on-path attacker sends a fake response before the legitimate response, they can convince the victim to accept an incorrect configuration.
Q2.17 **True or False:** Using HTTPS is a good defense against clickjacking attacks.
- [ ] True
- [ ] False
**Solution:** False. In a clickjacking attack, the victim is already interacting with a malicious website. Even if the victim was contacting the malicious website securely, the attack would still be possible.
Q2.18 **True or False:** Spearphishing is more dangerous than standard phishing because it uses information about the victim.
- [ ] True
- [ ] False
**Solution:** True. The victim is more likely to be fooled by a spearphishing attack because it includes information specific to the victim, such as their name.
Q2.19 **True or False:** If a website only allows HTTPS connections, it is secure from SQL injection attacks.
- [ ] True
- [ ] False
**Solution:** False. HTTPS protects the website against network attackers. The attacker can make a secure connection to the website and inject SQL.
Q2.20 **True or False:** Parameterized SQL stops all SQL injection attacks.
Solution: True. As shown in lecture, parameterized SQL precompiles queries so user input cannot be interpreted as code.
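A runnable illustration using Python’s sqlite3 (not the exam’s setup): the `?` placeholder binds input as data, so a classic payload matches nothing, while naive string concatenation executes it as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pwd TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

evil = "' OR '1'='1"   # classic injection payload

# Parameterized: the payload is compared as a literal string -> no match.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
assert rows == []

# Naive concatenation: the payload is parsed as SQL -> returns every row.
unsafe = "SELECT * FROM users WHERE name = '" + evil + "'"
assert conn.execute(unsafe).fetchall() != []
```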
Q2.21 Consider a website which inserts user input into a database using a SQL query. The information in the database is then used in subsequent internal SQL queries.
True or False: If the SQL query that accepts user input is parameterized, but the internal ones do not, then the website will be secure from SQL injection attacks.
☐ True ☐ False
Solution: False. The second-order SQL injection shown in discussion can still occur. User input is sanitized in the query that accepts user input, but not in the internal queries, so user input can still be treated as code in the internal queries.
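The second-order case can be sketched in Python’s sqlite3 (illustrative, not the exam’s schema): the payload is stored safely by a parameterized insert, then detonates later in a concatenated internal query.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

payload = "x' OR '1'='1"
conn.execute("INSERT INTO users VALUES (?)", (payload,))  # stored as a literal
conn.execute("INSERT INTO users VALUES (?)", ("bob",))

(stored,) = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchone()

# Internal query built by concatenation: the stored payload is now code.
internal = "SELECT count(*) FROM users WHERE name = '" + stored + "'"
(count,) = conn.execute(internal).fetchone()
assert count == 2   # OR '1'='1' matched every row, not just the literal name
```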
Q2.22 True or False: Return-oriented programming (ROP) is not effective if non-executable pages (DEP or W^X) are enabled.
☐ True ☐ False
Solution: False. ROP relies on existing library code in memory. DEP would make this code read-only, but still executable. The attacker never needs to execute any code that they write into memory.
Q2.23 True or False: Format string vulnerabilities are not effective if ASLR is enabled.
☐ True ☐ False
Solution: False. Format strings can still leak addresses on the stack which can lead to memory safety exploits.
Suppose you find a stored XSS vulnerability on https://berkeley.zoom.us/m/1234.
Q2.24 True or False: Some cookies set by https://berkeley.zoom.us/ could be read using your exploit.
☐ True ☐ False
Solution: True. Any cookies with the HttpOnly flag set to FALSE would be readable by this XSS exploit.
Q2.25 True or False: Some cookies set by https://berkeley.zoom.us/ could be modified using your exploit.
☐ True ☐ False
Solution: True. XSS would allow you to overwrite any cookies in the appropriate scope.
Q2.26 True or False: Some cookies set by http://zoom.berkeley.edu/m/1234 could be read using your exploit.
☐ True ☐ False
Solution: False. zoom.berkeley.edu would only be able to set cookies for Cookie-Domain=zoom.berkeley.edu or Cookie-Domain=berkeley.edu, neither of which is accessible via the site with our XSS attack.
Q2.27 True or False: Some cookies set by https://berkeley.zoom.us/m/1234 could be modified using your exploit.
☐ True ☐ False
Solution: True. JavaScript code executed from a site can always set arbitrary cookies for that site.
Q2.28 True or False: Some cookies set by http://stanford.zoom.us/m/1234 could be read using your exploit.
☐ True ☐ False
Solution: True. Any cookies with the domain .zoom.us and the HttpOnly flag set to FALSE would be readable by JavaScript run from berkeley.zoom.us.
This is the end of Q2. Proceed to Q3 on your answer sheet.
Q3 Password Storage (28 points)
Bob is trying out different methods to securely store users’ login passwords for his website.
Mallory is an attacker who can do some amount of offline computation before she steals the passwords file, and some amount of online computation after stealing the passwords file.
Technical details:
• Each user has a unique username, but several users may have the same password.
• Mallory knows the list of users registered on Bob’s site.
• Bob has at most 500 users using his website with passwords between 8–12 letters.
• Mallory’s dictionary contains all words that are less than 13 letters. [Clarification during exam: Mallory’s dictionary contains all possible user passwords.]
• Mallory can do N online computations and 500N offline computations where N is the number of words in the dictionary.
• Slow hash functions take 500 computations per hash while fast hash functions require only 1 computation.¹
Notation:
• Hₛ and H_F, a slow and a fast hash function
• Sign, a secure signing algorithm
• uname and pwd, a user’s username and password
• k, a signing key known only by Bob
If Bob decides to use signatures in his scheme, assume he will verify them when processing a log-in.
Q3.1 (2 points) How many times could Mallory hash every word in the dictionary using Hₛ with offline computation?
☐ (A) She can’t hash the whole dictionary
☐ (B) 1
☐ (C) 500
☐ (D) None of the above
Solution: Since evaluating a slow hash function takes 500 computations, hashing the entire dictionary will take 500N computations, which is the exact amount of offline computation Mallory has.
Q3.2 (2 points) How many times could Mallory hash every word in the dictionary using H_F with online computation?
☐ (G) She can’t hash the whole dictionary
☐ (H) 1
☐ (I) 500
☐ (J) None of the above
¹Keep in mind this is much faster than a real-life slow hash function.
Solution: Since evaluating a fast hash function takes 1 computation, hashing the entire dictionary will take \( N \) computations which is the exact amount of online computation Mallory has.
Q3.3 (2 points) How many times could Mallory hash every word in the dictionary using \( H_S \) with online computation?
- (A) She can’t hash the whole dictionary
- (B) 1
- (C) 500
- (D) None of the above
Solution: As before, hashing the whole dictionary with the slow hash function takes \( 500N \) computations, but Mallory only has \( N \) online computations. Thus, she can’t hash the whole dictionary.
For each part below, indicate all of the things Mallory can do given the password storage scheme. Assume Mallory knows each scheme. Unless otherwise specified, assume that she can use both offline and online computation.
Q3.4 (4 points) Each user’s password is stored as \( H_F(\text{pwd} || \text{'Bob'}) \).
- (G) Learn whether two users have the same password with only online computation
- (H) Learn a specific user’s password
- (I) Change a user’s password without detection
- (J) Learn every user’s password
- (K) None of the above
- (L) —
Solution: Since this is a fast hash with the same salt for every user, Mallory can do one full run-through of the dictionary with online computation to learn each user’s password. Additionally, there are no authenticity checks, so Mallory can edit a password.
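The same-salt problem can be seen directly (SHA-256 stands in for H_F; 'Bob' is the fixed salt from the scheme):

```python
import hashlib

def store(pwd: str, salt: str) -> str:
    # Stand-in for H_F(pwd || salt): hash of password concatenated with salt.
    return hashlib.sha256((pwd + salt).encode()).hexdigest()

# Fixed salt: equal passwords yield equal records, and one dictionary
# pass over H_F(word || 'Bob') cracks every matching account at once.
assert store("hunter2", "Bob") == store("hunter2", "Bob")

# Per-user salt (e.g. the username, as in later schemes) breaks that link:
assert store("hunter2", "alice") != store("hunter2", "carol")
```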
Q3.5 (4 points) Each user’s password is stored as the tuple \( (H_S(\text{pwd} || \text{'Bob'}), \text{Sign}(k, H_F(\text{pwd}))) \).
- (A) Learn whether two users have the same password with only online computation
- (B) Learn a specific user’s password
- (C) Change a user’s password without detection
- (D) Learn every user’s password
- (E) None of the above
- (F) —
Solution: Because of the slow hash, Mallory can no longer do a full run-through of the dictionary using online computation. However, she can do so using offline computation since the salt is the same for all passwords. Since the signature does not include the username, password entries can be swapped without detection.
An earlier version of the solutions incorrectly marked (A) as incorrect. However, since signatures are unsalted, an attacker can learn if two users have the same password by comparing signatures (which requires no computation).
Q3.6 (4 points) Each user’s password is stored as the tuple \((H_F(pwd || uname), \text{Sign}(k, uname || H_F(pwd)))\)
- [ ] (G) Learn whether two users have the same password with only online computation
- [ ] (H) Learn a specific user’s password
- [ ] (I) Change a user’s password without detection
- [ ] (J) Learn every user’s password
- [ ] (K) None of the above
- [ ] (L) ___
Solution: Because the salt is now different, Mallory only has enough online computation to brute-force a single password. However, using offline computation she can still learn all the passwords since she can brute-force the dictionary 500 times. Since each signature is tied to a specific user and Mallory doesn’t know \(k\), she can’t edit a user’s password.
Q3.7 (4 points) Each user’s password is stored as \((H_S(pwd || uname), \text{Sign}(k, H_S(pwd)))\)
[Clarification during exam: The expression was missing a leading parenthesis.]
- [ ] (A) Learn whether two users have the same password with only online computation
- [ ] (B) Learn a specific user’s password
- [ ] (C) Change a user’s password without detection
- [ ] (D) Learn every user’s password
- [ ] (E) None of the above
- [ ] (F) ___
Solution: Mallory only has enough total computation to learn a single user’s password, denoted as \(pwd’\). She can now edit a different user’s password to be this by computing \(H_S(pwd’ || uname)\) and using the signature \(\text{Sign}(k, H_S(pwd’))\). Note this is possible because the signature isn’t bound to any specific user.
An earlier version of the solutions incorrectly marked (A) as incorrect. However, since signatures are unsalted, an attacker can learn if two users have the same password by comparing signatures (which requires no computation).
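The entry-swapping attack can be sketched with stand-ins (SHA-256 for H_S, HMAC for Sign; Mallory never computes a signature herself, she copies Alice’s stored one):

```python
import hashlib
import hmac

k = b"bobs-signing-key"                  # known only to Bob

def h_s(data: bytes) -> bytes:           # stand-in for the slow hash H_S
    return hashlib.sha256(data).digest()

def sign(msg: bytes) -> bytes:           # stand-in for Sign(k, .)
    return hmac.new(k, msg, hashlib.sha256).digest()

def record(uname: bytes, pwd: bytes):
    # Bob stores (H_S(pwd || uname), Sign(k, H_S(pwd))).
    return (h_s(pwd + uname), sign(h_s(pwd)))

def verify(uname: bytes, pwd: bytes, rec) -> bool:
    return rec == record(uname, pwd)

cracked = b"hunter2"                     # Alice's password, which Mallory learned
alice_rec = record(b"alice", cracked)

# Forge Carol's entry: fresh salted hash + Alice's COPIED signature.
forged = (h_s(cracked + b"carol"), alice_rec[1])
assert verify(b"carol", cracked, forged)  # accepted: Sign isn't bound to a user
```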
Q3.8 (3 points) Describe a DoS attack Mallory can launch against Bob’s server if he uses the scheme in Q3.7.
**Solution:** Basic amplification attack - Mallory makes a bunch of invalid logins which causes Bob to attempt to verify many signatures.
Q3.9 (3 points) Bob decides to add two-factor authentication to the scheme in Q3.7. Does this change your answer to Q3.7?
○ (A) Yes ● (B) No
**Solution:** Two-factor authentication prevents an attacker from logging in if they know the password, but it doesn’t help prevent the attacks mentioned previously.
This is the end of Q3. Proceed to Q4 on your answer sheet.
Q4 Forwards, Backwards, Left, and Right (16 points)
Consider the following properties. The solid part of each timeline denotes the time frame where messages remain confidential, even after Eve, an on-path eavesdropper, steals a key.
• **Forward secrecy**: If Eve steals a key, past messages remain confidential.
*(timeline: Eve steals key)*
• **Backward secrecy**: If Eve steals a key, future messages remain confidential.
*(timeline: Eve steals key)*
• **Weak forward secrecy\(^2\)**: If Eve stops recording messages, then steals a key, any messages Eve recorded before she stopped recording remain confidential.
*(timeline: Eve stops recording, then steals key)*
• **Weak backward secrecy\(^3\)**: If Eve steals a key, then starts recording messages, any messages Eve records remain confidential.
*(timeline: Eve steals key, then starts recording)*
Consider the following modified symmetric encryption schemes where Alice and Bob change their encryption key for each message they send. For each scheme, determine which of the given properties is ensured. Assume that all keys are 128 bits long, and no party will send more than one message in a row.
Q4.1 (4 points) Alice and Bob increment their shared key \(k\) by 1 for each new message, so \(k' = k + 1\).
- □ (A) Forward secrecy
- □ (B) Backward secrecy
- □ (C) Weak forward secrecy
- □ (D) Weak backward secrecy
- □ (E) None of the above
- □ (F) ________
---
\(^2\) *Weak forward secrecy* in practice requires that Eve be able to MITM past communication before key compromise, rather than just eavesdropping.
\(^3\) This is a coined term for the purposes of this question.
Solution: Eve can increment and decrement her stolen key in order to attain both past and future keys.
Q4.2 (4 points) Alice and Bob’s current shared key is $k$. For each new message, the sender generates a small, 8-bit random number $n$ and attaches it to the message before encryption. The next message will be encrypted under key $k' = k \oplus \text{PRG}(n)[:128]$, where PRG is a secure PRG.
☐ (G) Forward secrecy
☐ (H) Backward secrecy
☐ (I) Weak forward secrecy
☐ (J) Weak backward secrecy
☐ (K) None of the above
Solution: Even though the amount that the key is incremented each time is encrypted, the seed space is small enough for Eve to search through all possible future keys even without access to past or future messages.
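The 256-seed search can be sketched as follows (SHA-256 stands in for the question’s PRG; key sizes match the stated 128 bits):

```python
import hashlib

def prg(seed: int) -> bytes:
    # Stand-in PRG: first 128 bits of SHA-256 of the 8-bit seed.
    return hashlib.sha256(seed.to_bytes(1, "big")).digest()[:16]

def next_key(k: bytes, n: int) -> bytes:
    # k' = k XOR PRG(n)[:128]
    return bytes(x ^ y for x, y in zip(k, prg(n)))

k = b"\x13" * 16                       # the key Eve has stolen
k_next = next_key(k, 0x2A)             # sender's secret 8-bit value n

# Only 2^8 possible seeds: Eve enumerates every candidate next key.
candidates = {next_key(k, n) for n in range(256)}
assert k_next in candidates and len(candidates) <= 256
```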
Q4.3 (4 points) Alice and Bob’s current shared key is $k$. For each new message, the sender generates a new symmetric key $k'$ and attaches it to the message before encryption. The next message will be encrypted under $k'$.
☐ (A) Forward secrecy
☐ (B) Backward secrecy
☐ (C) Weak forward secrecy
☐ (D) Weak backward secrecy
☐ (E) None of the above
Solution: If Eve has access to all messages, she also has access to the key for the next message $k'$, allowing her to decrypt future messages as long as she records every message. She still has no way of determining the keys for the previous messages, since they are randomly generated and have no relation to the given message.
An earlier version of the solutions incorrectly marked A, B, D as the correct answers.
Q4.4 (4 points) For each new message, Alice and Bob conduct Diffie-Hellman key exchange to generate a new symmetric key.
☐ (G) Forward secrecy
☐ (H) Backward secrecy
☐ (I) Weak forward secrecy
☐ (J) Weak backward secrecy
☐ (K) None of the above
☐ (L) ——
**Solution:** An on-path attacker cannot learn the value of the shared key in Diffie-Hellman key exchange. Since a new Diffie-Hellman shared key is generated for every message, even if Eve steals the key for one message, she knows nothing about any messages before or after that message.
This is the end of Q4. Proceed to Q5 on your answer sheet.
Q5 *EvanBotOS*
EvanBot is building a new OS and wants to defend against buffer overflow attacks. Bot decides to use cryptography to secure values on the stack.
Assume any cryptography is executed separately and securely by the OS. This means that any cryptographic operations do not count as function calls on the program’s stack, and the attacker cannot see the operations being executed. Also, unless otherwise stated, any MACs or hashes generated are stored separately in the OS, not on the stack.
Assume stack canaries are four random bytes (no null byte). Assume the OS has a secret key $k$ that is unknown to any attacker.
For each part, mark which scheme is more secure (would defend against more buffer overflow attacks), or if both schemes would defend against the same set of attacks.
*Clarification during exam: For each scheme, unless otherwise specified all memory safety defenses are disabled.*
Q5.1 (3 points) Scheme A: When a function is called, push a random stack canary to the stack. Also, generate a MAC on the canary value using $k$. Before the function returns, in addition to checking that the canary is the same, also verify the canary with the MAC.
Scheme B: No cryptography, stack canaries are enabled, W^X and ASLR are disabled.
(A) Scheme A
(B) Scheme B
(C) The same
Solution: Any exploit on Scheme B would need to have the canary value be unchanged before the function returns (either by overwriting the canary with itself, writing around the canary, or brute-forcing the canary). If the canary value is unchanged, using a MAC on the canary won’t detect an exploit that changes other parts of the stack.
A bug in this question was discovered during the exam. For Scheme B, in practice, most compilers generate one stack canary per program, and the canary value is the same for every function. (We did not explicitly cover this in lecture this semester.) However, the wording of this question suggests that in Scheme A, the stack canaries are different for every function in one program. Under this interpretation, Scheme A would be better, since it does not reuse stack canaries. For this reason, we accepted Scheme A as an alternate valid answer.
Q5.2 (3 points) Scheme A: When a function is called, encrypt a randomly-generated stack canary using \( k \). Push the encrypted canary onto the stack. Before the function returns, decrypt the stack canary and verify that it is unchanged.
Scheme B: No cryptography, stack canaries are enabled, W^X and ASLR are disabled.
- (G) Scheme A
- (H) Scheme B
- (I) The same
- (J) —
- (K) —
- (L) —
**Solution:** Both schemes are powerless against exploits that don’t involve the canary or write around the canary. For exploits involving the canary, the encryption step doesn’t add any extra security - from the attacker’s perspective, the canary is still four random bytes that need to be left unchanged (by overwriting them with itself or brute-forcing).
This subpart has the same bug as the subpart above. We accepted Scheme A as an alternate valid answer.
Q5.3 (3 points) Scheme A: When a program is first started, generate a signature on every page of the memory space using \( k \). If the program tries to execute any instructions in memory, check that the page where the instruction is stored is correctly signed.
Scheme B: No cryptography, W^X is enabled, stack canaries and ASLR are disabled.
- (A) Scheme A
- (B) Scheme B
- (C) The same
- (D) —
- (E) —
- (F) —
**Solution:** Scheme A prevents any data written into memory from being executed (because it won’t be signed). This is equivalent to the functionality of the W’X bit.
Q5.4 (3 points) Scheme A: When a function is called, using a cryptographic hash \( H \), hash the RIP, and push the value of the hash onto the stack. Before the function returns, verify that the RIP still hashes to the same value.
Scheme B: When a function is called, generate a MAC on the RIP using \( k \), and push the value of the MAC onto the stack. Before the function returns, verify the RIP with the MAC.
Assume that the hash and the MAC are the same length.
- (G) Scheme A
- (H) Scheme B
- (I) The same
- (J) —
- (K) —
- (L) —
**Solution:** Scheme A doesn’t provide any extra protection because an attacker can hash the malicious RIP and overwrite the original hash with the hash of the malicious RIP. In Scheme B, the attacker cannot forge a MAC for the RIP because the attacker doesn’t have the value of \( k \).
Q5.5 (5 points) Consider Scheme A from the previous part. Briefly explain how you might create an exploit for Scheme A that overwrites the RIP. Assume you can debug only the vulnerable program with GDB, and you cannot access the OS-level cryptography operations.
**Solution:** As above, just hash the malicious RIP and overwrite the original hash with the hash of the malicious RIP.
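The difference can be demonstrated with SHA-256 and HMAC standing in for the hash and MAC (the address bytes are made up for illustration):

```python
import hashlib
import hmac

rip = b"\x08\x04\x12\x34"     # saved return address (illustrative bytes)
evil = b"\xde\xad\xbe\xef"    # attacker's malicious RIP

# Scheme A (unkeyed hash): the verifier recomputes the same unkeyed hash,
# so the attacker just overwrites BOTH the RIP and its stored hash.
forged_check = hashlib.sha256(evil).digest()
assert forged_check == hashlib.sha256(evil).digest()   # passes verification

# Scheme B (MAC under secret k): forging a valid tag requires k.
k = b"os-secret-key"
tag = hmac.new(k, rip, hashlib.sha256).digest()
ok = hmac.compare_digest(tag, hmac.new(k, rip, hashlib.sha256).digest())
forged = hmac.compare_digest(tag, hmac.new(k, evil, hashlib.sha256).digest())
assert ok and not forged
```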
Q5.6 (3 points) Scheme A: When a function is called, encrypt the RIP with a one-time pad, where the pad is a static value stored in the OS. (The pad value does not change when you rerun the program.) Before the function returns, decrypt the RIP and jump to that location.
Scheme B: No cryptography, stack canaries are enabled, W^X and ASLR are disabled.
- (G) Scheme A
- (H) Scheme B
- (I) The same
**Solution:** OTP with key reuse is insecure, so it’s equivalent to not using any defenses at all.
Q5.7 (5 points) Consider Scheme A from the previous part. In 2-3 sentences, explain how you might create an exploit for Scheme A that overwrites the RIP. Assume you can debug only the vulnerable program with GDB, and you cannot access the OS-level cryptography operations.
**Solution:** In GDB, overwrite the RIP with 0x00000000. This will cause the program to try and jump to PAD ⊕ 0x00000000 = PAD. Now that you know the pad, just XOR the desired address with the pad when performing the exploit.
Note that solutions that don’t overwrite the RIP with a known value will not work, since the RIP is encrypted with the OTP, and even if you ran the program twice, you would only see the same encrypted RIP twice.
An alternate solution is to disassemble the entire set of instructions, look for a call instruction that calls the currently executing function, and then deduce the value of RIP based on where the call instruction is located. But this would take a lot of trial-and-error, especially if the currently executing function is called several times.
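The pad-recovery exploit from the first solution reduces to XOR arithmetic (the pad and address values here are made up):

```python
PAD = 0xCAFEBABE               # static OTP held by the OS (secret)

def decrypt_rip(stored: int) -> int:
    # Address the program will jump to when the function returns.
    return stored ^ PAD

# Step 1: overwrite the stored (encrypted) RIP with zero. The crash
# address observed in GDB is 0 ^ PAD, i.e. the pad itself.
recovered_pad = decrypt_rip(0x00000000)
assert recovered_pad == PAD

# Step 2: pre-encrypt the real target with the recovered pad.
target = 0xDEADBEEF            # e.g. address of attacker shellcode
assert decrypt_rip(target ^ recovered_pad) == target
```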
This is the end of Q5. Proceed to Q6 on your answer sheet.
Q6 **DNS over TCP** (20 points)
Standard DNS uses UDP to send all queries and responses. Consider a modified DNS that instead uses TCP for all queries and responses.
Q6.1 (3 points) Which of the following does DNS over TCP guarantee against a man-in-the-middle attacker? Select all that apply.
- (A) Confidentiality
- (B) Integrity
- (C) Authenticity
- (D) None of the above
- (E) —
- (F) —
**Solution:** TCP has no cryptographic guarantees, so a MITM attacker can read and modify any message.
Q6.2 (3 points) Compared to standard DNS, does DNS over TCP defend against more attacks, fewer attacks, or the same amount of attacks against an on-path attacker?
- (G) More attacks
- (H) Same amount of attacks
- (I) Fewer attacks
- (J) —
- (K) —
- (L) —
**Solution:** An on-path attacker can see all relevant header fields in TCP and UDP, so they only need to win the race against the legitimate response in both standard DNS and DNS over TCP.
Q6.3 (5 points) What fields does an off-path attacker **not know** and need to **guess** correctly to spoof a response in DNS over TCP? Assume source port randomization is enabled. Select all that apply.
- (A) TCP sequence numbers
- (B) Name server port
- (C) Recursive resolver port
- (D) DNS A records
- (E) DNS NS records
- (F) None of the above
**Solution:** To spoof a TCP packet, the off-path attacker needs to guess the TCP sequence numbers and the randomized resolver port (source port). The name server port (destination port) is public and well-known. The DNS records can be anything the attacker wants, so there is nothing to guess there.
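The search space behind this answer is easy to quantify (assuming the attacker has not narrowed down either value):

```python
# Off-path spoofing budget: 32-bit TCP sequence number times the
# 16-bit randomized resolver source port.
seq_space = 2 ** 32
port_space = 2 ** 16
guesses = seq_space * port_space
assert guesses == 2 ** 48

# Expected work to win once at a (generous) million spoofed packets/sec:
expected_seconds = guesses / 2 / 1_000_000
assert expected_seconds > 4 * 365 * 86400   # more than four years on average
```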
Q6.4 (3 points) Is the Kaminsky attack possible on DNS over TCP? Assume source port randomization is disabled.
- (G) Yes, because the attacker only needs to guess the DNS Query ID
- (H) Yes, but we consider it infeasible for modern attackers
- (I) No, because the attacker cannot force the victim to generate a lot of DNS over TCP requests
- (J) No, because TCP has integrity guarantees
Solution: The attacker would have to guess at least 32 bits of sequence numbers, which is the same defense as source port randomization in standard DNS.
Q6.5 (3 points) Recall the DoS amplification attack using standard DNS packets. An off-path attacker spoofs many DNS queries with the victim’s IP, and the victim is overwhelmed with DNS responses. Does this attack still work on DNS over TCP?
- (A) Yes, the attack causes the victim to consume more bandwidth than the standard DNS attack
- (B) Yes, the attack causes the victim to consume less bandwidth than the standard DNS attack
- (C) No, because the DNS responses no longer provide enough amplification
- (D) No, because the attacker cannot force the server to send DNS responses to the victim
Solution: To force the victim to receive a DNS response, the attacker would need to initiate a TCP connection that looks like it’s from the victim. However, an off-path attacker cannot do this, since they cannot see the SYN-ACK response sent to the victim.
Q6.6 (3 points) What type of off-path DoS attack from lecture is DNS over TCP vulnerable to, but standard DNS not vulnerable to? Answer in five words or fewer.
Solution: TCP SYN Flooding
EvanBot builds a new course feature that sends announcements to students over TCP. To receive announcements, a student initiates a TCP connection with the server. The server sends the announcements and terminates the connection.
Q7.1 (3 points) Assuming that no adversaries are present, which of the following does communication over a TCP connection guarantee? Select all that apply.
- (A) That both the server and client can detect if a particular announcement needs to be resent
- (B) That different announcements are delivered in the same order they were sent in
- (C) That announcements are delivered using the most efficient path through the internet
- (D) None of the above
Solution: TCP guarantees that messages will be retransmitted until they are successfully delivered, and that messages will be delivered in the correct order. TCP makes no guarantees about what path a packet takes through the Internet.
Q7.2 (3 points) When only an on-path adversary is present, which of the following does communication over a TCP connection guarantee? Select all that apply.
- (G) That both the server and client can detect if a particular announcement needs to be resent
- (H) That different announcements are delivered in the same order they were sent in
- (I) That announcements are delivered using the most efficient path through the internet
- (J) None of the above
Solution: An on-path attacker has access to the TCP sequence numbers, so they can inject arbitrary messages. Since the attacker can interfere with all messages, TCP no longer has any guarantees about message delivery. TCP still makes no guarantees about what path a packet takes through the Internet.
Q7.3 (3 points) Suppose that EvanBot instead sends announcements over UDP. Assuming that no adversaries are present, which of the following might happen? Select all that apply.
- (A) Students might not receive some announcements
- (B) Students might receive the announcements more quickly
- (C) The server might not detect some errors which it would have detected had it been using TCP
- (D) None of the above
- (E) ——
**Solution:** UDP no longer guarantees delivery, so some announcements might not be delivered. However, UDP does not require a handshake at the beginning, so announcements can be delivered more quickly. UDP has no guarantees about what order announcements arrive in, so the server will no longer detect if packets arrive out of order.
EvanBot realizes that the server is sending messages to the student, but the student only responds with ACKs and never sends any messages after the initial handshake. They design a *Half TCP* protocol which provides TCP’s properties for communications from the server to the student, but not for communications from the student to the server. This is accomplished using a modified version of the standard three step handshake pictured below.
Q7.4 (5 points) Some sequence numbers are no longer necessary in *Half TCP*. Which fields **do not** need to be transmitted? Select all that apply.
☐ (G) The sequence number in the SYN packet
☐ (H) The sequence number in the SYN-ACK packet
☐ (I) The ACK number in the SYN-ACK packet
☐ (J) The sequence number in the ACK packet
☐ (K) The ACK number in the ACK packet
☐ (L) None of the above
**Solution:** The key insight here is that because the student isn’t sending messages to the server, the student’s sequence numbers are no longer necessary. The SYN and ACK packets are sent from the student to the server, so their sequence numbers are no longer necessary. The SYN-ACK packet is sent from the server to the student, so its ACK number is no longer necessary.
An earlier version of the solutions incorrectly marked H, K as the set of correct answers. When revising the exam, we changed the question to be "which fields **do not** need to be transmitted," which caused the set of correct answers to be inverted.
Q7.5 (3 points) Which of these are consequences of moving from TCP to *Half TCP* for this application? Select all that apply.
- (A) The student will no longer receive announcements in the correct order
- (B) The server will not have to keep track of as much state
- (C) The student will not have to keep track of as much state
- (D) None of the above
**Solution:** Announcements are sent from the server to the student. We are still using sequence numbers in this direction, so the announcements are still received in the correct order. Because the server and student each only need to keep track of one sequence number instead of two, they both do not need to keep track of as much state.
The 161 staff likes security and decides to use TLS over *Half TCP*. Assume that the staff server has a valid certificate for their public key.
For each different adversary below, select all attacks which become *easier* when running TLS over *Half TCP* compared to normal TCP.
Q7.6 (3 points) Off-path adversary
- (G) RST Injection Attack
- (H) Interfere with a TLS handshake to learn the master key
- (I) Replay an encrypted command from a previous TLS connection
- (J) None of the above
Q7.7 (3 points) On-path adversary
- (A) RST Injection Attack
- (B) Interfere with a TLS handshake to learn the master key
- (C) Replay an encrypted command from a previous TLS connection
- (D) None of the above
Q7.8 (3 points) Man-in-the-middle adversary
☐ (G) RST Injection Attack
☐ (H) Interfere with a TLS handshake to learn the master key
☐ (I) Replay an encrypted command from a previous TLS connection
☐ (J) None of the above
Solution: The key insight here is that attacks on the TLS protocol are not made any easier by using half-TCP, because the cryptographic messages sent between the student and the server are unchanged. The only attack that becomes easier is the RST injection attack for an off-path attacker, since the attacker doesn’t need to guess sequence numbers when injecting a RST packet from the student to the server. On-path and MITM attackers can see all sequence numbers, so RST injection is not any easier for them.
This is the end of Q7. Proceed to Q8 on your answer sheet.
Q8 Election Security (23 points)
The 2020 elections are coming up, and the United States Government has tasked you with securing the nation’s voting machines!
Assume election headquarters are in a top-secret, undisclosed site. All incoming network requests pass through a network-based intrusion detection system (NIDS), as well as a firewall. Outside users can only access the server with HTTPS.
Q8.1 (3 points) Which of these attacks are always preventable in this setup? Assume the attacker is on-path. Select all that apply.
- [ ] (A) RST Injection Attack
- [ ] (B) SQL Injection Attack
- [ ] (C) Reflected XSS Attack
- [ ] (D) None of the Above
Q8.2 (3 points) Which of these attacks are always preventable in this setup? Assume the attacker is on-path. Select all that apply.
- [ ] (G) SYN Flooding Attack
- [ ] (H) DNS Spoofing Attack
- [ ] (I) DDoS Attack
- [ ] (J) None of the Above
Solution:
- RST Injection Attack - HTTPS doesn’t prevent RST Injection attacks, so they’re still a potential vulnerability.
- SQL Injection Attack - these attacks are generally application-layer (so transport-layer security and firewalls don’t protect against them)
- Reflected XSS Attack - same reasoning as above. Additionally, even if NIDS were capable of detecting these over HTTP, it wouldn’t be able to see any payloads under HTTPS.
- SYN Flooding Attack - these attacks are preventable using SYN Cookies!
- DNS Spoofing Attack - none of the defenses prevent DNS Spoofing
- DDoS Attack - not much a NIDS can do here, unfortunately
Q8.3 (3 points) An attacker injects malicious code on a server inside the election headquarters that changes all submitted votes to one candidate. Which detection system is best suited to defend against this attacker?
- [ ] (A) HIDS
- [ ] (B) NIDS
- [ ] (C) Firewall
Solution: Only a host-based system would be able to detect and/or prevent this attack from happening!
Q8.4 (3 points) An attacker realizes that the ballot boxes are running a vulnerable version of Linux, and uses a previously-known buffer overflow exploit. Which detection method is best suited to defend against this attacker?
- (G) Anomaly-Based Detection
- (H) Signature-Based Detection
- (I) Specification-Based Detection
- (J) Behavioral-Based Detection
Solution: Signature-based detection approaches are primarily responsible for catching known attacks!
Q8.5 (5 points) Ben, a computer scientist at the top-secret site, has a HIDS installed on his work laptop. He decides to sign into his personal email account, claiming that HTTPS will protect the government from seeing his emails. Is he correct? Justify your answer in 1–2 sentences.
- (A) Yes
- (B) No
Solution: Host-based intrusion detection systems run on the host itself, so they can read the data of inbound/outbound HTTPS connections before encryption or after decryption. Ben’s use of HTTPS doesn’t really help him here.
We also accepted yes as an answer if it was justified by claiming he could use an email client that the HIDS didn’t have access to.
Q8.6 (3 points) You’ve discovered that an attacker has managed to connect to a service running inside our network from IP address 5.6.7.8 and is in the process of performing a DoS attack! Write a stateful firewall rule to block all traffic originating from the attacker. Our service is running on IP address 1.2.3.4 (port 443).
Solution: drop * 5.6.7.8:*/ext -> 1.2.3.4:443/int
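To make the matching semantics concrete, the rule can be modeled as a small predicate (a sketch with a hypothetical `rule_matches` helper; the rule notation itself is course-specific):

```python
def rule_matches(pkt):
    """Model of: drop * 5.6.7.8:*/ext -> 1.2.3.4:443/int

    Any protocol and any source port from the attacker's external IP,
    destined for the internal service at 1.2.3.4, port 443.
    """
    return (pkt["src_ip"] == "5.6.7.8"
            and pkt["dst_ip"] == "1.2.3.4"
            and pkt["dst_port"] == 443)
```

Because the rule is stateful, the firewall can also drop the rest of the attacker's existing connection once the first packet matches.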
Q8.7 (3 points) You’ve received a tip that attackers have devised a plan to spoof ballot submissions. Here’s the information that your source provides:
- 20 out of every 100 submissions are malicious.
- The cost to investigate an incorrectly flagged submission is $5.
- The cost of letting a spoofed submission through is $50.
You’re offered two different intrusion detection systems. System A offers a false positive rate of 10% and a false negative rate of 25%. System B offers a false positive rate of 50% and a false negative rate of 5%. Which do you choose?
- (A) System A
- (B) System B
- (C) Not enough information
- (D) Either system
**Solution:** The expected cost per 100 submissions:
- System A:
\[(0.10) \times (80) \times (5) + (0.25) \times (20) \times (50) = 290\]
- System B:
\[(0.50) \times (80) \times (5) + (0.05) \times (20) \times (50) = 250\]
So System B is better.
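The arithmetic above can be double-checked with a short script (a sketch; the counts and dollar costs are taken from the problem statement):

```python
def expected_cost(fp_rate, fn_rate, benign=80, malicious=20,
                  fp_cost=5, fn_cost=50):
    # false positives: benign submissions wrongly flagged, $5 each to investigate
    # false negatives: spoofed submissions let through, $50 each
    return fp_rate * benign * fp_cost + fn_rate * malicious * fn_cost

cost_a = expected_cost(0.10, 0.25)  # System A: 290
cost_b = expected_cost(0.50, 0.05)  # System B: 250
```

Even though System B has a much worse false positive rate, the $50 cost of a missed spoofed submission dominates, so its low false negative rate wins.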
**This is the end of Q8. Proceed to Q9 on your answer sheet.**
Q9 **Cookie Debugger** (37 points)
EvanBot is adding a feature on the CS161 course website that lets students log in and view their grades. However, Bot forgot to remove a debugging feature—if anyone visits cs161.org/debug, the webpage will display all the cookies sent to the server.
Assume the cs161.org/debug page does not have any other functionality. Assume anyone can create an account on the website. Each subpart is independent.
Q9.1 (3 points) Which of the following URLs have the same origin as http://cs161.org/debug according to the same-origin policy?
- (A) http://cs161.org/
- (B) http://cs161.org:8081/debug
- (C) https://cs161.org/debug
- (D) None of the above
**Solution:** Two sites must have identical protocols, hostnames, and ports in order to qualify as having the same origin under the SOP. In this case, the two options that do not match are the one with port 8081 and the one with protocol https://. Note: the SOP is not affected by the URL path.
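The origin comparison can be sketched as a (protocol, hostname, port) tuple check; `same_origin` is a hypothetical helper, and the default ports 80/443 for http/https are an assumption of this sketch:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def same_origin(url_a, url_b):
    # An origin is the (scheme, hostname, port) triple; an omitted
    # port defaults to the scheme's standard port.
    def origin(url):
        p = urlsplit(url)
        return (p.scheme, p.hostname, p.port or DEFAULT_PORTS.get(p.scheme))
    return origin(url_a) == origin(url_b)
```

Note that the path component never appears in the tuple, which is why http://cs161.org/ matches http://cs161.org/debug.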
Q9.2 (5 points) Which of the following cookies would be displayed when visiting https://cs161.org/debug? Assume the client’s origin is https://cs161.org.
- (G) Domain = cs161.org, Path = /, Secure
- (H) Domain = cs161.org, Path = /, HttpOnly
- (I) Domain = debug.cs161.org, Path = /, Secure, HttpOnly
- (J) Domain = cs161.org, Path = /debug
- (K) Domain = cs161.org, Path = /, SameSite=strict
- (L) None of the above
**Solution:** The HttpOnly attribute is irrelevant here, because we’re not concerned with modifying the cookie in JavaScript. The Secure attribute is also irrelevant here, since we are using HTTPS and the cookie will be sent regardless of whether the Secure attribute is set.
The domains and paths are valid in all options, so all cookies will be displayed when sent.
Q9.3 (3 points) Suppose you set a cookie test=<script>alert("This exam is hard!")</script> with valid attributes, and load https://cs161.org/debug. A pop-up that says This exam is hard! appears in your browser. Have you successfully found a server vulnerability?
Final Exam
Page 27 of 32
CS 161 – Summer 2020
(Clarification during exam: The pop-up had a typo in it.)
(A) Yes, you found an XSS vulnerability
(B) Yes, you found a CSRF vulnerability
(C) No, because you have not changed any state on the server side
(D) No, because the JavaScript does not run with the origin of cs161.org
Q9.4 (5 points) Consider a modification to the course website. Before rendering any page, the server searches for every pair of `<script>` and `</script>` tags and removes the tags and everything between the tags.
Can you still cause JavaScript to run in your browser using `<script>` tags? If yes, provide a cookie name and value (written as name=value) that would cause `alert(1)` to run. If no, briefly explain why.
(G) Yes (H) No
Solution: Yes. Consider the cookie `test=<scri<script></script>pt>alert(1)</scri<script></script>pt>`. After each `<script>`/`</script>` pair (and the empty text between the tags) is removed, you’re left with `test=<script>alert(1)</script>`, which runs in the browser.
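A single-pass filter of this kind can be simulated to confirm the bypass (a sketch; it assumes the server removes each matched `<script>`/`</script>` pair non-greedily in one pass):

```python
import re

def strip_scripts(page):
    # Remove every <script>...</script> pair -- the tags plus
    # everything between them -- in one non-greedy pass.
    return re.sub(r"<script>.*?</script>", "", page, flags=re.DOTALL)

payload = "<scri<script></script>pt>alert(1)</scri<script></script>pt>"
strip_scripts(payload)  # -> '<script>alert(1)</script>'
```

Each embedded empty pair is consumed by the filter, and the surviving fragments reassemble into a complete script tag.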
Q9.5 (5 points) Consider a modification to the course website. Before rendering any page, the server renders the cookie name in an isolated environment and ensures that no scripts are run, and then does the same for the cookie value.
Assume that the website displays the cookie name and value with no added text in between. Can you still cause JavaScript to run in your browser using `<script>` tags? If yes, provide a cookie name and value (written as name=value) that would cause `alert(1)` to run. If no, briefly explain why.
(A) Yes (B) No
Solution: Yes. Set the cookie name to `<script>alert(1)` and the cookie value to `</script>`. Neither half runs a script when rendered alone in the sandbox, but displayed together they form `<script>alert(1)</script>`, which runs.
Q9.6 (3 points) Is it possible to create a link to cs161.org/debug that will cause another user to run malicious JavaScript when they click on the link?
(G) Yes, because you can place JavaScript in the HTTP GET parameters
(H) Yes, because you can place JavaScript in the HTTP POST body
(I) No, because there is nowhere to place the JavaScript
(J) No, because the server is secure against this attack
Solution: The cs161.org/debug webpage only displays cookies, not any HTTP GET parameters or HTTP POST body, and cookies cannot be set through a link alone.
Q9.7 (5 points) Suppose a victim visits the attacker-controlled evil.cs161.org. Write a JavaScript snippet that would cause the victim to run `alert(1)` in their browser with the origin of cs161.org. If you don’t know the exact Javascript syntax, pseudo-code is acceptable.
Hint: `window.location = "google.com";` in JavaScript causes the user to load google.com.
Solution:
```javascript
<script>
document.cookie="test=<script>alert(1)</script>;domain=cs161.org;path=/";
window.location = "cs161.org/debug";
</script>
```
The first part of the script sets a cookie that would cause `alert(1)` to run, with the appropriate domain and path. The second part of the script causes the user to load cs161.org/debug with the malicious cookie.
Q9.8 (5 points) Which of the following malicious pages would be able to run your Javascript exploit against the user?
- (G) http://very.evil.cs161.org/
- (H) http://very-evil.cs161.org/
- (I) http://evil.cs161.org/
- (J) http://cs161.org/evil
- (K) http://evil.com/
- (L) None of the above
Solution: very.evil.cs161.org, very-evil.cs161.org, evil.cs161.org, and cs161.org all fall within the cs161.org domain, so pages on them are able to set the XSS cookie and execute the attack. Note that the path is irrelevant.
Q9.9 (3 points) Consider a modification to the course website. The cs161.org/debug page only displays cookies if the request contains a valid session token. Does your Javascript exploit still work?
- (A) Yes, with no modifications
- (B) Yes, with minor modifications (changing 1-2 lines of code)
- (C) No, because the server is secure against this attack

Solution: The exploit still works as described in the previous solution, since it sets a cookie that can be accessed by any request containing the appropriate domain and path. Alternatively, the attacker could create an account, receive a session token, and set a cookie in the victim’s browser with that session token. This will cause the victim’s request to look like it came from the attacker, but the JavaScript will still run in the victim’s browser.
This is the end of Q9. Proceed to Q10 on your answer sheet.
Q10 Bitcoin (12 points)
Assume a simplified Bitcoin model, where each block contains the following fields:
- **minerID**: The public key of the node who mined this block. Recall that the person who mined a block is given a mining reward in Bitcoin. Assume that a miner can redeem this reward by simply referencing the block, i.e., the initial reward is *not* stored as a transaction.
- **prevHash**: The hash of the previous block
- **transactions**: The list of transactions. Recall each transaction contains references to its origin transactions, a list of recipients, and is signed using the private key of the coins’ owner.
- **nonce**: A value such that the hash of the current block contains the correct number of zeros
Assume that the hash of a block is computed as:
\[
\text{Hash(minerID || prevHash || transactions || nonce)}
\]
Bob wants to save on computing power by omitting certain fields in a block from being part of the hash. For each modified block hashing scheme below, select all the things an adversary with a single standard CPU can do.
Assume that if the adversary can come up with a modified blockchain of the same length, the rest of the network will accept it. Furthermore, assume the adversary has not made any transactions thus far. **Any option that could result in an invalid state should not be selected.**
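To make the hashing scheme concrete, here is a minimal mining sketch (the string encoding of the fields, the use of SHA-256, and the hex-zero difficulty measure are assumptions for illustration, not specified by the problem):

```python
import hashlib

def block_hash(miner_id, prev_hash, transactions, nonce):
    # Hash(minerID || prevHash || transactions || nonce)
    data = f"{miner_id}{prev_hash}{transactions}{nonce}"
    return hashlib.sha256(data.encode()).hexdigest()

def mine(miner_id, prev_hash, transactions, difficulty=3):
    # Proof of work: search for a nonce whose block hash has
    # `difficulty` leading hex zeros.
    nonce = 0
    while not block_hash(miner_id, prev_hash, transactions,
                         nonce).startswith("0" * difficulty):
        nonce += 1
    return nonce
```

Note that in Q10.1's modified scheme, where minerID is dropped from the hash, a nonce found this way stays valid after the minerID field is swapped, which is exactly what lets an adversary claim past mining rewards.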
Q10.1 (4 points) Each block hash is computed as \(\text{Hash(prevHash || transactions || nonce)}\)
- (A) Modify a block to gain Bitcoin
- (B) Given some amount of pre-computation, can consistently win proof of work
- (C) Modify some transaction amounts
- (D) Can remove any transaction in an arbitrary block by *only* modifying that block
- (E) None of the above
**Solution:** An adversary can change the **minerID** of some past blocks to give themselves the mining reward. Note that this mining reward can’t be used in a subsequent transaction or else we would reach an invalid state, but, at the very least, the most recently added block will always have a mining reward that hasn’t been spent yet.
Q10.2 (4 points) Each block hash is computed as \(\text{Hash(minerID || transactions || nonce)}\)
- (G) Modify a block to gain Bitcoin
- (H) Given some amount of pre-computation, can consistently win proof of work
- (I) Modify some transaction amounts
- (J) Can remove any transaction in an arbitrary block by *only* modifying that block
- (K) None of the above
Solution: Like before, an adversary can change any minerIDs that haven’t been spent yet since blocks no longer have a requirement on the past chain.
They can also precompute a valid nonce for a block they want to add, since the hash is independent of the chain.
Since the blocks aren’t directly dependent on each other anymore, the adversary can change any individual block. However, they can’t remove a transaction if a future transaction makes use of it (this would be an invalid state).
They cannot modify a transaction amount because each transaction is signed.
Q10.3 (4 points) Each block hash is computed as Hash(minerID || prevHash || nonce)
☐ (A) Modify a block to gain Bitcoin
☐ (B) Given some amount of pre-computation, can consistently win proof of work
☐ (C) Modify some transaction amounts
☐ (D) Can remove any transaction in an arbitrary block by only modifying that block
☐ (E) None of the above
Solution: We can’t modify minerIDs anymore since the block hashes now depend on them. We can’t consistently win PoW via pre-computation since each block’s hash depends on prevHash. We can’t remove any transaction in an arbitrary block, as this might cause an invalid state, and we can’t modify transaction amounts because of signatures.
This is the end of Q10. Proceed to Q11 on your answer sheet.
An Empirical study of the JaMoPP project using code coverage based system tests.
NUI MAYNOOTH
Ollscoil na héireann Má Nuad
Department of Computer Science
Colin Bell B.Sc.
January 2010
Supervisor: James Power
Declaration
I hereby certify that this material, which I now submit for assessment on the program of study leading to the award of Master of Science in Software Engineering, is entirely my own work and has not been taken from the work of others save and to the extent that such work has been cited and acknowledged within the text of my work.
Signed: ________________________ Date: __________
Abstract
Model Driven Software Development is an attempt to build software systems by combining the two disciplines of software programming and modelling. But due to the gap in technology, there are a number of problems in using these models to generate the final source code implementation. This has meant that Model Driven Software Development has become a future goal rather than a current reality. The Java Model Parser and Printer project tries to bridge this gap by turning Java into a model. This means that Java can now be used and implemented like any of the other models within the Model Driven Software Development process.
This paper will take a look at the effectiveness of the Java Model Parser and Printer project by performing a number of system tests on it. This will help indicate how well the project can transform Java input files into Java models. Coverage readings will be taken on these tests to look at how much of the project’s grammar and abstract syntax tree are being used during the process of transforming Java source code into Java models. A look will then be taken at the coverage results to see if a correlation exists between the grammar and the abstract syntax tree results. Finally, conclusions and future work will be discussed to close the paper.
Contents
Section 1: Introduction ................................................................. 6
Section 2: Background................................................................. 8
2.1 Eclipse Development environment and Framework ....... 8
2.2 The EMF Framework and Ecore Modelling ............... 9
2.3 Java Model Parser and Printer ........................................ 11
2.4 Java Meta-Model .............................................................. 13
2.5 Text Syntax for Java ........................................................... 14
2.6 The Complete System View and Modelling process .... 17
2.7 Software Testing and Code Coverage ......................... 18
2.8 Goals of the paper .......................................................... 19
Section 3: Testing Approach .................................................... 20
3.1 JaMoPP setup and package descriptions ................... 21
3.2 Testing process overview ............................................... 23
Section 4: Testing and Coverage Results .............................. 26
Section 5: Conclusion .............................................................. 35
5.1 Future Work ................................................................. 37
References ............................................................................. 38
## Table of Figures

**Figure 1:** EMF Model as the common high level representation within the EMF framework [3] .......... 9
**Figure 2:** A subset of the Ecore meta-model [3] .......... 10
**Figure 3:** A subset of the Java meta-model defined in the JaMoPP project [1] .......... 14
**Figure 4:** The process of creating an ANTLR Parser from a concrete syntax after a meta-model has been defined within the EMF framework [5] .......... 15
**Figure 5:** The full process that was used in generating the Java specific parser for the JaMoPP project [1] .......... 16
**Figure 6:** The complete JaMoPP project and modelling process [1] .......... 17
**Figure 7:** A Java class beside its Java meta-model instance which has the structure of an Ecore model [4] .......... 25
**Figure 8:** Partial view of the subfolders of the generated folder found in the org.emftext.language.java package .......... 27
**Figure 9:** Break down of the coverage results for the parser methods that map to the corresponding impl folders of the org.emftext.language.java package. The names of the folders correspond to elements of the Java language .......... 28
**Figure 10:** Graph of the coverage results for the parser and AST for the modifier component of the Java language .......... 29
**Figure 11:** Coverage results for the All Test tests for the parser and the AST .......... 31
**Figure 12:** The breakdown of the AST results for a blank input file. The total AST result was 24.6% .......... 32
**Figure 13:** The breakdown of the parser and AST coverage results for the Netbeans test .......... 34
Section 1: Introduction
This paper describes a study that uses the Java Model Parser and Printer project. Its place within the world of software design and software design technology will be described. It will be evaluated in a series of system tests that will indicate its effectiveness in handling the Java language.
Model driven software development (MDSD) methodologies are used in the production of software systems that start out as a process of representing each stage of the system as a model or abstraction. These models initially are more closely related to the particular domain that the final product is being created for, rather than software programming focused models. This allows for a number of advantages, including maximised compatibility between systems and clearer communication about the system between the development team and the final end users. This form of software development is considered effective if the models used to represent the system can provide ease of understanding to the high level modellers and the final stakeholders while providing enough detail for the low level coding implementation of the final system.
The MDSD methodology combines the two disciplines of coding and modelling. Practitioners of these two disciplines view their own discipline as being completely different and separate from the other, and this view has had a knock-on effect in the world of MDSD. The MDSD process is designed to be able to create a full software system, starting from an initial system model and then generating other models at different levels of abstraction all the way down to the final code implementation. But this final transformation is done in a weakly structured manner [1], while the other model transformations are done in a well-defined way. Structure is very important within the world of modelling, and any loss of structure will have undesired consequences within transformations between the models.
This has been the state of MDSD until recently when a push to unite the two disciplines has started to occur. The Eclipse development environment is a piece of software that allows for software development for the Java language and many other languages. It has become very popular within the software community due to its extendable architecture and open source nature. A modelling plug-in for Eclipse, known as the Eclipse Modelling Framework (EMF), extends Eclipse to handle models such as UML. This framework itself can also be extended to allow it to handle any standardised model. The EMF plug-in can take
a Java source code file and transform it into its corresponding UML representation and also into a XML schema. While the EMF plug-in can transform Java into UML and back again it doesn’t treat Java as a modelling language but rather still just as code. This means that Java is not handled like other EMF models and this still leaves the gap between modelling and coding open.
A group of researchers from the Technical University Dresden in Germany identified the gap mentioned above and looked for ways to turn Java into a full modelling language. They used the EMF plug-in as its framework and architecture. They came up with the Java Model Parser and Printer (JaMoPP) project which supplies a parser for turning a Java source code file into a Java EMF model and then a printer which can take a Java EMF model and turn it back into source code.
This paper will look at the JaMoPP project and test its effectiveness. From these tests some coverage readings will be collected. The coverage results will provide information on how much of the grammar has been used and also the percentage of coverage achieved for the corresponding generated abstract syntax tree. Areas where coverage has been fully achieved can be identified, as well as areas where coverage has not been achieved or has only been partially achieved. From the results it can also be seen if there is a correlation between the coverage in the parser and the abstract syntax tree. There have been many studies conducted over the years on the coverage results that a set of input tests achieve for a language’s grammar. Many papers look at grammar coverage from a number of possible perspectives. One method is known as rule based coverage, which is similar to decision coverage at the code level in a standard software testing context [2]. This paper will look at JaMoPP’s grammar using statement coverage, i.e. the percentage of the lines of code that are executed during testing. This paper is unique in the fact that it is testing a grammar that transforms Java code to a Java meta-model instance which can then be transformed and handled like any EMF model.
In section 2 the background for the paper will be laid out. This will cover Eclipse, the EMF plug-in, the JaMoPP plug-in, software testing as it relates to this paper, and also the goals of this paper. Section 3 will describe the testing process used in this paper. It will identify the test suites as well as the packages and classes of the JaMoPP project that will be used for gathering coverage results. Section 4 will discuss the results achieved from the tests and also provide information on which paths through the grammar and the abstract syntax tree were executed. Section 5 will conclude the paper.
**Section 2: Background**
This section will describe all the background details necessary to understand the JaMoPP project and its goals. It will describe the Eclipse project and its plug-in capability, including the EMF Framework plug-in with its core language Ecore. It will describe how the JaMoPP developers defined their Java meta-model and the need for a concrete syntax so that the meta-model can be of use in a practical sense. This section will also describe the process of creating an instance of the Java meta-model and how it slots into the EMF Framework. Finally, a brief description of software testing and testing techniques will be given, with a focus on their relevance to this paper.
**2.1 Eclipse Development environment and Framework**
Eclipse is an open source software development project that is made available under the Common Public License, an initiative run by IBM. It supports software development in multiple languages and can be run on numerous operating systems. It comprises an Integrated Development Environment (IDE) with an extendable architecture: it has a small run-time kernel, and all other functionality is provided by plug-ins into this architecture. This is in contrast to other development environments, which have hard-coded functionality, and it also makes Eclipse a lightweight program.
Eclipse is divided into three main projects. The first is known as the Eclipse project, which contains the core components required for development with Eclipse. These components are fixed and are commonly downloaded as the Eclipse Software Development Kit. This project itself is divided into three component projects [3], which are as follows:
1. The Eclipse platform: This is a framework used to build IDEs. It defines only the basic structure of an IDE; when specific tools are used to extend the framework, they define the particular IDE for the language that is required [3].
2. Java Development Tools (JDT): This is a Java tool set that expands the Eclipse platform above to allow for a development environment to develop Java programs.
3. Plug-in Development Environment (PDE): extends the JDT by providing tools to handle the non-Java aspects of plug-in development. One example would be providing registration for plug-in extensions.
The second project is known as The Tools Project. This project defines and coordinates the integration of different frameworks and other tool sets for defining IDEs for other languages. This includes model-based frameworks, i.e. the EMF Framework, and also other IDEs such as the CDT for C/C++ and many other languages. The final project is known as The Technology Project. This project provides the opportunity for researchers and academics to get involved in the evolution of the Eclipse project [3].
2.2 The EMF Framework and Ecore Modelling
The EMF framework is a modelling framework for Eclipse. The EMF project provides a framework and code generation facility that allows for the generation of UML, XML or Java implementations of a system [3]. Regardless of which format the system is defined in, an EMF model acts as the common high-level representation that holds them all together.

Figure 1: EMF Model as the common high level representation within the EMF framework [3]
For example, imagine a system that implements a library book catalogue. The system has been coded in Java, and at the click of a button the corresponding UML diagram or an XML schema implementation can be generated.
EMF can be envisioned as the start of the practical realisation of the combination of the two disciplines of modelling and coding. An EMF model is at the same level of abstraction as the class diagram subset of UML [3]. This level of abstraction is enough for a number of benefits to accrue to programmers and modellers: it provides understandability for both programmers and modellers, data integration between applications, and the auto-generation of code and models. EMF is integrated with and fine-tuned for efficient programming [3]. This brings together high-level modelling with low-level programming [3], which is one key area that the JaMoPP project wants to build on.
EMF is described in an XMI model specification and provides a tool set known as the MDT (Model Development Tools) plug-in for Eclipse, which supplies adapter classes and editors for models within Eclipse; all of this was developed using Java. The EMF framework needs a standardised, well-established meta-model to allow for the creation of models and meta-models and the auto-generation of code. EMF uses Ecore, a well-known meta-language, to achieve this. Ecore is called a core language because it is itself also an EMF model, which makes Ecore a meta-meta-model.
Figure 2: A subset of the Ecore meta-model [3]
Ecore was used by the JaMoPP developers to create their Java meta-model, and with this they gained the advantage of being able to use Java like any other EMF model. The above diagram is a simplified subset of the process that occurs when an Ecore model is being created. It shows some of the major classes of the Ecore language used in building a model from Ecore, but it should be noted that there are many more relationships and other classes involved in the Ecore meta-modelling process. A brief description of some of the major aspects and classes of Ecore will be given.
- The EClass class is used to model classes. These classes are identified by a unique identifier, and Ecore can also represent data about a class. These data components of a class can be represented using EAttributes and EReferences. To support inheritance, a class can refer to a number of other classes as its super types [3].
- The EAttribute class is used to model attributes, which represent the attributes of the class; each of these has a specific EDataType which needs to be handled by the potential Java meta-model in order to faithfully create an instance of the Java meta-model without losing information.
- The EReference class is used for modelling associations between classes; the target of an association has to be of type EClass.
- The EDataType class specifies attributes to model primitive types and object data types which are specific to Java and Java models but would not already be defined within the EMF framework.
- Related EClasses and EDataTypes are grouped into packages called EPackage.
- The EFactory class is used to create instances of the EClasses and values of the EDataTypes that belong to the EPackages.
There are further structural, type, behavioural and classifier classes involved in building a model from the Ecore meta-modelling language, but they are too many to mention here; the aim of this section is only to give a brief discussion of the Ecore meta-model and how the JaMoPP developers would have used the Ecore language within their work.
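To make the roles of these Ecore classes concrete, the following is a minimal sketch in plain Java. It is emphatically not the real EMF API (the class and field names here are invented for illustration); it only mirrors how EClass-like objects group EAttribute-like data components and EReference-like associations, using the library catalogue example from earlier.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch (NOT the real EMF API): plain Java classes that mirror the
// roles of EClass, EAttribute and EReference described above.
public class EcoreSketch {
    static class EAttribute {
        final String name, dataType; // dataType stands in for an EDataType
        EAttribute(String name, String dataType) { this.name = name; this.dataType = dataType; }
    }
    static class EClassLike {
        final String name;
        final List<EAttribute> attributes = new ArrayList<>();
        final List<EClassLike> references = new ArrayList<>(); // EReference targets must be classes
        final List<EClassLike> superTypes = new ArrayList<>(); // multiple supertypes support inheritance
        EClassLike(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        // Model a library catalogue: a Book class referencing an Author class.
        EClassLike author = new EClassLike("Author");
        author.attributes.add(new EAttribute("name", "EString"));

        EClassLike book = new EClassLike("Book");
        book.attributes.add(new EAttribute("title", "EString"));
        book.references.add(author); // association, analogous to an EReference

        System.out.println(book.name + " has " + book.attributes.size()
                + " attribute(s) and references " + book.references.get(0).name);
    }
}
```

In the real framework these structures are themselves instances of the Ecore model, which is what makes Ecore a meta-meta-model.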
When the JaMoPP developers used the Ecore language to define their Java meta-model, this is how the Java language would map onto the Ecore classes. EClasses would be used to represent Java interfaces. The EAttributes and EReferences would be identified from the methods that are defined in each of the interfaces. An Ecore class known as EEnum would be used to model Java enumerated types. EPackages could also be used to model interfaces, but their specific components would come from the file's import statements. EDataTypes may be specified explicitly or can be mapped directly to corresponding Java types. Special comments are used by the EMF framework to identify model elements, and they can also provide additional information that is not directly expressed in the Java interfaces [17].
### 2.3 Java Model Parser and Printer
As mentioned in the introduction section, the JaMoPP developers found a gap within the MDSD process between the final stages of moving from the models to the concluding code of the system. The code that is output is normally generated as plain text and loses the structure that was present in the models thanks to the well-defined and formal meta-modelling process. Losing structure causes many issues and problems within the modelling world. For one, there would be no guarantees regarding syntactic or semantic correctness [4] in the generated model of the code. Also, if errors were to occur during model generation, there would be no way for the software to trace back and find the element of the model that caused the generation of that particular line of code.
The goal of Model-Driven Software Development is the (semi-)automatic generation of software systems from models across multiple stages [1]. Any model created to represent the system, no matter its level of abstraction, should be usable to help generate any other model that represents the system at a different level of abstraction, all the way down to the Java source code. This is achieved using a standardised model meta-language. A meta-language defines the types and constraints for all the models and the transformations between models within the MDSD process. This allows all transformations to be checked for correctness. These models, which represent the potential software system, should be standardised, redefinable and, finally, transformable into other models.
This is where the JaMoPP project comes into play. The developers of the JaMoPP project identified the above gap in the MDSD process and argued that it can be fixed. They see that people view the modelling process and the coding process as two completely separate disciplines, and that if this viewpoint could be changed the gap could be removed. They argue that modelling tools could be developed to handle the Java language in the same manner that current tools handle modelling languages. These tools could then be used to model Java just like any other modelling language.
The JaMoPP project provides the following three components in order to turn Java into a workable modelling language:
1. JaMoPP defines a complete meta-model for Java that covers the whole of the Java 5 language. The Java meta-model is defined in the commonly used meta-modelling language Ecore. This allows Java to be processed by meta-modelling tools such as the EMF framework [1].
2. JaMoPP defines a text syntax that conforms to the Java language specification. This is used to generate a parser, which can create instances of the meta-model from Java source code, and a printer, which transforms instances of the meta-model back into Java source code.
3. JaMoPP’s Java meta-model was specially designed by the JaMoPP developers to handle Java’s referencing and type rules. This is done using generated objects known as Java resolvers. These resolvers guarantee that an instance of the Java meta-model correctly models a Java source file and also Java’s static semantics.
The final result of the JaMoPP project is that Ecore based modelling tools can process Java files in the same way that they can process other models.
2.4 Java Meta-Model
The Java Language Specification does not include a complete explicit meta-model; the syntax and semantics of the language are specified either informally or using syntax diagrams [1]. The closest thing to a meta-model that Java has is its built-in reflection methods, which do not capture fine-grained elements like statements and code control flow elements [1]. The javac [18] parser has internal meta-models which are written in Java and help to create the abstract syntax tree, which can model all aspects of the Java language, but this is not done in a standard meta-modelling language format.
There are a number of standardised meta-models defined for Java but these suffer from incompleteness. The various problems for each of these meta-models range from not providing ways to capture blocks, statements or expressions to being purely tree structured. The tree structured meta-models cannot model static semantics such as identifiers and these are not resolved to their respective elements but only stored as plain strings [1]. This raises problems when trying to uphold consistency between the original model and newly manipulated versions of the model. Some of these meta-models can transform a Java program into another type of model but not vice versa.
The JaMoPP developers could not find a meta-model that both conforms to a well established meta-modelling language, for example Ecore and also fulfils their need for completeness. They decided to examine and compare the existing meta-models, extract commonalities and extend to fully support the Java Language Specification [1].
The meta-model that the JaMoPP project uses to define its Java meta-model is the core language Ecore. Ecore is a core model (meta-meta-model) that acts as its own meta-model, so it can be defined in terms of itself, which allows it to be used as the core meta-model for the EMF framework.
The JaMoPP developers ended up creating their meta-model using 80 abstract and 153 concrete classes which are distributed among 18 packages [1]. Their meta-model can model all of Java 5 including its new features, generics and annotations.
### 2.5 Text Syntax for Java
The current generation of MDSD developers have moved away from solely representing models graphically towards also using a usable textual representation for modelling and editing. A number of tools for defining textual syntaxes have arisen over the last few years. Some can produce what are known as generic syntaxes. These syntaxes are on the same level as meta-modelling languages, which means that a syntax can be derived automatically for concrete modelling languages [4]. Others produce what are known as custom syntaxes, which have to be manually specified.
To bridge the gap between these two kinds of textual syntax, the developers behind the JaMoPP project created EMFText. EMFText allows developers to stepwise refine specifications that are automatically derived from given meta-models. This enables the developer to build a custom syntax starting from a generic syntax [4]. EMFText is a plug-in for the Eclipse development environment, which allows EMFText to work in conjunction with the EMF Framework and with meta-models defined using EMF.
The EMFText developers used the Extended Backus-Naur Form (EBNF) to form the basis of their syntax specification language ConcreteSyntax (CS). From the CS a Java parser and printer were generated.
[Figure 4 diagram: the figure contrasts the original technology with the leveraged technology for both syntaxes. The abstract syntax is designed in Ecore (as: Ecore); the concrete syntax is developed in EMFText (cs: EMFText), which generates an ANTLR parser (Parser: ANTLR).]
*Figure 4: The process of creating an ANTLR Parser from a concrete syntax after a meta-model has been defined within the EMF framework [5]*
The diagram above shows the process that the developers of the JaMoPP project used to create the Java meta-model using the EMF framework’s core language Ecore, and then using that to create the concrete syntax. From this, as mentioned above, EMFText derives a context-free grammar and exports it as an ANTLR [6] parser specification. EMFText then transparently delegates parser and lexer generation to ANTLR by passing the generated grammar file [7].

**Figure 5:** The full process that was used in generating the Java specific parser for the JaMoPP project[1]
Java has scoping and reference rules which need to be modelled correctly by EMFText. EMFText generates objects known as resolvers to take care of these requirements and to convert parsed tokens into an adequate representation of the Java language. The parser is used to create model representations of text-based Java programs from the Java meta-model. The printer performs the opposite operation: it takes an instance of the meta-model and turns it back into a text-based representation that is formatted and human-readable. Both instances of the model, whether created by the printer or the parser, should be equal, and it should be possible to transform between them without losing or adding any information.
Primarily the JaMoPP developers wanted the project to focus on Java source files, but they also included a way to parse class files. They use the BCEL byte code library [8] to create an instance of the Java meta-model from byte code, but they do not support printing such models back to text.
2.6 The Complete System View and Modelling process
![Figure 6: The complete JaMoPP project and modelling process [1]](image)
The above image shows how the JaMoPP project operates when a modelling tool wants to use its Java meta-model. The image also shows the architecture of the project and how it slots into and expands upon the EMF Framework to model Java. The EMF core language, Ecore, brings many benefits by helping EMFText to generate parsers and printers that fit transparently into the EMF architecture. These generated components can then be used to transform Java into any EMF-based model. The Framework can do this regardless of the specific syntax of Java or of any other language it is required to model. In what follows, we provide a short description of the events that occur while the EMF Framework transforms some model input into a meta-model instance, with an emphasis on the Java meta-model.
A modelling tool looking to use an EMF meta-model would first connect to the EMF Framework and instantiate a Resource Set [1]. After this, the Resource Set acquires a Resource Factory. The file that the modelling tool input into the EMF Resource Set is given a unique identifier, known as a URI, which is usually the location of the file in the operating system. The URI is stored in a global URI map for the global resource registry [1]. The URI assists the EMF framework in selecting the correct factory for the input object, in this case the JavaResourceFactory, by looking at the URI’s file extension. This process helps to hide the encoding of the resource from the modelling tool that is using the EMF Framework. Next, a new specialised resource is generated by EMFText from the Java ConcreteSyntax specification and connected with a generated parser and printer [1]. Also at this point, the Java resolver set is generated by EMFText to handle Java specifics such as name/type resolution and reference rules.
Now a new EMF model has been created, and this is stored in the system as a resource (Java Resource). This resource handles the parsing operation, known as loading because it loads an instance of the Java meta-model into the EMF Framework, and the printing operation, known as saving because it transforms the meta-model back to a textual representation. The final component in the image above is the BCEL class parser. This is implemented in the JaMoPP project and extends the Java parser to handle class files, which is outside the scope of this project.
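The extension-based factory selection described above can be illustrated with a small stand-alone sketch. This is a simulation in plain Java, not the real EMF registry API: a map from file extensions to factory names plays the role of the Resource Factory registry, and a lookup on the URI's extension plays the role of the framework choosing the JavaResourceFactory for a ".java" file.

```java
import java.util.HashMap;
import java.util.Map;

// Toy simulation (not the real EMF API) of choosing a resource factory
// by a URI's file extension, as the EMF framework does when a ".java"
// URI is handed to it and the JavaResourceFactory is registered.
public class FactoryLookup {
    // Stand-in for the extension-to-factory map of the resource registry.
    static final Map<String, String> registry = new HashMap<>();
    static {
        registry.put("java", "JavaResourceFactory");
        registry.put("ecore", "EcoreResourceFactory");
    }

    static String factoryFor(String uri) {
        // Look only at the file extension, hiding the encoding from the tool.
        String ext = uri.substring(uri.lastIndexOf('.') + 1);
        return registry.getOrDefault(ext, "no factory registered");
    }

    public static void main(String[] args) {
        System.out.println(factoryFor("/workspace/src/Library.java")); // JavaResourceFactory
    }
}
```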
2.7 Software Testing and Code Coverage
Software testing can be described as an empirical investigation into the quality of the software under test. Testing can provide essential information to the developers and the final end user about the effectiveness and operational capabilities of the product. The main purpose of testing is to execute a piece of software with the intent of detecting failures in the running of the code. Not all failures are caused during the process of writing the code; some are caused by non-functional requirements that have not been met, such as unforeseen security issues or code performance issues. Software testing is a vast area and an essential element in the software development life cycle.
Two of the most important techniques of software testing are known as black box testing and white box testing. Black box testing views a piece of software as a black box and focuses on testing for all the possible input values and the expected output values. It is driven by the specification that was drawn up at the outset of the development life cycle. The major black box testing techniques are equivalence partitioning, boundary value analysis and use case testing. White box testing uses the internal workings of the code and data structures to derive its test cases. The main techniques that are used with this method of testing are branch testing, code coverage, Application Programming Interface (API) testing, path testing and control flow testing.
Code coverage will be used in this paper to evaluate the JaMoPP project. This testing technique is used to measure the degree to which a piece of software has been tested. The areas that the coverage criteria cover are as follows:
• Method Coverage – checks whether all the methods within the code have been called at least once.
• Statement Coverage – measures how many lines of code have been executed and how many have not.
• Decision Coverage – checks whether each code control structure’s path has been tested fully. For example, it checks whether a loop condition’s true and false paths have both been executed.
• Condition Coverage – checks whether each Boolean statement has been executed for both its true and false paths.
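Since statement coverage is the metric used throughout this paper, a short sketch of how the number is computed may help. This is a generic illustration, not the algorithm of any particular coverage tool: each executable line is flagged as executed or not, and coverage is the percentage of flagged lines.

```java
// Small illustration of the statement-coverage metric used in this paper:
// coverage is the percentage of executable lines that ran during the tests.
public class Coverage {
    static double statementCoverage(boolean[] executed) {
        int hit = 0;
        for (boolean line : executed) if (line) hit++;
        return 100.0 * hit / executed.length;
    }

    public static void main(String[] args) {
        // true = the line was executed by some test, false = never reached
        boolean[] lines = {true, true, false, true, false};
        System.out.println(statementCoverage(lines) + "%"); // 60.0%
    }
}
```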
There are a number of different levels of software testing; the one that is of importance to this paper is the system test level. System tests are performed on completed and fully integrated software or hardware systems to evaluate how well they live up to their specification. This phase of the testing process is normally done through black box testing, which has no internal knowledge of the workings of the code.
2.8 Goals of the paper
The main goal of this paper is to ascertain how well the JaMoPP project can model the Java 5 language: to system test the project to see if it can accept all valid Java 5 programmes that can be thrown at it, and to test whether it can create a corresponding Java meta-model instance from a valid input file and then retransform the meta-model back into the correct Java source code file. This will prove that it is performing what it is supposed to achieve. The retransformed source file can then be compared to the original input and checked for errors or missing data that was not captured by the JaMoPP project. A code coverage tool will then be used to see how many of the paths in the grammar rules have been covered and how many of the abstract syntax tree paths have been created during the system tests. These measurements will then be used to see if a correlation exists between the grammar and the abstract syntax tree coverage results. The next two paragraphs will explain a little about grammars and abstract syntax trees to help provide an understanding of their use in programming languages and their relevance to the Java meta-model.
Java is a formal language that has a syntax and a grammar. The grammar defines the rules which can be used to form valid strings, known as tokens, within the Java language. These tokens are formed from the language alphabet in accordance with the language syntax. A Java ANTLR lexer, which as mentioned above was created from the CS specification, is used to scan the input code and split it up into tokens; white space is ignored. These tokens are then fed into the parser, which implements a context-free grammar. The parser will determine whether each of these tokens is valid and also that each token appears in the correct order. Formally, a grammar is a four-tuple \((N, T, S, P)\) where \(N\) and \(T\) are disjoint sets of symbols known as non-terminals and terminals respectively, \(S\) is a distinguished element of \(N\) known as the start symbol, and \(P\) is a relation between elements of \(N\) and the union and concatenation of symbols from \((N \cup T)\), known as the production rules [2]. Java also has a symbol to represent the empty string. The production rules describe the valid sentences of the Java language. These sentences are the input programs that conform to the grammar of the language [2].
An abstract syntax tree is the representation of the abstract syntactic structure of the parsed Java input source file. The tree is the result of the path taken through the context-free grammar in the parsing stage. The generated abstract syntax tree only shows variables, operations and statements encountered during the parsing of the input file; these are represented as nodes on the tree. The tree does not show all the details of the file and will leave out end-of-line characters and brackets, which are implied by the visual structure of the tree. Within a standard Java compiler the tree is used for semantic analysis of the code and then for generation of the Java byte code representation of the input file. In the case of the JaMoPP project, the abstract syntax tree is a generated model of the code with unresolved cross references [1].
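The lexer → parser → abstract syntax tree pipeline described in the last two paragraphs can be sketched on a deliberately tiny grammar (this is an invented toy grammar for illustration, not JaMoPP's Java grammar): `expr ::= term ('+' term)*` and `term ::= NAME | '(' expr ')'`. The sketch tokenises on whitespace, applies the production rules by recursive descent, and prints the resulting tree in prefix form; note that the brackets of the input leave no node in the tree, since they are implied by its structure.

```java
// Toy sketch of the lexer -> parser -> abstract syntax tree pipeline for a
// tiny grammar (not JaMoPP's actual Java grammar):
//   expr ::= term ('+' term)*     term ::= NAME | '(' expr ')'
public class TinyParser {
    private final String[] tokens;
    private int pos = 0;

    TinyParser(String input) { this.tokens = input.trim().split("\\s+"); } // white space is ignored

    String expr() {                       // expr ::= term ('+' term)*
        String left = term();
        while (pos < tokens.length && tokens[pos].equals("+")) {
            pos++;
            left = "(+ " + left + " " + term() + ")"; // AST node for '+'
        }
        return left;
    }

    String term() {                       // term ::= NAME | '(' expr ')'
        if (tokens[pos].equals("(")) {
            pos++;                        // consume '('
            String inner = expr();
            pos++;                        // consume ')' - brackets leave no AST node
            return inner;
        }
        return tokens[pos++];             // a NAME token becomes a leaf node
    }

    public static void main(String[] args) {
        // The parentheses in the input are dropped from the printed tree.
        System.out.println(new TinyParser("( a + b ) + c").expr()); // (+ (+ a b) c)
    }
}
```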
**Section 3: Testing Approach**
This chapter will briefly describe the process of installing the JaMoPP project within the Eclipse environment. It will describe the packages that are required for the project to work correctly, with the aim of adding understanding of the system. Describing the packages will also provide an opportunity to identify the classes and sub-packages that will be used to gather code coverage measurements. It will identify how the goals of this paper will be achieved. Finally, the testing process will be described in detail, including the kinds of test that will be run and also ideas on appropriate sources for the tests.
3.1 JaMoPP setup and package descriptions
The JaMoPP project comes as a number of plug-ins for the Eclipse environment. According to the JaMoPP webpage [9], there are two ways to install the project into Eclipse. The first way is to use the Eclipse download manager to install what is known as the stable JaMoPP project. This is the version that the developers provide so that people can use their project for model development and for practical applications of Java source code to EMF model transformations, or from EMF model to Java source code transformations. This version of the software cannot be tested, as it does not include the testing base package and other features that are required. All documentation for the project focuses on this version of the software, including the project setup demonstration on the website [10]. The other way to install the project is to use a subversion tool to access the public repository through Eclipse; the link for the repository is also on the webpage. This version includes the latest builds that the developers are working on and also includes the methodology that they used to test their project on a number of big open source projects. The repository does not, however, include the essential SDK of the project or the base testing package. For this paper, this is the version of the JaMoPP project that will be used to perform the empirical study and test how well the JaMoPP project can model Java.
As mentioned above, the JaMoPP project is not well documented, and this caused many problems in the correct installation of the project. The closest installation information source is out of date and is only intended for the stable version. The public repository does not hold the complete project or the base testing package. Only by directly contacting the developers were the many issues listed above resolved. The next section of the paper will provide a very brief description of the installation process to add understanding of the JaMoPP project and to provide information on the packages involved, some of which will be used as the basis for the testing metrics of this paper. The developers of the JaMoPP project were very helpful in finding all the packages and providing the information that was required to get the project fully compiling.
The first set of packages needed for the JaMoPP project form the core elements of the software and are known as the EMFText SDK. These are the runtime environment plug-ins and ANTLR base code that are used to extend the EMF framework to handle Java the same as any other EMF model. These packages are checked out from a repository into the Eclipse environment and then exported as a “deployable plug-in and fragment project” from Eclipse into the Eclipse plug-in folder. After this step, the main plug-ins for the project can be checked out of the public repository mentioned earlier. These are the packages that will form the Java specific parser, resolver, and printer components, which are generated from the ConcreteSyntax, and which also specify the Java abstract syntax tree and Java meta-model. There are a total of 5 of these packages.
The org.emftext.commons.antlr3_1_1 package provides the ANTLR runtime mechanisms and source code required by EMFText to extend the ANTLR classes into a Java specific parser. This is achieved through the ConcreteSyntax, which derives a context-free grammar and exports it as an ANTLR parser specification, giving the JaMoPP project the power to transform Java source code into instances of the Java meta-model. The ANTLR package comes with four sub-packages, only one of which is used to provide components for the JaMoPP project. This sub-package, as mentioned, is extended to provide the base for the parser, printer, lexer, token streams/tokenizer, file readers/file streams and classes that work on the current token’s state. None of the ANTLR package classes will be used as a metric during the testing phase, because they are just extended by the org.emftext.language.java.resource.java classes.
The org.emftext.language.primitive_types package provides the Java specific primitive types for the Java meta-model. After the package has been checked out of the repository, its .genmodel file has to be run to generate the Java specific EDataTypes for the Ecore meta-model. None of these classes will be used as a metric for our tests as the grammar that will be generated later by the ConcreteSyntax will include the relevant information from these classes.
The org.emftext.language.java.resource package is a hand-written package created by the JaMoPP developers to extend their generated Java parser to handle Java class files and Java byte code. This package will not be used as a testing metric.
The org.emftext.language.java.resource.java package is the package into which the generated parser and printer will be generated. Its sub-packages will also contain the Java resolvers that need to be created to correctly handle Java specific rules. As mentioned above, the generated sub-packages will hold all the necessary Java specific parser, lexer, printer etc. Code coverage readings will be taken on the Java parser class to see how many of the paths through the Java grammar have been taken as a result of the parsing of the input files. Within the parser class are methods which transform Java statements, generics, primitive types/types and classifiers etc. to the corresponding Java meta-model/Ecore representation. These methods will be recorded for their code coverage and then contrasted against the corresponding sub-packages of the Java abstract syntax tree found in the org.emftext.language.java package.
The org.emftext.language.java package is where the ConcreteSyntax specification is used to generate the Java specific components for the package mentioned in the paragraph above. When the package is checked out of the public repository, its .genmodel file needs to be run first. After this file is run, the Java abstract syntax tree for the Java meta-model will be generated within newly created sub-packages. As mentioned in the paragraph above, these classes will be measured for code coverage and then compared to the corresponding methods in the Java parser class. Next, the java.cs file within the package needs to be run. This will finally generate the much talked about grammar, and now the entire JaMoPP project has been installed and should compile fully.
The JaMoPP developers tested their project against big open source software such as Eclipse, JBoss and Netbeans. They have supplied a testing harness and packages in which their tests can be recreated. This is done using Eclipse's built-in testing plug-in, JUnit [11]. As part of the testing strategy of this paper, these tests will be rerun in chapter 4. They were selected by the developers as a good way to test the majority of the Java meta-model, and they will help to gain high code coverage for the majority of both the grammar and the abstract syntax tree, which can then be compared against each other.
### 3.2 Testing process overview
This section will explain the testing process that will be used for testing the JaMoPP system within this paper. The main tests that will be performed on the JaMoPP project will use a black-box testing technique known as system testing. Because of the black-box nature of system testing, the focus will be on inputting source code and then examining the results of the output. A system test for this paper will consist of taking a large set of input source code and then using the JUnit testing framework with a code coverage tool to ascertain results. The two tests that will be run on each input file are:
1. A test to see if an input file can be parsed to create an instance of the Java meta-model.
2. After this, a test will follow that takes the newly created instance of the meta-model and, using the Java printer, returns the model to a text-based representation. This text-based instance will then be checked to see if it is the same code as the original file.
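The two tests above form a parse/reprint round trip. The sketch below illustrates only the pass/fail logic, using trivial stand-ins for the JaMoPP parser and printer; the real pipeline goes through EMF resources and the JDT comparison described later, and all names here are hypothetical.

```java
// Hypothetical sketch of the two round-trip system tests described above.
// The real JaMoPP pipeline uses EMF resources; these stand-ins only
// illustrate how the pass/fail decision is made.
public class RoundTripSketch {
    // Stand-in for the JaMoPP parser: text -> "model" (here, a token list).
    static String[] parse(String source) {
        return source.trim().split("\\s+");
    }

    // Stand-in for the JaMoPP printer: model -> text.
    static String print(String[] model) {
        return String.join(" ", model);
    }

    public static void main(String[] args) {
        String original = "public class HelloWorld { }";

        // Test 1: parsing must succeed and yield a model instance.
        String[] model = parse(original);
        if (model.length == 0) throw new AssertionError("parse test failed");

        // Test 2: reprint the model and compare against the original source.
        String reprinted = print(model);
        if (!reprinted.equals(original)) throw new AssertionError("reprint test failed");

        System.out.println("both tests passed");
    }
}
```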
Following this a code coverage tool will measure the paths that were taken through the grammar and it will also measure the paths taken through the abstract syntax tree. These results can then be compared and also the code coverage will show how much of the system is left untested. Results will be gathered for a number of system tests to see if there is a correlation between the grammar and the abstract syntax tree or if it is possible to fully reach 100% coverage of both.
As mentioned above, the number of input files per round of testing will have to be large and cover as much of the Java language as possible to fully test JaMoPP's abilities. One option would be to recreate some of the JaMoPP developers' tests and get code coverage readings from these. The tests that they used included Eclipse and Netbeans, which are of a massive size and because of this should bring in high coverage results. Another testing idea is the use of beginner software development books and their example source code. These books teach students the basics of Java programming in a systematic and direct way, which should expose the student to most aspects of the language. For this reason they would make an ideal test for the JaMoPP system and should provide good coverage results.
The full overview of the testing process will now be discussed. A valid Java source code file is given to the JaMoPP system for parsing. This stage is a twofold process. The file is processed by the JaMoPP parser and, as mentioned previously, a model abstract syntax tree with unresolved cross-references is created. At the same time, the JDT parses the file and creates a standard Java abstract syntax tree. If this test is successful, a green tick will appear beside the test name in the Eclipse GUI of the JUnit test framework. If the parsing of the file is not successful, or problems are identified during the cross-referencing process, the JaMoPP developers specified that the system return a parsing exception or run forever. The run-forever case will be caught by a timeout exception in the JUnit test framework. Any of the above exceptions will result in a failure in the JUnit framework, and a blue "X" will be placed beside the test name in the Eclipse GUI. Any problems with the Java heap size or problems reading a file will register as an error in the testing framework, and the test will receive a red "X" beside its name.
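The "runs forever" case described above can be sketched with a plain ExecutorService. This is not the JaMoPP or JUnit code itself, only an illustration of how a non-terminating parse is converted into a timeout failure; the parse tasks here are placeholders.

```java
import java.util.concurrent.*;

// Hedged sketch of how a "runs forever" parse is caught as a timeout,
// mirroring the JUnit timeout behaviour described above.
public class TimeoutSketch {
    static String runWithTimeout(Callable<String> parseTask, long millis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return pool.submit(parseTask).get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            return "FAILURE: timeout";          // JUnit marks the test as failed
        } catch (Exception e) {
            return "ERROR: " + e.getMessage();  // e.g. heap or file-reading problems
        } finally {
            pool.shutdownNow();                 // interrupt any still-running task
        }
    }

    public static void main(String[] args) {
        // A parse that completes quickly passes and returns its model...
        System.out.println(runWithTimeout(() -> "model", 1000));
        // ...while one that loops until interrupted is reported as a timeout.
        System.out.println(runWithTimeout(() -> {
            while (!Thread.currentThread().isInterrupted()) { /* spin */ }
            return "interrupted";
        }, 100));
    }
}
```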
Figure 7: A Java class beside its Java meta-model instance which has the structure of an Ecore model [4]
Next, the instance of the meta-model has its cross-references resolved and is transformed back into text-based code. This file is then parsed using the JDT and another abstract syntax tree is created for it. The second JDT abstract syntax tree is then compared to the original JDT abstract syntax tree using the JDT's own abstract syntax tree matcher [1]. If these two abstract syntax trees are the same then the test passes.
Following these two tests, the code coverage tool returns coverage measurements on all the files that were used during testing. The results for the Java parser and the abstract syntax tree will then be collected and compared. The coverage tool also colours the source code to show which lines of code have been executed and which have not. Where, for example, only a statement's true path has been taken, the tool will colour that statement differently from the case where both of the statement's paths have been traversed. More specialised testing could be created to achieve 100% coverage of these types of statements.
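The partially covered branch that the colouring highlights can be pictured with a toy method (hypothetical code, not from JaMoPP): running it on a single input exercises only one of the two paths, which a branch-coverage tool would mark as partially covered.

```java
// Illustration of partial branch coverage: with the single input below,
// only the true path of the if-statement is executed, so a coverage tool
// would colour the statement as partially covered.
public class BranchCoverageExample {
    static String classify(int n) {
        if (n >= 0) {
            return "non-negative";  // executed by the input below
        }
        return "negative";          // never reached: missing coverage
    }

    public static void main(String[] args) {
        System.out.println(classify(5));  // exercises only one branch
    }
}
```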
### 4 Testing and Coverage Results
The JaMoPP project was heavily tested by its developers using industry-sized open source software. They tested it against very popular Java development environments, web frameworks and some code generators. They did these tests firstly to see if JaMoPP could be used in real-world modelling development scenarios, while at the same time testing JaMoPP's ability to handle as much of the Java 5 language as could be thrown at it. This paper will also conduct large-scale tests on the JaMoPP project, covering as much of the Java language as possible, and then use the coverage information from these tests to analyse the parser and abstract syntax tree.
The first test to be performed will help to establish some baseline coverage readings, and will also help to set up a later comparison between the coverage of the abstract syntax tree and the parser. The source code under test will be the classic HelloWorld Java program. This is usually the first program shown to new software developers, and while it is easy to understand it uses a number of elements of the Java language. Looking at this first will allow for a more in-depth look at the underlying coverage results for the parser and abstract syntax tree, building up from a simple, well-known program all the way to the massive-scale tests required to cover as much of Java as possible. The later tests will consist of many hundreds of source files, where it will be difficult to establish which line of code maps to the execution of a certain path through the grammar or abstract syntax tree. As described in section 3, before the coverage results can be obtained, the parse and reprint tests must be performed; from the execution of these tests, the coverage tool reports the percentage of the parser grammar and abstract syntax tree that has been executed.
<table>
<thead>
<tr>
<th>Test Name</th>
<th>No. of Tests</th>
<th>Passed</th>
<th>Failed</th>
<th>Errors</th>
<th>Parser</th>
<th>AST</th>
</tr>
</thead>
<tbody>
<tr>
<td>HelloWorld</td>
<td>2</td>
<td>2</td>
<td>0</td>
<td>0</td>
<td>25%</td>
<td>31.7%</td>
</tr>
</tbody>
</table>
Table 1: Shows the results of the HelloWorld tests
In the table above, the results from the HelloWorld tests are shown. It can be seen that two tests were performed successfully on the Java input code. In the first test the JaMoPP project was able to take the source code and create an instance of the Java meta-model. In the second test the JaMoPP project was able to take the model and transform it back into text-based source code, which was then compared to the original and found to be the same. The last two columns of the table display the coverage results for the parser and the abstract syntax tree. A closer look will now be taken at the coverage results for the two components, in order to show how these overall results are achieved and also to highlight some issues observed during the experiments regarding the layout of the packages that make up the abstract syntax tree compared to the single file of the parser.

Figure 8: Partial view of the subfolders of the generated folder found in the org.emftext.language.java package.
The package org.emftext.language.java and its generated subfolder were described in section 3. It is this generated subfolder which contributes to the coverage readings for the abstract syntax tree. In table 1 we can see that 31.7% of the code of this subfolder was executed. Within this generated folder are many subfolders which are used to create the JaMoPP abstract syntax tree for the Java meta-model. As can be seen in figure 8 above, each subfolder of the generated folder corresponds to a component of the Java language; for example, the arrays subfolders deal with every aspect of arrays in the Java language. As figure 8 shows, there are three folders that hold the word array within their name, and this is the same for all the components of the JaMoPP abstract syntax tree. For the array component these folders are called arrays, arrays.impl and arrays.util. The subfolder arrays holds all the interface classes that model each element of the array component of the Java language and help to build the JaMoPP abstract syntax tree for input source code. The arrays.impl subfolder holds the implementing classes for the interfaces mentioned above, and this subfolder makes up the entire contribution to the array component's coverage results. The final subfolder, arrays.util, contains only some adapter classes for arrays. This folder returns a coverage result of 0% for all the tests that are to be discussed in this section.
As mentioned, the Java parser is a single class whose methods correspond to classes in each of the impl subfolders described above. These classes and methods can then be compared side by side to contrast the parser and abstract syntax tree results. Figure 9 below shows the breakdown of the coverage results for each of the Java components, for both the abstract syntax tree and the parser. Coverage results for each component of the abstract syntax tree are taken from the impl subfolders, while in the parser, for example, the coverage of all methods that deal with the array component classes is added up and expressed as a percentage of the total possible for the array component.
Figure 9: Breakdown of the per-component coverage results for the parser and the abstract syntax tree.
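The per-component parser percentages can be thought of as an aggregate over the methods belonging to one component: sum the covered lines across those methods and divide by their total lines. The sketch below uses made-up line counts for three hypothetical array-handling methods purely to illustrate the arithmetic.

```java
// Sketch of per-component coverage aggregation. The line counts are
// illustrative only, not real JaMoPP measurements.
public class ComponentCoverage {
    // Each entry is a {coveredLines, totalLines} pair for one method.
    static double percent(int[][] methods) {
        int covered = 0, total = 0;
        for (int[] m : methods) {
            covered += m[0];
            total += m[1];
        }
        return 100.0 * covered / total;
    }

    public static void main(String[] args) {
        // Made-up counts for three hypothetical array-handling parser methods.
        int[][] arrayMethods = { {40, 80}, {10, 50}, {0, 70} };
        System.out.printf("array component: %.1f%%%n", percent(arrayMethods));
    }
}
```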
As can be seen in figure 9, even though the HelloWorld input file contained no generics, imports or other such components of the Java language, the abstract syntax tree that was created from the input file still caused some of the generated folder's code for them to be executed. In the parser, as the graph shows, no paths for generics or imports were traversed. The same is true for the other components of the Java language within the test: some components had 0% coverage in the parser, or significantly less coverage than in the abstract syntax tree. Let's take a closer look at one component of the Java language and compare the results of the parser and the abstract syntax tree.
The modifier component of the Java language consists of keywords that set the visibility and accessibility of a class, its member variables, and its methods [12]. These keywords include public, private, protected, native, final, etc. In figure 10 it can be seen that the abstract syntax tree has 100% coverage for most elements of this component, including private and protected, even though the only modifiers used in the HelloWorld class are public and static. Across all the tests run during the testing phase, even though every modifier reached 100% coverage in the abstract syntax tree, the highest result ever reached within the parser was 62.4%, which is the same as the result achieved by the public keyword in this test.

Figure 10: Graph of the coverage results for the parser and AST for the modifier component of the Java language.
The next round of tests will focus on testing the JaMoPP project using source code from software development books. The idea behind these tests is that these books teach Java from the ground up and would cover all the basics of the Java language. This would include the String class, loops and programming control flow, generics and collections and modifiers etc.
<table>
<thead>
<tr>
<th>Test Name</th>
<th>No. of Tests</th>
<th>Passed</th>
<th>Failed</th>
<th>Errors</th>
<th>Parser</th>
<th>AST</th>
</tr>
</thead>
<tbody>
<tr>
<td>HeadFirst</td>
<td>116</td>
<td>98</td>
<td>18</td>
<td>0</td>
<td>45.3%</td>
<td>40.6%</td>
</tr>
<tr>
<td>Sams</td>
<td>212</td>
<td>196</td>
<td>16</td>
<td>0</td>
<td>46.8%</td>
<td>41.7%</td>
</tr>
<tr>
<td>Game</td>
<td>374</td>
<td>222</td>
<td>152</td>
<td>0</td>
<td>51.3%</td>
<td>42.7%</td>
</tr>
<tr>
<td>Nutshell</td>
<td>78</td>
<td>63</td>
<td>15</td>
<td>0</td>
<td>47.4%</td>
<td>41.1%</td>
</tr>
<tr>
<td>All Tests</td>
<td>778</td>
<td>577</td>
<td>201</td>
<td>0</td>
<td>55.1%</td>
<td>44.5%</td>
</tr>
</tbody>
</table>
Table 2: The results achieved from the software programming books. The All Tests row refers to all the tests from the books run together.
The table above shows the results from the software book tests. The HeadFirst [13] tests refer to the source code supplied with the book 'Head First Java', released by O'Reilly Media; this book focuses on Java desktop application development. The Sams [14] tests refer to the source code supplied with the book 'Sams Teach Yourself Java 6 in 21 Days'. This book, like the last, focuses on creating Java desktop applications, but it does so using version 1.6 of the Java language. Good source code was hard to find on the web, so this was a way of testing the language with expected failures: all the failures found while testing JaMoPP with this book were simply problems with JaMoPP understanding Java 6 specifics. The Game [15] tests refer to the source code found in the book 'Developing Games in Java', which focuses on building games in Java; it was thought that this book would provide a good way to test the JaMoPP system for thread handling and maths operations. The failures found here relate to external libraries, such as the javax library, which JaMoPP does not seem able to handle. The Nutshell [16] test refers to the book 'Java In A Nutshell', which focuses on building websites and Java web applications. The failures found here also relate to the javax library and GUI-building errors. The All Tests row refers to all the tests from the books above run together, giving the overall coverage results for this set of tests.
In the tests shown in the table above, there were a lot of failures in the reprinting tests. The files could all be parsed fine, but when they were reprinted and compared to the original they were found to be different. The only ways the parse test can fail are if it is given an invalid Java code file or if it times out within the JUnit testing framework. Let's look at the failures found in the HeadFirst test cases; these should all have passed, because they use the Java 5 language in a way that would be expected in any standard programming practice.
The tests seem to fail because of a number of ways in which the Java keyword import is used within the HeadFirst tests. There were 18 failures out of 116 tests. Four of the tests fail because they import the javax class. The rest of the tests fail because they import classes that are part of the HeadFirst source code. The JaMoPP project does not seem able to import user-created classes, and this has many knock-on effects on the methods that the importing class invokes from the imported class. So JaMoPP can only import code that has
been particularly defined by the developers within the process of creating the textual syntax. JaMoPP also can't handle cases where a class creates a new object instance of another user-created class that is within the same folder. For example, suppose a folder contains two classes, Class1 and Class2, and within Class2 the statement `Class1 example = new Class1();` appears. This is valid code within a Java file, yet JaMoPP is not able to handle it.
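The failing scenario is plain Java: two classes in the same package, one instantiating the other. For brevity the sketch below compiles both classes from a single file (the reported failure involves two separate .java files in the same folder); the `value()` member is hypothetical, added only to show a cross-class invocation.

```java
// Class1 and Class2 side by side; in the failing JaMoPP case these live
// in two separate .java files within the same folder.
class Class1 {
    int value() { return 42; }  // arbitrary member invoked across classes
}

public class Class2 {
    // Valid Java: a same-package reference that JaMoPP's resolver
    // reportedly could not handle for user-created classes.
    Class1 example = new Class1();

    public static void main(String[] args) {
        System.out.println(new Class2().example.value());
    }
}
```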
Figure 11: Breakdown of the per-component coverage results for the All Tests run.
The All Tests results in figure 11 are not much better than those achieved in the HelloWorld tests. The abstract syntax tree barely gained any more coverage in any component, while the parser did much better per component. It was therefore decided to input a blank source code file into the JaMoPP project and see what abstract syntax tree results could be achieved; this shows how much of the abstract syntax tree code is executed regardless of the code in the input files. In figure 12 we can see the results of this test: the abstract syntax tree shows high coverage results per Java component even though the input file was blank. Only one method of the parser was executed, the method that specifically deals with the instance of a blank model. The overall coverage for the parser from the blank test was 15.3%, but looking only at the methods significant for this paper, 0% is achieved except in the case of the blank-model method, which got 21.1%. In the abstract syntax tree results the overall figure is 24.6%. This may appear small, but it must be remembered that the .util folders always achieve 0%, which brings the results down; as can be seen in the graph in figure 12, each of the .impl folders is quite high, almost as high as during the book tests.

Figure 12: The breakdown of the AST results for a blank input file. The total AST result was 24.6%.
Looking inside each of the .impl folders, there is a file called (Java component)PackageImpl which scores almost 100% in each .impl folder, and this appears to be by far the biggest file in any of these folders. For example, in the modifier.impl folder there is a file called ModifiersPackageImpl; this file achieves 92.9% coverage, while the overall modifiers component result is 57.2%. The rest of the files that handle the actual modifier instances, for example public.java, achieve 0% coverage. These PackageImpl files handle creating instances of each class found within the various .impl folders, and they seem to create these instances whether or not they are used in the creation of a meta-model instance. So the question becomes: if the file with the biggest amount of code is getting almost 100% coverage and, as shown in figure 10, most files within the .impl folder get 100% coverage for a simple program such as HelloWorld, why can't a 100% result be achieved for the modifiers component with only a few additions to the HelloWorld file? The answer is that some files only ever reach a certain level of coverage in every test that was run. For example, the second biggest class within this folder, ModifiersFactoryImpl, only ever reaches 52.1% coverage; the AnnotationInstanceOrModifierImpl class always gets 60% coverage; and the ModifierImpl class also gets 60% coverage. These results occur regardless of the size and number of the input tests. These kinds of situations occur in every Java component of the abstract syntax tree.
<table>
<thead>
<tr>
<th>Test Name</th>
<th>No. of Tests</th>
<th>Passed</th>
<th>Failed</th>
<th>Errors</th>
<th>Parser</th>
<th>AST</th>
</tr>
</thead>
<tbody>
<tr>
<td>Andromeda 3.3</td>
<td>698</td>
<td>698</td>
<td>0</td>
<td>0</td>
<td>52.9%</td>
<td>43.9%</td>
</tr>
<tr>
<td>Apache-tomcat</td>
<td>1127</td>
<td>1127</td>
<td>0</td>
<td>0</td>
<td>60.4%</td>
<td>46.1%</td>
</tr>
<tr>
<td>Eclipse 3.4.1</td>
<td>16696</td>
<td>16690</td>
<td>0</td>
<td>6</td>
<td>62.9%</td>
<td>46.6%</td>
</tr>
<tr>
<td>Netbeans 6.5.1</td>
<td>31223</td>
<td>31167</td>
<td>53</td>
<td>3</td>
<td>68.1%</td>
<td>48.4%</td>
</tr>
<tr>
<td>JBoss 5.0.0</td>
<td>6414</td>
<td>6414</td>
<td>0</td>
<td>0</td>
<td>64%</td>
<td>47.4%</td>
</tr>
</tbody>
</table>
Table 3: The results achieved by recreating some of the JaMoPP development team's tests.
The tests shown in table 3 are recreations of the tests performed by the JaMoPP developers when they were testing their system. These tests comprise thousands of files and should cover as much of the Java language as is possible using real-world projects. These tests should all have passed, because the developers mention in their paper [JaMoPP] that every input they tested passed; as can be seen in table 3, this is not the case. The nine errors in the above tests occurred during the parse test due to a timeout within JUnit. This could happen for two reasons:
1. The input file was so big that parsing took longer than the length of time specified in the JUnit testing code.
2. The JaMoPP developers designed the parser, in the case where it could not parse a valid input file, either to throw a parsing error or to run forever. The run-forever situation would then be caught by a timeout in the JUnit testing framework.
There are also 53 failures in the Netbeans test; these mostly occurred due to resolver issues in the reprint test.
The figure above shows the breakdown of the parser vs. abstract syntax tree coverage results for the Netbeans test. The parser results get closer and closer to full coverage as the amount of the Java language being used increases. But, as has been mentioned, a number of parser methods only ever reach a fixed code coverage result no matter how many files are inputted. For example, in the modifier component the methods that deal with public, private, static, etc. only ever reach a result of 62.4%, while in the abstract syntax tree 100% is achieved. Looking at the files and methods that make up the Java operators component, the same situation can be seen. The operators component deals with addition, the various kinds of assignment, the ++ and -- operators, etc. These elements were found to achieve 100% coverage in the abstract syntax tree while only achieving 51.2% in the parser.
There does not seem to be a correlation between the results achieved by the parser and the abstract syntax tree. The abstract syntax tree results for the most part hover in the mid-40% range. This is because certain files in each of the .impl folders only ever achieve a fixed result. Also, the second biggest file in each of the .impl folders, the FactoryImpl class, only ever reaches a coverage result of around 50%. Each of the .util folders achieves a 0% coverage result for all the tests that were run, and because of these factors the abstract syntax tree results do not vary much. The parser's coverage results can be seen to increase as the number of input files increases, but a percentage of the parser's methods only ever reach a certain result and never go higher. These results are different for each Java component, so a trend cannot be ascertained from component to component.
### 5 Conclusion
The JaMoPP project helps to bridge the gap between the two disciplines of coding and modelling, and with this gap closing it will help to bring the theory of MDSD methodologies into real-world practice. The Eclipse project, with its extendable architecture and open-source nature, contributed greatly to the construction of the project. The EMF plug-in for Eclipse provided the JaMoPP project with a well-defined and well-known meta-model on which to build its Java meta-model. The JaMoPP developers were the first to create a standardized Java meta-model which can be manipulated and handled in the same manner as any other EMF model.
Given that the developers' goal is presumably for the JaMoPP project to be adopted by real-world developers and researchers, there is surprisingly little public information about it on the web. The project itself comes in two versions: one called the stable version, and the other the most up-to-date version that the developers themselves use to fix problems and add features. Any serious development or research work using the JaMoPP project would require the most up-to-date version in order to ascertain its current limitations and future possibilities; this version also contains the base testing packages required to assess the overall benefits and likely success of using the system. Yet the only information on their website is for installing and using the stable version. At the time the JaMoPP project was being installed for use with this paper, the only way to get the full up-to-date version was by contacting the developers directly. Since then the website has been updated with a full repository of code, and the very important technical document is now available. The grammar and textual syntax can also now be seen on the website.
On the website there is a link to a Java API for the generated subfolder of the org.emftext.language.java.resource.java package. This is the folder that contains the generated ANTLR files such as the parser, lexer and token streams. The EMF framework hides the underlying implementation of the model resource factories, so it should never be necessary to access the classes of the generated folder directly. At the beginning of the testing phase for this paper, a false impression that the parser needed to be accessed directly was held, until the JaMoPP developers were contacted and it was realised that the EMF framework handles this. It is unclear why the API is published when it only leads users to confusion.
The tests conducted for this paper revealed a number of things about the JaMoPP project. Firstly, most of the files that were inputted into the system as tests passed. Most of the failures in the 'book' set of tests were due to what appear to be various issues with importing user-created packages and invoking methods from user-created classes. The JaMoPP developers claimed that any files they used for their testing purposes passed, but upon recreation of some of their tests it was found that some inputs failed. These failures mostly occurred in the Netbeans test, due to resolving issues in the reprint tests. A number of errors were also found in the Netbeans and Eclipse tests; these errors occurred because of timeouts in the JUnit framework.
After looking at the coverage results for both the parser and the abstract syntax tree, no obvious overall correlation was found. Some of the individual files of the abstract syntax tree got 100% coverage, while the corresponding methods of the parser only ever reached a certain level. No matter how big the input file set became, the abstract syntax tree result came in at around the high-40% mark. While the parser results increased as the amount of the Java language being used increased, a number of the methods that contributed to the final parser coverage result started to plateau at certain definite levels.
The system was tested against both industry-sized software and small programs. It is hard to tell how a 100% coverage result could be attained in the parser. Within the technical document the developers mention adding grammar rules to handle particular coding situations, but do not really supply information about these scenarios; perhaps, to get full coverage, these situations would need to be reproduced if they did not occur during the tests. The abstract syntax tree had hit its coverage peak, with most files either at 100% coverage or stopping at a particular level. The parser coverage was increasing, as mentioned, as the amount of code increased, but certain methods were starting to show a definite level of recurring results.
Java also supplies many ways to accomplish the same feat. Take, for example, the increment element of the Java language: a variable can be incremented either as ++variable or as variable++. To get 100% coverage of the increment element in the grammar, both of these forms must occur in the input code. The same is true for other elements of the Java language, which leads on to the issue of programming styles. For example, Netbeans and Eclipse are written in a certain programming style by their developers, which could miss some paths through the grammar for certain Java elements. Java also supports legacy ways of coding that may have fallen out of practice among more modern programmers. All of these issues make achieving a 100% result problematic.
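Covering both increment forms mentioned above requires input code that uses each one; the two forms also differ as expressions, which is why the grammar keeps them on separate paths. A minimal illustration:

```java
// Pre- and post-increment: both forms must appear in the input corpus to
// cover both grammar paths, and they differ in value when used as expressions.
public class IncrementForms {
    public static void main(String[] args) {
        int variable = 0;
        int pre  = ++variable;  // increments first: pre  == 1, variable == 1
        int post = variable++;  // increments after: post == 1, variable == 2
        System.out.println(pre + " " + post + " " + variable);  // 1 1 2
    }
}
```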
### 5.1 Future Work
The JaMoPP project opens up a number of areas in coding and modelling. The JaMoPP developers themselves have suggested some areas of future research, including generating, analysing and visualising code. This year the Java 6 language was released, which means that the Java meta-model and textual syntax need to be updated to reflect it. Another avenue would be testing the JaMoPP project in a practical sense: inputting some test files and then using the generated meta-model instances to perform transformations between different EMF model types. These tests could then be examined for coverage results, to see whether the abstract syntax tree's .util folders are used during this process; this form of testing would also show how well JaMoPP's Java meta-model operates and fits within the EMF framework. Another testing area not covered in this paper is the extended parser that uses the BCEL project to read Java byte code. Coverage results could be gathered for this and compared to the abstract syntax tree. Models generated from the BCEL code could also be tested to see how well they fare in EMF model-to-model transformations.
GMIS Framework Design and Resource Management Algorithms
The main objective of the GMIS framework is to provide resource management services such as discovery, monitoring and brokerage of resources in a grid network. This chapter discusses the GMIS framework design with data models, data flow diagrams and algorithms. The data flow diagrams show the communications and interfaces between the GMIS components. The chapter also discusses the design of the resource agent software.
4.1 Introduction
GMIS provides information regarding the operating status of different resources to the grid users. The main services of the GMIS framework are: keeping track of hardware and software resources in a grid; polling resources and receiving the dynamic state of resource attributes such as load, memory and disk information for all managed resources; processing job requests from grid users; and providing users with the latest resource information, such as currently running jobs, current load and resource status. GMIS also performs some degree of fault recovery, such as resubmission of jobs in case of failures. As discussed in Chapter 3, the components that contribute to the working of GMIS are the Discovery Server, Data Collection Manager, Topology Manager, Data Handler, Monitor Service, Resource Selector, Viewer and Communicator Server. Interactions among these GMIS components are shown in figure 4.1.
In figure 4.1, resources are physical resources in the grid network. The GMIS Communicator Server communicates with the resources on behalf of the GMIS components. Each physical resource on the grid network is represented by one or more objects in the GMIS topology. Each physical resource contains a communicator agent which responds to requests from the GMIS communicator server. These agents also send events on behalf of resources when the resource status or a job status changes.
The Viewer provides a user interface to access GMIS services. Agents of the corresponding resources respond to GMIS requests. Table 4.1 shows how each GMIS component interacts with the other GMIS components.
<table>
<thead>
<tr>
<th>GMIS Component</th>
<th>Interacts with</th>
</tr>
</thead>
<tbody>
<tr>
<td>Discovery Server</td>
<td>Topology Manager, Communicator Server</td>
</tr>
<tr>
<td>Data Collection Manager</td>
<td>Topology Manager, Communicator Server</td>
</tr>
</tbody>
</table>
Figure 4.1: GMIS Components Interaction
<table>
<thead>
<tr>
<th>GMIS Component</th>
<th>Interacts with</th>
</tr>
</thead>
<tbody>
<tr>
<td>Database</td>
<td>Data Handler (DH)</td>
</tr>
<tr>
<td>Topology Manager</td>
<td>Data Collection Manager, Communicator Server, Data Handler</td>
</tr>
<tr>
<td>Monitor Service</td>
<td>Data Handler</td>
</tr>
<tr>
<td>Resource Selector</td>
<td>Topology Manager, Data Handler, Communicator Server</td>
</tr>
<tr>
<td>Communicator Server</td>
<td>Discovery Server, Data Collection Manager, Resource Selector, Communicator Server</td>
</tr>
<tr>
<td>Agent Software</td>
<td>Communicator Agent (CA), Resource Discovery Agent (RDA), Critical Information Agent (CIA)</td>
</tr>
<tr>
<td>Viewer</td>
<td>Discovery Server, Monitoring Service, Resource Selection</td>
</tr>
</tbody>
</table>
**Table 4.1: GMIS Component Interaction with other GMIS Components**
The important design considerations discussed in Chapter 1 are used in designing GMIS framework.
4.2 Discovery Server
Resource discovery on a grid network is a vital function of a grid resource management system. The discovery server provides a simple and convenient way to add managed resources to the GMIS topology.
4.2.1 Dynamic Grid Network Discovery
In GMIS, dynamic grid network discovery takes place at three levels. The first level is the node level, where an Internet Protocol (IP) address or Fully Qualified Domain Name (FQDN) is used to discover resource information. The second and third levels are the cluster level and the grid level, where a range of IP addresses is used to discover resources. A user interface is provided on the viewer to supply IP addresses or FQDNs. The Discovery Server (DS) processes requests from the viewer to discover resources with the given IP addresses or FQDN. The DS sends a message, through the communicator server, to the agent software running at the resources identified by these IP addresses. Upon receiving the request, the agent running at the resource
collects the resource information, including application information, and sends it to the DS through the communicator server (CS). The DS sends a message to the Topology Manager (TM) to update the topology with the new resource information. The Topology Manager updates the resource topology and the database with the resource information. A data flow diagram of this process is shown in figure 4.2.
**Figure 4.2: Data Flow Diagram of Dynamic Network Discovery**
1. To discover resources on grid, user initiates Grid/Cluster/Node level discovery using user interface in viewer.
2. Discovery server sends a message to the communicator server (CS) to discover resources on grid network.
3. CS broadcasts message to all the IP addresses mentioned in user interface.
4. Agent software that runs on the resource(s) responds to the broadcast message sent by the CS with resource information.
5. CS forwards resource information to DS.
6. DS forwards resource information to Topology Manager (TM). Topology Manager updates hierarchical resource management information tree and stores in cache storage.
7. Topology Manager updates the database with resource information.
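The seven steps above can be sketched as a minimal simulation. All class, function and message names here are illustrative assumptions, not part of the GMIS implementation; the in-memory dictionaries stand in for the topology cache and database.

```python
# Hypothetical sketch of the dynamic-discovery flow (steps 1-7 above).

def agent_respond(ip):
    """Agent software replies to a broadcast with its resource information."""
    return {"ip": ip, "fqdn": f"node-{ip.split('.')[-1]}", "status": "UP"}

class TopologyManager:
    def __init__(self):
        self.topology = {}   # in-memory (cache) tree, keyed by IP
        self.database = {}   # stand-in for the persistent store
    def update(self, info):
        self.topology[info["ip"]] = info   # step 6: update cache tree
        self.database[info["ip"]] = info   # step 7: persist to database

class DiscoveryServer:
    def __init__(self, tm):
        self.tm = tm
    def discover(self, ip_range):
        # steps 2-5: CS broadcasts, agents answer, replies return to DS
        for ip in ip_range:
            info = agent_respond(ip)
            self.tm.update(info)           # steps 6-7 via TM
        return len(ip_range)

tm = TopologyManager()
ds = DiscoveryServer(tm)
ds.discover(["10.0.0.1", "10.0.0.2"])
```

The real flow is message-based through the communicator server; the direct calls here only illustrate the ordering of the steps.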
4.2.2 User based Resource Discovery
In user-based resource discovery, resource information is added manually to GMIS using the user interface provided on the viewer. The resource information provided by the user is sent to the Discovery Server (DS). Upon receiving it, the Discovery Server validates the information and sends it to the Topology Manager (TM). The TM adds the resource information to the GMIS topology and database. The Data Collection Manager (DCM) then polls the resource through the communicator server to verify the information provided by the user and to obtain the latest status of the resource. The resource agent software running on the resource responds to the DCM poll request. The DCM forwards the latest resource information to the TM, which updates the topology and database. This process is shown as a data flow diagram in figure 4.3.
**Figure 4.3: Data Flow Diagram of User based Resource Discovery**
1. User provides resource information details on user interface provided on the viewer. This information is passed to the Discovery Server (DS).
2. DS validates the data and sends message to Topology Manager (TM). TM adds resource information to the GMIS topology.
3. TM adds the resource information in database.
4. TM sends a request message to Data Collection Manager (DCM) to verify the resource details.
5. DCM sends a request to the Communicator Server (CS) to verify and to get the latest status of the resource.
6. CS sends a request to the resource agent software of that resource.
7. Resource agent software responds to the CS request.
8. CS forwards the resource information to the DCM.
9. DCM updates topology manager with the latest resource information.
10. TM updates the latest resource information in topology as well as Database.
4.2.3 Discovery Server Algorithm
*Algorithm: Discovery Server*
1. Initialize the IPC library and create two communication channels: one for receiving messages from the viewer and communicator server, the other for sending messages to the communicator server and topology manager.
2. For each received message
3. BEGIN
4. Parse the message.
5. If (message is from viewer)
6. BEGIN
7. If (message type is DYNAMIC_RESOURCE_DISCOVERY) then
8. BEGIN
9. Prepare a message with message type RESOURCE_DISCOVERY and the resource IP address
10. Send the message to the communicator server.
11. END
12. Else if (message type is USER_RESOURCE_DISCOVERY) then
13. BEGIN
14. Validate the user inputs
15. Prepare a message with message type ADDRESOURCE
16. Send the message to the topology manager
17. END
18. END
19. Else /* message is from the communicator server */
20. BEGIN
21. If (resource information is valid)
22. Forward the message to the topology manager
23. Else
24. Ignore the message.
25. END
26. END
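A compact sketch of the dispatch logic in the Discovery Server algorithm above. The message-type strings follow the pseudocode; the dictionary shapes, channel names and the `send` callback are assumptions.

```python
# Illustrative Discovery Server message dispatch (not the GMIS source).
def handle_message(msg, send):
    src, mtype = msg["source"], msg.get("type")
    if src == "viewer":
        if mtype == "DYNAMIC_RESOURCE_DISCOVERY":
            send("communicator_server",
                 {"type": "RESOURCE_DISCOVERY", "ip": msg["ip"]})
        elif mtype == "USER_RESOURCE_DISCOVERY" and msg.get("ip"):
            # user inputs validated (trivially) before forwarding to TM
            send("topology_manager", {"type": "ADDRESOURCE", "ip": msg["ip"]})
    elif src == "communicator_server":
        if msg.get("resource_info"):       # valid resource information
            send("topology_manager", msg)
        # otherwise the message is ignored
```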
4.3 Topology Manager
The main objective of the topology manager is to maintain the managed-resource object data of the various resources and the object hierarchy.
The functionalities of Topology Manager are:
1. Caching of Resource Information at startup for quicker access to other services.
2. Interfaces with DS to get the newly discovered resource information.
3. Updates resource availability (addition/deletion) information based on requests from Data Collection Manager.
4. Correlates event information and poll response from resources through DCM.
5. Updates the resource info in GMIS topology and database.
6. Maintains submitted job information in cache memory.
At initialization time, GMIS retrieves the resource information from the database, and the topology manager populates it into the topology. The topology manager verifies, through the data collection manager, whether each resource is still available to GMIS. If the resource is still available, its information is retained; otherwise the resource is deleted from the topology as well as from the database. The topology manager updates the resource information in the topology and database based on information retrieved by the Discovery Server or the Data Collection Manager.
4.3.1 Topology Manager Algorithm
*Algorithm: Topology Manager*
1. Initialize the IPC library and create two communication channels, one for sending and other for receiving messages from other GMIS components.
2. Initialize MIB in the shared memory.
3. Retrieve grids information from the database using data handler functions.
4. If (there is resource information in Database)
5. BEGIN
6. For each grid in database do
7. BEGIN
8. Create a topology object with object type as GRID
9. Initialize other attributes of GRID object with the retrieved information
10. Get all clusters information from the database for this grid.
11. For each cluster in database do
12. BEGIN
13. Create a topology object with object type as CLUSTER
14. Initialize other attributes of CLUSTER object with the retrieved information.
15. Initialize the parent id with the GRID ID
16. Get all node information for this cluster.
17. For each node in database do
18. BEGIN
19. Create a topology object with object type as NODE
20. Initialize other attributes of NODE object with the retrieved information
21. Initialize the parent id with CLUSTER ID
22. END
23. END
24. END
25. For each node added in the topology
26. BEGIN
27. Send a request to Data Collection Manager to poll the resource information
28. Get the request id and start timer
29. Mark as request retry as 1.
30. END
31. END
32. else /* no resource information in DB */
33. BEGIN
34. Create a default grid topology object.
35. Create a default cluster topology object under the default grid topology object.
36. END
37. If timer expires then
38. BEGIN
39. If request retry is less than 3
40. BEGIN
41. Send another request to Data Collection Manager to poll the resource information
42. Increase the request retry by 1
43. END
44. else
45. BEGIN
46. Get the parent cluster object
47. Remove the node object from topology
48. Delete node information in database
49. Delete all services/applications information under this node
50. Get the list of children of cluster object
51. If (number of children is zero)
52. BEGIN
53. Get the parent grid object
54. Remove the cluster object from topology
55. Delete cluster information in database
56. Get the list of children of grid object
57. If (number of clusters is zero)
58. BEGIN
59. Remove the grid object from topology
60. Delete grid information in database
61. END
62. END
63. END
64. For each message received do
65. BEGIN
66. if (message is from Discovery Server)
67. BEGIN
68. Parse the message and retrieve parent id.
69. If (parent id exists)
70. BEGIN
71. Retrieve parent cluster information.
72. Add the new node under this cluster in topology
73. Insert the new resource information in database.
74. END
75. else
76. BEGIN
77. Add the new node under the default cluster in topology.
78. Insert the new resource information in database
79. END
80. END
81. else if (message from Data Collection Manager)
82. BEGIN
83. if (message type is “POLL_RESPONSE”)
84. BEGIN
85. Update the topology and the database
86. END
87. else if (message type is “RESOURCE_UPDATE”)
88. BEGIN
89. if (resource already exists in topology)
90. BEGIN
91. Update the topology and the database.
92. END
93. else
94. BEGIN
95. Retrieve the parent id of resource
96. if (parent id exists in topology)
97. BEGIN
98. Get the cluster topology information using parent id
99. Create a new object in topology under this cluster.
100. END
101. else
102. Create a new resource object under the default cluster object.
103. END
104. END
105. END
106. END
107. else if(message from resource selector)
108. BEGIN
109. Create a job object in cache memory.
110. END
111. else Ignore the messages
112. END
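The retry-and-prune behaviour in the algorithm above (poll a node up to three times, then remove it and prune any cluster or grid left empty) can be sketched as follows. The `state` layout is a hypothetical stand-in for the topology cache, not the GMIS data structure.

```python
# Illustrative timer-expiry handling for a node in the topology manager.
def on_poll_timeout(state, node):
    retries = state.setdefault("retries", {})
    retries.setdefault(node, 1)               # request retry starts at 1
    if retries[node] < 3:
        retries[node] += 1                    # send another poll via DCM
        return "repoll"
    # third timeout: remove the node and prune empty ancestors
    cluster = state["parent"][node]
    state["children"][cluster].remove(node)   # remove node object (+ DB row)
    if not state["children"][cluster]:        # cluster now empty
        grid = state["parent"][cluster]
        state["children"][grid].remove(cluster)
        if not state["children"][grid]:       # grid now empty
            state["grids"].remove(grid)
    return "removed"
```

In the real system the removal step would also delete the node's services and applications from the database, as steps 48-49 describe.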
4.3.2 Data Model for Topology
The data model for the topology takes care of grouping and maintaining the logical and hierarchical relations of the varied information available for the different types of resources that will be supported. The objects in each grid are grouped into scheduled pollable managed object groups and non-pollable object groups. The poll hierarchy of the managed objects is also modeled to take care of handling poll responses and event handling.
Each resource type is associated with a listId for the list of ManagedObjectGroups. Topology manager reads the data model and for each type of resource, creates resource objects, which will contain a list of ManagedObjectGroups. For those ManagedObjectGroups, which need to be schedule-polled, the flag is set to TRUE in the data model and all such ManagedObjectGroups are registered with the Data Collection Manager to be polled.
The following figure 4.4 shows the data model for topology. The representation is done as class diagram for convenience.
Example usage: To initialize all nodes and their managed objects, the ResourceTable will have a row {Resource_Name = NODE; ManagedObjectGroupListId = 100 (some unique id); defaultStatus = UP (so that DCM can start polling without waiting for link checking)}. This row is read, and for resource type NODE all instances are retrieved from the database and the corresponding Node instance objects are created in the topology with their IP addresses etc.
Next, for the ListId = 100, the ManagedObjectGroupList is read to get all the ManagedObject groups. Note that each resource can have one or more Managed Object
groups. E.g., Node may have two managed object groups: {NodeState – poll every 3 minutes}, {LoadState – poll every 5 minutes}.

**Figure 4.4: Data Model for Topology**
Using the ManagedObjectGroupIds, create ManagedObjectGroups by taking data from the ManagedObjectIdTable. For each ManagedObjectGroupId, a list of MO classes is given, such as NodeState and LoadState. If the number of instances is fixed, then the source shall say 'FIXED'; otherwise the database must be consulted to get the number of instances.
Each managed object can be a container object, like the Node state or Hardware container. For all such objects, their child objects are added to the object repository by looking at the MO_Contained table. For Node state, the child MO group will contain HW_Container. These are created recursively until there are no child objects. Later, if an OS_Container is added to NODE, just a new entry in MO_Contained and appropriate group details in the ManagedObjectIdTable are sufficient to create the new objects without a change in the code.
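The recursive containment described above might be sketched like this, with a hypothetical MO_Contained mapping; adding an OS_Container later would then be a data change only, with no change to the traversal code.

```python
# Hypothetical MO_Contained table: container class -> child MO classes.
MO_CONTAINED = {
    "NodeState": ["HW_Container"],
    "HW_Container": ["CPU", "Memory"],
}

def create_mo(mo_class):
    """Recursively create a managed object and its contained children."""
    return {"class": mo_class,
            "children": [create_mo(c) for c in MO_CONTAINED.get(mo_class, [])]}
```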
4.4 Database
Each GMIS has a database that includes information about the local managed resources. Whenever a resource is added, deleted or modified in the grid, the database is updated by the topology manager.
Data handler provides an interface between GMIS components and the database. GMIS components can perform query operations on the database and update database tables using data handler. Data handler provides the following data handling functionalities.
- Handle the parameters associated with each resource.
- Create the instance of resource.
- Retrieve the instance of resource.
- Modify the parameters associated with resource.
- Retrieve the parameters associated with resource.
- Handle the hierarchy of the resources
- Traversing through the hierarchy
- Retrieve the parent of a given resource.
- Retrieve the children of a given resource.
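A minimal illustrative DataHandler covering the bullet list above (create/retrieve/modify on resources plus parent/child traversal). The in-memory dictionaries are a stand-in for the real database tables; method names are assumptions.

```python
# Sketch of the data handler interface described above.
class DataHandler:
    def __init__(self):
        self.rows = {}      # resource_id -> attribute dict
        self.parent = {}    # resource_id -> parent resource_id
    def create(self, rid, attrs, parent=None):
        self.rows[rid] = dict(attrs)
        if parent is not None:
            self.parent[rid] = parent
    def retrieve(self, rid):
        return self.rows.get(rid)
    def modify(self, rid, **attrs):
        self.rows[rid].update(attrs)
    # hierarchy traversal
    def get_parent(self, rid):
        return self.parent.get(rid)
    def get_children(self, rid):
        return [r for r, p in self.parent.items() if p == rid]
```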
4.4.1 Data Handler Algorithm
Algorithm: DataHandler
1. Initialize IPC library and create two communication channels to send and receive messages.
2. Check whether GMIS database exists or not.
3. If GMIS database does not exist
4. Create GMIS database with the GMIS DB schema
5. For each received message
6. BEGIN
7. Parse the message.
8. Connect to the database.
9. if (connection is successful)
10. BEGIN
11. if (request type is “RESOURCE_ADD”)
12. Insert the resource information in the appropriate table.
13. else if (request type is “RESOURCE_UPDATE”)
14. Update the resource information based on resource id.
15. else if (request type is "RESOURCE_DELETE")
16. Delete the resource information based on resource id.
17. else if (request type is "RESOURCE_RETRIEVE")
18. Query the resource information and send the information.
19. END
20. else
21. Send DB_FAILURE message to the requestor component.
22. END
4.5 Data Collection Manager
The Data Collection Manager (DCM) collects resource status periodically. DCM issues resource data collection requests based on the contents of the topology objects, and the agent software running on the resources responds to these requests. When a topology object is created, DCM schedules a data request for that object; when the topology object is deleted, DCM stops collecting data from the resource.
DCM is responsible for finding faults and failures of managed resources. It polls the status information of the resources, handles events and updates the database through topology manager.
4.5.1 Polling and Parsing
Resource polling is a mechanism to periodically check the status of any resource connected to a GMIS-managed grid network with the help of a timer. In GMIS, each resource node is polled every 10 minutes to get its status. After a response is received, it is parsed, and based on the response the topology and database are updated, so that these changes are reflected in the resource monitoring view of the viewer. If the received response indicates that a node is not operational, its descendants, i.e., memory, CPU, load and applications, are polled immediately.
DCM polls each of the managed resources periodically. DCM gets the resource configuration information from the topology manager. It sends resource status requests through the Communicator Server to all managed resources to obtain their latest status. Upon receiving the new status information, it sends this information to the viewer through the topology manager, database and monitor service. DCM interacts with the topology manager and communicator server using Inter Process Communication (IPC) messages. This process is shown as a data flow diagram in figure 4.5.
Figure 4.5: Data Flow Diagram for Periodic Monitoring of Resources
1. Data Collection Manager (DCM) requests Topology Manager (TM) for schedule poll able resources and their information.
2. TM responds to the DCM request, with the resources information like FQDN, Internet protocol address, object identifier and hierarchy.
3. DCM sends a message to Communicator Server (CS) to poll for current status of resources using information provided by the TM.
4. CS sends a poll request to resource.
5. Resources’ agent software collects resource information and responds to the poll request.
6. CS forwards the poll request response to DCM.
7. DCM parses the response received from the resources’ agent software and sends updated resource information through internal message to TM.
8. TM updates topology and database with the latest resource information.
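The steps above can be condensed into a sketch of one poll cycle, including the immediate polling of descendants when a node is not operational (section 4.5.1). The `probe` callback, the status strings and the descendant names are assumptions.

```python
# Illustrative single poll cycle of the DCM.
def poll_cycle(nodes, probe):
    """Poll each node; on a non-operational node, poll its descendants."""
    updates = {}
    for node in nodes:
        status = probe(node)                  # poll request via CS (step 3-6)
        updates[node] = status
        if status != "OPERATIONAL":
            # poll descendants (memory, CPU, load, applications) at once
            for child in ("memory", "cpu", "load", "applications"):
                updates[f"{node}/{child}"] = probe(f"{node}/{child}")
    return updates                            # forwarded to TM (steps 7-8)
```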
4.5.2 Event Handling
Event handling provides a mechanism at the GMIS for managing events generated by resource agents which are routed to the GMIS Communicator Server. Typical events are:
- Resource update information (addition/deletion/modification)
- Indication of a resource change of status
- Indication of overload
- Utilization of resource components i.e. threshold values exceeded
- Job execution information
The Communicator Server provides centralized reporting of all events generated by resource agents across the grid network. The Communicator Server sends events to the Data Collection Manager (DCM). DCM correlates the various events generated by resources to obtain the latest status of each resource. This status is passed on to the viewer through the topology manager, database and monitor service.
The following types of events are supported in GMIS. Communicator Agents generate these types of events.
- **Resource Update Event** – This event will be generated whenever a component or service is added/deleted/modified to resource.
- **Resource Status Change Event** – This event indicates that one of the resource's components has exceeded a threshold value.
- **Equipment Failure Event** – This event will be generated whenever there is a resource failure due to hardware problems.
- **Availability Event** – This event is generated whenever there is a change in resource availability to the grid users.
- **Job Execution Start Event** – This event is generated whenever job execution starts.
- **Job Execution Completed Event** – This event is generated whenever job execution completes.
- **Job Execution Failure Event** – This event is generated by resource when resource is unable to access input files or unable to upload the output files. If it is unable to access the input file for job execution then, the status of job execution set to Job_Input_Download_Failed. If the resource is unable to upload the output file, then Job_Output_Upload_Failed status will be set to the status of job execution.
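The event catalogue and the job-failure status rule above can be captured in a small sketch; the constant names are shortened, illustrative forms of the event names in the text.

```python
# Illustrative event catalogue generated by Communicator Agents.
EVENTS = {"RESOURCE_UPDATE", "RESOURCE_STATUS_CHANGE", "EQUIPMENT_FAILURE",
          "AVAILABILITY", "JOB_START", "JOB_COMPLETED", "JOB_FAILURE"}

def job_failure_status(stage):
    """Map the failing stage of a job to the status the resource sets."""
    if stage == "input":
        return "Job_Input_Download_Failed"
    if stage == "output":
        return "Job_Output_Upload_Failed"
    raise ValueError(f"unknown failure stage: {stage}")
```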
These events are generated by the resource agents on behalf of resources. These resource agents are referred to as Communicator Agents. For instance, whenever a resource is added, modified or deleted, an event is generated by the resource agent software. This event reaches the Data Collection Manager (DCM) through the Communicator Server of GMIS. DCM parses the message and asks the Topology Manager (TM) to update the GMIS topology. The topology manager updates the topology and database with the new information. This process is called dynamic event-based discovery. The data flow diagram of dynamic event-based discovery is shown in figure 4.6.

1. When a new resource or application is added/deleted/modified, a RESOURCE_UPDATE event is sent by the agent software running on the resource to the Communicator Server (CS) of GMIS.
2. CS forwards the RESOURCE_UPDATE event to Data Collection Manager (DCM).
3. DCM parses the RESOURCE_UPDATE event and sends a message to Topology Manager (TM) to update topology.
4. TM constructs/updates hierarchical resource management tree and stores in cache storage and updates the database with resource information.
4.5.3 Data Collection Manager Algorithm
**Algorithm: Data Collection Manager:**
1. Initialize IPC library and open two communication channels for sending and receiving messages.
2. Get all instances of pollable objects from the topology manager.
3. For each object do
4. BEGIN
5. Get IP address of resource
6. Prepare the list of attributes to the message parameters list along with object identifiers
7. Set a transaction id to the poll request
8. Prepare a message structure
9. Send message to communicator server
10. Start timer
11. END
12. If timer expires again start polling of resources, follow steps 3 to 10.
13. For each message received do
14. BEGIN
15. Parse the message
16. if (message is from Communicator Server)
17. BEGIN
18. if (message type is “POLL RESPONSE”) BEGIN
19. Delete the transaction id
20. Forward message to topology manager to update topology and database
21. END
22. else if (message type is “EVENT”) BEGIN
23. Parse the event
24. Prepare a message
25. Send message to Topology Manager
26. END
27. END
28. else if (message is from Topology Manager)
29. BEGIN
30. Poll resources using step 3 to 10.
31. END
32. else Ignore the messages
33. END
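The transaction-id bookkeeping in the algorithm above (set a transaction id per poll request, delete it when the matching POLL RESPONSE arrives) might look like this. The class shape and message fields are assumptions, and the real system would also start a timer per request.

```python
import itertools

# Illustrative DCM poll bookkeeping (not the GMIS source).
class DCM:
    _ids = itertools.count(1)          # monotonically increasing txn ids
    def __init__(self):
        self.pending = {}              # txn id -> polled IP
    def poll(self, ip):
        tid = next(self._ids)
        self.pending[tid] = ip         # start timer here in the real system
        return {"txn": tid, "type": "POLL", "ip": ip}
    def on_response(self, msg):
        # delete the transaction id, then forward to the topology manager
        self.pending.pop(msg["txn"], None)
```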
4.5.4 Data Model for Poll Response and Event Handling
Poll-response handling and event handling for Status Change Events (SCE) or resource update events have a common behavior based on the new state of the managed object. Such behavior is captured in the following data model figure 4.7.
In the first part of the data model, general event handling is captured, wherein event acknowledgement is covered. If an event is required to be acknowledged, it shall contain an acknowledgement Object Identifier (OID) and value. If it is a status change event, the status of the resource Managed Object is updated; otherwise, specific processing is done based on the appropriate action code.

**Figure 4.7: Data Model for Poll Response and Event Handling**
The second part of the data model looks at updating the status of a managed object, which is common to both status change events and poll responses. Whenever an object's status changes, the Topology Manager updates the topology as well as the database.
**Example usage:** The Event Handle Table contains the list of all event types and the action that needs to be taken for each. For instance, a Node Availability event has an entry in the table with {NE_type = NODE, Event_Type = Node_Avail_event, Ack_Required = TRUE, ACK_OID = 1.2.3.4.3.2.2.22.0, ACK_Value = 1, isSCE = FALSE, actionCode = 0}. Based on this, the event handler handles the availability event by picking up the Ack_OID and sending the Ack_Value to that node.
If the event is a status change event that does not require acknowledgement, like a Node-Hardware-Container status change event, it will have the appropriate values and the isSCE flag will be TRUE. Based on that, the Event Handler looks at the SCE Handle Table, finds the entry for the MO_Class Node-Hardware-Container, and updates the object status. The managed object instance's updateState() method is invoked by the Data Collection Manager.
The updateState() method extracts the new status of resource managed object and constructs an internal message. This message will be sent to the Topology Manager to update the topology and database.
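The Event Handle Table example above can be rendered as a lookup-and-acknowledge sketch. The table row mirrors the values given in the text; the function itself is a hypothetical reading of the behaviour, not the GMIS code.

```python
# Hypothetical Event Handle Table keyed by (NE type, event type).
EVENT_HANDLE_TABLE = {
    ("NODE", "Node_Avail_event"): {
        "ack_required": True, "ack_oid": "1.2.3.4.3.2.2.22.0",
        "ack_value": 1, "is_sce": False, "action_code": 0},
}

def handle_event(ne_type, event_type, send_ack):
    """Acknowledge if required; report whether SCE handling follows."""
    row = EVENT_HANDLE_TABLE[(ne_type, event_type)]
    if row["ack_required"]:
        send_ack(row["ack_oid"], row["ack_value"])   # ack back to the node
    return "SCE" if row["is_sce"] else "ACKED"
```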
4.6 Monitor Service
The Monitor Service provides inputs to the viewer to show the status of resources. Its main functionalities are user management and retrieval of GMIS-managed resource information. When a grid user logs into GMIS using the user interface provided on the viewer, an authentication request comes to the Monitor Service. The Monitor Service authenticates the user by checking the user details in the database through the data handler; for new users it validates and stores the user information in the GMIS database. Once user authentication is done, GMIS-managed resource information is provided to the viewer for display. The Monitor Service responds periodically to the viewer's resource information requests. The data flow diagram of the basic Monitor Service process is shown in figure 4.8.

1. User provides user credentials to access GMIS managed resource information. Viewer sends this request to Monitor Service.
2. Monitor Service validates the user information, and if the user is a valid user, then it sends a request to Database through data handler for the resource information.
3. Monitor Service retrieves resource information from the database.
4. Monitor Service updates the Viewer with the available resource information.
4.6.1 Monitor Service Algorithm
Algorithm: Monitor Service
1. Initialize IPC library and create two communication channels, one for receiving and other one for sending messages.
2. For each message received do
3. BEGIN
4. Parse the message
5. If (message is from Viewer)
6. BEGIN
7. if (message type is USER_AUTHENTICATION)
8. BEGIN
9. Validate the user information, by querying user information from database
10. If (validation is successful)
11. Retrieve the resource information
12. Convert all resource information into XML format
13. Send to the viewer
14. else
15. Send error message to viewer.
16. END
17. if (message type is RESOURCE_INFO)
18. BEGIN
19. Retrieve the resource information from Database using Data Handler
20. Convert retrieved information in XML format
21. Send the retrieved resource information to viewer
22. END
23. END
24. else
25. Ignore the message.
26. END
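The two message branches of the algorithm above can be sketched as a single dispatch function. The user store, resource rows, and XML element names below are illustrative stand-ins for the GMIS database and Data Handler:

```python
# Minimal sketch of the Monitor Service dispatch. USERS and RESOURCES stand
# in for the GMIS database accessed through the Data Handler; the XML shape
# is an assumption.
import xml.etree.ElementTree as ET

USERS = {"alice": "secret"}                        # assumed user table
RESOURCES = [{"name": "node1", "cpu_idle": "72"}]  # assumed resource rows

def resources_to_xml(resources):
    """Convert resource records into an XML document (step 12 / step 20)."""
    root = ET.Element("resources")
    for res in resources:
        node = ET.SubElement(root, "resource")
        for key, value in res.items():
            ET.SubElement(node, key).text = value
    return ET.tostring(root, encoding="unicode")

def monitor_service(message):
    """Handle one parsed message from the viewer."""
    if message["type"] == "USER_AUTHENTICATION":
        if USERS.get(message["user"]) == message["password"]:
            return resources_to_xml(RESOURCES)      # steps 11-13
        return "<error>invalid user</error>"        # step 15
    if message["type"] == "RESOURCE_INFO":
        return resources_to_xml(RESOURCES)          # steps 19-21
    return None  # messages not from the viewer are ignored (step 25)
```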
4.7 Brokerage Service
The Brokerage Service performs a number of functions. The first step is the identification and selection of the resources that best fit the needs of the grid application. The broker then submits the jobs in the application to the chosen machines. The broker handles job submission but not how a job is actually executed on the resource; that is the responsibility of the resource management system residing on the resource involved.
The Resource Selector component of GMIS provides the resource broker functionality. Once jobs are executing, the Data Collection Manager monitors the resources and the progress of the jobs. The following figure 4.9 shows the brokerage service design in the GMIS framework.

Users can submit jobs from the user interface provided on the viewer. All job requests are processed by the Resource Selector, which selects the appropriate resources for job execution, provides resource information for creating activation graphs, and provides an interface to execute the jobs. The Resource Selector gathers all resource information from the database. Job execution requests go to the resource agent through the Communicator Server. The Data Collection Manager monitors the job execution status, which is updated in the database through the Topology Manager. The GMIS framework provides three types of job execution:
• **Auto Job Execution** – The user does not care which resource executes the job: the user selects a job to be executed, and the Resource Selector performs the execution based on resource availability.
• **Job Execution on user defined resource** – The user selects the resource on which a particular job is to be executed.
• **Job Execution through activation graphs** – If a job can be divided into multiple tasks that are to be executed on different nodes, the user can create activation graphs. Activation graphs can be generated in two ways: the brokerage service can create them automatically based on user inputs, or the user can create them manually by selecting from the available resources and applications.
4.7.1 Resource Selection
Resource selection by grid user process data flow diagram is shown in the following figure 4.10.

1. Grid user requests for resources with specific job requirements like type of processor, memory size, application name by using user interface on viewer. Viewer sends this request to Resource Selector.
2. Resource Selector queries the database based on the user requirements using Data Handler methods.
3. The Data Handler methods retrieve resource information from the database and send it to the Resource Selector.
4. Resource Selector updates the Viewer with the available resource information.
**4.7.2 Creation of Activation Graph**
In a grid computing environment various services work in collaboration with each other: the output of one service acts as the input for another. Each service, after processing an input file, produces an output file. A graph can be created specifying the order in which the various services are to be executed; the conditions on execution and the input/output files of the services can also be specified. Such a graph is called an Activation Graph.
A user interface is provided on the viewer to invoke grid services. The following steps are required to create an activation graph:
- Choose a node on which the task needs to be executed.
- Choose a service on this node which is used for task execution.
- Specify input/output files
- Specify the dependency of execution for each task.
Activation graphs can be created manually or automatically. In manual creation the user selects the resources and specifies all the required inputs, whereas in auto generation the activation graph is created automatically from minimal inputs such as application names, input files and output files.
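The dependency structure described above can be sketched as a task graph with a topological ordering that respects each task's input/output dependencies. The scheduler below and the task names are illustrative assumptions:

```python
# Sketch of an activation graph as a task-dependency structure. `graph` maps
# each task to the tasks whose output files it consumes; execution_order
# returns an order in which every task runs after its dependencies.
def execution_order(graph):
    order, visited = [], set()

    def visit(task):
        if task in visited:
            return
        visited.add(task)
        for dep in graph.get(task, []):   # run dependencies first
            visit(dep)
        order.append(task)

    for task in graph:
        visit(task)
    return order

# Assumed example: "transform" consumes "extract"'s output file, and
# "render" consumes "transform"'s output file.
graph = {"render": ["transform"], "transform": ["extract"], "extract": []}
```

For the example graph, `execution_order` yields "extract" before "transform" before "render", matching the dependency-of-execution each task specifies.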
**4.7.3 Job Execution**
Job execution process data flow diagram is shown in the following figure 4.11.
1. User initiates job execution using user interface on viewer. Viewer sends this request to Resource Selector (RS).
2. RS stores the job information in database.
3. RS retrieves resource information from the database.
4. Resource Selector sends a request to the Topology Manager (TM) with job details.
5. TM sends a request to the Communicator Server (CS) to submit the job on the resource.
6. CS sends a request to the agent software to execute the job.
7. Agent software on resources executes the job and sends the status of job to the CS.
8. CS forwards the status of job to the Data Collection Manager (DCM).
9. DCM sends update to the TM.
10. TM updates the database with the latest status of the job.
The job execution process described above is a simple one. For efficient utilization of resources, a fuzzy-based scheduling algorithm [Pramod, 2006] can be used in the GMIS job execution process.
4.8 Communicator Server
Communicator Server (CS) provides GMIS the ability to communicate with resources in the grid. CS listens for responses and events from the resources, converts them into internal GMIS messages and sends them to the requesting GMIS components. The Communicator Server maintains all transaction data in an in-memory hash table. Its functionalities include:
- Provides interface to send a request message to a resource on behalf of GMIS
- Creates and initializes the transaction and session tables
- Creates UDP/IP sessions
- Handles all communicator related messages like request/responses from GMIS components and events originated from the resources.
- Acts as a communicator agent for GMIS
The Communicator Server gets requests from the Discover Server, Data Collection Manager, Topology Manager and Resource Selector. It communicates with the resource agent with the help of the Management Information Base (MIB) generated by the Topology Manager. This process is shown in the following figure 4.12.

4.8.1 Algorithm for Communicator Server
Algorithm: Communicator Server
1. Initialize IPC environment and create two communication channels for sending and receiving messages to/from GMIS components.
2. Open two UDP/IP sessions one for listen and other for send messages from/to communicator agents.
3. Initialize the head and tail shared memory pointers to the MIB tree head and tail respectively.
4. For each message received do
5. BEGIN
6. Parse the message
7. if (messages from GMIS components)
8. BEGIN
9. Initialize the transaction data
10. Interpret the IP address of resource from the header of message
11. Add IP address to the transaction data
12. Get the MIB of requested resource
13. Update the appropriate hash table to accommodate this request
14. Create a communicator Packet Data Unit (PDU) for the request
15. Send message to the resource communicator agent using IP address and port 261.
16. Start timer
17. END
18. else if (messages from Resource agents)
19. BEGIN
20. Check whether transaction id exists in the message
21. If (transaction id exists)
22. BEGIN
23. Retrieve the transaction information based on transaction id from hash table
24. Create internal message
25. Send message to the GMIS components
26. Delete the transaction details from hash table
27. END
28. else
29. BEGIN
30. Validate the received Packet Data Unit
31. Retrieve the Managed object instance (MOI) from PDU
32. Retrieve the resource type and message type
33. Construct internal message
34. Mark the message as event message
35. Send message to the Data Collection Manager
36. END
37. END
38. else if (message is from remote GMIS)
39. BEGIN
40. if (message type is "JOB_EXECUTION")
41. BEGIN
42. Initialize the transaction data
43. Retrieve the IP address and port of resource from the header of message
44. Add IP address to the transaction data
45. Get the MIB of requested resource
46. Update the appropriate hash table to accommodate this request
47. Create a communicator Packet Data Unit (PDU) for the request
48. Send message to the resource using IP address and default port 261.
49. END
50. END
51. else (ignore the message)
52. END
53. If timer expired
54. BEGIN
55. Retrieve the transaction from the hash table
56. Build transaction failed message
57. Send transaction failed message to requested GMIS component
58. END
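The transaction bookkeeping in the algorithm above — storing each outgoing request under a transaction id, looking it up when a response arrives, and failing it when the timer expires — can be sketched as a small table. The class shape and timeout value are assumptions:

```python
# Sketch of the Communicator Server's transaction hash table. Each open()
# corresponds to steps 9-13 (store transaction data, start timer); close()
# to steps 21-26 (response arrived); expired() to the timer-expiry block.
import itertools
import time

class TransactionTable:
    def __init__(self, timeout=5.0):          # timeout value is assumed
        self.timeout = timeout
        self._next_id = itertools.count(1)
        self._pending = {}  # txn_id -> (requester, resource_ip, deadline)

    def open(self, requester, resource_ip):
        """Record a new outgoing request and start its timer."""
        txn_id = next(self._next_id)
        deadline = time.monotonic() + self.timeout
        self._pending[txn_id] = (requester, resource_ip, deadline)
        return txn_id

    def close(self, txn_id):
        """A response arrived: return and delete the stored transaction."""
        return self._pending.pop(txn_id, None)

    def expired(self, now=None):
        """Return and remove transactions whose timer has run out, so the
        server can build transaction-failed messages for them."""
        now = time.monotonic() if now is None else now
        dead = [t for t, (_, _, d) in self._pending.items() if d <= now]
        return [(t, self._pending.pop(t)) for t in dead]
```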
4.9 Viewer
Grid users can use the viewer to access resource information, discover resources on the grid, add resources to the grid and submit jobs. The viewer is a web client which can be opened in any web browser, such as Netscape or Mozilla Firefox. It uses the Hyper Text Transfer Protocol (HTTP) to interact with the GMIS components through a web server; Apache Tomcat is used as the web server in GMIS, and all classes related to the interface between the viewer and the GMIS components are hosted by it. The following user interfaces are provided on the viewer.
1. User Management
2. Resources Discovery (Discover at node, cluster or grid level, add node, cluster or grid, add service, discover service and resource availability).
3. Job execution (Activation graph creation and execution, Job submission)
4. Resources monitoring at grid, cluster and node level.
5. Job execution monitoring
4.10 Agent Software
Agent software runs at each physical resource to provide resource information. Three types of agents are included as part of agent software.
• **Communicator Agent (CA)** – CA responds to Communicator Server requests, including job execution requests.
• **Resource Discover Agent (RDA)** – RDA collects resource specific details and generates events whenever there is a change in resource configuration.
• **Critical Information Agent (CIA)** – CIA reports any critical component change on the resource, e.g., when CPU usage exceeds 80%.
All agents use UDP/IP protocol to communicate with the GMIS communicator server as discussed in Chapter 3.
In an operational setup these agents should be added to the startup sequence so that they start automatically whenever the resource is booted. If the resource provider wants to use the system in dedicated mode, or wants to remove the resource from the grid for some reason, the provider can stop or kill these agents. If the agents are stopped manually, the RDA generates a resource delete event before shutting down; upon receiving this event, GMIS updates its topology and database accordingly. If the agents are killed instead, GMIS's periodic requests receive no response, so the resource is treated as unavailable to the grid. Whenever the resource provider wants to add the resource back to the grid, the provider can start the agent again, specifying which GMIS it should be added to.
**4.10.1 Algorithm for Communicator Agent**
*Algorithm: Communicator Agent*
1. Initialize IPC environment and create two communication channels for sending and receiving messages to/from RDA and CIA.
2. Open two UDP/IP sessions, one for listening and another for sending from/to GMIS communicator server.
3. For each message received do
4. BEGIN
5. if (message is from GMIS communicator server)
6. BEGIN
7. Parse the Packet Data Unit (PDU)
8. Extract the communicator server IP address and port number
9. if (request type is RESOURCE_DISCOVER)
10. BEGIN
11. Invoke Resource Discover Agent (RDA) to collect both static and dynamic resource information
12. END
13. else if (request type is PHYSICAL_RESOURCE_POLL)
14. BEGIN
15. Invoke Resource Discovery Agent (RDA) to collect only dynamic resource information
16. END
17. else if (request type is JOB_EXECUTION)
18. BEGIN
19. Create a new job id
20. Validate the job request details
21. If validation fails return appropriate error.
22. Create an entry in a hash table about the job execution with job id as a hash index
23. Invoke the corresponding application to execute
24. Send job execution information to CIA
25. Prepare a PDU with "JOB_EXECUTION_STARTED" with job id.
26. Send the PDU to GMIS communicator server
27. END
28. else if (request type is JOB_EXECUTION_POLL)
29. BEGIN
30. Retrieve the job id from PDU
31. Retrieve the job status from hash table based on job id
32. Prepare a PDU with job status
33. Send the PDU to GMIS communicator server
34. END
35. END
36. else if (message is from RDA or CIA)
37. BEGIN
38. if (message type is RESOURCE_DISCOVER or PHYSICAL_RESOURCE_POLL or EVENT)
39. BEGIN
40. Pack the information into a PDU
41. Send the PDU to GMIS communicator server
42. END
43. else if (message type is JOB_EXECUTION)
44. BEGIN
45. Retrieve the job details based on job id
46. Update its job status in hash table
47. Pack the information into a PDU
48. Send the PDU to GMIS communicator server
49. END
50. END
51. END
4.10.2 Algorithm for Resource Discovery Agent
Algorithm: Resource Discovery Agent
1. Initialize IPC library and create two communication channels one for receiving and other for sending messages.
2. For each message received do
3. BEGIN
4. Parse the message.
5. If (message is from Communicator Agent)
6. BEGIN
7. if (message type is “RESOURCE_DISCOVERY”)
8. BEGIN
9. Get static information like operating system, processor family, number of processors using operating system level commands.
10. Get dynamic information like memory, disk space available, and CPU idle time using system commands.
11. Pack both static and dynamic information into the internal message format.
12. END
13. else if (message type is “POLL_REQUEST”)
14. BEGIN
15. Get the component based on the object identifier (OID) of the request
16. Get the information of that particular component.
17. Construct an internal message with the resource/component information.
18. Send this message to communicator agent.
19. END
20. else
21. Ignore the message.
22. END
23. else if (message is from Controller handler)
24. BEGIN
25. if (message type is “START” or “STOP”)
26. BEGIN
27. Prepare an internal message with message type as RESOURCE_UPDATE event.
28. Send this message to communicator agent.
29. END
30. else
31. Ignore the message.
32. END
33. END
34. END
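Steps 9-10 of the algorithm above can be sketched with Python's standard library standing in for the "operating system level commands"; exactly which fields are collected is an assumption:

```python
# Illustrative sketch of the RDA's information gathering. platform, os and
# shutil stand in for the OS-level commands the algorithm mentions.
import os
import platform
import shutil

def static_info():
    """Static resource details (step 9): OS, processor family, CPU count."""
    return {
        "operating_system": platform.system(),
        "processor_family": platform.machine(),
        "num_processors": os.cpu_count(),
    }

def dynamic_info(path="/"):
    """Dynamic details (step 10); here only free disk space is sampled."""
    usage = shutil.disk_usage(path)
    return {"disk_free_bytes": usage.free}

def discovery_message():
    """Pack both into one internal message (step 11); format is assumed."""
    return {"type": "RESOURCE_DISCOVERY",
            "static": static_info(),
            "dynamic": dynamic_info()}
```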
4.10.3 Algorithm for Critical Information Agent
Algorithm: Critical Information Agent
1. Initialize IPC library and create two communication channels to send and receive messages to/from the communicator agent.
2. Get the threshold values for the different components from the threshold configuration file. Currently only two components are monitored: CPU usage and memory usage.
3. Get CPU and memory usage using the system commands.
4. Start timer
5. If CPU or memory usage is exceeding the threshold values then
6. BEGIN
7. Prepare an internal message with event type as CRITICAL_EVENT.
8. Send this message to communicator agent.
9. END
10. If timer expires
11. BEGIN
12. Repeat steps 3 to 10.
13. END
14. For each received message
15. BEGIN
16. Parse the message
17. If (message is from Communicator Agent)
18. BEGIN
19. Store the job information details in internal memory.
20. Start execution of the job.
21. END
22. END
23. If job timer expires
24. BEGIN
25. Get the status of job execution.
26. Prepare a message with the job execution status.
27. Send the message to Communicator Agent.
28. If (job execution status is not "JOB_EXECUTION_COMPLETED") then
29. BEGIN
30. Start the job timer again.
31. Repeat steps 23 to 30.
32. END
33. END
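The threshold check in steps 2-8 above can be sketched as a pure function over sampled usage values. The threshold numbers (beyond the 80% CPU example in the text) and the message format are illustrative assumptions:

```python
# Sketch of the CIA threshold check. THRESHOLDS stands in for the threshold
# configuration file; 80% CPU comes from the text, 90% memory is assumed.
THRESHOLDS = {"cpu": 80.0, "memory": 90.0}  # percent

def check_critical(samples, thresholds=THRESHOLDS):
    """samples maps component -> current usage (%); returns CRITICAL_EVENT
    messages for every component exceeding its configured threshold."""
    events = []
    for component, usage in samples.items():
        limit = thresholds.get(component)
        if limit is not None and usage > limit:
            events.append({
                "event_type": "CRITICAL_EVENT",   # assumed message shape
                "component": component,
                "usage": usage,
                "threshold": limit,
            })
    return events
```

A periodic loop would sample real CPU and memory usage, call `check_critical`, and forward any returned messages to the Communicator Agent.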
4.11 Inter Process Communication
The components, or processes, of GMIS communicate with each other through a set of library calls contained in the Inter Process Communication (IPC) library, called the message-queue IPC library. This library uses message queues, which reside in shared memory in the operating system kernel, and handles all communication among the GMIS components.
A socket-based IPC library has been developed for communication between the GMIS Communicator Server and the resource agent software. The following figure 4.13 shows the GMIS IPC mechanism.
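The UDP request/response exchange between the Communicator Server and a resource agent can be sketched as below. Port 261 comes from the algorithms earlier in the chapter, while the payload handling and timeout are assumptions:

```python
# Minimal sketch of the socket-based IPC between the Communicator Server and
# an agent: send a PDU over UDP, wait for the reply, and signal a timeout so
# the caller can build a transaction-failed message. Timeout value assumed.
import socket

def send_request(agent_ip, pdu_bytes, port=261, timeout=2.0):
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(pdu_bytes, (agent_ip, port))
        try:
            reply, _addr = sock.recvfrom(4096)
            return reply
        except socket.timeout:
            return None  # caller builds the transaction-failed message
```

In practice the Communicator Server keeps one listening session open rather than a socket per request; this sketch only shows the wire exchange.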
4.12 GMIS-GMIS Communication
To manage multiple grid network resources and increase resource availability, multiple GMISs can be used across the grid network. The advantages of a multiple-GMIS architecture are:
- Network traffic and managing resources can be distributed among GMISs.
- When two or more GMISs are connected and need to transfer resource information, the Communicator Server of one GMIS acts as the agent and the other acts as the manager.
- Grid users access resources and act on consistent management information, without regard to location.
- All information is gathered via management requests to GMIS.
- Any GMIS can locate requested managed information and execute requests, thereby distributing the processing load.
To achieve GMIS-GMIS communication the following pre-conditions should be met:
1. Both running GMIS servers should have trusted host permissions.
2. GMIS user accounts should exist on both machines.
3. Ensure that /etc/hosts of the GMIS contain the hostnames and IP addresses of all other GMISs to be connected.
The design of GMIS-GMIS communication is shown in the following figure 4.14. In this figure, GMIS-1 is connected to GMIS-2, and all resource information of GMIS-2 is mounted onto the Resource Management Information Tree (RMIT) of GMIS-1. An Application Programming Interface called the GMIS-GMIS Communicator is provided to connect multiple GMISs; this process is part of the Communicator Server of GMIS.
4.13 Summary
This chapter discussed the design of the GMIS components with data flow diagrams and data models, and presented algorithms for the various tasks. It also described the communication methodology between the Communicator Server of GMIS and the grid resources, and finally the GMIS-GMIS communication design.
The next chapter discusses GMIS processes and interface modelling, with experimental results.
Technology Roadmap for PeopleSoft
PeopleTools
Deepankar Narayanan
Vice President, IDC Development
Safe harbor statement
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.
Today’s PeopleTools
PeopleTools Initiatives and Planned Enhancements
For more information
Today’s PeopleTools
PeopleTools Patches
- Regular patches
- Critical system and security fixes
- Backported changes required by application features or configuration tools
Application Images
- Configuration tools
- Utilities
- Lifecycle management features
PeopleTools Images
- PeopleSoft Cloud Manager
- Integration Hub (critical fixes only)
PeopleTools Support Policy
- Support until the next major release (8.n+1)
- Support for 1 full year
- Critical patches for 1 full year
Delivered First from Cloud Manager
- Significant number of migrations to OCI
- Many success stories
- It’s easy (really?)
- Simplified patch and upgrade
- Where automation is being delivered
- Value for everyone
Premier Support
PeopleSoft’s Strategic Investments
**Analytics**
- Pivot Grids
- Simplified Analytics
- Personalized Analytic Notifications
- Kibana Analytics
**Cloud and Emerging Technology**
- Cloud Infrastructure
- PeopleSoft Cloud Manager
- Oracle Digital Assistant
- Oracle Asset Monitoring Cloud
**Configuration**
- Configuration tools
- Configurable features
- Isolate customization
- Personalization capabilities
PeopleSoft PeopleTools Initiatives
Migrate to Oracle Cloud Infrastructure
to leverage the value of the cloud
Integrate with Oracle Platform as a Service
to enable emerging technology
Analytics
for action and visualization
Configuration
to apply changes more easily
Simplify and Modernize
to make users more productive
Copyright © 2020, Oracle and/or its affiliates | Confidential: Restricted
Migrate to Oracle Cloud Infrastructure
Run all or part of your PeopleSoft environments on the Oracle Cloud Infrastructure and Database as a Platform Service
Shifting to Oracle Cloud Infrastructure
Selective Adoption
Create DR from on-premises environments to OCI
Limited benefits
Leverage Cloud Manager
- Subscription
- PUM Deployment
- PUM Configuration
Quick availability of PUM Dashboard
Platform independent
PRP Automation
Pre-Production
Pre-production environments
- Dev
- Test
- UAT
- Demo
- Pilot
- Training
- Sandbox
Complete Operations
Production ready Oracle Cloud Infrastructure
Database cloud service
Benefits
- Cost
- Scalability
- Performance
PeopleSoft Cloud Manager
Preserve customizations
Backup and restore
Pre-built scripts to automate the migration process
Lifecycle Management
Subscribe to apps and tools maintenance
Quick PUM Provisioning
Automated Tools upgrade and patch
Oracle Cloud Infrastructure *
Oracle Cloud Marketplace
PeopleSoft Image
No additional license cost
* The Cloud Manager image is no longer updated on Oracle Cloud Infrastructure Classic
Cloud Manager Roadmap
**CM 3,4,5,6**
- Lift and Shift
- PUM
- Clone
- Tools update
- Elastic Search
- Health Check
- CM Self update
- Tools upgrade
- REST API
- OCI Support
- TDE Support
**CM 7, 8**
- 8.57 Support
- First GA
- On-demand scaling
- RMAN Support
- Custom Scripts
**CM 9**
- Automate Lift
- Tools Update
- Tools upgrade
- Private Subnets
- Visual COBOL
**CM 10**
- Tools 8.58 on Cloud
- CM Auto update
- Import environment
- Elastic File System
- EXA-CS support
- Kibana Support
- Windows MT Support
- Clone in minutes
- Network deployment
- TDE Support
- 18c DB support
**Planned Features**
- DB Refresh
- Govt Cloud
- Resume/Restart
- Load Balancer
- Auto Patching
- ATP Support
- Shared PS_HOME
- Policy based Governance
• Results of the migration
With minimal internal headcount, no in-house infrastructure, and a tight budget, First Financial Northwest Bank was able to rapidly implement PeopleSoft FSCM directly onto Oracle Cloud Infrastructure in 4 months. With the help of SpearMC Consulting, FFNWB was able to migrate their Accounting, Asset Management and Accounts Payable functions off of a competing Fiserv product onto Oracle PeopleSoft.
By keeping the implementation "vanilla" and utilizing all the latest and greatest features and functionality, they are recognizing immediate improvements in their business processes.
Utilizing OCI’s native features like scalable computing power and storage, disaster recovery functions and speedy networking, FFNWB has modern, top of the line infrastructure that is secure and will meet their needs for years to come.
PeopleSoft on OCI
Customers running PeopleSoft on the Cloud
Partners that have supported them
Integrate with Oracle Platform as a Service
An easy way to extend your PeopleSoft capabilities is by adding features available through Oracle Platform as a Service Products
Integration Options with Oracle Platform as a Service
- Database as a Service
- Exa Cloud Service
- Identity Cloud Service
- Integration Cloud Service
- Management Cloud Service
- Autonomous Database Cloud Service
**Application Directive**
Integrate with the Asset Monitoring Cloud to track Asset Location Real-time
Delivered with FSCM Image 33
**PeopleTools Framework**
PeopleSoft Chatbot Integration Framework delivered with 8.57.07 to simplify and standardize use of Oracle Digital Assistant
**Early Phases**
Data Science Cloud Service - Artificial Intelligence and Machine Learning using models made from data extracted from PeopleSoft
Summary of PeopleSoft Chatbot Deliverables
PeopleTools (8.57.07)
- Application Services Framework (ASF)
- Security
Chatbot Integration Framework (HCM PI 31 in EC)
- Web Chatbot UI
- Standard Authorization Process
- PeopleSoft Library
- Skill Template (sample skill)
Lifecycle Management Guidelines for PSFT and ODA
- Initial delivery
- Ongoing maintenance
Applications Skills
- Absence Skill – HCM PI 32
- Company Directory – HCM PI 33
Chatbot Partner Ecosystem
- Chatbot pilots
- Demos
- Proofs of concept
- Integration with Oracle Digital Assistant (ODA)
- Pre-built libraries
- Multiple languages
- Business cases
- Training and Support
Artificial Intelligence/Machine Learning
Evaluating how PeopleSoft can work with Oracle Data Science Cloud Service to deliver AI/ML
- Extract data from PSFT into models
- Add external data to models
- Build models in Oracle Data Science
- Evaluate by individual organizations
Analytics
Improve the visualization of PeopleSoft data and the actions that can be done
Analytics
Use ours. Build your own. Put them on dashboards. Use them in Related Content.
**Simplified Analytics**
Extend the power of your Pivot Grids. Allow end users to personalize.
**Personalized Analytic Notifications**
Put the system to work for you. Monitor thresholds set on your Pivot Grids. Centralize or even better... personalize.
**Kibana**
In-app data visualizations that won't impact system performance. New data discovery capabilities. Powerful drillable analytics.
Threshold settings and notifications at individual data point level
**Personalized Analytic Notifications**
Threshold visualization for Dual Y-Axis charts
Notification options are set at system level with Personalization enabled
Notification options at system level or user level apply for the Personalized Analytic Notifications
Administrator may choose a list of users to be subscribed or leave it open for all users to subscribe.
Kibana Analytics
Kibana is an open source analytics and visualization platform delivered in the PeopleTools DPKs that provides data visualization for the Elasticsearch indices.
Kibana in PeopleSoft
PeopleSoft data security enforced on all Kibana visualizations
Ability to embed Kibana analytic on a Homepage, Dashboard or Related Content
System metrics, indexing metrics and indexing summary
PeopleSoft Healthcenter using Kibana analytics
Application teams enhancing Elasticsearch indices for more and richer data to use with Kibana
Kibana vs. Pivot Grids
Use Kibana
• Data only exists in index
• Large Volume
• Performance is important
• Complex Analysis
• Time series/historic
• Data trends
• Interactive exploration
Use Pivot Grids
• Actionable
• Real-time
• Online-relevant
• End-user model
• Triggers notification
• Transactional data
Data Masking in Query
In support of GDPR, you can now use the Application Data Privacy Framework to mask Query results. This impacts all sub-products using Query as a data source.
BI Publisher support for Excel Templates
Supporting the use of Excel Layout templates in our integrated reporting tool will greatly benefit our customers, allowing them to design spreadsheet reports with much greater control than they have when using RTF or XSL templates.
New timeline object in PeopleSoft Charting
Timelines give applications a way to visualize a process using chronological order. When details are displayed graphically, important points in time can be more easily seen and understood.
Process Scheduler
API to delete run control id
Improved Charting
Donut pie charts, advanced visualization APIs like timeline and thematic maps, calendar visualization, reference line/area for Y-Axis charts, custom styles (gradient or heatmap) for bar charts.
Configuration
PeopleSoft changes and customizations are made using configuration frameworks or by isolating customizations, allowing maintenance and new features to be adopted much more easily.
Isolating Customizations
Event Mapping
- Subpages, secondary pages and nested pages
- Work record
- New events: FieldEdit and FieldDefault
Drop Zones
- Delivered in application images
- Support for Classic Pages
Page and Field Configuration Utility
- Secondary pages
- Inactivate sequences
- Hiding sensitive data
Pluggable AppEngine
- Insert steps before or after delivered
- Skip over steps without removing
- Limited to SQL and PeopleCode steps
We love the new Page and Field Configurator and Event Mapping, which has helped us eliminate some existing customizations and avoid several new customizations. Our objective is to eliminate 50% of our existing customizations in the next two years.
Simplify and Modernize
PeopleSoft applications will be intuitive and take advantage of the latest user interface features
New Stylesheet
PeopleSoft UI
Chatbots are a new User Experience
Legacy PSFT Components – Classic Plus
- Responsive PSFT UI – FluidUI
- Guided Processes – Activity Guides
- Chatbot
- User Driven
- Conversational
- Requires ODA
Putting More Power in the Hands of Your Users
- Users set notification preferences centrally
- Choose notification channels:
- Text (partnership with Twilio)
- Notification window (in app)
- Email
- Take action directly from notification window
What Happens to Approvals When You’re Not There?
Fluid Delegations Provide Continuity
- Delegate approvals when you’re away
- Centralized and consistent across HCM and FSCM
Coming soon in Applications updates
End of support for PeopleTools Classic
PeopleTools 8.57 will be the last PeopleTools release that supports Classic Navigation. The Classic Navigation style will not be removed, so it can still be used 'out of support'.
Encourage use of Fluid Navigation.
MOS Tech Update Doc ID 2585909.1
Platform Enhancements
Infrastructure DPK
- Latest fully patched infrastructure components
- Used to override infrastructure delivered with Tools DPK
- Available after every CPU
DB2 Universal Table Space
8.58 based PUM Images will be based on Oracle Linux 7.x and Oracle Database 19c
Elasticsearch update with 8.58
- Incorporate Elasticsearch, Logstash, and Kibana 7.x
- No need to do full indexing for an upgrade
PeopleSoft has transitioned from Net/Server Express to Visual COBOL. PeopleTools 8.57 will be the last release certified with MF Net/Server Express. There is no additional cost to transition to Visual COBOL.
Low impact - our testing uncovered no changes to COBOL code.
Minimum PeopleTools patch levels:
- PT 8.56.16
- PT 8.57.05
Net/Server Express goes into sustaining support for PeopleSoft after December 2020.
Additional information:
- FAQ – PS Visual COBOL (Doc ID 2525494.1)
- Tech Update - Main Page (Doc ID 764222.1)
Security Enhancements
Updates to Data Masking, including PSQuery
OAuth Support for Authorization
“Real” IP Address support for “X-FORWARDED …”-type addresses
Updates to Red Paper, for example:
• Understanding OAuth vs Authentication
• New "real" IP Addresses
• Need to protect email servers and DNS (internal, external, and poisoning)
- Including Ransomware mitigation
• Discussion on CSRF and SSRF
• Restrict access to log/trace files
• Clarify use of certs for JSL and WSL
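The “real” IP address support above can be sketched as deriving the originating client from an X-Forwarded-For header (illustrative Python only, not a PeopleSoft API; the header should be trusted only when set by a known proxy or load balancer):

```python
def client_ip(headers):
    # X-Forwarded-For holds a comma-separated chain of addresses;
    # the first entry is the originating client, later entries are
    # the proxies the request passed through.
    xff = headers.get("X-Forwarded-For")
    if not xff:
        return None
    return xff.split(",")[0].strip()
```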
People Tools - Notifications
Notification Features - traditional
• Type of Notifications
• Worklist
• Email
• Worklist is accessible only from PIA
• Email is an offline notification with a link to worklist
Notification Features - traditional
- Type of Notifications
- AWE
- TBE
- SendMail
- MCFOutboundEmail
Push Notification Framework - 8.54
- Updates UI with real-time notifications when a pre-defined Event is triggered
- Define Event or Event Collection
- Subscribe to the Event/Event Collection
- Publish an Event
- Provides an option to give real-time notification for Worklist
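The Define/Subscribe/Publish steps above can be sketched as a minimal event bus (illustrative Python only; not the PeopleTools Push Notification Framework API):

```python
class EventBus:
    """Minimal pub/sub sketch of the Define/Subscribe/Publish flow."""

    def __init__(self):
        self.subscribers = {}  # event name -> list of callbacks

    def subscribe(self, event, callback):
        # Register a callback for a pre-defined event
        self.subscribers.setdefault(event, []).append(callback)

    def publish(self, event, payload):
        # Deliver the payload to every subscriber of this event
        for callback in self.subscribers.get(event, []):
            callback(payload)
```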
**Notification Window**
- A framework to publish real-time alerts to the user browser is provided through the Publish To Window Framework.
- Worklist is accessible only when the user navigates to the page. Requires constant monitoring.
- In 8.54, Publish To Window is implemented for TriggerBusinessEvent, giving users real-time alerts when a worklist is created.
ComponentLife
Publish Code
```peoplecode
Local PTPN_PUBLISH:PublishToWindow &wlSrch;
Local string &msginfo;

/* &user is assumed to hold the recipient's user ID */
&msginfo = PTPN_BROADCAST.PTPN_BC_MSG.Value;
&wlSrch = create PTPN_PUBLISH:PublishToWindow("SENDNOTE", "BROADCAST");
&wlSrch.AddRecipient(&user, 1);
&wlSrch.SetCategoryAlias("BROADCAST");
&wlSrch.SetMsgInfo(&msginfo);
&wlSrch.SetCategoryTypeFyi();
&wlSrch.SetMsgStateNew();
&wlSrch.SetMsgKey(GenerateRandomDataKey());
&wlSrch.Publish("");
```
Text messaging is available as one of the notification options under Notification Configuration.
The URL identifier ‘PTTEXTMESSAGING’ is delivered to configure the Twilio account for text messaging support.
- The user's phone number must be provided under the Notifications section, available at ‘My Preferences’ -> ‘General Settings’, to receive text messages.
- The phone number should be in E.164 format (+<Country Code><Area Code><Phone Number>)
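A minimal check for the E.164 format described above might look like this (illustrative Python; the pattern is an assumption based on the E.164 rules of a leading '+' and at most 15 digits, not a PeopleSoft validation):

```python
import re

# '+' followed by a non-zero country-code digit and up to 14 more
# digits: E.164 numbers are at most 15 digits long in total.
E164_PATTERN = re.compile(r"^\+[1-9]\d{1,14}$")

def is_valid_e164(number):
    """Return True if `number` is in E.164 format."""
    return bool(E164_PATTERN.match(number))
```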
Text Messaging
• ‘Notification Configuration’ component to control the notification types to be received by users.
• Any application using the Push Notification framework through the PTPN_PUBLISH application package gets the feature just by configuring a notification with the existing ‘Category Name’ and ‘Event Name’.
• Administrators can configure a default notification type, allow users to personalize, or restrict users from disabling all channels.
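The default-vs-personalization behavior can be sketched as a channel merge (illustrative Python; the fallback to defaults is an assumption about how "restrict users from disabling all channels" behaves):

```python
def effective_channels(defaults, prefs, mandatory):
    # Start from the administrator's default channels and overlay the
    # user's personalizations.
    channels = {**defaults, **prefs}
    # If disabling every channel is not allowed, fall back to the
    # administrator defaults rather than leaving all channels off.
    if mandatory and not any(channels.values()):
        return dict(defaults)
    return channels
```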
Notification Configuration
View Notification Configuration
3 results found.
<table>
<thead>
<tr>
<th>Notification Name</th>
<th>Event Name</th>
<th>Description</th>
<th>Owner ID</th>
<th>Functional Category</th>
<th>Push</th>
<th>Email</th>
<th>Text</th>
<th>Override</th>
</tr>
</thead>
<tbody>
<tr>
<td>PGThresholds</td>
<td>PTPG_THRESHOLD_NOTIF</td>
<td>Personalized Analytic Notifications</td>
<td>PT</td>
<td>Pivot Grid</td>
<td>Y</td>
<td>N</td>
<td>N</td>
<td>N</td>
</tr>
<tr>
<td>TriggerBusinessEvent</td>
<td>TRIGGERBUSINESSEVENT</td>
<td>Trigger Business Event</td>
<td>PT</td>
<td>Workflow</td>
<td>Y</td>
<td>N</td>
<td>N</td>
<td>N</td>
</tr>
<tr>
<td>BROADCAST</td>
<td>SENDNOTE</td>
<td>Broadcast Notifications</td>
<td>PT</td>
<td>Notifications</td>
<td>Y</td>
<td>N</td>
<td>N</td>
<td>N</td>
</tr>
</tbody>
</table>
Primary Key – Same as Category Name, earlier
Event Name as registered with PUSHTOWINDOWCOLLECTION
Enable this to prevent users from disabling all the channels
Channels available for users. Pre-configurations may be required
Users get a notification by default in the selected channel
Enable to allow users to opt for different channels
Choose the Permission List/Role of users who can personalize. An empty grid enables it for all
### Notifications
#### Display Settings
- Notification items to display
#### Notification List
<table>
<thead>
<tr>
<th>Notification Name</th>
<th>Description</th>
<th>Notification Window</th>
<th>Email Notification</th>
<th>Text Message</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 BROADCAST</td>
<td>Broadcast Notifications</td>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>2 PGThresholds</td>
<td>Personalized Analytic Notifications</td>
<td>Yes</td>
<td>No</td>
<td></td>
</tr>
<tr>
<td>3 TriggerBusinessEvent</td>
<td>Trigger Business Event</td>
<td>Yes</td>
<td>No</td>
<td></td>
</tr>
</tbody>
</table>
TriggerBusinessEvent – Manage Notifications
• ‘TRIGGERBUSINESSEVENT’ is a pre-delivered notification configuration used for all TBE notifications
• Configuration for an individual event can be changed using the ‘Notification Configuration Override’ option
• A notification configuration used to override must be mapped to the Event TRIGGERBUSINESSEVENT
PeopleTools -> Push notifications -> Notification Window -> Configuration -> System Configuration
<table>
<thead>
<tr>
<th>Business Process</th>
<th>Activity</th>
<th>Event</th>
<th>Worklist</th>
<th>Priority</th>
<th>Enable</th>
<th>Notification Configuration Override</th>
<th>Notification Text Message Set Number</th>
<th>Notification Text Message Number</th>
<th>Notification Category Message Set Number</th>
<th>Notification Category Message Number</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>GE_PROCESS_ORDERS</td>
<td>GE_VA_TEST</td>
<td>Recycle Order</td>
<td>Recycle</td>
<td>Default</td>
<td>No</td>
<td>127</td>
<td>4</td>
<td>127</td>
<td>3</td>
</tr>
<tr>
<td>2</td>
<td>GE_PROCESS_ORDERS</td>
<td>GE_VA_TEST</td>
<td>Approve Order</td>
<td>Supervisor</td>
<td>Default</td>
<td>Yes</td>
<td>158</td>
<td>2</td>
<td>158</td>
<td>9</td>
</tr>
<tr>
<td>3</td>
<td>GE_PROCESS_ORDERS</td>
<td>GE_APPROVE_ORDER</td>
<td>Recycle Order</td>
<td>Recycle</td>
<td>Default</td>
<td>Yes</td>
<td>158</td>
<td>2</td>
<td>158</td>
<td>9</td>
</tr>
<tr>
<td>4</td>
<td>GE_PROCESS_ORDERS</td>
<td>GE_APPROVE_ORDER</td>
<td>Approve Order</td>
<td>Supervisor</td>
<td>Default</td>
<td>No</td>
<td>158</td>
<td>2</td>
<td>158</td>
<td>9</td>
</tr>
<tr>
<td>5</td>
<td>GE_ACTIVITY_GUIDE_DEMO</td>
<td>GE_ACTIVITY_GUIDE_START</td>
<td>ACTIVITY_GUIDE_WKST</td>
<td>Activity Guide Demo</td>
<td>Default</td>
<td>No</td>
<td>158</td>
<td>2</td>
<td>158</td>
<td>9</td>
</tr>
</tbody>
</table>
## Notification Configuration
**Notification Category**: Recycle Test
- **Event Name**: TRIGGERBUSINESSEVENT
- **Description**: Recycle Order
- **Functional Group**: Workflow
- **Object Owner ID**: PeopleTools Demo
- **Mandatory**: No
### Notification Options
<table>
<thead>
<tr>
<th>Notification Type</th>
<th>Available</th>
<th>Enable By Default</th>
</tr>
</thead>
<tbody>
<tr>
<td>Push</td>
<td>Yes</td>
<td>No</td>
</tr>
<tr>
<td>Email</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Text</td>
<td>Yes</td>
<td>No</td>
</tr>
</tbody>
</table>
### Personalization Settings
- **Allow Personalization**: Yes
### Authorized Roles and Permission Lists
- **Type of Authorization**:
- **Role Name**:
- **Role/Permission List**:
- **Add**:
- **Remove**:
### TriggerBusinessEvent
<table>
<thead>
<tr>
<th>Business Process</th>
<th>Activity</th>
<th>Event</th>
<th>Worklist</th>
<th>Priority</th>
<th>Enable</th>
<th>Notification Configuration Override</th>
<th>Notification Text Message Set Number</th>
<th>Notification Text Message Number</th>
<th>Notification Category Message Set Number</th>
<th>Notification Category Message Number</th>
</tr>
</thead>
<tbody>
<tr>
<td>QE_PROCESS_ORDERS</td>
<td>QE_VA_TEST</td>
<td>Recycle Order</td>
<td>Recycle</td>
<td>Default</td>
<td>No</td>
<td></td>
<td>137</td>
<td>4</td>
<td>137</td>
<td>3</td>
</tr>
<tr>
<td>QE_PROCESS_ORDERS</td>
<td>QE_VA_TEST</td>
<td>Approve Order</td>
<td>Supervisor</td>
<td>Default</td>
<td>Yes</td>
<td></td>
<td>158</td>
<td>2</td>
<td>158</td>
<td>9</td>
</tr>
<tr>
<td>QE_PROCESS_ORDERS</td>
<td>QE_APPROVE_ORDER</td>
<td>Recycle Order</td>
<td>Recycle</td>
<td>Default</td>
<td>Yes</td>
<td>RecycleTest</td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>QE_PROCESS_ORDERS</td>
<td>QE_APPROVE_ORDER</td>
<td>Approve Order</td>
<td>Supervisor</td>
<td>Default</td>
<td>No</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>QE_ACTIVITY_GUIDE_DEMO</td>
<td>QE_ACTIVITY_GUIDE_START</td>
<td>ACTIVITY_GUIDE_WORKLIST</td>
<td>Activity Guide Demo</td>
<td>Default</td>
<td>No</td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
• Pivot Grids can be personalized to send notifications based on threshold criteria.
• All Personalized Analytic Notifications use the pre-delivered 'PGThresholds' notification configuration.
• A Pivot Grid view allows users to either enable or disable all notifications for it.
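The threshold evaluation behind these notifications can be sketched as follows (illustrative Python; the field and threshold names are made up, and this is not the Pivot Grid implementation):

```python
def rows_over_threshold(rows, field, threshold):
    # Return the rows whose `field` value meets or exceeds the user's
    # threshold, as in the "rows matching threshold criteria" message.
    return [row for row in rows if row[field] >= threshold]
```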
## Notifications
### Display Settings
Notification Items to display: [ ]
### Notification List
<table>
<thead>
<tr>
<th>Notification Name</th>
<th>Description</th>
<th>Notification Window</th>
<th>Email Notification</th>
<th>Text Message</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. PGThresholds</td>
<td>Personalized Analytic Notifications</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>2. Approvals</td>
<td>Expense Approvals</td>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>3. BROADCAST</td>
<td>Broadcast Notifications</td>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
</tbody>
</table>
1 rows matching threshold criteria for Display Paid Amount (Payment Spend Analysis) for VP1
Page=PTPG_NUI_VWR&PGNAME=PO_SPEND_FL_PAID_PVG&VIEWNAME=PO_SPEND_FL_PAID_PVG.View
• Support for Bulk Notifications
• Selection by Role or User Id
• Broadcast
• Notification channels can be configured
• Meta-variables for personalized messages
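Meta-variable substitution for personalized bulk messages can be sketched with a simple template (illustrative Python; `$name` and `$count` are made-up meta-variables, not the delivered PeopleSoft set):

```python
from string import Template

def personalize(message, user):
    # safe_substitute leaves unknown meta-variables untouched instead
    # of raising, which suits bulk sends where some data may be missing.
    return Template(message).safe_substitute(user)
```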
Other 8.58 features
- Notification
- Archive and Purge needs Message State/inactive choice
- Ability to send Broadcast Notifications to large groups of users
Other 8.58 features
- Pivot Grids
- Pivot Grid – Threshold Improvements
- Ability to format non-currency fields to display a thousands separator
- Facet category UX
- Expand/Collapse/Invisible configuration
- Sequencing of facet categories
- Ability to export current Pivot Grid view
- Ability to define order or sequence values on X-Axis
Other 8.58 features
- **Application Services Framework (Chatbot)**
- File Processing – Attachment support for Chat bot application
- OAuth2 uptake for Application Service Framework
- **Process Scheduler**
- SetEmailOption has an option to add a "From" email address
- Run Control ID Management
Thank you
PeopleSoft Search and Internet Architecture
PeopleTools 8.58
Ramasimha Rangaraju
Director, PeopleSoft Internet Architecture and Search Development
Feb 06, 2020
Safe harbor statement
The following is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.
PeopleSoft Search
Program agenda
1. Introduction to Kibana
2. PeopleTools 8.57 – Visualize System and Indexing Metrics using Kibana
3. PeopleTools 8.58 – Visualize Application Data using Kibana
4. Developing Visualizations
5. Dashboard Security
6. Elasticsearch 7.0
7. Multi Select Facet Improvements
8. Search Results Data masking
Kibana
An Open Source analytics and Visualization platform from Elastic
• Data Aggregation based on various fields and parameters
• Multiple mechanisms to filter data
• Flexible and Intuitive interface with variety of Visualizations
• Geographic distribution of data
• Time Series distribution of data
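The kind of aggregation Kibana issues against Elasticsearch can be sketched as a terms-aggregation request body (illustrative Python; the bucket and field names are assumptions, not a PeopleSoft index):

```python
def terms_aggregation(field, size=10):
    # "size": 0 suppresses the raw hits so the response contains only
    # the aggregation buckets, which is all a chart needs.
    return {
        "size": 0,
        "aggs": {
            "by_field": {
                "terms": {"field": field, "size": size}
            }
        },
    }
```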
PeopleTools 8.57 – Visualize System and Indexing Metrics using Kibana
- Elasticsearch DPK in 8.57 ships Elasticsearch and Kibana (ESK DPK on 6.1.2)
- Usage of Kibana for analytics was limited to monitoring ES server
- Application Search Indexes were not shown in Kibana
- Kibana can be accessed only by esadmin user
- Delivered dashboards
- System Metrics
- Indexing Metrics
- Indexing Summary
PeopleTools 8.58 – Visualize Application Data using Kibana
- PeopleTools 8.58 extends Kibana to PeopleSoft Application data
- All PeopleSoft data which has been indexed for Search can be visualized and analyzed using Kibana
- Application users can create/view visualizations
- Data security is the same as in Global Search or Keyword Search
- Kibana dashboards can be presented in PeopleSoft as Tile or Related Information
- Kibana access can be secured further with dashboard specific roles
Kibana Login
Direct Login
- Uses Kibana URL
- Applicable for Admins or limited users
- PeopleSoft User/Password/Database
- Authentication is real-time by PeopleSoft callback
Access in PeopleSoft
- Users can access Kibana dashboards in PeopleSoft
- Presented as Tile or Related Information
- User will be authenticated in the background, no additional login required
Creating Index Pattern
- Existing application search indexes are accessible in Kibana to build analytics
- Access to data is secured with row level security
- Document access will be same as in Global/Component search
Kibana Dashboard Development – Step 2
Create Visualization/Dashboards
- Visualization is a tabular or chart representation of aggregated data from Elasticsearch
- Dashboard is a logical grouping of visualizations
Import Kibana Dashboards
Search Framework > Administration > Import Kibana Dashboards
- PeopleSoft Applications define Dashboards in Kibana and import them to PeopleSoft Database
- The Imported Dashboards are delivered through PeopleSoft database
Kibana Visualizations
Search Framework > Administration > Kibana Visualizations
- Imported Dashboards can be configured as
- PeopleSoft Tile
- Related Information of a component
Deploy Kibana Dashboards
Search Framework > Administration > Deploy Kibana Dashboards
- Deploy is done in Kibana in the customer environment
- Dashboards delivered in the PeopleSoft database are copied to Kibana using the Deploy action
Kibana Privileges
Search Framework > Administration > Kibana Privileges
- User with Search Administrator role has full control in Kibana
- Permissions to the Kibana dashboards can be restricted using Privileges mapped
- A dashboard can be provided with Edit Roles or View Roles
- Users with Edit Roles are allowed to change dashboards in Kibana
- View Roles limit usage to viewing and filtering, but not editing
Elasticsearch 7.0.0
- PeopleTools 8.58 supports Elasticsearch, Logstash & Kibana on 7.0.0
- 8.58 **does not** support ES 6.1.2
- DPK supports upgrade of ES 6.1.2 to ES 7.0 – **No need for re-indexing**
- ELK DPK on 7.0 certified on Linux and Windows
- ELK-DPK-WIN-7.0.0_01.zip
- ELK-DPK-LNX-7.0.0_01.zip
- Logstash installation is necessary only for Health Center implementation
Multi Select Faceting - Improvements
- Hierarchy Facets by default are enabled for Multi Select Faceting
- Default display is Collapsed Tree nodes
- Select the checkbox to filter the facets
- One breadcrumb for each selected tree node, instead of one for entire tree facet
- Configurable option for Tree nodes default state collapsed/expanded.
- System level
- Search Category Facet level
Multi Select Faceting - Improvements
View Search Results
14 results for keyword: "%"
- 2019/06 | 2018/03 | Clear All
- **Job Posting: Registered Nurse - RN - 505002 | External Posting**
Recruiting Location: General Hospital | Department: Nursing | Job Family: Nursing
- **Job Posting: Administrative Assistant - Confidential - Human Resources - 505008 | External Posting**
Recruiting Location: Corporation Headquarters | Department: Human Resources | Job Family: Administrative Support
- **Job Posting: Executive Assistant - 500051 | Internal Posting**
Recruiting Location: Corporation Headquarters | Department: Human Resources | Job Family: Administrative Support
- **Job Posting: Senior Nurse Manager - 504006 | Internal Posting**
Recruiting Location: Arizona Operations | Department: Lab Facility | Job Family: Clinical
- **Job Posting: Executive Assistant - 500051 | External Posting**
Recruiting Location: Corporation Headquarters | Department: Human Resources | Job Family: Administrative Support
- **Job Posting: Procurement Manager - 504072 | External Posting**
Recruiting Location: | Department: Revenue Management | Job Family: Accounting
- **Job Posting: Procurement Manager - 504072 | Internal Posting**
Recruiting Location: | Department: Revenue Management | Job Family: Accounting
- **Job Posting: Sales Product Consultant - 303867 | External Posting**
Recruiting Location: California Location | Department: Western Sales Region | Job Family: Sales
Search Results Data Masking
- To mask certain fields from showing sensitive data in search results
- Provided new APIs to mask on specific code area
- Field.SetDisplayMask is the PeopleCode API to mask any field
- Use SearchInit PeopleCode on a Search record key field to mask any sensitive data in search results
- The data masking feature is available for both classic and fluid component search
- Applicable to both real-time and keyword component search
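The display-masking behavior can be sketched as follows (illustrative Python; this is not the Field.SetDisplayMask API itself, and the mask style is an assumption):

```python
def mask_field(value, visible=4, mask_char="X"):
    # Hide all but the last `visible` characters of the value, the
    # common pattern for masking sensitive/PII fields in results.
    if len(value) <= visible:
        return mask_char * len(value)
    return mask_char * (len(value) - visible) + value[-visible:]
```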
PeopleSoft Internet Architecture
Topics
1. Event Mapping
2. Activity Guide
3. Fluid Core
4. Core Tech
5. Drop Zones
6. Combining Related information and simplified Analytics
7. Updated Style Sheets
8. Configuration Specialist
9. Classic Plus Updates
Event Mapping
EVENT MAPPING FOR NESTED SUB OR SECONDARY PAGES/RECORDS/FIELDS
- A new option allows users to select Pages (Standard and Secondary), Records, and Record Fields system wide [not just restricted to Level 0].
- A new check box called 'Unrestricted Prompt' enables this selection.
- This also lets users handle cases where modal pages invoked via a DoModalX() call need event mapping configurations on their pages, records, and fields.
Event Mapping
SUPPORT FOR RECORD FIELD EDIT AND FIELD DEFAULT EVENTS
- Users can now configure Event Mapping for two additional events, in addition to the existing FieldChange event.
- FieldEdit
- FieldDefault
Event Mapping
**SUPPORT FOR SEARCHINIT AND SEARCHSAVE EVENTS**
- Users can associate an event mapping hook with additional events of the component's search record at the record and field level.
- Search Init Event
- Search Save Event
- This is applicable to Search and Prompt pages.
Event Mapping
SUPPORT FOR ADDITIONAL EVENTS – Visualisation/Reference In Application Designer
- Users will be informed in Application Designer about the events associated via Event Mapping, for the new events as well.
Event Mapping
ABILITY TO USE SINGLE APPCLASS SERVICE ID FOR MAPPING MULTIPLE EVENTS
- Users can create one application class for an RC Service and map it to multiple event mapping configurations for all types of events.
- The user provides a parameter that is passed to the Application Class, letting users decide which functionality to run.
- This is a good-to-have feature that saves time and avoids the creation of multiple service IDs and application classes.
The following application class presents an example of how `eve_execute` could be implemented:
```peoplecode
import PT_RCF:ServiceInterface;

class App_Class1 implements PT_RCF:ServiceInterface
   method execute();
   method eve_execute(&str As string);
end-class;

method execute
   /* Extends/implements PT_RCF:ServiceInterface.execute */
end-method;

method eve_execute
   /* &str As string */
   /* Extends/implements PT_RCF:ServiceInterface.eve_execute */
   /* &str holds the Event Map Parameter that was set in the
      configuration page for the particular event being triggered. */
   Evaluate &str
   When = "1"
      /* Code block for PreBuild event */
   When = "2"
      /* Code block for PostBuild event */
   End-Evaluate;
end-method;
```
Event Mapping: ABILITY TO USE SINGLE APPCLASS SERVICE ID FOR MAPPING MULTIPLE EVENTS (Cont.)
Masking of sensitive/PII fields in Prompt records can be achieved using the SearchInit and SearchSave events on the Prompt Record fields. However, the Search events for prompt records are triggered only if the Record-Field property is enabled.
Activity Guide – Supporting Multiple Languages in an instance
Activity Guide – Display steps based on User Role
Welcome to Marital Event
- Betty Locherty
- A marital status change is a good time to reconsider your health care coverage, tax withholdings, and other important information.
- This guide will take you through all the steps necessary to ensure that your personal profile, benefits, and payroll information are updated to reflect this event in your life.
Activity Guide - Re-parenting Activity Guide folder navigation
Activity Guide configuration pages are moved under PeopleTools root folder. Portal Administrator role is NOT mandatory anymore to administer activity guide templates.
New Window Option in Fluid Navigation Collection
PIA – New Custom Property in Web Profile
PIA maintains a default timeout value of 600 seconds.
If the customer wants to change the timeout, a new custom property ("tokenExchangeGuidTimeout") needs to be added in the custom properties.
In addition, PeopleTools provides a new PeopleCode API to read the timeout value programmatically.
API details: GetTokenExchangeGuidTimeout() in PT_WEBPROFILECONFIG (application package)
PeopleSoft Drop Zones - Enhancements in PeopleTools 8.58
- Drop Zone support for Classic pages.
**Drop Zone Enhancements**
- Configure Drop Zone for unregistered components.
- Drop zones can be added to subpages or secondary pages at any nesting level.
PeopleSoft Drop Zones
- Classic Pages
- Nested Subpages / Secondary pages
- Drop Zones
- Unregistered Components
Drop Zone support for Classic pages.
- PeopleTools 8.58 has a new functionality to create Drop zones in Classic Pages.
- Classic Sub-page containing fields can be added to an existing Classic Page.
- The Fields from the main Classic Page and the configured Sub Page can be saved simultaneously.
User is provided with an option to limit search to Classic or Fluid components only.
Configure Drop Zone for unregistered components.
- In 8.57, Drop zone works based on Content Reference. Drop zone configuration does not show components without CREF registration in search results.
- In 8.58, Drop zone configuration search results contain components without any CREF registered.
- User can configure drop zones on unregistered components and get it successfully rendered on target pages.
- No Changes from end user perspective in terms of GUI
Support for all nesting levels in drop zone configuration
- In 8.57 GA, Drop Zone configuration supports subpages or secondary pages up to two nesting levels.
- In PeopleTools 8.58, Drop zones can be added to subpages or secondary pages at any nesting level.
- Examples of Drop zone configurations supported as follows:
- Main Page -> Sub page -> Sub page -> Sub page -> Sub page -> drop zone
- Main Page -> Sub page -> Secondary page -> Sub page -> Secondary page -> Sub page -> drop zone.
- This feature is available in 8.57 patch 04 onwards.
Combined sequencing of Related Information and Analytics
- Link and Tile items can be sequenced as per user preference.
- Link and Tile sections can be swapped.
- Sequencing done by user can be reset to default ordering.
Updates to Personalize
Display Properties tab
The most frequently used operations, Display (Show/Hide) and reordering, are handled in this tab. Reordering is achieved by drag and drop.
Additionally, this tab provides the following options:
- Reset user reordering to default order
- Swap the Link and Tile sections.
Manage Analytics tab
This tab will have items applicable only to Analytic Items.
- This tab handles conversion of Analytics from Link to Tile and vice versa, as well as Delete.
Personalize window split into two tabs.
Updated Style Sheets
PeopleSoft user interface is updated with a contemporary look in PeopleTools 8.58. All Fluid / Classic Plus components are updated to new look and feel. No layout changes or size changes. No user interaction changes.
Configurable items delivered by PeopleTools help customers to safely and easily upgrade and navigate business changes.
Configuration Specialist Tiles provide customers an easy and intuitive access to certain PeopleTools delivered configuration items.
PeopleTools 8.58 includes these seven tile definitions:
- Manage Dashboard Pages Tile
- Activity Guide Tile (Nav Collection)
- Notifications (Nav Collection)
- Pivot Grid (Nav Collection)
- Related Content Service (Nav Collection)
- Navigation Collections Tile
- Tile Wizard Tile
Branding changes to support updated style sheet
- Updated style sheets are applied to branding theme “DEFAULT_THEME_FLUID” and Theme Style Type “Classic Plus”
- A new branding theme “PS_REVERT_TO_PRE858” and Theme Style Type “Classic Plus Pre 858” is delivered to revert the style sheet to pre-8.58 one.
All remaining PeopleTools pages (828) are enabled for Classic Plus in PeopleTools 8.58
People Tools - Notifications
Arokiar Rajasekar
Director, PeopleTools Integration & Reporting
### Category Name
- PTPG_THRESHOLD_NOTIF (1)
- Treasury Deals Approval (4)
- Approve Treasury Settlements (3)
- Field Request Approval Process (2)
- Treasury Deal confirmation (2)
- BROADCAST (1)
- Customer Statement generation (1)
- GL Journal Approval Process (1)
### Category Type
- Actions (10)
- Alerts (10)
### Message State
- Unread (11)
- Dismissed (11)
### Priority
- Unread (9)
### View All Notifications
<table>
<thead>
<tr>
<th>Notification Name</th>
<th>Category Type</th>
<th>Message</th>
<th>Message State</th>
<th>Last Update Date/Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>PTPG_THRESHOLD_NOTIF</td>
<td>Alerts</td>
<td>1 rows matching threshold criteria for Display Paid Amount (Payment Spend Analysis)</td>
<td>Unread</td>
<td>01/23/2020 2:35:16AM</td>
</tr>
<tr>
<td>Customer Statement generation</td>
<td>Alerts</td>
<td>A new statement is available for Alliance Group.</td>
<td>Dismissed</td>
<td>07/14/2017 4:01:03AM</td>
</tr>
<tr>
<td>Treasury Deal confirmation</td>
<td>Actions</td>
<td>Deal 000000000376 is awaiting your confirmation</td>
<td>Unread</td>
<td>03/13/2017 8:45:52AM</td>
</tr>
<tr>
<td>Treasury Deals Approval</td>
<td>Actions</td>
<td>Deal '000000000376' is awaiting your approval</td>
<td>Unread</td>
<td>03/13/2017 8:45:40AM</td>
</tr>
<tr>
<td>Treasury Deal confirmation</td>
<td>Actions</td>
<td>Deal 000000000375 is awaiting your confirmation</td>
<td>Unread</td>
<td>03/13/2017 8:40:32AM</td>
</tr>
<tr>
<td>Treasury Deals Approval</td>
<td>Actions</td>
<td>Deal '000000000375' is awaiting your approval</td>
<td>Unread</td>
<td>03/13/2017 8:40:26AM</td>
</tr>
<tr>
<td>Field Request Approval Process</td>
<td>Alerts</td>
<td>Field Request 0000000005 has been approved.</td>
<td>Dismissed</td>
<td>03/08/2017 11:40:05AM</td>
</tr>
<tr>
<td>Field Request Approval Process</td>
<td>Alerts</td>
<td>Field Request 0000000003 has been denied.</td>
<td>Dismissed</td>
<td>03/08/2017 11:17:01AM</td>
</tr>
<tr>
<td>Treasury Deals Approval</td>
<td>Actions</td>
<td>Deal 'MAP2' is awaiting your approval</td>
<td>Unread</td>
<td>05/29/2016 11:53:16AM</td>
</tr>
<tr>
<td>Treasury Deals Approval</td>
<td>Actions</td>
<td>Deal 'MAP1' is awaiting your approval</td>
<td>Unread</td>
<td>05/29/2016 11:49:12AM</td>
</tr>
<tr>
<td>Approve Treasury Settlements</td>
<td>Actions</td>
<td>'EFT Requests' with Source ID 'MAP_EFT_EFT' is awaiting your approval</td>
<td>Unread</td>
<td>05/29/2015 11:36:22AM</td>
</tr>
</tbody>
</table>
Two panel Action View
### Job Offer Information
- **Applicant**: Milicent Forbes
- **Applicant ID**: 800136
- **Job Posting Title**: Manager-Procurement
- **Offer Date**: 03/22/2017
- **Offer Expiration Date**: 04/01/2017
- **Recruiter**: Emmyyou Dell
- **Hiring Manager**: John Patterson
### Additional Information
- **Job Type**: Standard Requisition
- **Job Family**: Accounting - KACC
- **Company**: Global Business Institute - GBI
- **Job Code**: Manager-Procurement - 600160
- **Position Number**: Business Unit - Global Business Institute BU - GIBI
### Job Offer
<table>
<thead>
<tr>
<th>Component</th>
<th>Frequency</th>
<th>Offer Amount</th>
<th>Payment Mode</th>
</tr>
</thead>
<tbody>
<tr>
<td>Base Salary</td>
<td>Monthly</td>
<td>4,560.00 USD</td>
<td>Cash</td>
</tr>
<tr>
<td>Annual Bonus</td>
<td>Annual</td>
<td>6,000.00 USD</td>
<td>Cash</td>
</tr>
</tbody>
</table>
### View Job Offer
- **Approver Comments**
- **Approval Chain**
Publish Code
Component PTPN_PUBLISH:PublishToWindow &wlSrch;
&msginfo = PTPN_BROADCAST.PTPN_BC_MSG.Value;
&wlSrch = create PTPN_PUBLISH:PublishToWindow("SENDNOTE", "BROADCAST");
&wlSrch.AddRecepient(&user, 1);
&wlSrch.SetCategoryAlias("BROADCAST");
&wlSrch.SetMsgInfo(&msginfo);
&wlSrch.SetCategoryTypeFyi();
&wlSrch.SetMsgStateNew();
&wlSrch.SetMsgKey(GenerateRandomDataKey());
&wlSrch.Publish("");
Text Messaging - 8.58
- Text messaging is available as one of the notification options under Notification Configuration
- URL identifier ‘PTTEXTMESSAGING’ is delivered to configure a Twilio account for text messaging support
- The user’s phone number must be provided under the Notifications section at ‘My Preferences’ -> ‘General Settings’ to receive text messages
- The phone number should be in E.164 format (+<Country Code><Area Code><Phone Number>)
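The E.164 rule above can be sketched as a simple pattern check (an illustrative Python sketch, not how PeopleTools itself validates the field):

```python
import re

# E.164: a leading "+", a country code starting 1-9, at most 15 digits total.
E164_PATTERN = re.compile(r"^\+[1-9]\d{1,14}$")

def is_e164(number: str) -> bool:
    """Return True if the string looks like an E.164 phone number."""
    return bool(E164_PATTERN.match(number))

print(is_e164("+16122229290"))  # the sample TEXTFROM number -> True
print(is_e164("6122229290"))    # missing "+" prefix -> False
```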
Text Messaging
URL Maintenance
URL Identifier: PTTEXTMESSAGING
*Description: URL ID for text messaging
*URLID: https://api.twilio.com/2010-04-01/Accounts/A
Comments: https://api.twilio.com/2010-04-01/Accounts/A

- **ACCOUNTID**: ACF3456b5107d983bed8d97
- **ACCESSSTOKEN**: 7FypzuP#r9rS#RbQ#-4uMilZ1KcUgq
- **TEXTFROM**: +16122229290
*Password Encryption*
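The PTTEXTMESSAGING URL plus the ACCOUNTID/ACCESSSTOKEN/TEXTFROM properties map onto Twilio’s public 2010-04-01 REST API. A hedged sketch of how such a request could be assembled (the function name and placeholder credentials are illustrative, not PeopleTools internals):

```python
import base64
from urllib.parse import urlencode

def build_sms_request(account_sid: str, auth_token: str,
                      from_number: str, to_number: str, body: str):
    """Build URL, form body, and headers for a Twilio Messages API POST."""
    url = (f"https://api.twilio.com/2010-04-01/Accounts/"
           f"{account_sid}/Messages.json")
    form = urlencode({"From": from_number, "To": to_number, "Body": body})
    token = base64.b64encode(f"{account_sid}:{auth_token}".encode()).decode()
    headers = {"Authorization": f"Basic {token}",
               "Content-Type": "application/x-www-form-urlencoded"}
    return url, form.encode(), headers

# Placeholder credentials -- substitute the real ACCOUNTID/ACCESSSTOKEN
# configured on the PTTEXTMESSAGING URL definition.
url, data, headers = build_sms_request(
    "ACxxxx", "secret", "+16122229290", "+15551234567",
    "Deal 000000000376 is awaiting your confirmation")
```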
Text Messaging
Notification Configuration - 8.58
• ‘Notification Configuration’ component to control the notification types to be received by users
• Any application using the Push Notification framework through the PTPN_PUBLISH application package gets the feature just by configuring a notification with the existing ‘Category Name’ and ‘Event Name’
• Administrators can configure a default notification type, allow users to personalize, or restrict users from disabling all channels
## Notification Configuration
PeopleTools -> Push Notifications -> Notification Window -> Notification Configuration

### View Notification Configuration
<table>
<thead>
<tr>
<th>Notification Name</th>
<th>Event Name</th>
<th>Description</th>
<th>Owner ID</th>
<th>Functional Category</th>
<th>Push</th>
<th>Email</th>
<th>Text</th>
<th>Override</th>
</tr>
</thead>
<tbody>
<tr>
<td>PGThresholds</td>
<td>PTPG_THRESHOLD_NOTIF</td>
<td>Personalized Analytic Notifications</td>
<td>PT</td>
<td>Pivot Grid</td>
<td>Y</td>
<td>N</td>
<td>N</td>
<td>N</td>
</tr>
<tr>
<td>TriggerBusinessEvent</td>
<td>TRIGGERBUSINESSEVENT</td>
<td>Trigger Business Event</td>
<td>PT</td>
<td>Workflow</td>
<td>Y</td>
<td>N</td>
<td>N</td>
<td>N</td>
</tr>
<tr>
<td>BROADCAST</td>
<td>SENDNOTE</td>
<td>Broadcast Notifications</td>
<td>PT</td>
<td>Notifications</td>
<td>Y</td>
<td>N</td>
<td>N</td>
<td>N</td>
</tr>
</tbody>
</table>
Notification Configuration
- **Unique identifier (Same as Category Name earlier)**
- **Event Name as registered with PUSHTOWINDOWCOLLECTION**
- **Enable this to disallow users from disabling all the channels**
- **Channels available for users. Pre-configurations may be required**
- **Users get a notification by default in the selected channel**
- **Enable to allow users to opt for different channels**
- **Choose the Permission List/Role of users who can personalize. An empty grid enables it for all**
User Personalization
TriggerBusinessEvent – Manage Notifications
PeopleTools -> Push notifications -> Notification Window -> Configuration -> System Configuration
• ‘TRIGGERBUSINESSEVENT’ is a pre-delivered notification configuration used for all the TBE notifications
• Configuration for an individual event can be changed by using ‘Notification Configuration Override’ option
• A notification configuration used to override must be mapped with the Event – TRIGGERBUSINESSEVENT
TriggerBusinessEvent – Manage Notifications
PeopleTools -> Push notifications -> Notification Window -> Configuration -> System Configuration
<table>
<thead>
<tr>
<th>Business Process</th>
<th>Activity</th>
<th>Event</th>
<th>Worklist</th>
<th>Priority</th>
<th>Enable</th>
<th>Notification Message Set Number</th>
<th>Notification Text Message Number</th>
<th>Notification Category Message Set Number</th>
<th>Notification Category Message Number</th>
</tr>
</thead>
<tbody>
<tr>
<td>GE_PROCESS_ORDERS</td>
<td>GE_VA_TEST</td>
<td>Recycle Order</td>
<td>Recycle</td>
<td>Default</td>
<td>No</td>
<td>127</td>
<td>4</td>
<td>127</td>
<td>3</td>
</tr>
<tr>
<td>GE_PROCESS_ORDERS</td>
<td>GE_VA_TEST</td>
<td>Approve Order</td>
<td>Supervisor</td>
<td>Default</td>
<td>Yes</td>
<td>158</td>
<td>2</td>
<td>158</td>
<td>0</td>
</tr>
<tr>
<td>GE_PROCESS_ORDERS</td>
<td>GE_APPROVE_ORDER</td>
<td>Recycle Order</td>
<td>Recycle</td>
<td>Default</td>
<td>Yes</td>
<td>158</td>
<td>2</td>
<td>158</td>
<td>0</td>
</tr>
<tr>
<td>GE_PROCESS_ORDERS</td>
<td>GE_APPROVE_ORDER</td>
<td>Approve Order</td>
<td>Supervisor</td>
<td>Default</td>
<td>No</td>
<td>158</td>
<td>2</td>
<td>158</td>
<td>0</td>
</tr>
<tr>
<td>GE_ACTIVITY_GUIDE_DEMO</td>
<td>GE_ACTIVITY_GUIDE_START</td>
<td>ACTIVITY_GUIDE_WORKLIST</td>
<td>Activity Guide Demo</td>
<td>Default</td>
<td>No</td>
<td>158</td>
<td>2</td>
<td>158</td>
<td>0</td>
</tr>
</tbody>
</table>
## Notification Configuration
**Notification Category:** RecycleTest
**Event Name:** TRIGGERBUSINESSEVENT
**Description:** Recycle Order
**Functional Group:** Workflow
**Object Owner ID:** PeopleTools Demo
**Mandatory:** No
### Notification Options
<table>
<thead>
<tr>
<th>Notification Type</th>
<th>Available</th>
<th>Enable By Default</th>
</tr>
</thead>
<tbody>
<tr>
<td>Push</td>
<td>Yes</td>
<td>No</td>
</tr>
<tr>
<td>Email</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Text</td>
<td>Yes</td>
<td>No</td>
</tr>
</tbody>
</table>
### Personalization Settings
**Allow Personalization:** Yes
### Authorized Roles and Permission Lists
**Type of Authorization:**
<table>
<thead>
<tr>
<th>Role Name</th>
<th>Role/Permission List</th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td></td>
</tr>
</tbody>
</table>
### Manage Notification
**TriggerBusinessEvent**
- Yes
**Object Owner ID**
- [Dropdown]
<table>
<thead>
<tr>
<th>Business Process</th>
<th>Activity</th>
<th>Event</th>
<th>Worklist</th>
<th>Priority</th>
<th>Enable</th>
<th>Notification Configuration Override</th>
<th>Notification Text Message Set Number</th>
<th>Notification Text Message Number</th>
<th>Notification Category Message Set Number</th>
<th>Notification Category Message Number</th>
</tr>
</thead>
<tbody>
<tr>
<td>GE_PROCESS_ORDERS</td>
<td>GE_VA_TEST</td>
<td>Recycle Order</td>
<td>Recycle</td>
<td>Default</td>
<td>No</td>
<td>No</td>
<td>137</td>
<td>4</td>
<td>137</td>
<td>3</td>
</tr>
<tr>
<td>GE_PROCESS_ORDERS</td>
<td>GE_VA_TEST</td>
<td>Approve Order</td>
<td>Supervisor</td>
<td>Default</td>
<td>Yes</td>
<td></td>
<td>158</td>
<td>2</td>
<td>158</td>
<td>9</td>
</tr>
<tr>
<td>GE_PROCESS_ORDERS</td>
<td>GE_APPROVE_ORDER</td>
<td>Recycle Order</td>
<td>Recycle</td>
<td>Default</td>
<td>Yes</td>
<td>RecycleTest</td>
<td>137</td>
<td>4</td>
<td>137</td>
<td>3</td>
</tr>
<tr>
<td>GE_PROCESS_ORDERS</td>
<td>GE_APPROVE_ORDER</td>
<td>Approve Order</td>
<td>Supervisor</td>
<td>Default</td>
<td>No</td>
<td>No</td>
<td>137</td>
<td>4</td>
<td>137</td>
<td>3</td>
</tr>
<tr>
<td>GE_ACTIVITY_GUIDE_DEMO</td>
<td>GE_ACTIVITY_GUIDE_START</td>
<td>ACTIVITY_GUIDE_WORKLIST</td>
<td>Activity Guide Demo</td>
<td>Default</td>
<td>No</td>
<td>No</td>
<td>137</td>
<td>4</td>
<td>137</td>
<td>3</td>
</tr>
</tbody>
</table>
Personalized Analytic Notifications
- Pivot Grids can be personalized to send notifications based on threshold criteria
- All Personalized Analytic Notifications use the pre-delivered ‘PGThresholds’ notification configuration
- A Pivot Grid view allows users to either enable or disable all notifications for it
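The threshold idea can be illustrated with a small sketch (field names and data are made up for illustration; this is not the Pivot Grid engine):

```python
# Sketch of a pivot-grid threshold alert: count the rows whose measure
# crosses the configured threshold and build a notification message.
def rows_over_threshold(rows, field, threshold):
    """Return the rows whose `field` value exceeds `threshold`."""
    return [r for r in rows if r[field] > threshold]

payments = [{"supplier": "A", "paid": 120.0},
            {"supplier": "B", "paid": 980.0},
            {"supplier": "C", "paid": 1500.0}]

hits = rows_over_threshold(payments, "paid", 1000.0)
message = f"{len(hits)} rows matching threshold criteria for Display Paid Amount"
```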
### Notifications
#### Display Settings
Notification items to display
#### Notification List
<table>
<thead>
<tr>
<th>Notification Name</th>
<th>Description</th>
<th>Notification Window</th>
<th>Email Notification</th>
<th>Text Message</th>
</tr>
</thead>
<tbody>
<tr>
<td>PGThresholds</td>
<td></td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>Approvals</td>
<td></td>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>BROADCAST</td>
<td></td>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
</tbody>
</table>
1 rows matching threshold criteria for Display Paid Amount (Payment Spend Analysis) for VP1
Page=PTPG_NUL_VWR&PG_NAME=PO_SPEND_FL_PAID_PVG&VIEWNAME=PO_SPEND_FL_PAID_PVG.View
Broadcast
- Support for Bulk Notifications
- Selection by Role or User Id
- Notification channels can be configured
- Meta-variables for personalized messages
Hi Allan Martin,
This is to inform you that your profile update is pending for past 10 days.
Thanks
Hi A Lee, This is to inform you that your profile update is pending due for past 10 days. Thanks.
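Meta-variable substitution of the kind shown above might be sketched like this (the `$UserName` token is hypothetical, not an actual Broadcast meta-variable name):

```python
from string import Template

# Hypothetical meta-variable names; the actual tokens are defined by the
# Broadcast framework, not by this sketch.
def personalize(template: str, user: dict) -> str:
    """Substitute meta-variables into a broadcast message template."""
    return Template(template).safe_substitute(user)

msg = personalize(
    "Hi $UserName,\nThis is to inform you that your profile update "
    "is pending for past 10 days.\nThanks",
    {"UserName": "Allan Martin"},
)
print(msg)
```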
Other 8.58 features
- Notification
- Archive and Purge needs Message State/inactive choice
- Ability to send Broadcast Notifications to large groups of users
Other 8.58 features
- Pivot Grids
- Pivot Grid – Threshold Improvements
- Ability to format non-currency fields to display a thousands (1000s) separator
- Facet category UX
- Expand/Collapse/Invisible configuration
- Sequencing of facet categories
- Ability to export current Pivot Grid view
- Ability to define order or sequence values on X-Axis
Other 8.58 features
- **Application Services Framework (Chatbot)**
- File Processing – Attachment support for Chat bot application
- OAuth2 uptake for Application Service Framework
- **Process Scheduler**
- SetEmailOption has an option to add a "From" email address
- Run Control ID Management
Thank you
ORACLE
Martti Korpioksa
Cooperation between Unity and PLC
Comparison of different PLCs and OPC-servers
Thesis
Autumn 2014
School of engineering
Electric automation
In this thesis, the operations of different programmable logic controllers (PLCs) and OPC servers are compared. In this project there is a 3D-model of a drilling station in Unity, which receives commands from a programmable logic controller. These commands are transferred to Unity via an OPC server and a socket server.
These kinds of setups were built three times with different programmable logic controllers and OPC servers. The logic controllers used were Omron’s CPM1 and CJ1M and Beckhoff’s TwinCAT. The OPC servers used were PLC Data Gateway and Kepware’s KEPServerEX5. When all three setups had been built, their response times were measured. In other words, it was studied how long it takes for a signal to travel from Unity to the logic and back.
Keywords: Programmable logic controller, 3D modelling
CONTENT
Thesis abstract (in Finnish)
Thesis abstract
1 Introduction
1.1 Goals
1.2 Thesis structure
1.3 CAVE and Virtual-laboratory
2 Tools
2.1 Programmable Logic Controllers
2.2 OPC
2.3 Unity
2.4 3D-modelling
3 Software and Hardware
3.1 Changing Unity Script Editor
4 Test Setups
4.1 First Setup
4.1.1 PLC Program
4.1.2 OPC Server, PLC Data Gateway
4.1.3 Socket Server
4.1.4 Unity Simulation
4.2 Second Setup
4.2.1 OPC Server, Kepware
4.3 Third Setup
5 Testing & Results
6 Summary
Sources
APPENDICES
Tables and figures
Figure 1. The interface of OPC test client, which uses .NET specification. (Advosol, [Ref 15.10.2014])
Figure 2. Omron’s CPM1
Figure 3. Omron’s CJ1M
Figure 4. An interface of PLC Data Gateway Developer Environment
Figure 5. KEPServerEX and OPC quick client
Figure 6. An interface of the Unity 3D
Figure 7. Unity interface with edit tab open
Figure 8. Unity interface, external tools
Figure 9. Choosing VWDExpress
Figure 10. Connections between PLC, OPC, Socket Server and Unity
Figure 11. The drilling station in Unity
Figure 12. The actual drilling station
Figure 13. A laptop is connected to PLC
Figure 14. An I/O list of a PLC program
Figure 15. A program which is uploaded to the PLC
Figure 16. OPC server made with PLC Data Gateway development environment
Figure 17. Socket server making a connection to OPC server
Figure 18. Socket server opening the streams
Figure 19. Socket server command handling
Figure 20. The 3D model in Unity
Figure 21. Unity Client
Figure 22. KEPServerEX
Figure 23. OPC quick client
Figure 24. I/O list of the program
Figure 25. The program which was used in this setup
Figure 26. The program made for testing
Figure 27. A code inserted to "kelkka ylempi" script, which gives a time when the sledge changes direction
Figure 28. A code inserted to "Anturit" script, which gives a time when the sensor is activated
1 Introduction
There have not been any major studies on the cooperation between Unity and programmable logic controllers (PLC). This might also be why Unity does not have any PLC add-ons like, for example, Visual Components’ 3DCreate. Unlike 3DCreate, Unity has a built-in real-time physics engine, which would make Unity a lot more useful than 3DCreate. Unity’s market share is also increasing, so it will probably be a more popular software in the future. Unity is used mainly in the video game industry to develop different games. It has become quite popular among independent game developers, but not among major gaming studios.
In this thesis a test environment was built where a virtual drilling workstation is controlled by a PLC. The virtual drilling workstation was modelled in Unity. The PLC controls this 3D-model just like the real workstation. The data is transferred from the PLC to Unity with an OPC server. However, because Unity is not capable of receiving data directly from the OPC server, there is also an additional server using TCP/IP socket communication. The socket server is a program which receives the data from the OPC server and then sends it to Unity.
Even though there are not any major studies about this subject, there are companies that have concentrated on building and designing virtual simulators. One example of these companies is Mevea from Lappeenranta, Finland. Mevea was founded in 2005 and its main focus is on dynamic simulation applications. (Mevea ltd. 2013)
Mevea's product repertoire also includes education simulators like mining simulators, forestry machine simulators, product development simulators, as well as modelling and simulation services. Other simulation services are Mevea cabin and Mevea Cave. (Mevea ltd. 2013)
1.1 Goals
In this thesis a virtual learning environment of a mechatronic laboratory device is created with the Unity game engine. This learning environment gives a chance to research the possibilities of controlling a virtual device with a PLC. The information from the PLC to the PC is transferred with OPC. This whole setup could significantly ease the design of production lines: with it, designers are able to test production lines before actually building them. Unity is equipped with a real-time physics engine, so the designer is also able to test different scenarios that could affect the production lines.
1.2 Thesis structure
Chapter 2 contains theory about PLCs and PLC programming, OPC servers, Unity and 3D modelling. Chapter 3 reviews all software and hardware used in this project. Chapter 4 illustrates the work that was done and all the different environments that were used. Chapter 5 presents the results and how the response times of the different environments were measured and compared. Chapter 6 is a summary of this project. At the end, all sources and attachments are listed.
1.3 CAVE and Virtual-laboratory
There was a plan in the early 2000s to build a new technology center in Seinäjoki, where also the school of engineering would be placed. At that time an idea about building a CAVE was also announced. By that time the only places to have similar virtual laboratories in Finland were the University of Jyväskylä, Tampere University of Technology and Helsinki University of Technology. The technology center in Seinäjoki was ready in 2003, but the CAVE needed two more years and its opening was on 10 February 2005. It was funded by Seinäjoki University of Applied Sciences, but some of the funding came from Western Finland's provincial government EAKR project. Even today SeAMK's CAVE is one of the most advanced virtual rooms in Finland. (Hellman. 2014.)
CAVE, or Cave Automatic Virtual Environment, is a real-time interactive 3-dimensional computer graphics studio. In the CAVE a user can see 3D plans in natural scale and in the most realistic form. A real-time interactive environment is built around the user. This is done by tracking the location of the spectator's eyes, and the picture is projected onto each surface surrounding the spectator from all directions of the visual field. This fully covers the spectator's visual range. (Hellman. 2014.)
CAVE and other equipment in the visual laboratory are used for education, research and thesis work. CAVE can also be used in product development, because with CAVE developed products can be kept in virtual form without any physical prototypes. In CAVE motion capturing is also possible because of optical localization. This data of motion capturing can be used to create character animations by recording motion captured data to the computer to create a virtual skeleton. This skeleton can then be utilized in animation, ergonomics research and in robotics. There is also a haptic gadget, which is a 3-dimensional drawing-and-processing tool with a somatosensory system. This makes it possible to feel the surfaces of virtual 3D-models by simulating the touch of the surface, liquid’s viscosity, gravity, spring strength or inertia. There are several other pieces of equipment in the virtual laboratory, for example Kinect-character sensing devices, data gloves and leap motion-controllers. (Hellman. 2014.)
2 Tools
This chapter introduces the tools which were used to build the virtual drilling workstation. Also some basic information about the tools is presented.
2.1 Programmable Logic Controllers
Programmable logic controllers (PLCs) were originally designed for the car industry. In 1968 General Motors set five demands for PLCs: the device has to be programmable and reprogrammable; it has to work reliably in different workshops; it must tolerate the 120 V voltage used in the United States electrical grid; it must stand the load of electric motors in continuous use as well as during starting; and its price must be competitive compared to hard-wired logic. The first PLCs came onto the market already in 1968-1969. (Keinänen, Kärkkäinen, Metso & Putkonen 2001, 241-242.)
Basically there are two different types of logics: stepping logics and freely programmable logics. In stepping logics the hierarchy of the automation is straightforward and proceeds step by step. The biggest difference between freely programmable logics and stepping logics is that in freely programmable logics it does not matter in which order the program is written. Nowadays most logics are freely programmable. In a freely programmable logic, or shortly a programmable logic, the input ports are coupled with all applicable sensors and buttons. Everything that is to be controlled by the logic, such as different motors or cylinders, is coupled with the outputs. The program is written into the PLC's memory, and the PLC monitors the program's progress in real time; because of this, it does not matter in what order the program is written. (Keinänen, Kärkkäinen, Metso & Putkonen 2001, 243-244.)
A PLC's hardware consists of six different parts: inputs, a central processing unit, outputs, a programming device, a program memory and a power input. All signals that come from devices such as sensors, buttons and limit switches are coupled into the inputs. The central processing unit (CPU) executes the program which is written to the PLC. Usually microprocessors are used as central processing units, because then PLCs are able to do arithmetic calculations. Outputs control the actual device: they send signals to the device's motors, cylinders, indicator lights and all components that move the device. The programming device is the device which is used to write the program to the PLC. Almost all programs are nowadays made with PCs, but in the old days special programming devices, somewhat resembling a calculator, were used. Program memory is the part of the PLC where the actual program is stored; the CPU reads the program from there. Nowadays there are basically three different memory types in use: CMOS-RAM, EPROM and EEPROM. (Keinänen, Kärkkäinen, Metso & Putkonen 2001, 245-248.)
PLCs also contain other functions, like auxiliary memory bits, timers, counters, shift registers, pulse functions and the main control functions. Auxiliary memory bits are normally used to save data. They have two states: 0 = not in use and 1 = in use. Auxiliary memory bits can be used in several different ways. For example, all conditions which are needed to start the program can be connected to one memory bit, and this auxiliary memory bit can then be used in the actual program. (Keinänen, Kärkkäinen, Metso & Putkonen 2001, 248-251.)
Timers are meant to delay the device's work routine. A timer works on the principle that it is started by some input condition, and its output turns on when the elapsed time reaches the time set in the timer. Counters can be used, for example, to set an exact number of work routines for a device. A counter can also count passing product flow; this is used for example in reverse vending machines. Counters usually have two inputs: a count input and a reset input. All the commands that are going to increase the counter value are connected to the count input, and the reset input resets the counter value back to zero. A counter's output normally stays off until its value reaches the set value, and then the output turns on. (Keinänen, Kärkkäinen, Metso & Putkonen 2001, 248-251.)
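The counter behaviour described above can also be sketched as plain code. The sketch below is illustrative only: the names Count, Reset and Output are invented for this sketch, and a real counter is a function block inside the PLC. It is written in C#, like the rest of the code in this project.

```csharp
using System;

// Illustrative model of a PLC up-counter: the output turns on
// once the set value is reached, and the reset input clears it.
int setValue = 3;
int counterValue = 0;
bool Output() => counterValue >= setValue;
void Count() => counterValue++;   // one counting event, e.g. a product passing a sensor
void Reset() => counterValue = 0; // reset input clears the counter value

Count(); Count();
Console.WriteLine(Output()); // still off below the set value
Count();
Console.WriteLine(Output()); // turns on at the set value
Reset();
Console.WriteLine(Output()); // off again after reset
```

For example, with the set value 3 the output stays off for the first two counting events and turns on at the third, exactly as described above.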
There are four types of shift registers: SISO (single input, single output), SIPO (single input, multiple outputs), PISO (multiple inputs, single output) and PIPO (multiple inputs, multiple outputs). (Aalto-yliopisto 2003) The pulse of the pulse function is very short, and it is used in functions that need extreme speed. The main control function makes it possible to stop the reading of the program, and resetting the main control function makes it possible to continue reading the program at the exact same point. (Keinänen, Kärkkäinen, Metso & Putkonen 2001, 251-255.)
Other common commands used in PLCs are, for example, LOAD, LOADNOT, AND, ANDNOT, OR, ORNOT, AND-LOAD, OR-LOAD, OUT, SET, RESET, JUMP, FUN, NOP and END. The LOAD command is used to open the circuit, although for example Hitachi uses the ORG command instead. NOP ("No Operation") means an empty row in the program, and the END command ends the program. In most PLCs the commands are largely the same. The biggest differences are in the German Siemens STEP 6 command list, because its commands come from German words. Festo's PLC command lists also differ from the others; they more closely resemble the BASIC programming language. (Keinänen, Kärkkäinen, Metso & Putkonen 2001, 255-257.)
The programming languages approved by the IEC 61131-3 standard are ladder diagram, function block diagram, sequential function chart, structured text and instruction list. One PLC can support multiple programming languages, so the designer can choose which one to use. Ladder diagram is the most popular programming language when it comes to PLCs. A ladder diagram resembles the actual hardware wiring of the PLC: it has several rungs which are used to connect inputs to outputs. All PLC programs used in this project have been written in ladder diagram. (Kronotech, [Ref. 7.10.2014])
2.2 OPC
OPC is a way to transfer data, created by the OPC Foundation, which fulfills the OPC data access specifications. The abbreviation OPC stands for OLE for Process Control, where OLE stands for Object Linking and Embedding. OLE is an older name for Microsoft's COM data transfer technology. Originally OPC was meant to capitalize on Microsoft's component technology for the automation industry. The first version of OPC came out in 1996. The most common OPC specifications are A&E (Alarms and Events), HDA (Historical Data Access) and DA (Data Access). (Automaatioseura ry, [Ref. 16.9.2014])
Other specifications are, for example, Batch, Batch auto, Commands, Common, CPX (Complex Data), DX (Data eXchange), Security, UA (Unified Architecture) and XMLDA (Honeywell International Inc., 2014). Data Access is meant for real-time process data transfer between control systems and process machinery. Alarms and Events is meant to transfer alarm and event data. For transferring historical data, Historical Data Access is used. Data eXchange is meant for data transfer between different OPC servers. XMLDA is similar to Data Access, but it uses web services and XML for its data transfer. (Automaatioseura ry, [Ref. 16.9.2014])
OPC Unified Architecture was first released in 2009, but some parts were published already in 2006 (OPCconnect.com, 2013). It was built so that it would surpass all the previous OPC specifications, and it was more extensive in terms of hardware platforms and operating systems. Unified Architecture is compatible with the following hardware platforms: PC hardware, cloud-based servers, PLCs and microcontrollers. It is also compatible with these operating systems: Microsoft Windows, Apple OS X, Android and all distributions of Linux. Security was also a big concern when designing Unified Architecture. Its messages are sent with 128- or 256-bit encryption without corrupting the original messages, and it uses sequencing to eliminate message replay attacks. Transport of the data can be OPC binary transport or SOAP/HTTPS, but other options are also available. Authentication is done with OpenSSL, with which all Unified Architecture servers and clients are identified. This controls which applications and systems are allowed to connect with each other. All this can be done without having any problems with firewalls. (OPC foundation, 2014)
There were a few main reasons why the OPC Foundation started creating Unified Architecture. Microsoft's COM and DCOM were becoming old, and web services had risen to become the main option for data transfer between computers. In the earlier OPC specifications the data models were different in every specification and there was no consistency between them. There was also no backward compatibility between the previous OPC specifications. (OPCconnect.com, 2013)
Unified Architecture differs from the previous specifications by using an IEC multipart specification, and it consists of twelve parts. Six parts are core specifications: Concepts, Address Space Model, Services, Information Model, Service Mappings and Profiles. The other six parts are access type specifications: Data Access, Alarms and Events, Commands, Historical Data, Batch and Data eXchange. Unified Architecture's core consists, among other things, of the object model, the address space and profiles. In Unified Architecture the object model, address space and semantic information model were renewed, and the structure of the address space was changed to be more versatile compared to the older specifications. (Automaatioseura ry, [Ref. 16.9.2014])
When it comes to performance, Unified Architecture does not reach the same level as Data Access. This results from web services being much heavier than DCOM, which Data Access uses. The rapid development of computing capacity will decrease this problem. The OPC Foundation has created its own binary coding for Unified Architecture, because binary coding, unlike XML, increases the performance of the web services. It also increases the speed of data transfer, because XML in text form wastes transfer resources. (Automaatioseura ry, [Ref. 16.9.2014])
The newest specification from the OPC Foundation is OPC.NET, which is based on Microsoft's WCF (Windows Communication Foundation) .NET framework. OPC.NET makes it possible to communicate easily through firewalls with quite a simplistic data model and removes the need for .NET and DCOM wrappers. OPC.NET enables access to both historical and run-time data, events and alarms. OPC.NET's interface is also designed so that the user can map it to the OPC DA, HDA and A&E interfaces. For comparison, Unified Architecture is more complex and is created for communication between several different platforms. (OPC Training Institute, 2014)
OPC.NET has six goals:
- Security: all communication should be secure, but computers should also be accessible through firewalls.
- Simplicity: servers and clients need to be easy to implement, deploy and configure.
- Robustness: all communication must be able to recover from errors.
- Backward compatibility: it must be possible to connect previous OPC servers with the .NET interface.
- Plug-and-Play: it must be possible to find servers automatically.
- Transparency: the protocols must allow proper communication between clients and servers.
Figure 1 shows an example of the OPC test client, which uses the .NET specification. (OPC Training Institute, 2014)
In Finland, the OPC committee was founded in the spring of 2005 as a part of Automaatioseura. The OPC committee's goals were to advance Finnish automation education, research and entrepreneurship by sharing information about the OPC Foundation's activities and specifications. This was done by organizing education and events and also by taking part in creating OPC specifications. One of the reasons why the OPC committee was founded was the upcoming big specification called Unified Architecture. (Automaatioseura ry, [Ref. 16.9.2014])
2.3 Unity
Unity (Unity technologies, 2014) is a multiplatform game engine. It can be used to develop games for the following platforms:
- iOS and Mac
- Android
- Windows Phone, Windows and Windows store apps
- Blackberry 10
- Linux
- Web Player
- PlayStation 3, 4, Vita and Mobile
- Xbox 360 and One
- Wii U.
Unity uses NVIDIA's PhysX physics engine, which is able to handle real-time physics (NVIDIA Corporation, 2014). The latest Unity version is Unity 4.5.3, which fixed several bugs and also contains enhanced 2D physics. A beta version of Unity 4.6 is also available at the moment. Unity 5 has been announced as well: it is available for pre-order, but its official release date has not been announced yet. In Unity 5 physically based shading will be available also in the free version. Other improvements in Unity 5 are improved audio, a new 64-bit editor which will be beneficial when making large projects, a lighting system based on real-time physics, and WebGL support, which makes it possible to take all the content to a server which uses WebGL, without plugins. (Unity technologies, 2014)
Unity makes it possible to lay out levels and create menus. Animating, writing scripts and organizing projects are also possible, which makes Unity fully 3D capable. Unity's interface consists of four different panels: the project panel, the hierarchy panel, the inspector panel and the scene panel. All of the project's assets are stored in the project panel, and all imported assets also appear there. In the hierarchy panel the assets of the scene can be arranged. In the inspector panel the parameters of an asset, for example its position and ability to cast shadows, can be adjusted. The creation can be viewed in the scene panel. (Envato Pty Ltd. 2014)
Most of the assets, such as 3D models, textures, audio, scripts, fonts and materials, have to be imported into Unity. This means that Unity cannot create these assets itself, except for a few very basic models like spheres and cubes. Fortunately Unity is very open to different 3D modelling programs and allows the transfer of files from other programs to Unity with all textures and materials intact. Unity supports all common file types, for example PNG, JPEG, TIFF and PSD files from Photoshop, without any changes to the files. A list of all formats that Unity can import can be found on their homepage. (Envato Pty Ltd. 2014)
2.4 3D-modelling
3D modelling means that products are designed in three dimensions, using x-, y- and z-coordinates, so the designer can make the model look more like the final product. Real physical and mechanical properties can also be given to the 3D model, as in real life. The x-, y- and z-coordinates are placed on the PC screen so that the x-axis is in line with the screen's bottom edge, the y-axis is in line with the screen's left edge and the z-axis points towards the designer. As in 2D modelling, it is also very important in 3D modelling to know which coordinate directions are positive and which are negative. This information is needed to know in which direction the product will rotate. It is used when pictures are placed on paper and when assembly recommendations are given in degrees. (Tuhola & Viitanen 2008, 17-18)
All 3D modelling programs assume that all degrees are given as positive, because the programs rotate the object in the positive direction. The positive rotation direction of the x- and y-axes is towards the positive z-axis, that is, towards the designer. The positive rotation direction of the z-axis is towards the negative y-axis, that is, directly down on the PC screen. (Tuhola & Viitanen 2008, 18-19)
A 3D model is a three-dimensional product which matches the final product in appearance and properties. A 3D model can be examined in different ways in different programs, but most 3D modelling programs use similar ways to examine products. (Tuhola & Viitanen 2008, 20)
A wireframe model means that only the edges of the model are displayed. The positive thing about this is that points and edges can be defined through surfaces. The negative sides of this model are that it is hard to know which surfaces are at the back or at the front, it is difficult to know in which position the model is, and displaying holes and threads is difficult; it is also messy and unpractical. (Tuhola & Viitanen 2008, 20-21) It is usually used when 3D models have to be transformed into 2D pictures (Tuhola & Viitanen 2008, 23).
A 3D surface model displays only the surfaces of the product. This is usually used only for casted and extruded products. In this model the product can be sculpted more freely than with the basic tools. However, it is possible to work only with visible surfaces. (Tuhola & Viitanen 2008, 21)
A solid 3D model contains information about the model's shape and also about which parts of the model contain material. A good thing about this model is that it is clear and easy to comprehend, and it can be examined as it would be in real life. The disadvantages are that it is not possible to choose surfaces that are not visible or to grab a surface through other surfaces. (Tuhola & Viitanen 2008, 22)
There are several different 3D modelling programs, but one of the most popular is Blender. Blender is a free 3D modelling program which is being developed by volunteers. Blender makes it possible to model, rig, simulate, animate, composite, render and do motion tracking. Blender is a multiplatform program and works on Linux, Mac and Windows computers. (Blender, [Ref 22.10.2014])
3 Software and Hardware
In this project Omron's Sysmac CPM1 (Figure 2) and CJ1M (Figure 3) and Beckhoff's soft PLC were used as PLCs. Several PLCs were used in order to find out which PLC would work best. The Omron PLCs were programmed using a free trial version of CX-Programmer 9.4, and the Beckhoff soft PLC with TwinCAT3.
Figure 2. Omron’s CPM1
Figure 3. Omron’s CJ1M
In this project Omron's CPM1 and CJ1M were programmed using a PC and a tool bus connecting the PC to the PLC. Ladder diagram was the programming language used in this project. Unlike the Omron PLCs, the Beckhoff PLC used in this project was not a physical PLC; it was only a software program inside the PC. The Beckhoff PLC was also programmed using a ladder diagram, but TwinCAT3 was used instead of the CX-Programmer. There was also no need to create a connection between the PC and the soft PLC, because TwinCAT3 made it automatically.
OPC Labs' QuickOPC 5.2 and the PLC Data Gateway Developer Environment were used to create the OPC server for the first setup (Figure 4). Kepware's KEPServerEX 5 and OPC Quick Client were used to create the OPC server for the second and third setups (Figure 5).
Figure 4. An interface of the PLC Data Gateway Developer Environment.
In the first setup a PLC Data Gateway was used as the OPC server with Omron’s CPM1, but in the second setup, the OPC server had to be changed to KEPServerEX, because the PLC Data Gateway was not compatible with Omron’s CJ1M. KEPServerEX was also used in the third setup with the Beckhoff’s soft PLC.
Microsoft's Visual Studio Express 2013 for Web was used for creating the socket server and the scripts for Unity. In this project the free version of Unity 4.5.4 was used (Figure 6). All models used in Unity were imported from other 3D modelling programs, such as Solid Edge or Blender, because complicated 3D models cannot be created in Unity itself.
3.1 Changing Unity Script Editor
Unity has a built-in script editor, MonoDevelop, but in this project it was changed to Microsoft's Visual Studio Express 2013 for Web. This chapter shows how this can be done. First a plugin for Unity, Visual Studio 2013 Tools for Unity, needs to be downloaded from the web page http://unityvs.com/. Then, in Unity, select Preferences from the Edit tab (Figure 7), select External Tools and browse from the script editor selection (Figure 8). From there choose C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\VWDExpress (Figure 9). After this, Visual Studio is used automatically every time a script is opened in Unity. (Scott Richmond, 2013)
Figure 7. Unity interface with edit tab open.
Figure 8. Unity interface, external tools.
Figure 9. Choosing VWDExpress.
4 Test Setups
Three test setups with different PLCs and OPC servers were developed. In these setups there is a connection between the PLC and OPC and between OPC and Unity (Figure 10). The socket server is a tool which is used to transfer data from OPC to Unity, because Unity cannot receive data directly from the OPC server. A socket server might be integrated into Unity sometime in the future. Data also flows backwards from Unity to OPC. This makes it possible to simulate sensors in Unity: sensors in Unity send signals to the PLC, which can be used in its program.
4.1 First Setup
The drilling station (Figure 11), after which the Unity model (Figure 12) is modelled, is a basic workstation. It has a sledge which is able to move in all horizontal directions. This sledge carries the object which will be drilled. The drill is also very basic; it just moves down and up. There are some differences between the actual drilling station and the Unity model; for example, the user interface is in a different location.
Figure 11. The actual drilling station
Figure 12. The drilling station in Unity
In the first setup of this project there is a PLC connected to a laptop which runs the OPC server, the socket server and Unity. The PLC runs the Unity simulation: all the PLC's signals are transmitted to Unity through the OPC server and the socket server. The Unity model is a drilling station which replicates a real drilling station located in a laboratory. In this setup PLC Data Gateway is used as the OPC server and the PLC is Omron's CPM1.
4.1.1 PLC Program
In this project the PLC has six inputs which are controlled with switches; in this program they are named Input00-Input05 and their addresses are 0.00-0.05. There are also six digital inputs built inside the PLC, named DigitalInput000-DigitalInput005 with addresses 1.00-1.05; only one of them, DigitalInput004, which moves the drill down, is used in this program. There are also four digital outputs, which are used for moving the sledge of the drilling station; their names are DigitalOutput000-DigitalOutput003 and their addresses are 10.00-10.03. (Figure 13)
The actual program is very simple (Figure 15). It is made so that Inputs00-03 move the sledge. It was written so that it is not possible to move the sledge forward and backward at the same time, or left and right at the same time. It is also not possible to move the sledge while the drill is down, and when Input05 is true it is not possible to move the sledge or move the drill down. When connecting the CX-Programmer to the PLC, the OPC server must be turned off; otherwise it is not possible to connect to the PLC with the CX-Programmer.
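The interlocks described above can be expressed as simple boolean conditions. The following C# sketch mirrors them for illustration only; the function names are invented here, and the real implementation is the ladder diagram in Figure 15.

```csharp
using System;

// Illustrative interlock logic: a direction command is obeyed only if
// the opposite direction is not commanded, the drill is up, and the
// Input05 inhibit is off; the drill may only move down when the inhibit is off.
bool CanMove(bool direction, bool opposite, bool drillDown, bool input05) =>
    direction && !opposite && !drillDown && !input05;
bool CanDrill(bool drillCommand, bool input05) =>
    drillCommand && !input05;

Console.WriteLine(CanMove(true, false, false, false)); // allowed
Console.WriteLine(CanMove(true, true, false, false));  // blocked: opposite direction also commanded
Console.WriteLine(CanMove(true, false, true, false));  // blocked: drill is down
Console.WriteLine(CanDrill(true, true));               // blocked: Input05 is true
```

Each rung of the ladder diagram corresponds to one such boolean expression from input contacts to an output coil.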
4.1.2 OPC Server, PLC Data Gateway
Like the program for the PLC, the OPC server is also very simple (Figure 16). The digital inputs and outputs in this server have the same names and addresses as the ones in the PLC program, so the OPC server is able to take those values from the PLC program and transfer them to Unity. All digital inputs are mapped to the digital inputs register block and all digital outputs to the digital outputs register block, where they are given the start word of their address: for the digital inputs it is 1 and for the digital outputs it is 10. Mapping a tag to a register block must be done individually for each tag. This is done in the properties section, which is located on the left side of the screen; at the bottom of the properties section is the register block field, where the address of the required register block is written. For example, in this server the digital inputs' address is Main.OmronExample.Digital Inputs. Just above the register block field the bit offset can be set, and above that "allow controls", which must be set to true. Every register block must be given the device's address and the device channel's address; these can be given in the same section as the register block.
Figure 16. OPC server made with PLC Data Gateway development Environment.
4.1.3 Socket Server
The socket server is a program which captures the data from the OPC server and transfers it to Unity. In the future the socket server might be integrated into the OPC server or into Unity, and would then not be needed any more. The socket server was created in Visual Studio and is written in C#. Every time the PLC or the OPC server is changed, modifications to the socket server need to be made.
The socket server program consists of two modules: the main program and the Reader class. First the main program opens a connection to the OPC server and does all the necessary initializations (Figure 17). The whole socket server can be found in the attachments.
```csharp
// Requires: using System.Net; using System.Net.Sockets;
Reader reader = new Reader();
IPAddress ipAddress = IPAddress.Parse("127.0.0.1");
TcpListener tcpListener = new TcpListener(ipAddress, 8221);
OpcLabs.EasyOpc.DataAccess.EasyDAClient easyDAClient1 =
    new OpcLabs.EasyOpc.DataAccess.EasyDAClient();
// Use synchronous OPC reads and writes
easyDAClient1.ClientMode.AllowAsynchronousMethod = false;
easyDAClient1.ClientMode.AllowSynchronousMethod = true;
easyDAClient1.ClientMode.DesiredMethod =
    OpcLabs.EasyOpc.DataAccess.DAReadWriteMethod.Synchronous;
// Wake up the OPC client with an initial read
easyDAClient1.ReadItemValue("", "Kepware.KEPServerEX.V5",
    "Channel1.PLC.POU_1.DigitalOutput000");
tcpListener.Start();
```
Figure 17. Socket server making a connection to the OPC server.
After that the main program waits for signals coming from the OPC server. When a signal arrives all input and output streams are opened. (Figure 18).
```csharp
tcpListener.Start();
while (true)
{
    // Wait for the Unity client to connect, then open the streams
    TcpClient tcpClient = tcpListener.AcceptTcpClient();
    NetworkStream ns = tcpClient.GetStream();
    StreamWriter sw = new StreamWriter(ns);
    StreamReader sr = new StreamReader(ns);
    sw.AutoFlush = true;
    bool stopped = false;
    while (!stopped)
    {
        if (ns.DataAvailable)
        {
            string command = sr.ReadLine();
            string answer;
            switch (command)
            {
                case "read DigitalOutput000":
                    answer = inputs[0][1]; // cached value read from the OPC server
                    break;
                // other read commands...
                case "write DigitalInput000 True":
                    easyDAClient1.WriteItemValue("", "Kepware.KEPServerEX.V5",
                        "Channel1.PLC.POU_1.DigitalInput000", "True");
                    answer = "";
                    break;
                // other write commands...
                case "quit":
                    answer = "quit";
                    stopped = true;
                    break;
                default:
                    answer = "default";
                    break;
            }
            sw.WriteLine(answer);
            Thread.Sleep(1);
        }
    }
}
```
Figure 18. Socket server opening the streams.
Next the program handles the commands arriving from the Unity client. After a READ command the requested value, previously read from the PLC through the OPC server, is returned. A WRITE command means that a value is written to the OPC server, which transfers it to the PLC. (Figure 19)
Figure 19. Socket server command handling.
4.1.4 Unity Simulation
The Unity model that is being simulated is a drilling station which has a movable sledge (Figure 20). This sledge is moved by the PLC's input ports 0-3, and it carries a brown cube. The drill is controlled with the PLC's input port 4, and it does not do any actual drilling: when it comes into contact with another object during the simulation, it just stops. The program inside the PLC does not allow movement and drilling at the same time.
Figure 20. The 3D model in Unity.
Commands that come from the PLC are applied to the Unity simulation with a script, and the values from Unity can be transferred to the PLC with this same script. In this particular model the commands arrive at a client script, and from there they are distributed to the different parts of the simulation. The client opens the streams and updates the outputs and inputs (Figure 21). The whole Unity client can be found in the attachments.
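The client and the socket server talk over a simple line-based text protocol ("read <tag>", "write <tag> <value>", "quit"), as seen in Figure 18. A minimal, hypothetical C# helper for composing such command lines could look as follows; the helper names are invented for this sketch, and the actual client script is listed in the attachments.

```csharp
using System;

// Hypothetical helpers that build the command lines the socket server parses.
// The server answers each command line with exactly one reply line.
string ReadCommand(string tag) => $"read {tag}";
string WriteCommand(string tag, bool value) => $"write {tag} {value}";

Console.WriteLine(ReadCommand("DigitalOutput000"));       // read DigitalOutput000
Console.WriteLine(WriteCommand("DigitalInput000", true)); // write DigitalInput000 True
```

Keeping the command strings in one place like this would make the client easier to adapt when tag names change between setups.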
4.2 Second Setup
This second setup is almost similar to the first one, but in this setup the OPC server is Kepware's KEPServerEX 5. The PLC was also changed for this setup, because Omron's CPM1 is not compatible with KEPServerEX 5; in this setup Omron's CJ1M is used. The PLC program and the Unity model are very similar to the first setup, but some small modifications were made, because the CJ1M has more outputs available than the CPM1. The socket server is also very similar to the one in the first setup; the only thing that has to be changed is the OPC address. In the first setup it was ("", "FernHillSoftware.PLCDataGateway", "localhost.Main.OmronExample.DigitalOutput000") for DigitalOutput000. In the second setup its address is ("", "Kepware.KEPServerEX.V5", "Channel1.PLC.DigitalOutput000").
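Because only these address strings differ between the setups, they can be collected into one place so that switching setups needs a single edit in the socket server. A sketch under that assumption (the strings follow the ones quoted above; the variable names are invented for this sketch):

```csharp
using System;

// OPC address pair (server ProgID, item path) per setup; collecting
// them here keeps the rest of the socket server code unchanged.
const string Setup1Server = "FernHillSoftware.PLCDataGateway";
const string Setup1Item   = "localhost.Main.OmronExample.DigitalOutput000";
const string Setup2Server = "Kepware.KEPServerEX.V5";
const string Setup2Item   = "Channel1.PLC.DigitalOutput000";

bool useSetup2 = true; // flip this when changing the OPC server
string server = useSetup2 ? Setup2Server : Setup1Server;
string item   = useSetup2 ? Setup2Item   : Setup1Item;
Console.WriteLine($"{server}: {item}");
```

The same idea extends to the other tags, whose paths differ from these only in the tag name.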
4.2.1 OPC Server, Kepware
The interface of KEPServerEX5 is similar to the PLC Data Gateway development environment (Figure 22). This server has five digital inputs and five digital outputs, which are located in a device called PLC, connected to Channel1.
Figure 22. KEPServerEX.
In the Tools tab there is “Launch OPC Quick Client”, and clicking it starts the OPC Quick Client (Figure 23). All tag values can be monitored here, and the connection quality of the tags can be checked: if it is bad, it can be improved in the channel properties by setting the right COM ID, baud rate, data bits, parity and stop bits. Adjusting the request timeout in the device properties might also help.
Figure 23. OPC quick client.
4.3 Third Setup
In this setup, a soft PLC was used instead of a physical PLC. A soft PLC is just software, but it has the same functions and capabilities as a normal PLC. The soft PLC used here was Beckhoff’s, and it was programmed with a ladder diagram using TwinCAT 3.
Even though all of these PLCs were programmed with the same programming language, ladder diagram, the layout is still a little different, especially in the I/O lists. The actual program is basically the same one that was used in the Omron PLCs.
The OPC server used in this setup was also Kepware’s KEPServerEX. The only major difference between this setup and setup 2 is that when using KEPServerEX’s Beckhoff TwinCAT driver, tags cannot be created manually; they must be auto-created. This is done in the device properties, under database creation and auto create. All the settings must be correct, otherwise this will not work.
5 Testing & Results
To determine which of these setups was the most successful, tests were done to find out which setup had the fastest response time. Setup 2 was not included, because it was not able to send signals from Unity back to the OPC server and the PLC, so only setups 1 and 3 were used in these tests. The program in the PLC was altered slightly for these experiments: it moves the sledge in the Unity model to the left until it hits a sensor, and when the sensor is activated the sledge changes direction. The picture below shows the program made for the Beckhoff, but the program for the Omron is basically the same.

The interval between activating the sensor and changing the moving direction was measured in these experiments. The measurements were done in Unity by adding Debug.Log twice in the script, so that it logged the system time with millisecond accuracy when the sensor was activated and when the sledge changed direction. The response time was then calculated manually from these two times.
```csharp
if (Input.GetAxis("KelkkaHorizontal") > 0 || Client.Inputs[0][1].Equals("True"))
{
    movingState = MovingState.Left;
    DateTime now = DateTime.Now;
    Debug.Log(now.ToString("HH:mm:ss.fff")); // log the time with millisecond accuracy
}
```
Figure 27. A code inserted to "kelkka ylempi" script, which gives a time when the sledge changes direction.
```csharp
if (Anturinkytkenta == InputNumber.Input_2)
{
    DateTime now = DateTime.Now;
    Debug.Log(now.ToString("HH:mm:ss.fff")); // log the time with millisecond accuracy
}
```
Figure 28. A code inserted to "Anturit" script, which gives a time when the sensor is activated.
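The manual calculation of the response time from the two logged timestamps can be illustrated as follows. The `HH:mm:ss.fff` timestamp format is an assumption made for this sketch; the thesis only states that the system time was logged with millisecond accuracy.

```python
from datetime import datetime

def response_time_ms(sensor_time, direction_time, fmt="%H:%M:%S.%f"):
    """Difference between the two logged times, in milliseconds."""
    t0 = datetime.strptime(sensor_time, fmt)
    t1 = datetime.strptime(direction_time, fmt)
    return (t1 - t0).total_seconds() * 1000.0

# e.g. sensor activated at 12:00:01.000, direction changed at 12:00:01.049
print(response_time_ms("12:00:01.000", "12:00:01.049"))  # -> 49.0
```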
These tests were performed twenty times for each setup. The setups were also run on different PCs, so the results are not fully comparable, but they are directional. The results are shown in milliseconds. The first setup also failed twice to change direction when the sensor was activated: its response time was so slow that the sledge was able to pass the sensor before the signal to change direction reached Unity.
<table>
<thead>
<tr>
<th>PLC</th>
<th>BECKHOFF</th>
<th>OMRON</th>
</tr>
</thead>
<tbody>
<tr>
<td>TEST 1</td>
<td>49</td>
<td>825</td>
</tr>
<tr>
<td>TEST 2</td>
<td>55</td>
<td>1122</td>
</tr>
<tr>
<td>TEST 3</td>
<td>49</td>
<td>622</td>
</tr>
<tr>
<td>TEST 4</td>
<td>49</td>
<td>801</td>
</tr>
<tr>
<td>TEST 5</td>
<td>65</td>
<td>639</td>
</tr>
<tr>
<td>TEST 6</td>
<td>33</td>
<td>534</td>
</tr>
<tr>
<td>TEST 7</td>
<td>49</td>
<td>465</td>
</tr>
<tr>
<td>TEST 8</td>
<td>33</td>
<td>935</td>
</tr>
<tr>
<td>TEST 9</td>
<td>49</td>
<td>638</td>
</tr>
<tr>
<td>TEST 10</td>
<td>66</td>
<td>699</td>
</tr>
<tr>
<td>TEST 11</td>
<td>66</td>
<td>1366</td>
</tr>
<tr>
<td>TEST 12</td>
<td>49</td>
<td>886</td>
</tr>
<tr>
<td>TEST 13</td>
<td>33</td>
<td>915</td>
</tr>
<tr>
<td>TEST 14</td>
<td>33</td>
<td>798</td>
</tr>
<tr>
<td>TEST 15</td>
<td>49</td>
<td>733</td>
</tr>
<tr>
<td>TEST 16</td>
<td>49</td>
<td>493</td>
</tr>
<tr>
<td>TEST 17</td>
<td>49</td>
<td>835</td>
</tr>
<tr>
<td>TEST 18</td>
<td>66</td>
<td>689</td>
</tr>
<tr>
<td>TEST 19</td>
<td>65</td>
<td>1182</td>
</tr>
<tr>
<td>TEST 20</td>
<td>65</td>
<td>784</td>
</tr>
</tbody>
</table>
Average: 51.05 ms (Beckhoff), 798.05 ms (Omron)
Highest: 66 ms (Beckhoff), 1366 ms (Omron)
Lowest: 33 ms (Beckhoff), 465 ms (Omron)
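As a check, the reported averages and extremes can be recomputed directly from the twenty measurements in the table:

```python
# The twenty response-time measurements (ms) from the table above.
beckhoff = [49, 55, 49, 49, 65, 33, 49, 33, 49, 66,
            66, 49, 33, 33, 49, 49, 49, 66, 65, 65]
omron = [825, 1122, 622, 801, 639, 534, 465, 935, 638, 699,
         1366, 886, 915, 798, 733, 493, 835, 689, 1182, 784]

for name, data in [("BECKHOFF", beckhoff), ("OMRON", omron)]:
    avg = sum(data) / len(data)
    print(f"{name}: average {avg:.2f} ms, highest {max(data)} ms, lowest {min(data)} ms")
# BECKHOFF: average 51.05 ms, highest 66 ms, lowest 33 ms
# OMRON: average 798.05 ms, highest 1366 ms, lowest 465 ms
```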
As the results show, the third setup was almost 16 times faster than the first: its average response time was about 50 ms, whereas the first setup’s was about 800 ms. The large difference most likely originates from the use of a soft PLC in the third setup, whose response time is short enough that it is even possible to simulate pulse sensors in Unity.
6 Summary
The outcome of this project was somewhat surprising. It was expected that the soft PLC would be much faster than Omron’s CPM1, but the biggest surprise was how slow the normal PLC was: its response time was about 800 ms, when about 500 ms had been expected. The response time of Beckhoff’s soft PLC was also shorter than expected.
Unfortunately, Omron’s CJ1M did not work properly. It would have been interesting to see how long its response time would have been; most likely it would have fallen somewhere between the CPM1 and the soft PLC. Overall, these tests gave good knowledge of how much the response times depend on the PLC and the OPC server type.
This subject was overall very interesting and most likely very beneficial. It is possible that these kinds of virtual models will become more popular in the near future, and the use of Unity will most likely also increase. At the moment Unity is mainly used for making video games, so it has a lot of potential if these kinds of automation applications become more popular. It is also possible that the upcoming Unity 5 will have features that ease the creation of these kinds of setups.
APPENDICES
APPENDIX 1. Unity client 1/4.
APPENDIX 5. Socket Server reader 1/2.
APPENDIX 7. Socket Server program 1/5.
APPENDIX 8. Socket Server program 2/5.
APPENDIX 10. Socket Server program 4/5.
APPENDIX 11. Socket Server program 5/5.
APPENDIX 1. Unity client 1/4.
```csharp
using UnityEngine;
using System.IO;
using System.Net.Sockets;
using System.Collections.Generic;
public static class Client
{
private static List<Sensori> sensorit = new List<Sensori>();
public static string[][] inputs = new string[][] {
new string[] {"DigitalOutput000", "False"},
new string[] {"DigitalOutput001", "False"},
new string[] {"DigitalOutput002", "False"},
new string[] {"DigitalOutput003", "False"},
new string[] {"DigitalInput004", "False"},
new string[] {"DigitalInput005", "False"},
};
private static bool DigitalInput000 = false;
private static bool DigitalInput001 = false;
private static bool DigitalInput002 = false;
private static bool DigitalInput003 = false;
private static bool DigitalInput004 = false;
private static bool DigitalInput005 = false;
public static void KytkeSensori(Sensori sensori)
{
sensorit.Add(sensori);
}
public static void Paivita()
{
    TcpClient client = new TcpClient("localhost", 8221);
    // open the streams
    NetworkStream ns = client.GetStream();
    StreamWriter sw = new StreamWriter(ns);
    StreamReader sr = new StreamReader(ns);
    sw.AutoFlush = true;
    // update the outputs
    foreach (Sensori sensori in sensorit)
    {
        switch (sensori.Anturinkytkenta)
        {
            case Sensori.InputNumber.Input_0:
                if (sensori.Tila != DigitalInput000)
                {
                    DigitalInput000 = sensori.Tila;
                    sw.WriteLine("write DigitalInput000 " + DigitalInput000.ToString());
                    sr.ReadLine();
                }
                break;
            case Sensori.InputNumber.Input_1:
                if (sensori.Tila != DigitalInput001)
                {
                    DigitalInput001 = sensori.Tila;
                    sw.WriteLine("write DigitalInput001 " + DigitalInput001.ToString());
                    sr.ReadLine();
                }
                break;
            case Sensori.InputNumber.Input_2:
                if (sensori.Tila != DigitalInput002)
                {
                    DigitalInput002 = sensori.Tila;
                    sw.WriteLine("write DigitalInput002 " + DigitalInput002.ToString());
                    sr.ReadLine();
                }
                break;
            case Sensori.InputNumber.Input_3:
                if (sensori.Tila != DigitalInput003)
                {
                    DigitalInput003 = sensori.Tila;
                    sw.WriteLine("write DigitalInput003 " + DigitalInput003.ToString());
                    sr.ReadLine();
                }
                break;
            case Sensori.InputNumber.Input_4:
                if (sensori.Tila != DigitalInput004)
                {
                    DigitalInput004 = sensori.Tila;
                    sw.WriteLine("write DigitalInput004 " + DigitalInput004.ToString());
                    sr.ReadLine();
                }
                break;
            case Sensori.InputNumber.Input_5:
                if (sensori.Tila != DigitalInput005)
                {
                    DigitalInput005 = sensori.Tila;
                    sw.WriteLine("write DigitalInput005 " + DigitalInput005.ToString());
                    sr.ReadLine();
                }
                break;
        }
    }
    // update the inputs
    for (int i = 0; i < inputs.Length; i++)
    {
        sw.WriteLine("read " + inputs[i][0]);
        inputs[i][1] = sr.ReadLine();
    }
    // disconnect
    sw.WriteLine("quit");
    sr.ReadLine();
    // close the streams
    sr.Close();
    sw.Close();
    ns.Close();
    client.Close();
}
}
```
APPENDIX 5. Socket Server reader 1/2.
```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;

namespace ConsoleApplication1
{
    public class Reader
    {
        public Reader()
        {
            Thread thread = new Thread(new ThreadStart(Palvita));
            thread.Start();
        }

        private void Palvita()
        {
            easyDAClient1.ClientMode.AllowAsynchronousMethod = false; // these three lines might be unnecessary.
            easyDAClient1.ClientMode.AllowSynchronousMethod = true;
            // Wake up OPC client
            easyDAClient1.ReadItemValue("", "FernHillSoftware.PLCDataGateway", "localhost.Main.OmronExample.DigitalOutput000");
            while (true)
            {
                Program.inputs[0][1] = easyDAClient1.ReadItemValue("", "FernHillSoftware.PLCDataGateway", "localhost.Main.OmronExample.DigitalInput000").ToString();
                Program.inputs[1][1] = easyDAClient1.ReadItemValue("", "FernHillSoftware.PLCDataGateway", "localhost.Main.OmronExample.DigitalInput001").ToString();
                Program.inputs[3][1] = easyDAClient1.ReadItemValue("", "FernHillSoftware.PLCDataGateway", "localhost.Main.OmronExample.DigitalInput003").ToString();
                Program.inputs[4][1] = easyDAClient1.ReadItemValue("", "FernHillSoftware.PLCDataGateway", "localhost.Main.OmronExample.DigitalInput004").ToString();
                Program.inputs[5][1] = easyDAClient1.ReadItemValue("", "FernHillSoftware.PLCDataGateway", "localhost.Main.OmronExample.DigitalInput005").ToString();
            }
        }
    }
}
```
APPENDIX 7. Socket Server program 1/5.
```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace ConsoleApplication1
{
static class Program
{
public static string[][] inputs = new string[][]
{
new string[] {"DigitalOutput000", "False"},
new string[] {"DigitalOutput001", "False"},
new string[] {"DigitalOutput002", "False"},
new string[] {"DigitalOutput003", "False"},
new string[] {"DigitalInput004", "False"},
new string[] {"DigitalInput005", "False"},
};
static void Main(string[] args)
{
Reader reader = new Reader();
IPAddress ipAddress = IPAddress.Parse("127.0.0.1");
TcpListener tcpListener = new TcpListener(ipAddress, 8221);
}
}
}
```
APPENDIX 8. Socket Server program 2/5.
```csharp
// Wake up OPC client
easyDAClient1.ReadItemValue("", "FernHillSoftware.PLCDataGateway", "localhost.Main.OmronExample.DigitalOutput000");
tcpListener.Start();
while (true)
{
TcpClient tcpClient = tcpListener.AcceptTcpClient();
NetworkStream ns = tcpClient.GetStream();
StreamWriter sw = new StreamWriter(ns);
StreamReader sr = new StreamReader(ns);
sw.AutoFlush = true;
bool stopped = false;
while (!stopped)
{
if (ns.DataAvailable)
{
string command = sr.ReadLine();
string answer;
switch (command)
{
// Code for different commands...
}
}
}
tcpClient.Close();
}
```
```csharp
switch (command)
{
    case "read DigitalOutput000":
        answer = inputs[0][1];
        break;
    case "read DigitalOutput001":
        answer = inputs[1][1];
        break;
    case "read DigitalOutput002":
        answer = inputs[2][1];
        break;
    case "read DigitalOutput003":
        answer = inputs[3][1];
        break;
    case "read DigitalInput004":
        answer = inputs[4][1];
        break;
    case "read DigitalInput005":
        answer = inputs[5][1];
        break;
    case "write DigitalInput000 True":
        answer = "";
        break;
    case "write DigitalInput000 False":
        answer = "";
        break;
    case "write DigitalInput001 True":
```
APPENDIX 10. Socket Server program 4/5.
```csharp
    case "write DigitalInput001 True":
        answer = "";
        break;
    case "write DigitalInput001 False":
        answer = "";
        break;
    case "write DigitalInput002 True":
        answer = "";
        break;
    case "write DigitalInput002 False":
        answer = "";
        break;
    case "write DigitalInput003 True":
        answer = "";
        break;
    case "write DigitalInput003 False":
        answer = "";
        break;
    case "write DigitalInput004 True":
        answer = "";
        break;
    case "write DigitalInput004 False":
        answer = "";
        break;
```
APPENDIX 11. Socket Server program 5/5.
```csharp
    case "write DigitalInput005 True":
        answer = "";
        break;
    case "write DigitalInput005 False":
        answer = "";
        break;
    case "quit":
        answer = "quit";
        stopped = true;
        break;
    default:
        answer = "default";
        break;
}
sw.WriteLine(answer);
} // end if (ns.DataAvailable)
Thread.Sleep(1);
} // end while (!stopped)
sr.Close();
sw.Close();
ns.Close();
tcpClient.Close();
```
Peer reviewed version
Link to published version (if available): 10.1109/FUZZ-IEEE.2016.7737808
Link to publication record in Explore Bristol Research
PDF-document
This is the author accepted manuscript (AAM). The final published version (version of record) is available online via IEEE at http://ieeexplore.ieee.org/document/7737808/. Please refer to any applicable terms of use of the publisher.
University of Bristol - Explore Bristol Research
General rights
This document is made available in accordance with publisher policies. Please cite only the published version using the reference above. Full terms of use are available: http://www.bristol.ac.uk/pure/about/ebr-terms
A Virtual Machine for Event Sequence Identification using Fuzzy Tolerance
Trevor Martin \(^{a,b}\)
\(^{a}\) Machine Intelligence and Uncertainty Lab, Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK
Ben Azvine \(^{b}\)
\(^{b}\) Security Futures Lab, BT TSO, Adastral Park, Ipswich, IP5 3RE, UK
Abstract—Analysing event logs and identifying multiple overlapping sequences of events is an important task in web intelligence and in other applications involving data streams. It is ideally suited to a collaborative intelligence approach, where humans provide insight and machines perform the repetitive processing and data collection. A fuzzy approach allows flexible definition of the relations which link events into a sequence. In this paper we describe a virtual machine which enables a previously published expandable sequence pattern format to be represented as virtual machine instructions, which can filter event streams and identify fuzzily related sequences.
Keywords—Fuzzy Event Sequence Identification, Fuzzy Virtual Machine, Collaborative Intelligence
I. INTRODUCTION
Collaborative web intelligence is a combination of human expertise (to provide insight) with machine power (to provide repetitive processing and data gathering capabilities). Current web intelligence - in the form of applications such as search engines, recommender systems, e-commerce systems, etc. - is essentially machine-based, relying on the availability of a large quantity of data and sophisticated statistical machine learning methods to produce a predictive model. Such models have been successful in a range of fields, but can be criticised on a number of grounds. They generally do not enable human understanding of the underlying mechanisms, and exist essentially as black boxes where a set of attributes in a specific case leads to a predicted outcome for that case. Secondly, they rely on the existence of large collections of reliable data. We argue that statistical machine learning is not adequate in situations where human expertise is required (either to build or to understand the model of a process), or where reliable data is not available. For example, in detecting and combating cyber-attacks, reliance on statistical machine learning is often inadequate. Almost by definition, a successful cyber-attack needs to involve novel (hitherto unseen) features and is thus out of scope for systems which require large scale data collection - for example, spectrum.ieee.org/telecom/security/the-real-story-of-stuxnet describes how a number of so-called zero-day vulnerabilities were exploited.
In such cases, collaborative intelligence offers an improvement by combining the processing powers and visualisation provided by machines with the interpretive skills, insight and lateral thinking provided by human analysts. In order to successfully implement a collaborative intelligent system, it is necessary to exchange knowledge between the components - in particular between humans and machines. We argue that there is a fundamental difference in the knowledge representations, where machine processing is usually centred on well-defined entities and relations, ranging from the flat table structures of database systems through graph-based representations and up to ontological approaches involving formal logics. On the other hand, human language and communication is based on a degree of vagueness and ambiguity that leads to an efficient transmission of information between humans without the need for precise definition of every term used. Even quantities that can be measured precisely (height of a person or building, volume of a sound, amount of rainfall, colour of an object, etc.) are usually described in non-precise terms such as tall, loud, quite heavy, dark green, etc. More abstract properties such as beautiful landscape, delicious food, pleasant weather, clear documentation, corporate social responsibility, are essentially ill-defined, whether they are based on a holistic assessment or reduced to a combination of lower-level, measurable quantities. Zadeh’s initial formulation of fuzzy sets [1] was inspired primarily by the flexibility of definitions in natural language.
Linking events into sequences is an area in which collaborative intelligence can play a role. The notion of linkages between events is inherently uncertain in many cases - examples such as internet logs, physical access logs, transaction records, email and phone records all contain multiple overlapping sequences of events related by different attributes. Clearly in the case of phone records, the calls made by a specific user (or from a specific phone) would form a sequence - but it is often possible to link individual events in different ways. Specific problems in extracting sequences of related events include determination of what makes events “related”, how to find groups of “similar” sequences, identification of typical sequences, and detection of sequences that deviate from previous patterns. This is strongly linked to the concept of information granulation introduced by Zadeh [2] to formalise the process of dividing a group of objects into sub-groups (granules) based on “indistinguishability, similarity, proximity or functionality”. In this view, a granule is a fuzzy set whose members are (to a degree) equivalent. In a similar manner, humans are good at dividing events into related groups, both from the temporal perspective (event A occurred a few minutes before event B but involves the same entities) and from the
Multiple sequences of events can be compactly represented using a directed graph (DASG), such that common initial and final sub-sequences are combined. User-supplied code defines the similarities between events (allowing them to be grouped together on the same path) and relations which indicate that one event follows another in a sequence.
We assume that a DASG representation of various ordered sequences of events is available, together with a suitable source of data and user-supplied code to categorise events and sequence steps. Each method in this code takes specified arguments from the data, performs appropriate computation (for example, to determine that two events are sufficiently close in time to belong to the same sequence, or whether one event is allowed to follow another). Each method returns a value in the range \([0, 1]\) using the normal fuzzy interpretation. Since the fuzzy matching is under control of the user, we do not cover this aspect in depth. It is important to note that fuzziness is fundamental to the virtual machine operation.
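As an illustrative sketch only (not the paper's actual code), a user-supplied method implementing a fuzzy temporal-proximity relation might look like the following. The function name and the 5 s / 60 s breakpoints are assumptions for this example; the only property taken from the paper is that each method returns a membership degree in \([0, 1]\).

```python
def close_in_time(t1, t2, full=5.0, none=60.0):
    """Degree to which two event times (in seconds) belong to the same sequence:
    1.0 for gaps up to `full` seconds, falling linearly to 0.0 at `none` seconds."""
    gap = abs(t1 - t2)
    if gap <= full:
        return 1.0
    if gap >= none:
        return 0.0
    return (none - gap) / (none - full)

print(close_in_time(10.0, 12.0))   # -> 1.0  (well within 5 s)
print(close_in_time(0.0, 60.0))    # -> 0.0  (too far apart)
print(close_in_time(0.0, 32.5))    # -> 0.5  (halfway along the linear slope)
```

Because the virtual machine only sees the returned degree, the user is free to define "related" however the application demands, which is exactly the flexibility the fuzzy approach is meant to provide.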
The virtual machine is capable of reading an interleaved series of events and matching them to the sequence patterns, producing (on demand) lists of event sequences that match complete sequence patterns, lists of event sequences that match initial phases of sequence patterns, and lists of events that do not match any known patterns. The process of translating the DASG to virtual machine code is not described here, but can be achieved using standard techniques.
B. Related Research
The major research fields linked to this work are
(i) compilation of finite state machines (FSM) into executable code or a virtual machine. This is a well-developed area of computer science and is used extensively in compilers, string processing, speech recognition, etc. See [5] for a tutorial description of finite state machines. Open source software such as http://smc.sourceforge.net exists to convert FSM specifications into most common programming languages.
(ii) implementation of virtual machines for logic programs, such as the Warren Abstract Machine (WAM) [6]. The Fril Abstract Machine [7, 8] is of particular relevance to the work described here as the representation used in SIFT incorporates fuzzy matching. Fril is the only logic programming compiler with a mechanism to handle uncertainty integrated at the lowest level.
(iii) compilation of graphical models, although the notion of “compilation” here refers to translation into library routine calls, rather than to a dedicated virtual machine [9].
Our work uses a much more general representation than a finite state machine. In particular, the use of fuzzy values and multiple labels to define an edge distinguishes the approach from finite state machines. Fuzzy labels mean it may be necessary to revisit a node and test alternative edges from it. Multiple labels allow more complex branching behaviour than is possible in a finite state machine. As a consequence of these differences, the execution model described here is very different to a finite state machine. The notion of restarting computation and multiple threaded execution is not common in finite state machines.
Additionally, it is a relatively simple task to dynamically alter the virtual machine code to reflect changes in the DASG model of event sequences. There is a close (essentially, one-to-one) correspondence between the DASG representation of event sequences and sections of code for the virtual machine. Broadly speaking, edges correspond to short instruction sequences and nodes correspond to points at which execution may be suspended.
Finally, the virtual machine allows reconstruction of event data corresponding to recognised sequences and to unrecognised sequences by examination of the thread execution records.
Virtual machines for logic programs are complex, reflecting the fact that they implement complete programming languages. This DASG machine is a much simpler design, meaning that the use of multiple execution threads is easier.
Graphical models (particularly Bayesian nets) are normally implemented using specialist code, and “compilation” in this context typically refers to a translation process, whereby a specification of a graphical model is converted to a sequence of program calls to pre-written library functions. Initial construction of the graphical model and its use in simulation is reliant on the user’s statistical knowledge, the collection of large quantities of data and an assumption that past performance can be used to predict future behaviour. In contrast, the work described here does not rely on statistics to form the initial network of event sequences, and allows the graph to reconfigure easily, as new patterns are incorporated. The virtual machine gives a simple execution model corresponding to the DASG and does not rely on complex library functions.
C. Sample Data
A small subset of data from the 2009 VAST challenge was used in [3] to illustrate DASG formation. A similar subset (Table 1) is used here, but events are listed in time order (and event IDs are changed to reflect the ordering). To illustrate features of the system, row 8 has been changed so that the sequence for employee 10 no longer matches any pattern and an event has been added at row 20 which cannot be matched to any initial pattern step. The data is drawn from attributes
<table>
<thead>
<tr>
<th>Event ID</th>
<th>Date</th>
<th>Time</th>
<th>Employee</th>
<th>Entrance</th>
<th>Direction</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Jan-2</td>
<td>07:30</td>
<td>10</td>
<td>b</td>
<td>in</td>
</tr>
<tr>
<td>2</td>
<td>Jan-2</td>
<td>09:30</td>
<td>11</td>
<td>b</td>
<td>in</td>
</tr>
<tr>
<td>3</td>
<td>Jan-2</td>
<td>10:20</td>
<td>11</td>
<td>c</td>
<td>in</td>
</tr>
<tr>
<td>4</td>
<td>Jan-2</td>
<td>13:20</td>
<td>11</td>
<td>c</td>
<td>out</td>
</tr>
<tr>
<td>5</td>
<td>Jan-2</td>
<td>13:30</td>
<td>10</td>
<td>b</td>
<td>in</td>
</tr>
<tr>
<td>6</td>
<td>Jan-2</td>
<td>14:10</td>
<td>10</td>
<td>c</td>
<td>in</td>
</tr>
<tr>
<td>7</td>
<td>Jan-2</td>
<td>14:10</td>
<td>11</td>
<td>c</td>
<td>in</td>
</tr>
<tr>
<td>8</td>
<td>Jan-2</td>
<td>14:40</td>
<td>10</td>
<td>c</td>
<td>out</td>
</tr>
<tr>
<td>9</td>
<td>Jan-2</td>
<td>16:20</td>
<td>11</td>
<td>c</td>
<td>out</td>
</tr>
<tr>
<td>10</td>
<td>Jan-3</td>
<td>09:00</td>
<td>12</td>
<td>b</td>
<td>in</td>
</tr>
<tr>
<td>11</td>
<td>Jan-3</td>
<td>09:20</td>
<td>11</td>
<td>b</td>
<td>in</td>
</tr>
<tr>
<td>12</td>
<td>Jan-3</td>
<td>10:20</td>
<td>12</td>
<td>c</td>
<td>in</td>
</tr>
<tr>
<td>13</td>
<td>Jan-3</td>
<td>10:40</td>
<td>10</td>
<td>c</td>
<td>in</td>
</tr>
<tr>
<td>14</td>
<td>Jan-3</td>
<td>13:00</td>
<td>12</td>
<td>c</td>
<td>out</td>
</tr>
<tr>
<td>15</td>
<td>Jan-3</td>
<td>13:00</td>
<td>14</td>
<td>c</td>
<td>out</td>
</tr>
<tr>
<td>16</td>
<td>Jan-3</td>
<td>14:30</td>
<td>12</td>
<td>c</td>
<td>in</td>
</tr>
<tr>
<td>17</td>
<td>Jan-3</td>
<td>14:40</td>
<td>10</td>
<td>c</td>
<td>in</td>
</tr>
<tr>
<td>18</td>
<td>Jan-3</td>
<td>15:10</td>
<td>12</td>
<td>c</td>
<td>out</td>
</tr>
<tr>
<td>19</td>
<td>Jan-3</td>
<td>16:50</td>
<td>10</td>
<td>c</td>
<td>out</td>
</tr>
<tr>
<td>20</td>
<td>Jan-4</td>
<td>06:00</td>
<td>12</td>
<td>b</td>
<td>in</td>
</tr>
</tbody>
</table>
- **Employee** = set of employee ids = \{10, 11, 12\}
- **Date** = date / time of event
- **Entrance points** = \{B - building, C - classified section\}
- **Access direction** = \{in, out\}
and represents movement of employees in and out of a building (B) with a swipecard barrier on entrance but not on exit. The building contains a classified area (C) with swipecard access on entrance and exit. Tailgating (following another employee without swiping a card) is possible. We use the same user-defined relations as [3]. For a candidate sequence of $n$ events:
$$ S_i = (o_{i1}, o_{i2}, \ldots, o_{in}) $$
we define the following computed quantities:
- **ElapsedTime** $\Delta T_{ij} = Time(o_{ij}) - Time(o_{ij-1})$
- **StartTime** $T_{i1} = Time(o_{i1})$

and restrictions (for $j > 1$):

- $Date(o_{ij}) = Date(o_{ij-1})$
- $0 < Time(o_{ij}) - Time(o_{ij-1}) \leq T_{thresh}$
- $Emp(o_{ij}) = Emp(o_{ij-1})$
- $(Action(o_{ij-1}), Action(o_{ij})) \in AllowedActions$

where $Action(o_{ij}) = (Entrance(o_{ij}), Direction(o_{ij}))$
and $T_{thresh}$ specifies how close events must be to form part of the same sequence. The relation $AllowedActions$ is given by the following table (row = first action, column = next action)
<table>
<thead>
<tr>
<th></th>
<th>b in</th>
<th>c in</th>
<th>c out</th>
</tr>
</thead>
<tbody>
<tr>
<td>b in</td>
<td>x</td>
<td>x</td>
<td></td>
</tr>
<tr>
<td>c in</td>
<td></td>
<td></td>
<td>x</td>
</tr>
<tr>
<td>c out</td>
<td></td>
<td>x</td>
<td></td>
</tr>
</tbody>
</table>
These constraints can be summarised as
- events in a single sequence refer to the same employee
- successive events in a single sequence conform to allowed transitions between locations and are on the same day, within a specified time of each other. We choose $T_{thresh} = 8$ (this ensures anything more than 8 hours after the last event is a new sequence).

Note that the allowed transitions are defined by a human expert. In an environment where “tailgating” occurs commonly, it is likely that learning from data would see this as normal behaviour.

1 http://hcil2.cs.umd.edu/newvarepository/benchmarks.php
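These restrictions can be sketched in code. The following is a hypothetical Python rendering (function and field names are illustrative); the AllowedActions pairs listed are those consistent with the sample data in Table 1:

```python
# Allowed (previous action, next action) pairs, reconstructed from the
# sample data: tailgated re-entry into the building, entry to the
# classified area, and movement in/out of the classified area.
ALLOWED_ACTIONS = {
    (("b", "in"), ("b", "in")),
    (("b", "in"), ("c", "in")),
    (("c", "in"), ("c", "out")),
    (("c", "out"), ("c", "in")),
}
T_THRESH = 8  # hours

def may_follow(prev, curr):
    """Check the restrictions for successive events in one sequence.
    prev / curr are dicts with keys: date, time (hours), emp,
    entrance, direction."""
    if curr["emp"] != prev["emp"]:          # same employee
        return False
    if curr["date"] != prev["date"]:        # same day
        return False
    if not (0 < curr["time"] - prev["time"] <= T_THRESH):
        return False                        # within the time threshold
    prev_action = (prev["entrance"], prev["direction"])
    curr_action = (curr["entrance"], curr["direction"])
    return (prev_action, curr_action) in ALLOWED_ACTIONS
```

For instance, events 2 and 3 of Table 1 (employee 11 entering the building, then the classified area) satisfy every restriction, so they may belong to one sequence.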
We see from events 2, 3, 4, 7, 9 that employee 11 enters the building at approx 9:00 (rounding times to the hour), enters the classified area at 10:00 and leaves it after 3 hours, re-enters at 14:00, leaving 2 hours later. This corresponds to path S-5-6-12-13-14-E in the graph (Fig 2).
III. THE VIRTUAL MACHINE
The DASG is compiled to a sequence of instructions for the virtual machine. The instructions contain an initialisation section (labelled LS), an acceptance section (labelled L0), and code corresponding to the nodes and edges in the graph. A distinguished code label <E> denotes the final edge in the graph. Given a valid graph, the corresponding virtual machine code can be generated straightforwardly. It is easy to make small optimisations (such as re-ordering operations so that instructions most likely to fail are executed first). Other obvious enhancements (not described here) include:
- use of an index table or switching code to select best threads, given event data
- time-out scheduler (assuming events arrive in real time or in temporal order, this causes threads to fail when the time since their last event exceeds a threshold)
- addition of arbitrary code to graph nodes (for instance, to raise an alert)
Each sequence of events is represented by an execution thread. A partially recognised sequence corresponds to a suspended or executing thread; fully recognised sequences or rejected (unrecognised) sequences correspond to terminated threads.
A thread is represented by a small set of registers and a stack plus queue, and suspends execution once it has consumed relevant data. Execution of a thread terminates successfully if a complete sequence is identified. Unsuccessful termination represents a set of events which was not recognised as a sequence. Lists of executing / suspended threads and terminated threads are maintained. If not terminated, the thread is either executing or suspended (on the open sequence list). The return values from thread execution are:
- SUSPENDED-SUCCESS
- SUSPENDED-FAIL
- TERMINATE-SUCCESS
- TERMINATE-FAIL
Each thread has an associated degree of match, depending on how well the event data matches the fuzzy patterns used to describe the sequence. If this value falls below a specified threshold during thread execution, the computation is unwound and restarted at a previously unconsidered path from a branching node (with outdegree $\geq 2$).
The virtual machine consists of registers, storage areas for runtime structures (stacks etc.) and code made up of instructions which operate on the registers and storage. The registers and runtime structures are described below, with the virtual machine instructions listed in Fig 1.
Registers

- $args[0 \ldots n-1, n \ldots m]$ : (typed) argument registers corresponding to a row of event data. Registers from $n$ upwards are used as working storage but are not saved on the stack.
- $N$ (NextChoice) : instruction label giving the alternative execution address if the current instruction fails. Can be null.
- $C$ (ContinuationInstruction) : instruction label indicating the next step for execution when new data arrives and is accepted by this thread. Can be null.
- $M$ (MatchDegree) : number in the interval $[0,1]$ giving the membership of the sequence on its matched path. Set by the XOF instruction.
- $CP$ : code pointer, indicating the current instruction (not saved on the stack).
- $StackTop$ : top frame on the stack.
- $UR$ (UserReturn) : returned result (match) from user code.
- $TS$ : thread status.
- $nodeArgs[0 \ldots n-1]$ : saved arguments in the top stack frame, accessed via $StackTop$.
- $n$ : number of arguments in the data table.
- $types[0 \ldots n-1]$ : data types in the data table.
- $Threshold$ : minimum value for $M$ (MatchDegree).

Runtime Structures

- $Stack$ : storage area for execution records (last in, first out).
- $RematchQueue$ : 0 or more sets of argument registers $0 \ldots n-1$, stored as a queue (first in, first out).
- $USL$ : unidentified sequence list.
- $OSL$ : open sequence list.
- $ISL$ : identified sequence list.

Each of the lists $USL$, $OSL$, $ISL$ is initially empty and supports addition and removal of specified sequence threads.
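The thread state described above can be sketched as follows (a hypothetical Python rendering; class and field names are illustrative, and only the register-saving behaviour of PUSH and POP is modelled):

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class ThreadStatus(Enum):
    """The four return values from thread execution."""
    SUSPENDED_SUCCESS = auto()
    SUSPENDED_FAIL = auto()
    TERMINATE_SUCCESS = auto()
    TERMINATE_FAIL = auto()

@dataclass
class Frame:
    """One stack frame: the registers saved by PUSH."""
    next_choice: object        # N: alternative label, may be None
    contin: object             # C: continuation label, may be None
    match_degree: float        # M
    args: list                 # argument registers args[0..n-1]

@dataclass
class Thread:
    """Register set and runtime structures of a single sequence thread."""
    next_choice: object = None
    contin: object = None
    match_degree: float = 1.0
    args: list = field(default_factory=list)
    stack: list = field(default_factory=list)          # last in, first out
    rematch_queue: list = field(default_factory=list)  # first in, first out

    def push(self):
        """PUSH: save N, C, M plus the n arguments in a new frame."""
        self.stack.append(Frame(self.next_choice, self.contin,
                                self.match_degree, list(self.args)))

    def pop(self):
        """POP: restore N, C, M and args from the top frame, then queue
        the saved arguments on the rematch queue (QOR)."""
        f = self.stack.pop()
        self.next_choice, self.contin = f.next_choice, f.contin
        self.match_degree, self.args = f.match_degree, list(f.args)
        self.rematch_queue.append(list(f.args))
```

The rematch queue allows events consumed on an abandoned path to be re-offered to the thread when it backtracks, which is the behaviour exploited by SLN.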
IV. VIRTUAL MACHINE EXECUTION
Execution consists of the steps shown in Fig 3, for each data row. For simplicity, the algorithm does not cater for threads that “time out”, i.e. partial event sequences that were last modified at a point exceeding the time threshold. Assuming that all events arrive in the correct temporal order, a simple extension to the execution model makes it possible to identify threads that can no longer be extended, so that they can be failed (and moved to the USL).
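Such a time-out extension might be sketched as follows (hypothetical Python; the thread and time representations are assumptions made for illustration):

```python
def expire_threads(osl, usl, now, t_thresh=8):
    """Time-out extension sketch: assuming events arrive in temporal
    order, any open thread whose last event is more than t_thresh hours
    old can never be extended, so it is failed (the TERMINATE-FAIL
    outcome) and moved from the OSL to the USL."""
    still_open = []
    for thread in osl:
        if now - thread["last_event_time"] > t_thresh:
            usl.append(thread)
        else:
            still_open.append(thread)
    osl[:] = still_open  # update the open sequence list in place
```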
DFR DequeueFromRematch
Copy content of argument registers from the front of rematch queue, and de-allocate the space used.
EXEC <label>
If <label> is null, execute FAIL. Otherwise, continue execution from the instruction labelled by <label>.
FAIL
reset MatchDegree to the value saved in the top stack frame
IF NextChoice is not null THEN
continue execution from the address given by NextChoice
ELSE
DO pop stack
UNTIL NextChoice is non-null or stack is empty
IF NextChoice is not null THEN
continue execution from NextChoice
ELSE // stack is empty
return TERMINATE-FAIL
ENDIF
ENDIF
POP
Reset registers N, C, M and args to saved values
set StackFrame to previous frame
QueueOnRematch (QOR) // copy saved args[0…n-1] to rematch queue
PUSH
Allocate StackFrame and save registers N, C, M plus n arguments
QOR
QueueOnRematch
Allocate space for argument registers 0…n-1 at the back of the rematch queue and copy content of argument registers to the newly allocated space
RIF <userMethod(typed arguments)> RejectIfFailure
Executes userMethod on the specified arguments. If the result of userMethod is <= Threshold, the thread suspends and returns the value SUSPENDED-FAIL
SLN SaveLiveNode
IF rematch queue is not empty THEN
DFR // dequeue a set of arguments from rematch queue
EXEC Contin // continue execution at address in the Contin register
ELSE // rematch queue is empty
PUSH // save all registers in new stack frame
IF Contin register is <E> THEN
terminate execution
return TERMINATE-SUCCESS
ELSE
suspend execution
return SUSPENDED-SUCCESS
ENDIF
ENDIF
TNA ThereIsNoAlternative
Writes NULL into NextChoice register
XOF <userMethod(typed arguments)> ExtendOrFail
Executes the user method with the specified arguments (from the arg registers and nodeArgs registers).
The return value is a number in the range [0,1] representing the data match.
If the return value is <= Threshold, execute FAIL.
Otherwise set MatchDegree = min(return value, MatchDegree) and continue with the next instruction.
XWA <L1> <L2> ExecuteWithAlternative
Writes <L2> into NextChoice register and passes control to <L1>
Fig. 1. Virtual machine instructions (listed alphabetically by abbreviated code), with abbreviated code, longer descriptive name if appropriate, arguments, and a brief description
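For illustration, the fuzzy-combination semantics of the XOF and RIF instructions might be sketched as follows (hypothetical Python; the threshold value and the string return codes are illustrative):

```python
THRESHOLD = 0.5  # illustrative value; the paper leaves Threshold user-defined

def xof(registers, user_result):
    """ExtendOrFail: user_result in [0,1] is the fuzzy match for this
    edge. At or below threshold the instruction fails (triggering the
    backtracking of FAIL); otherwise the running MatchDegree M becomes
    the minimum of the matches seen so far on this path."""
    if user_result <= THRESHOLD:
        return "FAIL"
    registers["M"] = min(user_result, registers["M"])
    return "CONTINUE"

def rif(registers, user_result):
    """RejectIfFailure: as XOF on failure, except that a failing match
    suspends the thread with SUSPENDED-FAIL instead of backtracking."""
    if user_result <= THRESHOLD:
        return "SUSPENDED-FAIL"
    return "CONTINUE"
```

Taking the minimum is the standard fuzzy conjunction, so the MatchDegree of a completed sequence is the weakest edge match along its path.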
A. Worked Example
We represent the sequences in Table 1 as a minimal DASG with edges labelled by event categorisations (see Fig 2).
This corresponds to the following virtual machine code. Labels (e.g. L1) correspond to graph nodes and comments are delimited by // and end of line.
**L0:**
```plaintext
// Thread Acceptance Code
RIF equalityCheck(a[3], nodeArgs[3]) // accept if day and emp-id match
RIF equalityCheck(a[1], nodeArgs[1])
a[7] = elapsedTime(a[2], nodeArgs[2])
RIF lessEqCheck(a[7], 8) // Time threshold = 8
RIF allowedAction(a[4], a[5], nodeArgs[4], nodeArgs[5])
EXEC Contin // accepted - match to next edge
```
**LS:**
```plaintext
// thread initialisation step
N (NextChoice) = null
C (Contin) = null
M (MatchDegree) = 1
RIF equalityCheck(a[4], b)
```
---
**L1:**
```plaintext
TNA
XOF equivCheck(a[2], 13)
XOF equalityCheck(a[4], b)
XOF equalityCheck(a[5], in)
C = L2
SLN // SaveLiveNode
```
**L2:**
```
// accepted - match to next edge
```
**L3:**
```
// accepted - match to next edge
```
**L4:**
```
// accepted - match to next edge
```
**L6:**
```
// accepted - match to next edge
```
---
After three rows of data have been read, the virtual machine state is shown in Fig 4. The top section of the figure shows the content of the sequence lists OSL, USL, ISL (respectively, the open, unidentified and identified sequence lists). The status of the registers for each thread is shown below, labelled T1, T2, … (for the first thread, second thread, etc.).
OSL T1, T2
USL empty
ISL empty
Thread T1
Stack
<table>
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>-</td>
<td>L5</td>
<td>1</td>
<td>1</td>
<td>Jan-2</td>
<td>07:30</td>
<td>10</td>
<td>b</td>
<td>in</td>
</tr>
</tbody>
</table>
RematchQueue : empty
Thread T2
Stack
<table>
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>-</td>
<td>L6</td>
<td>1</td>
<td>3</td>
<td>Jan-2</td>
<td>10:20</td>
<td>11</td>
<td>c</td>
<td>in</td>
</tr>
<tr>
<td>-</td>
<td>L5</td>
<td>1</td>
<td>2</td>
<td>Jan-2</td>
<td>09:30</td>
<td>11</td>
<td>b</td>
<td>in</td>
</tr>
</tbody>
</table>
RematchQueue : empty
Fig 4 Machine State after reading three rows of data
Figs 5 and 6 show the machine state after 10 and 20 rows (respectively) have been read. After 10 rows (Fig 5), thread 2 has terminated; this corresponds to the path S-5-6-12-13-14-E in the graph (Fig 2). This will be recognised on reading subsequent data, once the threshold time is exceeded. At this stage, thread 2 can be moved to a record of completed threads (sequences), processed to extract relevant data, or simply discarded according to the task requirements.
OSL T3
USL T1
ISL T2
Thread T1
Stack : Empty
RematchQueue
8 Jan-2 14:40 10 b in
6 Jan-2 14:10 10 c in
5 Jan-2 13:30 10 b in
1 Jan-2 07:30 10 b in
Thread T2
Stack
<table>
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>-</td>
<td><E></td>
<td>1</td>
<td>9</td>
<td>Jan-2</td>
<td>16:20</td>
<td>11</td>
<td>c</td>
<td>out</td>
</tr>
<tr>
<td>-</td>
<td>L13</td>
<td>1</td>
<td>7</td>
<td>Jan-2</td>
<td>14:10</td>
<td>11</td>
<td>c</td>
<td>in</td>
</tr>
<tr>
<td>-</td>
<td>L12</td>
<td>1</td>
<td>4</td>
<td>Jan-2</td>
<td>13:20</td>
<td>11</td>
<td>c</td>
<td>out</td>
</tr>
<tr>
<td>-</td>
<td>L6</td>
<td>1</td>
<td>3</td>
<td>Jan-2</td>
<td>10:20</td>
<td>11</td>
<td>c</td>
<td>in</td>
</tr>
<tr>
<td>-</td>
<td>L5</td>
<td>1</td>
<td>2</td>
<td>Jan-2</td>
<td>09:30</td>
<td>11</td>
<td>b</td>
<td>in</td>
</tr>
</tbody>
</table>
RematchQueue : empty
Thread T3
Stack
<table>
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>-</td>
<td>L5</td>
<td>1</td>
<td>10</td>
<td>Jan-3</td>
<td>09:00</td>
<td>12</td>
<td>b</td>
<td>in</td>
</tr>
</tbody>
</table>
RematchQueue : empty
Fig 5 Machine State after reading ten rows of data
READ data for next event into argument registers a[0 … n-1]
SET ThreadStatus to SUSPENDED-FAIL
WHILE (ThreadStatus == SUSPENDED-FAIL)
IF (OSL contains untried threads) THEN
select an untried thread and remove it from the OSL
ThreadStatus = execute thread from L0
ELSE // i.e. no threads accepted data
create new thread
ThreadStatus = execute thread starting from LS
ENDIF
IF ThreadStatus == TERMINATE-SUCCESS THEN
add thread to ISL
ELSE IF ThreadStatus == TERMINATE-FAIL THEN
add thread to USL
ELSE IF ThreadStatus == SUSPENDED-SUCCESS THEN
add thread to OSL
ELSE IF ThreadStatus == SUSPENDED-FAIL THEN
add thread to OSL
ENDIF
ENDWHILE
Fig 3 Execution steps for each row of data
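The steps of Fig 3 can be sketched as follows (hypothetical Python; `execute` and `new_thread` stand in for the compiled virtual machine code, and the single-new-thread guard is an assumption not present in the figure):

```python
def process_event(event, osl, usl, isl, new_thread, execute):
    """One pass of the Fig 3 dispatch loop for a single data row.
    execute(thread, entry_label, event) runs a thread from label L0
    (acceptance) or LS (initialisation) and returns one of the four
    thread statuses as a string; new_thread() builds a fresh thread.
    A guard limits the loop to at most one fresh thread per event,
    avoiding an endless loop when no pattern starts with the event."""
    untried, retained = list(osl), []
    osl.clear()
    status, started_new = "SUSPENDED-FAIL", False
    while status == "SUSPENDED-FAIL" and not started_new:
        if untried:
            thread, entry = untried.pop(0), "L0"
        else:  # no existing thread accepted the data
            thread, entry, started_new = new_thread(), "LS", True
        status = execute(thread, entry, event)
        if status == "TERMINATE-SUCCESS":
            isl.append(thread)
        elif status == "TERMINATE-FAIL":
            usl.append(thread)
        else:
            retained.append(thread)  # both SUSPENDED outcomes stay open
    osl.extend(retained)
    osl.extend(untried)
```

Each incoming event is thus offered to every open (untried) thread in turn, and only when all reject it does the machine start a new candidate sequence.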
OSL empty
USL T1, T6
ISL T2, T3, T4
Thread T1 (as before)
Thread T2 (as before)
Thread T3
Stack
<table>
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>J</td>
<td>18</td>
<td>Jan-3</td>
<td>15:10</td>
<td>12</td>
<td>c</td>
<td>out</td>
<td></td>
</tr>
<tr>
<td>-</td>
<td>L3</td>
<td>16</td>
<td>Jan-3</td>
<td>14:30</td>
<td>12</td>
<td>c</td>
<td>in</td>
<td></td>
</tr>
<tr>
<td>-</td>
<td>L6B</td>
<td>14</td>
<td>Jan-3</td>
<td>13:00</td>
<td>12</td>
<td>c</td>
<td>out</td>
<td></td>
</tr>
<tr>
<td>-</td>
<td>L6</td>
<td>12</td>
<td>Jan-3</td>
<td>10:20</td>
<td>12</td>
<td>c</td>
<td>in</td>
<td></td>
</tr>
<tr>
<td>-</td>
<td>L5</td>
<td>10</td>
<td>Jan-3</td>
<td>09:00</td>
<td>12</td>
<td>b</td>
<td>in</td>
<td></td>
</tr>
</tbody>
</table>
RematchQueue: empty
Thread T4
Stack
<table>
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td>J</td>
<td>19</td>
<td>Jan-3</td>
<td>16:50</td>
<td>10</td>
<td>c</td>
<td>out</td>
<td></td>
</tr>
<tr>
<td>-</td>
<td>L13</td>
<td>17</td>
<td>Jan-3</td>
<td>14:40</td>
<td>10</td>
<td>c</td>
<td>in</td>
<td></td>
</tr>
<tr>
<td>-</td>
<td>L12</td>
<td>15</td>
<td>Jan-3</td>
<td>14:00</td>
<td>10</td>
<td>c</td>
<td>out</td>
<td></td>
</tr>
<tr>
<td>-</td>
<td>L6</td>
<td>13</td>
<td>Jan-3</td>
<td>10:40</td>
<td>10</td>
<td>c</td>
<td>in</td>
<td></td>
</tr>
<tr>
<td>-</td>
<td>L5</td>
<td>11</td>
<td>Jan-3</td>
<td>09:20</td>
<td>10</td>
<td>b</td>
<td>in</td>
<td></td>
</tr>
</tbody>
</table>
RematchQueue: empty
Thread T5
Stack: Empty
RematchQueue
<table>
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>20</td>
<td>Jan-4</td>
<td>06:00</td>
<td>12</td>
<td>b</td>
<td>in</td>
</tr>
</tbody>
</table>
Fig 6 Machine state after reading 20 rows
V. SUMMARY
Defining and recognising meaningful event sequences is a complex task which often requires human expertise to group attributes and events into related categories, which tend to be fuzzy in nature. It is a key task in analysing many data sources, including activity logs of users interacting with web applications - and hence, it is a key enabler for web intelligence. Our previous work has described a way of storing event sequences in a compact directed graph format, providing an efficient incremental algorithm to update the graph with an unseen sequence. A human expert can easily add sequence patterns, even if these have not been seen in the data yet. This aspect particularly distinguishes our work from statistical machine learning. The work described in this paper illustrates how a virtual machine can be defined from the directed graph representation, enabling event streams to be filtered and classified according to the sequences identified. The virtual machine can be implemented in software or by means of configurable hardware.
REFERENCES
|
{"Source-Url": "https://research-information.bristol.ac.uk/files/109169238/Trevor_Martin_A_Virtual_Machine_for_Event_Sequence_Identification_using_Fuzzy_Tolerance.pdf", "len_cl100k_base": 9715, "olmocr-version": "0.1.50", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 26862, "total-output-tokens": 9752, "length": "2e13", "weborganizer": {"__label__adult": 0.0003509521484375, "__label__art_design": 0.0004117488861083984, "__label__crime_law": 0.0004117488861083984, "__label__education_jobs": 0.001056671142578125, "__label__entertainment": 0.00011426210403442384, "__label__fashion_beauty": 0.00018310546875, "__label__finance_business": 0.00032782554626464844, "__label__food_dining": 0.0004014968872070313, "__label__games": 0.0006833076477050781, "__label__hardware": 0.0026683807373046875, "__label__health": 0.0006661415100097656, "__label__history": 0.000335693359375, "__label__home_hobbies": 0.00016677379608154297, "__label__industrial": 0.0008144378662109375, "__label__literature": 0.00040602684020996094, "__label__politics": 0.0003364086151123047, "__label__religion": 0.0006008148193359375, "__label__science_tech": 0.23828125, "__label__social_life": 0.0001405477523803711, "__label__software": 0.01373291015625, "__label__software_dev": 0.73681640625, "__label__sports_fitness": 0.0002720355987548828, "__label__transportation": 0.0006532669067382812, "__label__travel": 0.00017070770263671875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 34170, 0.04936]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 34170, 0.20054]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 34170, 0.86339]], "google_gemma-3-12b-it_contains_pii": [[0, 1150, false], [1150, 6689, null], [6689, 9894, null], [9894, 18933, null], [18933, 23787, null], [23787, 26175, null], [26175, 27482, null], [27482, 30365, null], [30365, 34170, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 1150, true], [1150, 6689, null], [6689, 9894, null], [9894, 18933, null], [18933, 23787, null], [23787, 26175, null], [26175, 27482, null], [27482, 30365, null], [30365, 34170, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 34170, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 34170, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 34170, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 34170, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 34170, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 34170, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 34170, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 34170, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 34170, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 34170, null]], "pdf_page_numbers": [[0, 1150, 1], [1150, 6689, 2], [6689, 9894, 3], [9894, 18933, 4], [18933, 23787, 5], [23787, 26175, 6], [26175, 27482, 7], [27482, 30365, 8], [30365, 34170, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 34170, 0.23592]]}
|
olmocr_science_pdfs
|
2024-12-03
|
2024-12-03
|
eb3d4c9712878b39b0ba88e039168fe3b9f3af55
|
Practical Software and Systems Measurement (PSM)
Methods of Operation
Draft Version 2.6
November 2006
# Table of Contents
Revision History
1. **Charter of the PSM Project**
1.1 Background and Objectives of the PSM Project
1.2 Scope and Products of the PSM Project
1.3 Structure of the PSM Project
1.4 Roles and Responsibilities in the PSM Project
1.5 Relationship of the PSM Project to Other Groups
2. **Methods of Operation of the Executive Steering Committee**
2.1 Responsibilities
2.2 Meetings
2.3 Membership and Leader
2.4 Working Relationships
2.5 Decision Making
3. **Methods of Operation of the Technical Steering Group**
3.1 Responsibilities
3.2 Membership
3.3 Team Leadership
3.4 Reporting Methods
3.5 Decision Making
4. **Methods of Operation of the Core Sponsors**
4.1 Membership
5. **Methods of Operation of the PSM Project Manager**
5.1 Reporting Methods
5.2 Decision Making
6. **Methods of Operation of the PSM Support Center**
6.1 Training Support
6.2 Database Support and Reporting
6.3 Arrange Meetings
7. **Methods of Operation of the Technical Working Group**
7.1 Meetings
7.2 Team Leadership
7.3 Reporting Methods
7.4 Team Communication
7.5 Decision Making
7.6 Other Items
8. **Methods of Operation for Transition Organizations**
9. **Decisions of the PSM Project**
9.1 Handling Changes to Course Material
9.2 Criteria for New Systems and Software Measures and Indicators
9.3 Use of PSM Logo by TSG Members and Transition Organizations
Appendix A - Members of PSM Groups
Appendix B - Technical Working Group Members
Appendix C - PSM Risk Management Plan
Attachment C-1 - PSM Project Risk Profile
Appendix D - PSM Measurement Plan
D.1 Introduction
D.2 Project Description
D.3 Measurement Approach
D.4 Description of PSM Project Information Needs
D.5 PSM Project Measurement Specifications and Indicators
D.6 PSM Project Aggregation Structures
D.7 Indicators
D.8 Reporting
Revision History
<table>
<thead>
<tr>
<th>Revision</th>
<th>Date</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>1.0</td>
<td>7/21/99</td>
<td>Draft outline</td>
</tr>
<tr>
<td>1.1, 1.2</td>
<td>8/99</td>
<td>Updates after discussion</td>
</tr>
<tr>
<td>1.3</td>
<td>1/22/00</td>
<td>Updates from project history and general PSM experience</td>
</tr>
<tr>
<td>2.0</td>
<td>5/1/01</td>
<td>Update based on project changes</td>
</tr>
<tr>
<td>2.1</td>
<td>6/29/01</td>
<td>Update based on peer review; add Transition Organizations, qualified trainers, and TWG members</td>
</tr>
<tr>
<td>2.2</td>
<td>7/12/01</td>
<td>Update based on peer review comments</td>
</tr>
<tr>
<td>2.3</td>
<td>12/1/03</td>
<td>Update based on project changes; add new Transition Organizations, qualified trainers, and TWG members</td>
</tr>
<tr>
<td>2.4</td>
<td>6/10/04</td>
<td>Update based on peer review comments, and add new TWG members</td>
</tr>
<tr>
<td>2.5</td>
<td>8/1/04</td>
<td>Update based on peer review comments, and add new TWG members</td>
</tr>
<tr>
<td>2.5</td>
<td>10/26/04</td>
<td>Minor updates based on new TWG members (did not update version number)</td>
</tr>
<tr>
<td>2.6</td>
<td>02/05</td>
<td>Updates based on peer review comments</td>
</tr>
<tr>
<td>2.6</td>
<td>04/05</td>
<td>Minor updates based on new TWG members (did not update version number)</td>
</tr>
<tr>
<td>2.6</td>
<td>04/06</td>
<td>Minor updates based on new TWG members (did not update version number)</td>
</tr>
<tr>
<td>2.6</td>
<td>09/06</td>
<td>Minor updates based on new TWG members (did not update version number)</td>
</tr>
<tr>
<td>2.6</td>
<td>11/06</td>
<td>Minor updates based on RUP Plug-In Version # (did not update version number)</td>
</tr>
</tbody>
</table>
1. Charter of the PSM Project
This section describes the mission and objectives of the Practical Software and Systems Measurement (PSM) project, its scope and structure, the roles and responsibilities of those involved in the work, and how the project relates to other organizations and similar endeavors.
1.1 Background and Objectives of the PSM Project
In 1993, the Joint Logistics Commanders Joint Group on Systems Engineering (JLC/JGSE) established a measurement project to develop software measurement guidance and to transition that guidance into practice. In 1994, the project and its guidance were named Practical Software Measurement (PSM). Initial measures focused on software project management. In 1996, additional product measures were added, and a PSM Support Center was established. In 1997, sponsorship transitioned to OSD. Also in 1997, an ISO project on software measurement was established, and PSM was used as the base document for the new international standard: ISO/IEC 15939. In 1998, a formal transition program for PSM was established, and the PSM Support Center moved to the US Army, ARDEC. In 1999, systems engineering and product engineering measurement was added to the guidance. The international standard, ISO/IEC 15939, Software Measurement Process, was published in 2001, and Addison-Wesley published Practical Software Measurement as a book in 2002.
The PSM project objectives are to:
- Establish a proven process to implement a tailored information-driven measurement process for software and systems engineering management
- Provide a basis for objective communication and informed decision making
- Establish a foundation for organizational and executive-level performance management
The PSM project has achieved these objectives by:
- Defining measurement as a process, not a pre-defined list of graphs or reports
- Establishing a flexible measurement process that may be adapted to meet specific program and organizational information needs and objectives
- Supporting organizations to integrate measurement requirements into their management and development processes
1.2 Scope and Products of the PSM Project
PSM is a primary measurement and analysis process used by DoD, government, and commercial programs. The PSM technology is based on actual experience. PSM’s purpose is to 1) develop effective measurement practices that address software and systems technical and management information needs, and 2) transition into general use an integrated measurement approach that results in performance improvements. The PSM project defines and transitions measurement practices in these areas:
• Software development and maintenance projects
• Systems development and maintenance projects
• Process improvement projects
• Project, organization, and enterprise performance measurement
• Risk management
Although PSM defines a measurement process that can support many types of projects, it is not intended to define specific procedures for all situations, such as those needed to address project-specific information needs, different software domains, or individual system technologies. Instead, the PSM guidance provides guidelines that allow a user to tailor the PSM process to specific information needs.
The PSM project offers services and products to support a fully integrated measurement approach. The PSM project technical team is highly qualified to provide direct project support. Products are developed and improved incrementally by a joint government, industry, and academic technical working group, and are based on implementation experiences. Products are updated based on the technical consensus of best practices. The following PSM products have either been published or are in final form. For a detailed list of products, as well as other related information, refer to the PSM Detailed Information included as Attachment 1.
PSM Products Include:
• Practical Software Measurement: Objective Information for Decision Makers, Addison-Wesley, 2002 (Version 5 of the guidance)
• Practical Software and Systems Measurement Guidebook
• PSM Insight
PSM Services Include:
• PSM Training
• PSM Briefings
• PSM Workshops
• PSM Overview Courses
• PSM Advanced Course
• PSM Consulting
• Training in PSM Insight: One Tool for a Comprehensive Measurement Program
• PSM Insight Consulting
Other Related Products and Information Include:
• Measurement-Related Papers and Articles:
– Measures for DoD Software Product Lines
– Measuring System Interoperability
– Object-Oriented Measurement
– Measures in Support of Evolutionary Acquisition
– Applying PSM to Enterprise Measurement
– Making Measurement Work
– Measuring Tailoring Workshops
– Tailoring and Implementing an Organization Measurement Process
- PSM Measuring for Process Management and Improvement (MPM), April 1997
- ESx - a tool that collects coupling, maintenance, and size metrics
- Experience Reports
- Sample Measurement Specifications
- Supporting materials (i.e., marketing materials)
In addition, the following products have been developed based on the PSM process:
- Applying PSM to Enterprise Measurement, March 2003
- The CMMI Measurement and Analysis (M&A) Process implements the PSM process
The PSM Guidebook, PSM Insight tool, training materials, workshop materials, and supporting materials are available on the PSM web site or distributed on CD-ROM. The PSM book is available on Addison-Wesley’s web site: <http://www.awprofessional.com/>.
Other PSM products that are still in progress include:
- System of Systems Measurement White Paper (Draft), 8 July 2003
- Acquisition measurement - under development
These PSM products and services are directed to:
- Project team members, project managers, and project measurement experts
- Program managers and organization managers of supplier organizations
- Process improvement specialists
- Project managers, acquisition specialists, and organization management in acquisition organizations
PSM products have also been developed for internal management of the PSM project:
- Annual task summary with cost and schedule
- PSM Methods of Operation (this document)
- Transition Organization Package
- Executive Steering Committee and Technical Steering Group meeting minutes
- Contractual documents (SOW, IGCE, technical evaluations)
- Research and technical experience presentations at the annual PSM working group and users’ group conferences
- User input through the PSM web site
- PSM supporting materials (white papers, project summaries, etc.)
All products of the PSM project are version controlled and managed via a Data Management (DM) process.
1.3 Structure of the PSM Project
The PSM project consists of the key contributors described in Section 1.4 below.
1.4 Roles and Responsibilities in the PSM Project
The following table lists the primary responsibilities of the various roles within PSM. Current members of key groups are listed in Appendix A of the Methods of Operation.
<table>
<thead>
<tr>
<th>Role Names</th>
<th>Key Responsibilities</th>
</tr>
</thead>
<tbody>
<tr>
<td>Executive Steering Committee (ESC)</td>
<td>This group oversees the management of the PSM project. The ESC:</td>
</tr>
<tr>
<td></td>
<td>• Ensures PSM activities are consistent with PSM goals and objectives</td>
</tr>
<tr>
<td></td>
<td>• Provides advice and evaluation of PSM management efforts for marketing, development, and transition of PSM products</td>
</tr>
<tr>
<td></td>
<td>• Provides strategic input on future project requirements</td>
</tr>
<tr>
<td>Technical Steering Group (TSG)</td>
<td>This group helps steer the technical work of the PSM project. The TSG:</td>
</tr>
<tr>
<td></td>
<td>• Provides advice and evaluation of PSM project technical efforts for marketing, development, and transition of PSM products</td>
</tr>
<tr>
<td></td>
<td>• Provides technical recommendations for product content</td>
</tr>
<tr>
<td></td>
<td>• Represents PSM to external initiatives related to measurement throughout the world</td>
</tr>
<tr>
<td></td>
<td>• Represents the interests of PSM organizations during decision-making sessions, such as writers’ groups and study groups</td>
</tr>
<tr>
<td>Project Manager</td>
<td>Responsible for management of the PSM project and for its technical performance, including the following:</td>
</tr>
<tr>
<td></td>
<td>• Working with sponsors on funding, political issues, justification, special projects, and other matters as needed</td>
</tr>
<tr>
<td></td>
<td>• Planning and managing the PSM project approach, tasks, and schedule</td>
</tr>
<tr>
<td></td>
<td>• Managing the budget and finances for the PSM project</td>
</tr>
<tr>
<td></td>
<td>• Providing technical vision for PSM project work, and planning and reviewing the technical direction</td>
</tr>
<tr>
<td></td>
<td>• Participating in the development of PSM work products</td>
</tr>
<tr>
<td></td>
<td>• Providing interfaces from PSM to external organizations</td>
</tr>
<tr>
<td></td>
<td>• Managing the transition of PSM work products to users through the Transition Organizations</td>
</tr>
<tr>
<td></td>
<td>• Managing and performing technical tasks for developing and publishing PSM work products</td>
</tr>
<tr>
<td>Core Sponsors</td>
<td>• Provide funding to the PSM project</td>
</tr>
<tr>
<td></td>
<td>• Provide input on project requirements and review results</td>
</tr>
<tr>
<td>PSM Support Center (PSMSC)</td>
<td>Provides guidance materials for PSM and manages the interface to Transition Organizations and the PSM user community. Tasks include:</td>
</tr>
<tr>
<td></td>
<td>• Providing information as requested by users and prospects</td>
</tr>
<tr>
<td></td>
<td>• Maintaining the PSMSC web site and monitoring its use</td>
</tr>
<tr>
<td></td>
<td>• Managing and providing support to transition organizations</td>
</tr>
<tr>
<td></td>
<td>• Tracking PSM performance measures</td>
</tr>
<tr>
<td></td>
<td>• Managing the annual user conference and TWG meetings</td>
</tr>
<tr>
<td>Transition Organizations</td>
<td>• Provide PSM guidance, training, and support to users within specific domains after being qualified in formal train-the-trainer programs</td>
</tr>
<tr>
<td></td>
<td>• Participate regularly in technical activities of PSM</td>
</tr>
<tr>
<td></td>
<td>• Continue education on current measurement guidance</td>
</tr>
<tr>
<td>Technical Working Group (TWG)</td>
<td>• Provide technical inputs for development of PSM work products</td>
</tr>
<tr>
<td></td>
<td>• Share lessons learned in implementing measurement</td>
</tr>
<tr>
<td>Study Group</td>
<td>• A subgroup of the TWG that focuses on a particular area of work for an extended period of time in a volunteer capacity</td>
</tr>
<tr>
<td></td>
<td>• Examples include:</td>
</tr>
<tr>
<td></td>
<td>– Systems Engineering (1997-current)</td>
</tr>
<tr>
<td></td>
<td>– Process Improvement (1998-current)</td>
</tr>
<tr>
<td></td>
<td>– Acquisition Measurement (2003-current)</td>
</tr>
<tr>
<td>Writers’ Group</td>
<td>A subgroup of the TWG that develops PSM guidance and/or courseware</td>
</tr>
</tbody>
</table>
1.5 Relationship of the PSM Project to Other Groups
The PSM project’s processes have been defined to ensure coordination between internal and external functions and projects. Internal coordination of PSM activities is established through the PSM Technical Working Group (TWG) and Transition Organizations. Representatives of these organizations participate, as appropriate, in establishing the project plan, schedule, and technical activities of the PSM project as described in the PSM Methods of Operation. Frequent communication and reviews are conducted with management and other project members to ensure everyone is involved and appropriately aware of the PSM project’s status and plans.
External communication from the PSM project is conducted through the project staff, the PSM Support Center (PSMSC) web site, and the Transition Organizations. The Transition Organizations have agreed to abide by the terms and conditions that are established by the PSMSC for the use of all PSM products and services. In general, the PSMSC has the sole management control of all PSM products and services, and is the only agent authorized to modify those products and services. The Transition Organizations agree that all separately developed and delivered supporting guidance, products, and services that refer to any PSM products and services are consistent with the PSM technical approach and the PSM information-driven measurement process.
PSM maintains active involvement with a number of other professional organizations, standards associations, and funding agencies, including:
- **Office of the Secretary of Defense** - funded by the OSD, the project participates in coordinating activities with the various programs of OSD.
- **DoD Acquisition Policy - DoDD 5000.2R and the Interim Defense Acquisition Guidebook** - the revision to 5000 requires projects to implement measurement; PSM is the recommended approach in the guidance material.
- **Tri-Service Assessment Initiative** - the process architecture of the tri-service assessment initiative has been harmonized with the PSM information categories to ensure that both initiatives address key project information needs.
- **OSD Measurement Initiatives** - the PSM project is working with additional ongoing measurement initiatives to ensure that the guidance provided is consistent with best practices. Initiatives addressed include the PA&E initiative to collect data for cost estimation purposes and the various NII measurement initiatives.
- **Performance Management Initiatives** - the PSM process is expanding to include performance management. The same process of information-driven measurement is used, with the addition of organizational and enterprise measures.
- **CMMI Integration Project** - the international standard, ISO/IEC 15939, was also used as the base document for the new Measurement and Analysis (M&A) Process Area in CMMI. This M&A process area describes how a measurement and analysis process may be evaluated.
- **Software Engineering Institute (SEI)** - members of the PSM community are also employees or resident affiliates of the SEI; as another organization funded by OSD, the SEI coordinates its measurement efforts with the PSMSC and the PSM Technical Steering Group (TSG).
- **ISO Standards Efforts** - as standards related to measurement are developed or modified, members of the PSM working groups and others working with the project participate in their areas of expertise; for example, some members contribute to the ISO/IEC 15939 measurement standard and others to the IEEE/ISO/IEC 16085 risk management standard.
- **Commercial Standards** - PSM is integrated with key IEEE and ISO/IEC SC7 (Software and Systems Engineering) standards, including ISO/IEC 12207 and 15288.
- **International Council on Systems Engineering (INCOSE)** - members of the PSM community are also members of INCOSE; the development of the systems engineering guidance was a joint effort between the INCOSE measurement working group and PSM.
- **Project Management Institute (PMI)** - several active members of the PSM community are members of PMI and work especially with its risk management subgroup.
- **International Function Point User Group (IFPUG)** - the PSM project maintains communication with IFPUG, especially with respect to project estimation.
- **American Society for Quality (ASQ), Software Division** - members of the PSM community are actively involved with the Software Division, and there are occasional joint meetings and conferences.
2. Methods of Operation of the Executive Steering Committee
The PSM Executive Steering Committee (ESC) was established in May 1999 to ensure that the PSM work program is consistent with PSM goals and objectives. The committee provides advice and recommendations on how to manage the project.
The PSM ESC periodically reviews the program’s work plan to ensure that it is aligned with DoD, other government, and industry requirements. The PSM ESC is composed of cognizant senior officials representing the full breadth of the PSM end-user community. The ESC meets twice a year, or whenever it deems necessary, to receive briefings on program status and plans. It considers the proposed program priorities and fiscal plan, and either approves them or makes recommendations for change(s). The ESC provides formal endorsement of the finalized PSM Project Plan and proposed budget.
2.1 Responsibilities
Specific responsibilities of the ESC include the following:
• Promote and coordinate strategic plans for use, evolution, and expansion of PSM products
• Identify areas of collaboration with other government and industry initiatives, including standards work, where PSM products may be leveraged
• Relate the work on PSM to past and current activities in quantitative measurement in government and industry to adequately represent PSM in discussions of other efforts
• Define financial and other support for the PSM program
• Review plans and accomplishments of the PSM project for marketing, development, and transition of PSM materials into use
• Review the PSM work program and resources available, prioritize the work, and provide guidance on effective use of resources
• Help resolve issues when there are conflicts between the needs and concerns of the PSM user communities within government and industry
• Provide strategic input on future project requirements (the TSG provides technical input)
2.2 Meetings
This committee formally meets twice per year, generally at the beginning of the calendar year and at the annual users’ conference. Additional meetings are held as needed to fulfill its responsibilities and support the PSM project manager.
During PSM ESC meetings, any defects in the PSM products and/or processes are discussed, and solutions are recommended. The goal is to fix the defective part of the product and/or process to prevent future defects. Solutions are then implemented on future product developments or within existing work activities. Defects and solutions are identified in meeting minutes.
2.3 Membership and Leader
The PSM project manager designates the chair of the ESC. Other members of the group are selected from government and industry at the recommendation of the core sponsors, the PSM project manager, the TSG, and members of the existing ESC. Overall membership of the ESC reflects the interests of its users and funding sponsors, as well as the interests of the Transition Organizations working to transition PSM materials into the user community.
2.4 Working Relationships
The ESC regularly communicates the status and plans of PSM work with the core sponsors of the PSM work program. The ESC promotes PSM strategic business plans and objectives identified by senior DoD management, other core sponsors, and the TSG. The ESC facilitates working relationships between the sponsors, TSG, and PSM working groups. The ESC confers with sponsors to provide and maintain adequate resources for long-term improvement of the PSM process in government and industry.
The ESC reviews the efforts of the TSG in providing technical guidance to the PSM project and ensures that the TSG adheres to its Methods of Operation. The ESC meets as needed with the TSG and members of the PSM community to discuss strategy, to gather input on direction, and to deal with issues.
The ESC chair meets regularly with the PSM project manager to review current activities and to provide advice. The ESC examines the PSM performance measures to monitor how well PSM is marketed and transitioned to meet the needs of the measurement community, and evaluates user feedback on PSM products to ensure that the products are meeting the overall goals of the PSM program.
Current members of key groups are listed in Appendix A of the Methods of Operation.
2.5 Decision Making
Decisions of the ESC are by consensus, with the chairperson of the committee resolving conflicts.
3. Methods of Operation of the Technical Steering Group
The Technical Steering Group (TSG) provides technical guidance to the PSM project and the PSM project manager. The TSG represents the interests of the PSM users, the community of developers of PSM materials, and the Transition Organizations moving PSM into the user community.
3.1 Responsibilities
Specific responsibilities of the TSG include the following:
- Recommend technical content for PSM processes and products, including tools
- Recommend procedures and resources to educate managers and users on the effective use of PSM materials
- Provide advice on and evaluation of the PSM project’s technical efforts for marketing, developing, and transitioning PSM products
- Represent PSM at external initiatives related to measurement worldwide
- Represent the interests of PSM organizations during decision-making sessions, such as writers’ groups and study groups
3.2 Membership
Members of the TSG include representatives of several Transition Organizations, representatives of user organizations, members of external initiatives with which PSM coordinates, and those who provide the ongoing development and improvement of the PSM materials.
In general, the TSG meets two times per year, with additional meetings held if necessary. Members of the TSG must attend or send a representative to at least one of the annual meetings of the TSG. If a member does not attend the required meetings during a given calendar year, he/she may be removed from the TSG, as determined by TSG members (May 1999 decision of the TSG).
3.3 Team Leadership
The PSM project manager chairs the TSG and creates the agenda for each TSG meeting with input from TSG members.
3.4 Reporting Methods
The PSM project manager or a designee takes the TSG meeting minutes and distributes them to the TSG as soon as possible after each meeting. Action items are assigned to TSG members during meetings and are monitored by the PSM project manager. Results are reported to the TSG upon completion of tasks and at TSG meetings.
3.5 Decision Making
The TSG makes decisions by consensus at TSG meetings or by email for items considered between meetings.
4. Methods of Operation of the Core Sponsors
4.1 Membership
The group of core sponsors varies from year to year, depending on who provides resources to the PSM project. The PSM project manager seeks sponsors based on the project’s needs.
5. Methods of Operation of the PSM Project Manager
As described in the table of responsibilities in Section 1.4, the PSM project manager handles the day-to-day management of the PSM project. Additionally, project management is responsible for tracking risks (described in Appendix C) and analyzing measurement data (described in Appendix D).
The PSM project manager uses the following methods to perform these duties.
5.1 Reporting Methods
Status reporting is accomplished via a presentation at each TWG and Users’ Group Conference. Additional information is provided in sponsor, ESC, and TSG forums.
5.1.1 Biannual Status Reporting to Sponsors and ESC
- Accomplishments section - addresses products, activities, and impacts
- Performance Measurement section
  - Includes development, transition, and impact measures
  - Used for justification, marketing, and effective management
- Annual prospective task list (developed based on ESC inputs and reviewed by TSG)
5.1.2 Reporting to the TSG and Transition Organizations
- Data is collected from each Transition Organization on a quarterly basis
- Input is summarized and sent back to each Transition Organization and to the TSG on at least a biannual basis
- Information from ESC reporting is summarized and reviewed by the TSG
5.1.3 Financial Management
The PSM project is ongoing. Project tasks are defined on a yearly basis according to project requirements, sponsor requests, funding availability, and volunteer activity. For each task, a budget and schedule are defined. PSM financial management criteria are:
• Funds are limited and carefully allocated and monitored
• Plans for PSM development funds are reviewed with specific sponsors (for their own funding) and with ARDEC management (for all funding)
• Spending is reported for each funded task
5.2 Decision Making
The PSM project manager makes day-to-day decisions. Items that require major budget modifications are reviewed with the ESC or sponsors as necessary.
Each year, a PSM task list is prepared with potential tasks to be accomplished and associated timeframes. The PSM project manager generates the task list based on recommendations from the ESC, and the TSG reviews the task list. The potential task list and review process ensure that the PSM products and services meet the needs of the services and organizations that use them. Once the potential list is generated, sponsors are identified and solicited.
Only those tasks that are sponsored and funded are addressed. While most tasks require direct funding, some are accomplished by volunteer effort. Tasks using volunteer effort are formally managed, although the schedule is less critically managed since volunteer effort is dependent on the availability of resources. When a task is funded, a detailed work plan, schedule, effort, and cost profile are generated and tracked. The PSM project manager tracks the detailed work plans for fiscal year tasks that have been funded to date.
The PSM project’s Work Breakdown Structure (WBS) is defined in the potential task list. Task lists are organized using the WBS structure.
6. Methods of Operation of the PSM Support Center
The primary responsibilities of PSM Support Center (PSMSC) staff include:
• Answering phone calls and emails
• Sending marketing material, training course information, and other information to those who request it
• Maintaining databases of PSM users and PSM activities
• Managing the Internet web site and monitoring its use
• Acting as primary contact and support for training course instructors
• Coordinating and managing train-the-trainer sessions to train instructors
• Gathering information from and providing materials to transition organizations
• Tracking PSM performance measures
• Planning and managing the annual conference and meetings of the PSM community
6.1 Training Support
The PSMSC provides the primary point of contact for PSM instructors. It distributes training materials, gathers course attendee information from instructors, and creates completion certificates for course attendees. Training course statistics are entered into the Performance Measurement database.
6.2 Database Support and Reporting
The PSMSC maintains two databases:
- Contacts database - names, addresses, and other information for any person who has worked with or made an inquiry on the PSM project
- Performance Measurement database - training course statistics and other records that are used to create performance reports
6.3 Arrange Meetings
The PSMSC staff is responsible for most of the planning and management of the Annual PSM Users’ Group Conference and Technical Working Group meetings. These meetings are funded through attendee fees, with no cost to the government.
7. Methods of Operation of the Technical Working Group
A Technical Working Group (TWG) is a collection of individuals who participate in development of PSM work products. Members of this group also generally participate in study groups, writing groups, or other special efforts that generate PSM materials. Current TWG members are listed in Appendix B of the Methods of Operation.
7.1 Meetings
The PSM community meets with the TWG twice a year, once in the first quarter of the calendar year and once at the annual PSM Users’ Group Conference. During the meetings, progress on PSM work efforts is reported and workshops focus on developing or evaluating additional PSM materials. The PSM project manager chairs the twice-yearly TWG meetings.
Individual groups may meet between these sessions to make progress on a specific work area. The material generated during these meetings is circulated to the full TWG for review and comment. The PSM project manager must review any significant changes to the approved work program.
During TWG meetings, any defects in the PSM products and/or processes are discussed and solutions are recommended. The goal is to fix the defective part of the product and/or process to prevent future defects. Solutions are then implemented on future product developments or within existing work activities. Defects and solutions are identified in the meeting minutes.
7.2 Team Leadership
Each working group has a designated chair (or team of co-chairs). Generally, the PSM project manager appoints the chair; sometimes, the team elects the chair.
7.3 Reporting Methods
When a working group is active, progress is reported to the PSM project manager on a regular basis. Reporting is required immediately after a working session, at times designated for major deliverables, and at the twice-yearly TWG meetings. If a working group is inactive, it is the responsibility of the PSM project manager to appoint a new working group chair to revitalize the group or to dissolve the working group.
7.4 Team Communication
The working group handles internal communication as needed, generally via email, teleconferences, and/or small group meetings. The working group chair organizes activities at the twice-yearly TWG meetings, communicating plans and expectations to the PSM community through the PSM project manager.
7.5 Decision Making
Decisions of the working group are made by consensus. When consensus cannot be reached, the working group chair recommends a course of action to the PSM project manager, who is responsible for resolving the issue. If the decision is of significant impact to the PSM products or to the PSM community, the TSG may be asked to resolve the issue.
7.6 Other Items
Members of a working group are encouraged to share the considerations of the group with others in the technical community. However, they must represent their work as draft or as work in progress to protect the integrity of the PSM project.
8. Methods of Operation for Transition Organizations
Transition Organizations are responsible for providing training and support to PSM users. There are well-defined methods for how an organization becomes a Transition Organization, how it remains a Transition Organization, and how a Transition Organization works with the PSM Support Center to serve its PSM users. Refer to the Transition Organization Package for more information.
9. Decisions of the PSM Project
This section documents decisions that are anticipated to be useful for future reference by one or more of the PSM groups using this Methods of Operation. As key decisions are made, a short summary will be added in this section.
9.1 Handling Changes to Course Material
Adopted 9/23/98 [TSG Meeting; report of a subgroup on 9/17/98; adopted by the TSG 9/23/98]
Topic
• What PSM materials may requesters change, and what rights do Transition Organizations have to significant changes they have made?
Definitions
• Writer - someone authorized by the PSM project manager to update the content of official PSM documents
• Transition Organization – an organization that has agreed to the Transition Organization requirements and employs at least one person who has completed the train-the-trainer (TTT) process
• Reviewer - a member of the TSG who is authorized by the PSM project manager to evaluate and authorize proposed changes to official PSM content
• Requester - a member of a Transition Organization who has completed the TTT process and is following the Elements of the Special Change Process [see below]
Considerations
Changes need to be consistent with the goals of the PSM community and:
• Promote the use of PSM for measurement [i.e., reward those who do]
• Preserve the quality of the PSM content
Materials that may change
• PSM Guidebook updates may only be made by writers authorized to make changes (those authorized by the PSM project manager)
• Portions of the Guidebook that appear in the course material (especially graphics and key lists of various types) may not be changed by anyone except writers
• Training elements may be modified with examples from the students’ environment since students better understand text specifically tailored to their organization
• Course material areas that are most likely to change include:
– Front sections that introduce key concepts and applicability to the industry segment, government area, or specific organization
– Case studies
– Extended examples used throughout the course
– Individual examples that illustrate the process at key points
• The PSM format needs to be preserved in the presentation, including the PSM logo and the order of materials. Those making significant changes (if approved by the PSMSC) may also include their own organization's logo on the materials they create
• The PSM title should be kept in the presentation, with a subtitle that is specific to the course being taught
Elements of the Special Change Process
• Only qualified trainers may access and make changes to the PSM materials
• Requester proposes a set of changes, gets PSM project manager approval (or denial), and both agree on the reviewer(s) who will examine the changes
• Reviewer(s) is a TSG member from a Transition Organization other than the requester(s)
• For specific business reasons, requester may restrict which TSG members may be reviewers
• Requester negotiates edits/changes with reviewer
• Requester makes approved changes
• Requester provides a copy of the changes to the reviewer(s), with a courtesy copy to the PSM project manager
• Requester provides changes to the PSMSC for use in future content updates if they wish, but they may retain ownership on significant changes
• Requester agrees to revise the PSM Guidebook content and official course material with the approved changes when these materials are updated
The subgroup expects that after this process is done several times, it will be clear which areas of the course material should be separated out as modules that people may tailor.
9.2 Criteria for New Systems and Software Measures and Indicators
Adopted April 1999 [PSM Guidance workshop April 1999]
a. For a measure to be added to the PSM guidance, it must meet the following criteria:
1. Track Record
- Has been used successfully by several organizations
- Continues to be used
- Success is documented
2. Flexible Application Base
- Useful in more than one application domain
- Can be applied in different management and organizational arrangements
- Not dependent on specific product development methods
3. Scope
- Systems and software engineering focus
- Organization or enterprise
- Process improvement
4. Well-Defined - data is specific and measurable
5. Unique - does not duplicate another measure
6. Suitable for Historical Baselines - retains value and meaning for future comparison or estimation
b. For an indicator to be added to the PSM Guidance, it must meet the following criteria:
(1) Same as Measure Criteria
(a) Track Record
(b) Flexible Application Base
(2) Clearly addresses its associated information needs
(3) Can be derived from its measure or measures’ suggested data specifications
(4) Straightforward to interpret
9.3 Use of PSM Logo by TSG Members and Transition Organizations
Adopted May 1999 [TSG Meeting May 1999]
A question was raised as to whether TSG members and Transition Organizations may use the PSM logo. The decision was that TSG members and TOs may use the PSM logo in their presentations when they are using PSM materials. If they make any changes to the material (and the changes are approved), the TOs should use both the PSM logo and their own.
APPENDIX A - MEMBERS OF PSM GROUPS
(As of February 2005)
<table>
<thead>
<tr>
<th>Project Manager</th>
<th>Cheryl Jones</th>
</tr>
</thead>
<tbody>
<tr>
<td>Core Sponsors</td>
<td></td>
</tr>
<tr>
<td>• US Army ARDEC (1999-current)</td>
<td></td>
</tr>
<tr>
<td>• Naval Air Systems Command (2004-current)</td>
<td></td>
</tr>
<tr>
<td>• OUSD DDR&E/S&T (2003-current)</td>
<td></td>
</tr>
<tr>
<td>• OSD-NII (2003-current)</td>
<td></td>
</tr>
<tr>
<td>• OUSD AT&L (1997-2003)</td>
<td></td>
</tr>
<tr>
<td>• Federal Aviation Administration (1998-2001)</td>
<td></td>
</tr>
<tr>
<td>• Naval Undersea Warfare Center (1993-1998)</td>
<td></td>
</tr>
<tr>
<td>• JLC Joint Group on Systems Engineering (1993-1997)</td>
<td></td>
</tr>
<tr>
<td>Executive Steering Committee</td>
<td></td>
</tr>
<tr>
<td>• US Army - Alison Ferraro</td>
<td></td>
</tr>
<tr>
<td>• US Navy - Tom Conrad (Chair)</td>
<td></td>
</tr>
<tr>
<td>• US Air Force - Bruce Allgood</td>
<td></td>
</tr>
<tr>
<td>• OSD - Joe Jarzombek</td>
<td></td>
</tr>
<tr>
<td>• DCMA - Guy Mercurio</td>
<td></td>
</tr>
<tr>
<td>• Industry - Dennis Ahern</td>
<td></td>
</tr>
<tr>
<td>• FAA - Roger Cooley</td>
<td></td>
</tr>
<tr>
<td>• IEEE/ISO - TBD</td>
<td></td>
</tr>
<tr>
<td>• PSM Project Manager - Cheryl Jones</td>
<td></td>
</tr>
<tr>
<td>• Secretary - Fred Hall</td>
<td></td>
</tr>
<tr>
<td>Technical Steering Group</td>
<td></td>
</tr>
<tr>
<td>• Cheryl Jones, ARDEC (PSM Support Center), PSM Project Manager</td>
<td></td>
</tr>
<tr>
<td>• John McGarry, ARDEC (PSM Support Center)</td>
<td></td>
</tr>
<tr>
<td>• Fred Hall, Assurance Engineering (Writer, Tool Developer)</td>
<td></td>
</tr>
<tr>
<td>• Betsy Bailey Clark (alternate Brad Clark), SMI (Writer)</td>
<td></td>
</tr>
<tr>
<td>• Dave Card, Q-Labs (Writer)</td>
<td></td>
</tr>
<tr>
<td>• Beth Layman, TeraQuest Metrics (Writer)</td>
<td></td>
</tr>
<tr>
<td>• Garry Roedler, Lockheed Martin and INCOSE (Systems Engineering Study Group)</td>
<td></td>
</tr>
<tr>
<td>• Joe Dean, Tecolote (Writer)</td>
<td></td>
</tr>
<tr>
<td>• Dave Zubrow, SEI (SEI Interface)</td>
<td></td>
</tr>
<tr>
<td>• Bob Charette, ITABHI (Risk Interface)</td>
<td></td>
</tr>
<tr>
<td>• Keith Kratzert, FAA (FAA Interface)</td>
<td></td>
</tr>
<tr>
<td>• Bruce Allgood, USAF STSC (Writer)</td>
<td></td>
</tr>
<tr>
<td>• Paul Janusz, ARDEC (RGM Interface)</td>
<td></td>
</tr>
<tr>
<td>• Guy Mercurio, DCMA (DCMA Interface)</td>
<td></td>
</tr>
<tr>
<td>• Joyce Statz, Borland TeraQuest (Transition Organization Representative)</td>
<td></td>
</tr>
<tr>
<td>• Terry Rout (Australian Representative)</td>
<td></td>
</tr>
<tr>
<td>• Ray Irvine (Australian DoD Representative)</td>
<td></td>
</tr>
<tr>
<td>• Paul Caseley (UK Representative)</td>
<td></td>
</tr>
<tr>
<td><strong>Technical Working Group</strong></td>
<td>See Appendix B</td>
</tr>
<tr>
<td><strong>PSM Support Center</strong></td>
<td>Fred Hall</td>
</tr>
<tr>
<td></td>
<td>Dave Morris</td>
</tr>
<tr>
<td></td>
<td>Denise VanBuren</td>
</tr>
<tr>
<td></td>
<td>Jeannie Hall</td>
</tr>
<tr>
<td><strong>Transition Organizations</strong></td>
<td>Refer to Transition Organization Package</td>
</tr>
</tbody>
</table>
APPENDIX B - TECHNICAL WORKING GROUP MEMBERS
**DoD and Government**
- US Air Force Materiel Command (AFMC)
- US Air Force Space System Support Group (SSSG)
- US Air Force Strategic Command (STRATCOM)
- US Air Force Software Technology Support Center (STSC)
- US Army CERDEC & Engineering Center
- US Army Communications-Electronics Command (CECOM)
- US Army Material Command (AMC)
- US Army Research, Development and Engineering Command (RDECOM) - Armament Research, Development and Engineering Center (ARDEC)
- US Army SAALT
- US Army Space and Missile Defense Command (SMDC)
- US Marine Corps Tactical System Support Activity (MCTSSA)
- US Navy Arnold Engineering Development Center (AEDC)
- US Navy Fleet Material Support Office (FMSO)
- US Naval Air Systems Command (NAVAIR)
- US Naval Air Warfare Center (NAWC)
- US Naval Sea Systems Command (NAVSEA)
- US Navy Operational Test & Evaluation Force (OPTEVFOR)
- US Navy Research Lab (NRL)
- US Navy Surface Warfare Center (NSWC)
- US Navy Undersea Warfare Center (NUWC)
- Assistant Secretary of the Navy (ASN) Research Development & Acquisition (RD&A)
- Office of the Secretary of Defense (OSD) National Information Infrastructure (NII)
- Office of the Secretary of Defense (OSD) Program Analysis & Evaluation (PA&E)
- Office of the Under Secretary of Defense (OUSD) Science & Technology (S&T)
- Aerospace Corporation
- Central Intelligence Agency
- Defense Acquisition University (DAU) - Defense Systems Management College (DSMC)
- Defense Contract Management Agency (DCMA)
- Defense Finance and Accounting Service (DFAS)
- Defense Information Systems Agency (DISA)
- Defense Logistics Agency (DLA)
- Department of Homeland Security
- US Customs and Border Protection
- Federal Aviation Administration (FAA)
- Institute for Defense Analyses (IDA)
- MITRE Corporation
- National Aeronautics and Space Administration (NASA)
- Sandia National Lab
- Social Security Administration (SSA)
- Software Engineering Institute (SEI)
**Industry**
- ACS GSG
- Accenture, Quality and Process Improvement
- Alion Science and Technology
- American System Corporation
- Ameritrade Corporation
- Apptis
- Argon Engineering Associates
- Assurance Engineering
- BAE Systems
- Bank of America
- Bloodworth Int. Tech.
- Boeing
- Booz Allen Hamilton
- Borland TeraQuest
- Center for Systems Management
- CMIS
- Computer Technology Associates (CTA)
- Countrywide
- Carnegie Mellon University
- Computer Sciences Corporation (CSC)
- David Consulting Group
- Distributive Software
- Federal Reserve Bank
- First Line Partners
- FMI Solutions
- Fraunhofer Center for Experimental Software Engineering
- GTE
- Galorath, Inc.
- General Dynamics
- General Scientific Corporation
- Graeme & Garland
- Harris Corporation
- Hawaiian Electric
- International Business Machines (IBM)
- IEEE
- International Function Point Users Group (IFPUG)
- IIT Research Institute (IITRI)
- International Council on Systems Engineering (INCOSE)
- ITABHI
- Independent Engineering, Inc.
- Jacobs Sverdrup
- James Gregory Associates
- Kodak Health Imaging
- L-3 Communications
- Lexmark International
- Lockheed Martin
- Management-By-Measurement, LLC
- National Renewable Energy Laboratory
- Northrop Grumman
- OAO Corporation
- Paraswift, Inc.
- Pragma Systems Corporation
- PRICE Systems, LLC
- Q-Labs
- Quantitative Software Management
- Quality Plus Technologies, Inc.
- Raytheon
- Reifer Consultants
- Robbins Gioia, LLC
- Rockwell Collins
- Science Applications International Corporation (SAIC)
- Sallie Mae
- Sentel
- Softstar Systems
- Software Engineering Associates, Inc.
- Software Management Solutions
- Software Metrics, Inc.
- Systems and Software Consortium, Inc (SSCI)
- Technomics
- Tecolote Research, Inc.
- Texas Guaranteed Student Loan Corporation
- Titan Corporation
- Tivoli
- Tybrin Corporation
- United Defense
- University of Southern CA
- UpStart Systems, LLC
- User Trust Network
- US West
- VisiTech, Ltd.
- Virginia Polytechnic Institute and State University
- West Virginia High Tech. Consortium
- West Virginia University
- Whittaker Group
- Wind River Systems
- Xcel Energy
**International**
• ADI Limited (*Australia*)
• Amdocs (*Israel*)
• Australian Defence Force Academy (ADFA) (*Australia*)
• Australia SoC Technology Centre (*Australia*)
• BAE Systems (*Australia*)
• Centro de Investigacion en Matematicas (CIMAT) (*Mexico*)
• Defence Science and Technology Labs (*UK*)
• Defence Material Organisation - Australian DoD (*Australia*)
• EMBRAER (*Brazil*)
• Ericsson Espana SA (*Spain*)
• General Dynamics (*Canada*)
• Government of Israel, Ministry of Defense (*Israel*)
• Jacobs Sverdrup (*Australia*)
• Kozo Keikaku Engineering, Inc. (*Japan*)
• LiveWare I.S.S.A (*Argentina*)
• MEADS International (*U.S., Germany, Italy*)
• MS SPI Solutions (*Mexico*)
• National Research Council of Canada (*Canada*)
• S-3 Consulting Pty. Ltd. (*Australia*)
• Saab Systems Pty. Ltd. (*Australia*)
• Software Improvements Pty. Ltd. (*Australia*)
• Software Quality Institute (*Australia*)
• Tangram Hi-Tech Solutions (*Israel*)
• Tenix ESD (*Australia*)
• ti Metricas (*Brazil*)
• UK Ministry of Defence (*UK*)
• University of York/YorkMetrics Ltd. (*UK*)
APPENDIX C - PSM RISK MANAGEMENT PLAN

C.1 Introduction
The purpose of risk management planning is to define the resources and strategies that ensure PSM project risks are identified and managed in a consistent, systematic manner.
This Risk Management Plan defines how risk management activities are implemented and supported as a continuing PSM project management activity. The objective of risk management is to reduce or eliminate risks prior to them becoming a threat to successful achievement of PSM goals and objectives.
This Risk Management Plan serves as the mechanism for implementing the PSM project risk management.
C.2 Plan Scope
The scope of this Risk Management Plan covers developing, documenting, and implementing a risk management process for the PSM project.
The scope of risk management includes the development of PSM products, consultation support to ARDEC and external customers, and project management activities.
C.3 Risk Management Process Description
The risk management process described in the ARDEC Software Enterprise CP-103 is applied to this project, except as otherwise specified.
C.3.1 Risk Management Strategy
The basic risk management strategy is to identify critical areas and risk events/situations, both technical and non-technical, and take the necessary action to investigate and resolve them before they adversely affect cost, schedule, or performance. The strategy for managing risks on the PSM project is to focus on the following criteria that highlight top risks:
- High risk exposure
- Timing more likely to cause risk occurrence
- Great potential for organizational impact
- Great chance that risks may be linked/coupled to other risks
- Low data confidence
C.3.2 Risk Sources and Categories
The PSM project activities are reviewed against the Risk Sources identified in the PSM Guidebook, and the sources are evaluated in terms of the risk to each particular activity. Information needs that provide sources of risk for the PSM project are identified using the Information Category-Measurable Concept-Measure (ICM) table in the PSM book. These information needs are then correlated to the list of required PSM activities, and the risk sources are correlated with the initial set of risk-related categories. Finally, the list of PSM activities is correlated to the set of categories, and the associated risks or obstacles are identified.
C.3.3 Risk Management Context and Risk Identification
The objectives of the PSM project are described in the PSM Methods of Operation, Section 1.1.
Assumptions of the PSM project are:
- Personnel assigned to PSM tasks are available as needed
Constraints on the PSM project are:
- Funding is a major constraint - tasks are planned within available funding
The purpose of this activity is to identify risks and develop a risk profile for the PSM project. The list of current top-rated risks is contained in the PSM Project Risk Profile, Attachment C-1 to this Appendix. This risk profile creates a consistent, current, and historical view of the risks present in the PSM project, along with their priority ranking and treatment, so that the risks may be communicated fully and succinctly to relevant stakeholders. The PSM risk profile will be maintained throughout the project’s life cycle.
C.3.4 Risk Parameters
The risk criteria and parameters are those identified in CP-103, with the following additions.
The following approval levels were defined for the selection and approval of risk treatment alternatives.
- **High**: PSM TSG
- **Moderate**: PSM project manager
- **Low**: PSM project manager
C.3.5 Risk Monitoring
This project utilizes project-level measures IAW the Project Measurement Plan, Appendix D, as a basis for monitoring risks. See the risk measures section of each risk for the specific measures used.
C.4 Risk Management Process Evaluation
The risk management process described in CP-103 is applied to this project, except as noted below.
C.4.1 Capturing Risk Information
Risk information is documented in the PSM TSG meeting minutes.
C.4.2 Assessing the Risk Management Process
Suggestions for improving risk management procedures, process, or policies are submitted through an Organizational Change Request (OCR).
C.4.3 Generating Lessons Learned
Information on the risks identified, their treatments, and the success of the treatments will be reviewed during PSM TSG meetings by the stakeholders and other parties to identify systemic organizational risks. Project risks are also reviewed at the PSM TSG meetings, and individual project lessons learned may be collected to aid in identifying systemic project risks. Lessons learned are submitted and processed.
C.5 Risk Communication
C.5.1 Process Documentation and Reporting
Risk action requests are used to document risk-related treatments, actions, decisions, and status.
C.5.2 Coordinating Risk Management with Stakeholders
The results of the risk management activities are reported at all TSG meetings or whenever risk threshold breaches occur.
C.5.3 Coordinating Risk Management with Interested Parties
The results of the PSM risk management activities are reported to stakeholders during regularly scheduled meetings.
Attachment C-1 - PSM Project Risk Profile

Current PSM project risks are identified in this section.
Risk 1: No long-term sponsor for PSM
Expected Phase: All
Status: Monitor
Date: 15 May 2001
Priority: 1
Risk Level: High
Probability: High (3)
Impact: High (3)
Risk Exposure: (9)
Risk Description: PSM project sponsorship has changed several times as previous sponsors were reorganized out of existence and personnel changes have occurred. This has caused delays in receipt of funding and changes in requirements and priorities. There is a continued risk that the sponsors may change.
PSM is a tri-service initiative with product development funding provided by various Army, OSD, and other government organizations. Each year, an estimated task plan is prepared with estimates of the funding requirements. Writers are generally paid, but Technical Working Group (TWG) members volunteer their time to the project. The PSM Executive Steering Committee (ESC) approves the task plan based on the identified needs of the DoD and industry users.
The actual tasking that is accomplished each year depends on identifying a sponsor for a particular task or on identifying a volunteer to lead a task without funding. Funding for an activity must be available at least one month prior to the start of the activity.
Risk Measures and Threshold: The PSM task plan is updated when sponsor funding is received in house. Any funding received is either allocated to in-house personnel or to support contractors. The threshold is exceeded when any expected funding is more than one month late.
Risk Action Requests: None
Contingency Plans: The PSM ESC reviews tasking at its biannual meetings. At these meetings, priorities are re-evaluated, any required adjustments to the project plan are identified, and additional potential sponsors are identified. In addition, the PSM project manager reviews this information on at least a quarterly basis to evaluate new plans. This risk may be:
- Avoided by matching requirements on SOWs to available funding
- Controlled by identifying additional funding sources and sponsors
- Controlled by soliciting volunteer effort to support identified tasks
- Monitored to re-evaluate availability of funding
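The Probability, Impact, and Risk Exposure values used throughout this risk profile follow a simple product rule on a three-point scale (exposure = probability rating × impact rating). The sketch below illustrates that arithmetic; the level bands are an inference from the seven risks listed here (only the exposure-9 risk is rated High), not a rule stated in this plan or in CP-103.

```python
# Sketch of the risk-exposure arithmetic implied by this risk profile.
# The 1-3 rating scale matches the profile; the level bands are an
# assumption inferred from the listed risks, not taken from the plan.

RATINGS = {"Low": 1, "Medium": 2, "High": 3}

def risk_exposure(probability: str, impact: str) -> int:
    """Exposure = probability rating x impact rating."""
    return RATINGS[probability] * RATINGS[impact]

def risk_level(exposure: int) -> str:
    """Inferred banding: > 6 is High, 2-6 is Moderate, 1 is Low."""
    if exposure > 6:
        return "High"
    if exposure > 1:
        return "Moderate"
    return "Low"

# Risk 1: Probability High (3) x Impact High (3) -> Exposure (9), High
print(risk_exposure("High", "High"), risk_level(9))
# Risk 3: Probability High (3) x Impact Medium (2) -> Exposure (6), Moderate
print(risk_exposure("High", "Medium"), risk_level(6))
```

This reproduces the exposure numbers for all seven risks above; if the underlying CP-103 parameters differ, the banding function would need to be adjusted.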
Risk 2: Schedule deviations for funded tasks
Expected Phase: All
Status: Monitor
Date: 15 May 2001
Priority: 3
Risk Level: Moderate
Probability: Medium (2), Impact: Medium (2), Risk Exposure: (4)
Risk Description: This risk applies to PSM funded tasks related to product development activities. PSM product development is heavily dependent on both funding (see risk #1) and availability of TWG members from many diverse organizations that work on PSM product development. A detailed schedule is developed prior to the start of any task based on availability of funding and resources. Deviations from this schedule are monitored.
Risk Measures and Threshold: A task list containing schedule activities for the PSM products is used to measure this risk. This measure falls within Schedule and Progress in the ICM table. The threshold is exceeded whenever a product is more than 20 percent over schedule.
Risk Action Requests: None
Contingency Plans: In order to reduce the occurrence of apparent schedule slips, schedules are not defined until funding is available, available resources have committed to each task, and any required contractual mechanisms are in place.
When the schedule cannot be maintained, alternatives are evaluated. This risk may be:
- Avoided by revising the schedule based on funding availability
- Controlled by changing the resources (personnel or amount of funding) applied to the tasks
- Controlled by changing the scope of work to reflect revised requirements
Risk 3: Resource limitations - external personnel
Expected Phase: All
Status: Monitor
Date: 15 May 2001
Priority: 2
Risk Level: Moderate
Probability: High (3), Impact: Medium (2), Risk Exposure: (6)
Risk Description: There are limitations on the personnel available to support the project. Very few personnel work on the PSM project full time. Most participate in many other activities within their organization in addition to their work on PSM. As a result, resources are not always available as scheduled. This has a large impact on schedule milestones. This is especially true for volunteer activities.
Risk Measures and Threshold: The informal task list is used to track the PSM project’s contracted efforts. This measure falls within Resources and Cost in the ICM table. Volunteer efforts are tracked informally and are more difficult to control since they depend on volunteer availability.
Risk Action Requests: None
Contingency Plans: When schedule cannot be maintained, alternatives are evaluated. This may involve:
- Revising the schedule based on resource availability
- Changing the resources (personnel) applied to the task
- Changing the scope of work to reflect available resources
Risk 4: Resource limitations - RDECOM-ARDEC personnel
Expected Phase: All
Status: Monitor
Date: 15 May 2001
Priority: 5
Risk Level: Moderate
Probability: Low (1)
Impact: High (3)
Risk Exposure: (3)
Risk Description: ARDEC SWISE personnel are working on PSM development and management tasks, ARDEC process improvement activities (helping projects to implement the M&A, risk, and estimation procedures, as well as developing project plans for selected projects), an FMS case, and supporting several external organizations. Resources must be constantly prioritized to meet PSM and other requirements.
Risk Measures and Threshold: This is monitored via the informal task list.
Risk Action Requests: None
Contingency Plans: This risk may be:
- Avoided by adding additional SWISE resources, either through contractual support or additional internal resources
- Controlled by continuously reevaluating priorities to focus on the most important task
Risk 5: Unauthorized and inappropriate use of the PSM name, process, or products
Expected Phase: All
Status: Monitor
Date: 15 May 2001
Priority: 6
Risk Level: Moderate
Probability: Low (1), Impact: Medium (2), Risk Exposure: (2)
Risk Description: There have been instances where individuals who are not associated with PSM have used the PSM logo or claimed to be authors of PSM material.
Risk Measures and Threshold: Subjective judgment
Risk Action Requests: None
Contingency Plans: This risk may be:
- Avoided by listing references to PSM and by clearly documenting usage requirements
- Monitored by reviewing PSM uses
- Controlled by contacting any individual or organization who uses PSM in an unwarranted manner
- Controlled by posting materials on the web site in the most secure format possible
Risk 6: Unexpected changes or termination of software COTS products that are used in the PSM Insight Tool
Expected Phase: All
Status: Monitor
Date: 15 May 2001
Priority: 4
Risk Level: Moderate
Probability: Medium (2)
Impact: Medium (2)
Risk Exposure: (4)
Risk Description: The PSM Insight tool contains many COTS products as part of the delivered code. If any of these tools are changed or if the software is no longer supported, it impacts the PSM Insight code. This has already occurred with several components. When this occurs, the components have to be replaced or the functionality has to be (re)developed.
Risk Measures and Threshold: The threshold is a binary one: either a component is available and is supported or it is not.
Risk Action Requests: None
Contingency Plans: When a component is changed and can no longer be used or if it is no longer supported, a trade study is conducted to determine if a feasible alternative is available. The trade study considers required functionality, costs, and whether the functionality can be developed. In addition, the tool developers periodically review available components for potential inclusion in the PSM Insight code.
Risk 7: Loss of technical consistency
Expected Phase: All
Status: Monitor
Date: 15 May 2001
Priority: 7
Risk Level: Moderate
Probability: Medium (2), Impact: Low (1), Risk Exposure: (2)
Risk Description: PSM guidance is referenced in multiple technical standards and DoD/service policies and regulations. As these policies and regulations change, PSM may need to make changes to coordinate with these documents. There is also a potential that changes to PSM may require changes to documents that reference PSM.
Risk Measures and Threshold: All changes to PSM guidance are evaluated subjectively to determine whether other documents that reference PSM need to be changed. Additionally, changes to documents that reference PSM are also evaluated subjectively to determine whether PSM guidance needs to be changed.
Risk Action Requests: None
Contingency Plans: Depending on the extent of the change, several possibilities exist:
- The PSM guidance may be changed to reflect policy or standard changes
- Recommendations for changes to other documents may be made
- A white paper mapping PSM to other documents may be created
APPENDIX D - PSM MEASUREMENT PLAN
D.1 Introduction
D.1.1. Purpose
This PSM Measurement Plan describes the specific details of implementing the software measurement process to provide feedback on PSM project-specific information needs. It helps project and technical managers meet cost, schedule, and technical objectives.
D.1.2. Objectives of the PSM Project
The objectives of the PSM project are described in the PSM Methods of Operation, Section 1.1.
D.1.3. Scope of PSM Project Measurement
The scope of the PSM project and of its associated products may be found in the PSM Methods of Operation, Section 1.2.
PSM describes an information-driven measurement process that may be applied to the types of projects described in the PSM guidance. For clarification, the measures described in this plan refer to the information needs related to the PSM project itself and its associated products. It does not address the project measures for the projects that apply the PSM process.
D.2 Project Description
D.2.1. Project Management Characteristics
The PSM organizational structure is described in the PSM Methods of Operation, Section 1.3. Roles and responsibilities for each of these key contributors are identified in the Methods of Operation, Section 1.4. Current PSM group members are identified in the Methods of Operation, Appendix A.
D.3 Measurement Approach
D.3.1. Measurement Roles and Responsibilities
Roles and responsibilities for each of these key contributors are identified in the PSM Methods of Operation, Section 1.4. Current PSM group members are identified in the Methods of Operation, Appendix A.
D.3.2. Communications and Interfaces
Communications and interfaces among the PSM project manager, PSM organizational structure, and its internal and external interfaces are discussed in Sections 3.4, 5.1, and 7.3 of the PSM Methods of Operation.
D.3.3. Tools and Databases
As identified in the PSM Project Plan, several automated tools are used in the development and management of the PSM project. These include:
- PSM Insight for storage and analysis of the measurement data
- Various cost estimating tools, including SLIM Estimate, SLIM Control, SEER-SEM, and COCOMO II for generating and evaluating project estimates
- Microsoft Excel and other components of the Microsoft Office Suite for analysis of the measurement data
D.3.4. PSM Project Measurement Objectives
PSM project measurement objectives are to:
- Provide a proven measurement process to identify and address information needs, issues, and risks
- Monitor the transition of PSM products and services into the measurement community
- Evaluate the performance of PSM products and services
- Provide a basis for objective communication, informed decision making, and allocation of PSM project resources
D.3.5. Measurement Implementation Strategy
The PSM project measurement process has been implemented over time, starting with the establishment of the PSM Support Center in 1996. Measures currently provide feedback on its objectives, services, and products. The measures were chosen based on the history of the PSM project, its current objectives, information needs, risks, and customer priorities. The measures are updated periodically to reflect new information needs, lessons learned, and the next phase of the measurement strategy.
Measures collected include the following:
<table>
<thead>
<tr>
<th>Indicators</th>
<th>Reference</th>
<th>Currently Collected?</th>
</tr>
</thead>
<tbody>
<tr>
<td>PSM Presentations</td>
<td>P1</td>
<td>Yes</td>
</tr>
<tr>
<td>PSM Courses</td>
<td>P2</td>
<td>Yes</td>
</tr>
<tr>
<td>PSM Guidance</td>
<td>P3</td>
<td>Yes</td>
</tr>
<tr>
<td>PSM Insight</td>
<td>P4</td>
<td>Yes</td>
</tr>
<tr>
<td>Course Ratings</td>
<td>P5</td>
<td>Yes</td>
</tr>
<tr>
<td>Schedule</td>
<td>Modified from Org 1</td>
<td>Yes</td>
</tr>
<tr>
<td>Effort</td>
<td>Modified from Org 6</td>
<td>Yes</td>
</tr>
<tr>
<td>Cost</td>
<td>Modified from Org 9</td>
<td>Yes</td>
</tr>
</tbody>
</table>
D.3.6. Evaluation Criteria
Prior to conducting the measurement process evaluation activities, evaluation criteria are established for each of the measures to determine the effectiveness and benefit of the measures previously chosen for implementation. Criteria for evaluating the utility of the analyses and results and for conducting Measurement and Analysis (M&A) activities are established. These items include:
- **Measurement product use** - the extent to which the measure is used
- **Confidence in measurement results** - the level of confidence the user has in the measurement results
- **Measurement results’ fitness for purpose** - the extent to which the measures and indicators provide feedback on the information need they were meant to report on, and the extent to which predictive measures and indicators demonstrate the ability to forecast
- **Understandability of measurement results** - the extent the user can understand and properly interpret the results
- **Satisfaction with the assumptions of an indicator model** - are the assumptions of the indicator satisfactory
- **Measurement accuracy** - was the measure implemented according to its measurement specification, and are the results different from what was intended
- **Measurement reliability** - are the results repeatable and reproducible
The PSM project measurement process is evaluated from three perspectives:
- **Performance** - measuring the inputs, outputs, and effects of the measurement process - performance is evaluated against timeliness, efficiency, defect containment, and customer satisfaction
- **Conformance** - comparing the process to a description of its intended use - each of the measurement process activities of Plan Measurement, Perform Measurement, Evaluate Measurement, and Obtain and Sustain Commitment is audited for compliance
- **Capability** - comparing the process to an external benchmark of process maturity - capability is compared to the Capability Maturity Model, Integrated (CMMI) Level 3 criteria
Additional guidance for evaluating the measures and the measurement process is found in PSM Guidebook Version 4.0b, Part 7.
**D.3.7. Measurement Investment**
A measurement analyst has been identified for the PSM project. Funding has been allocated and provided as part of the PSM project management process tasks. The PSM project manager requires measurement support to implement and evaluate the measurement process. PSM project members who implement the measurement process provide this support as part of their day-to-day activities and tasks, and it is not delineated as a separate line item or WBS element.
**D.3.8. Measurement Activities**
Planned measurement activities and deliverables are aligned with the current PSM Project Plan, scheduled milestones, meetings and reviews, and the other PSM project processes. For example, targeted measurement activities in 2004 included update of service, product, and performance measures prior to the annual PSM Users’ Group Conference.
**D.3.9. Measurement Training**
Any new project personnel assigned to a measurement-related role will require training in the M&A procedure.
**D.4 Description of PSM Project Information needs**
The PSM project prioritized list of information needs is identified in the PSM Measurement Specification. These needs stem from past performance on the PSM project, current activities, organizational and customer needs, and the subsequent assessment and evaluation of risks and issues for this project.
**D.5 PSM Project Measurement Specifications and Indicators**
For each of the prioritized information needs identified, one or more candidate measures and indicators have been identified to provide feedback on each information need in the PSM Measurement Specification.
**D.6 PSM Project Aggregation Structures**
The PSM project uses the WBS aggregation structure. The PSM project’s WBS is defined in “PSM FY01 Potential Tasks and Timeframes.”
D.7 Indicators
Current indicators used on the PSM project are described in the PSM Measurement Specification.
D.8 Reporting
D.8.1. Collecting and Reporting Mechanisms
The project measures are reported to the PSM project manager and within the PSM organization as described in the Methods of Operation document.
D.8.2. PSM Organizational Structure
Reporting mechanisms for the PSM Organizational Structure are discussed in the PSM Methods of Operation.
D.8.3. Contents of Reports
Results from the analysis of measures are compared to their corresponding risk, issues, and information needs. Specific details for the measures are contained within the Measurement Specification. Formats for presentations to the PSM project manager vary according to the review being presented.
D.8.4. Storage and Repository
In performing their day-to-day activities, each PSM project member collects measurement data as described in Part 5, Measurement Specification, and Part 8, Data Collection and Reporting Mechanisms. The data collected is stored in the engineering notebooks and automated tools noted in the measurement specification. The PSM project members, the measurement analyst, and the PSM project manager upload appropriate reports, findings, briefings, and other measurement products as defined in the procedure.
A Fine-Grained Horizontal Scaling Method for Container-Based Cloud
Chunmao Jiang and Peng Wu
School of Computer Science and Information Engineer, Harbin Normal University, Harbin, Heilongjiang 150025, China
Correspondence should be addressed to Peng Wu; 864782389@qq.com
Received 26 September 2021; Revised 5 November 2021; Accepted 9 November 2021; Published 27 November 2021
1. Introduction
The rapid growth of container technology requires effective deployment and management strategies for containerized applications while addressing their runtime adaptability. In addition, the ability of cloud computing to provide resources on demand encourages the development of elastic applications that can accommodate changes in working conditions (e.g., variable workloads). Horizontal elasticity allows increasing (scaling out) and decreasing (scaling in) the number of application instances (e.g., containers) [1]. Most existing horizontal scaling methods explore elasticity that responds quickly to small load changes [2–4]. In this study, we build fine-grained horizontal scaling to cope with sudden load peaks.
As two crucial quantitative metrics, response time and resource utilization are essential measurements under various load variations in dynamic environments [2]. Container-based virtualization technology can improve application performance and resource utilization more efficiently than virtual machines (VMs). Many existing scaling mechanisms employ fixed thresholds based on general cloud platform metrics such as CPU utilization. Such an approach is nevertheless widely used, including by Amazon EC2, a virtual-machine-based cloud platform. However, for applications whose requirements for CPU, memory, and other resources change constantly, performance and resource utilization decrease significantly [5–7].
The adaptation of advanced metrics and dynamic thresholds may respond more finely to fluctuations in the workload, improving application performance and achieving higher resource utilization. Therefore, we hope to develop a new dynamic autoscaling approach that automatically adjusts the thresholds based on the state of the execution environment observed by the monitoring system. In this way, the monitoring information, including infrastructure and application-specific metrics, will help the service provider implement a satisfactory adaptation mechanism for various operational states. Furthermore, fine-grained scalability thresholds and degrees of scalability can better improve resource utilization and better cope with dynamic workload variations.
Therefore, this study aims to develop a new fine-grained dynamic scaling method based on the thought of granular computation. The major contributions of this study are as follows: first, we classify the container scaling events into three categories by establishing two thresholds, i.e., scale-out, neither scale-out nor scale-in, and scale-in. Second, we further subdivide the scaling strength into three levels for the scaling events, i.e., no scaling (to prevent jitter), regular scaling, and fast scaling. Third, the scalability metric applied in this study considers not only CPU utilization but also the growth rate of CPU utilization. We validate the algorithm’s effectiveness by simulation under low load, medium load, and high load scenarios, respectively. The results show that the proposed algorithm in this study can resist high load and jitter well and effectively guarantee the cluster’s quality of service (QoS).
The remainder of this study is organized as follows. Section 2 reviews the horizontal scaling mechanism of container clouds and presents the limitations that currently exist in Kubernetes. In Section 3, we present the DHPA algorithm, a dual-threshold horizontal scaling algorithm, and give a specific example to illustrate its idea and process. Section 4 gives the experimental results and analysis. Finally, Section 5 concludes this study and discusses future work.
2. Related Work
Kubernetes [8–10] offers Horizontal Pod Autoscaler (HPA) [11–13], a built-in horizontal scaling controller, which automatically scales the ReplicaSet controller, deployment controller, or pod quantity based on statistical CPU utilization (or other custom metrics). This section presents the Kubernetes’ horizontal scaling technique, including the acquisition of HPA metrics, how it works, and its limitations.
2.1. Horizontal Pod Autoscaler. HPA is a cyclic control process. The controller manager queries resource utilization during each cycle based on the metrics specified in each Horizontal Pod Autoscaler definition.
The controller manager can retrieve data from the following sources: (1) gather CPU utilization and memory usage data from Heapster, (2) use the Resource Metrics API to collect data from the Metrics Server that contains resource metrics for each pod in the cluster, and (3) the Custom Metrics Adapter provides the data collected by third-party plug-ins such as Prometheus to the Custom Metrics API, which the cluster then uses to fetch the data. In the latest version of Kubernetes, the cluster introduces a new data reporting channel—aggregation layer, an abstract data reporting interface that third-party plug-ins or administrators can use to implement this interface themselves. The approach of HPA to acquire data is shown in Figure 1.
2.2. How HPA Works. HPA polls the resources of each pod every 30 seconds and statistically analyzes changes in the target pod's load to determine whether the number of copies of the target pod needs to be adjusted. HPA uses two approaches to calculate the target number of pods to scale out or scale in.
2.2.1. CPU Utilization Percentage. CPU utilization percentage represents the average CPU utilization of all copies of the current pod. A pod's CPU utilization is the pod's current CPU usage divided by the pod request value [14]. The target number of pods for a scaling operation is calculated by
$$\text{ER} = \text{ceil}\left( \text{cR} \times \frac{\text{CV}}{\text{dV}} \right),$$
where ER (expected replicas) represents the expected number of pods after scaling. The cR (current replicas) represents the number of pods in the current state. The CV (current value) represents the metric currently being observed, such as memory usage, CPU utilization, or HTTP request traffic. The dV (desired value) represents the threshold for scaling up or scaling down, and ceil denotes the ceiling function, which returns the nearest integer greater than or equal to its argument. Suppose the CPU utilization percentage exceeds 80% at a given moment. In that case, the current number of pod copies is likely insufficient to support subsequent requests, and dynamic scaling is required. When the request peak passes, the CPU utilization of the pods drops again, and HPA reduces the number of pod copies to a reasonable level.
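Equation (1) can be sketched in a few lines of Python (a minimal illustration; the function name and example values are ours, not taken from the Kubernetes source):

```python
import math

def expected_replicas(current_replicas: int, current_value: float,
                      desired_value: float) -> int:
    """ER = ceil(cR * (CV / dV)): the HPA target replica count."""
    return math.ceil(current_replicas * (current_value / desired_value))

# 4 pods averaging 90% CPU against an 80% target: scale out to ceil(4.5) = 5.
print(expected_replicas(4, 90.0, 80.0))
```

When CV later falls back below dV, the same formula yields a smaller ER, so the deployment is scaled back in.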
2.2.2. Application-Based Defined Metrics. CPU utilization percentage is implemented by the Heapster plug-in when calculating the CPU usage of the Pod, but adding a plug-in increases the complexity of the system while decreasing the efficiency of HPA’s scaling. Kubernetes supports using custom metrics as metrics starting with version 1.2, which requires the given properties such as the metric units and how the metrics data are obtained. This mechanism is not widely used yet. The HPA control is illustrated in Figure 2.
The workflow of HPA can be summarized as follows. HPA fetches the metrics data in the cluster every 30 seconds. Suppose the fetched metrics exceed the initial threshold. In that case, HPA starts computing the number of target pods, and the HPA controller sends a command to the corresponding controller of the pods (ReplicaSet or Deployment). The controller recycles or scales out pods according to the target number. After the pod operation is completed, the service layer inside Kubernetes automatically performs load balancing for the scaled-up or scaled-down pods. At this point, HPA has completed the entire horizontal scaling operation; the scaling flowchart is shown in Figure 3.
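The control cycle described above can be sketched schematically (this illustrates the shape of the loop only, not the Kubernetes source; the callbacks stand in for the Metrics Server and the ReplicaSet/Deployment controller):

```python
import math
import time

def hpa_cycle(fetch_avg_cpu, get_replicas, set_replicas,
              desired_value: float = 80.0, period_s: float = 30.0,
              iterations: int = 1) -> None:
    """One or more HPA control cycles: poll metrics, compute the
    target count via equation (1), and hand it to the pod controller."""
    for _ in range(iterations):
        cv = fetch_avg_cpu()                      # metrics fetched from the cluster
        cr = get_replicas()
        er = math.ceil(cr * cv / desired_value)   # target replica count
        if er != cr:
            set_replicas(er)                      # controller resizes the pod set
        time.sleep(period_s)                      # 30 s between cycles in HPA

# Simulated cluster state: 4 pods at 90% average CPU against an 80% target.
state = {"replicas": 4}
hpa_cycle(lambda: 90.0,
          lambda: state["replicas"],
          lambda n: state.update(replicas=n),
          period_s=0.0)
print(state["replicas"])  # scaled out to 5
```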
2.3. Limitations Analysis of HPA. By analyzing the Kubernetes source code, we found that the HPA system implementation is relatively simple and has some limitations.
(1) The algorithm HPA uses for expansion and contraction is based on equation (1), which is simple to implement but inflexible. For example, suppose many network requests arrive instantaneously. HPA will scale out, but starting a pod service takes time and resources. If the scale-out is not timely, or the number of new pods is insufficient, the service may crash and even the cluster's security may be threatened.
(2) Due to HPA's antijitter mechanism, the cluster will not be rescaled within 3 minutes after a scale-out, which may leave an expansion inadequate: the number of containers cannot meet subsequent service requests, and the quality of service is severely degraded or even collapses, significantly affecting the user experience and even cluster security. Similarly, no scaling operations occur within 5 minutes after a scale-in; if a traffic peak arrives again during that window, the pod copies are insufficient, eventually leading to degraded quality of service, cluster crashes, and other issues.
(3) HPA fixes the data sampling interval to save resource consumption. The data reporting interval is the same during regular and high-load periods, which seriously limits the cluster's visibility into the overall load during high load. This mechanism prevents the cluster from correctly estimating the current pod load, making capacity expansion prone to being untimely and inadequate.
A summary of the related work is shown in Table 1.
3. Dual-Threshold Horizontal Scaling Algorithm
In this section, we present a dual-threshold-based scaling algorithm (DHPA) and analyze the algorithm through an example.
3.1. The Basic Idea of DHPA. The basic idea of the DHPA algorithm is to divide container scaling into finer granularity by introducing the idea of granular computation. First, one threshold is set for scale-out and one for scale-in, and the two thresholds divide a scaling event into three parts: scale-out, neither scale-out nor scale-in, and scale-in. The scaling strength is also subdivided: no scaling, normal scale-out/scale-in, and fast scale-out/scale-in. This fine-grained division of the container scaling problem is an excellent answer to the problems mentioned above, and the algorithm is implemented in the following steps:
(1) The DHPA algorithm drops the fixed rules of no further expansion within 3 minutes of a scale-out and no scaling within 5 minutes of a scale-in. DHPA replaces the original static antijitter mechanism with dynamic antijitter measures.
(2) The DHPA algorithm dynamically adjusts the reporting time of cluster pod monitoring data, subdivided into three granularities: at low load, the reporting interval is 30 seconds; at medium load, 10 seconds; at high load, data are reported once every second. This mechanism gives the algorithm a better picture of the pod load under different load cases, allowing better control of the system's scaling operations.
(3) The DHPA algorithm dynamically adjusts pod expansion by dividing the expansion situation into three levels. It performs no expansion when the pod load fluctuates little. When the fluctuation is moderate, it performs a regular expansion. If the pod's load fluctuation varies sharply, the algorithm performs a strong expansion to meet the pod's load demand. This reduces the expansion resources wasted on jitter and fully considers expansion under different load conditions.
(4) During capacity reduction, the DHPA algorithm also dynamically adjusts the scale-in range of pods. It can effectively reduce the frequent expand-and-shrink cycles caused by a sudden load increase right after a load drop, and reduce the service crashes caused by the antijitter problem.
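Step (2) above, the load-dependent reporting interval, can be sketched as a small helper (the α/β utilization thresholds here are illustrative assumptions; the paper fixes only the three intervals):

```python
def report_interval_s(cpu_utilization: float,
                      alpha: float = 0.6, beta: float = 0.8) -> int:
    """Map the observed load level to a metrics-reporting interval:
    30 s at low load, 10 s at medium load, 1 s at high load."""
    if cpu_utilization < alpha:
        return 30        # low load: coarse sampling saves resources
    if cpu_utilization < beta:
        return 10        # medium load: start watching more closely
    return 1             # high load: fine-grained sampling for fast reaction

print(report_interval_s(0.3), report_interval_s(0.7), report_interval_s(0.9))
```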
3.2. Scheduling Algorithm
Definition 1. (base threshold). Let $\alpha$ and $\beta$ represent two thresholds, which are used to adjust the capacity of provided pods.
When the DHPA monitors the current pod’s CPU utilization $U$ over $\alpha$, it changes the monitoring time from 30 seconds to 10 seconds and starts the capacity expansion judgment. If the CPU utilization $U$ of the monitored pod exceeds $\beta$, we change the refresh rate to 1 second.
Definition 2. A pod CPU utilization queue $P = \{x_1, x_2, \ldots, x_n\}$ is maintained, where $n$ is customizable and is provisionally set to 3 in this study. The larger the value, the better the antijitter effect, but the more stringent the scaling conditions become.
Definition 3. Two granularity thresholds $\delta$ and $\varepsilon$ are defined, satisfying $0 < \delta < \varepsilon$; the DHPA algorithm uses them to determine the strength of expansion and contraction. We suggest that $\delta$ take 40% of its range of values, while $\varepsilon$ is suggested to be 70% of its range. Developers can determine the most appropriate threshold values by conducting experiments in their own clusters.
Let $\Delta_i = (x_i - x_{i-1})/x_{i-1}$ be the growth rate between two neighboring CPU utilization samples in the queue, and let $\varphi_n = (\Delta_2 + \Delta_3 + \cdots + \Delta_{n-1} + \Delta_n)/n$ be the average growth rate of CPU utilization. The DHPA algorithm addresses the above scaling problem as follows, formulating a scheduling rule for each of the three granularities in the scaling case.
The process of scaling up a container can be outlined as follows. For a given CPU utilization history queue $P = \{x_1, x_2, \ldots, x_n\}$, we first compute each growth rate $\Delta_i$ over $P$. If some $\Delta_i$ is not greater than $\delta$, or some $x_i$ is not greater than $\alpha$, i.e., $\exists \Delta_i < \delta$ or $\exists x_i < \alpha$, the cluster will not be scaled up, because the algorithm regards this as normal jitter of the pod services. If each $\Delta_i$ is greater than $\delta$ but some $\Delta_i$ is not greater than $\varepsilon$, and each utilization $x_i$ in the queue is greater than $\alpha$, i.e., $\forall \Delta_i > \delta$, $\exists \Delta_i < \varepsilon$, and $\forall x_i > \alpha$, the DHPA algorithm determines this as a normal rise in cluster load and performs a normal scale-up; the number of scaled-up pod copies is computed according to the following equation:
$$ER = \text{ceil} \left[ cR \ast \left( \frac{cV}{\alpha} \right) \right].$$
(2)
If the growth rates increase monotonically and each exceeds $\varepsilon$, and each $x_i$ in the queue is greater than $\alpha$, that is, $\varepsilon < \Delta_2 < \Delta_3 < \cdots < \Delta_n$ and $\forall x_i > \alpha$, then the algorithm determines that a traffic peak is about to arrive and adopts emergency expansion. The number of pod copies needed for the expansion is given by the following equation:
$$ER = \text{ceil} \left[ cR \ast \left( \frac{cV}{\alpha} \ast |\varphi_n| \ast 10 \right) \right].$$
(3)
The scaling-up strategy of the DHPA algorithm is summarized in the following equation:
$$\begin{cases}
\text{no expansion}, & \exists \Delta_i < \delta, \ \text{or} \ \exists x_i < \alpha, \\
\text{normal expansion}, & \forall \Delta_i > \delta, \ \text{and} \ \exists \Delta_i < \varepsilon, \ \text{and} \ \forall x_i > \alpha, \\
\text{rapid expansion}, & \varepsilon < \Delta_2 < \Delta_3 < \cdots < \Delta_n, \ \text{and} \ \forall x_i > \alpha.
\end{cases}$$
(4)
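As a sanity check, the three-way scale-up decision can be sketched in Python. The function name, the queue handling, and the use of the latest relative growth rate for $|\varphi_n|$ are assumptions for illustration (the text does not define $\varphi_n$ precisely); cR is the current replica count and cV the current utilization, as in equations (2) and (3).

```python
import math

def scale_up_decision(P, alpha, delta, epsilon, cR):
    """Classify the utilization history queue P and return the action
    plus the expected replica count ER (a sketch of equations (2)-(4))."""
    deltas = [P[i] - P[i - 1] for i in range(1, len(P))]
    cV = P[-1]  # current utilization
    # Normal jitter: some growth rate below delta, or some sample below alpha.
    if any(d < delta for d in deltas) or any(x < alpha for x in P):
        return "none", cR
    # Traffic peak: every growth rate exceeds epsilon while all samples exceed alpha.
    if all(d > epsilon for d in deltas) and all(x > alpha for x in P):
        phi_n = deltas[-1] / P[-2]  # assumed definition of |phi_n|
        return "rapid", math.ceil(cR * (cV / alpha) * abs(phi_n) * 10)
    # Otherwise: a normal load rise, equation (2).
    return "normal", math.ceil(cR * (cV / alpha))
```

For example, a queue of [60, 70, 85] against a 50% target with 2 replicas classifies as a normal expansion to 4 replicas.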
Similarly, we give the following procedure for scaling a container down. If some $\Delta_i$ is greater than $0$, or some $\Delta_i$ is greater than $-\delta$, i.e., $\exists \Delta_i > 0$ or $\exists \Delta_i > -\delta$, the algorithm determines that this is a normal fluctuation of the cluster load and does not perform a scale-down operation. If every $\Delta_i$ is less than $-\delta$ but some $\Delta_i$ is greater than $-\varepsilon$, and each utilization $x_i$ in the queue is less than $\alpha$, i.e., $\forall \Delta_i < -\delta$, $\exists \Delta_i > -\varepsilon$, and $\forall x_i < \alpha$, the algorithm determines that this is a normal cluster
### Table 1: Overview of various HPA for container.
<table>
<thead>
<tr>
<th>Virtualization</th>
<th>Basis</th>
<th>Metrics</th>
<th>Method</th>
<th>Ability</th>
</tr>
</thead>
<tbody>
<tr>
<td>Container</td>
<td>CPU and memory</td>
<td>Time and throughput</td>
<td>Control theory</td>
<td>Dynamic</td>
</tr>
<tr>
<td>Container</td>
<td>CPU</td>
<td>Nothing</td>
<td>Rule-based</td>
<td>Static</td>
</tr>
<tr>
<td>VM and container</td>
<td>CPU and bandwidth</td>
<td>Application throughput</td>
<td>Rule-based</td>
<td>Static</td>
</tr>
<tr>
<td>VM and container</td>
<td>CPU</td>
<td>Nothing</td>
<td>Rule-based</td>
<td>Static</td>
</tr>
<tr>
<td>Container</td>
<td>CPU, memory, and bandwidth</td>
<td>Time and throughput</td>
<td>Rule-based</td>
<td>Dynamic</td>
</tr>
</tbody>
</table>
load drop and performs a normal scaling-down operation, and the number of shrunken pods is calculated according to the following equation:
$$ER = \text{ceil} \left[ cR \ast \left( \frac{cV}{\alpha} \right) \right].$$
(5)
If each \( \Delta_i \) is less than \(-\varepsilon\) and, at the same time, each \( x_i \) in the queue is less than \( \alpha \), the cluster load is dropping quickly, so a rapid scale-down can be performed to save resources; the number of pod copies after scaling down is computed according to the following equation:
$$ER = \text{ceil} \left[ cR \ast \left( \frac{cV}{\alpha} \right) \ast \left| \varphi_n \right| \ast 10 \right].$$
(6)
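The two scale-down formulas reduce to one line of arithmetic each; a minimal sketch, with \( |\varphi_n| \) passed in as a parameter:

```python
import math

def normal_shrink(cR, cV, alpha):
    # Normal scale-down: ER = ceil(cR * (cV / alpha))
    return math.ceil(cR * (cV / alpha))

def rapid_shrink(cR, cV, alpha, phi_n):
    # Rapid scale-down: ER = ceil(cR * (cV / alpha) * |phi_n| * 10)
    return math.ceil(cR * (cV / alpha) * abs(phi_n) * 10)

# e.g., 4 replicas at 25% average utilization against a 50% target:
print(normal_shrink(4, 25, 50))  # → 2
```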
The time complexity of the DHPA algorithm is dominated by the polling step over the cluster load. Suppose \( n \) represents the number of pod copies in the container cluster. During each scaling decision, the algorithm traverses the \( n \) copies in the cluster, so the time complexity of the DHPA algorithm is \( O(n) \). The DHPA algorithm maintains a CPU utilization list and a pod list, each with a number of internal objects bounded by the cluster size, so the space complexity of the DHPA algorithm is also \( O(n) \).
3.3. An Illustrative Example. This section gives an example of the DHPA algorithm. In the example, we use the sin function to simulate the CPU utilization of a set of pods per second, as shown in the following equation:
$$U_t = 200 \times \sin(t),$$
(7)
where \( t \) represents the time in seconds, the entire experiment lasts 180 seconds, \( t = 1, 2, \ldots, 180 \), and the utilization over time is \( U_t = [0, 3, \ldots, 200, \ldots, 3, 0] \), thus simulating the trend of the pod's CPU utilization. Suppose \( \alpha = 50 \) and \( \beta = 70 \); these two basic thresholds are used to dynamically adjust the reporting interval of CPU utilization data. Suppose \( \delta = 0.1 \) and \( \varepsilon = 0.3 \); these two granularity thresholds are used to judge the increase or decrease of the CPU utilization queue and thus the scaling effort. \( RT \) represents the cluster data reporting interval, and \( P = [x_1, x_2, \ldots, x_n] \) represents the queue that holds the CPU utilization history, with \( n = 3 \) in this example.
1. The experiment starts at 1 second, and \( U_t = 3.49 \) according to equation (7). According to the algorithm, the current CPU utilization data reporting interval is \( RT = 30 \). \( U_t \) has not reached \( \alpha \) at this time, and the historical rate of change of the utilization has not reached \( \delta \) or \( \varepsilon \); therefore, neither scaling up nor scaling down is performed.
2. After an interval of 30 seconds, \( U_t = 99.9 \) and \( P = [3.49, 99.9] \). The CPU utilization exceeds \( \alpha \), but the rate of change of the historical CPU utilization has not yet reached \( \delta \) or \( \varepsilon \), so no scale-up is performed. \( RT \) is reduced to 1 because \( U_t = 99.9 > \beta \).
3. At 31 seconds, \( U_t = 103 \) and \( P = [3.49, 99.9, 103] \). The CPU utilization exceeds \( \alpha \), but the rate of change of the historical CPU utilization has still not reached \( \delta \) or \( \varepsilon \), so no scale-up is performed.
4. At 32 seconds, \( P = [99.9, 103, 105] \). The CPU utilization has exceeded \( \alpha \), but the rate of change of the historical CPU utilization has not reached \( \varepsilon \), so a normal expansion is performed. According to equation (2), the number of pod copies after expansion should be 3, and the expansion starts.
5. Since it takes 5 seconds to expand a container, the container is expanded to 3 copies at 42 seconds, so the expansion operation is completed.
6. At 150 seconds, \( U_t = 99 \) and \( P = [105, 103, 99] \). The CPU utilization is still above \( \alpha \), but the rate of change of the CPU utilization is less than 0, so a normal scale-down is performed at this time according to equation (5); the number of pod copies after shrinking should be 2.
7. Since it takes 5 seconds to shrink one container, at 160 seconds the cluster is shrunk to two copies, at which point the scale-down operation is complete.
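The numbers in the walk-through are reproduced if the sin in equation (7) takes its argument in degrees (e.g., \( U_1 = 200 \sin 1^\circ \approx 3.49 \)); a short check:

```python
import math

def u_t(t):
    """Simulated CPU utilization at second t (equation (7)); the
    argument is treated as degrees, which matches the example values."""
    return 200 * math.sin(math.radians(t))

print(round(u_t(1), 2))   # step 1 of the example
print(round(u_t(31)))     # step 3 of the example
```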
4. Experiments and Data Analysis
This section conducts comparative experiments on the DHPA algorithm’s effectiveness in low, medium, and high load cases. The number of containers produced by the DHPA algorithm is compared with the number of containers produced by the HPA algorithm and the number of containers theoretically required to analyze the actual performance of the DHPA algorithm in the three load cases.
The experiments were conducted with a simulator program written in Java. The specific environment was as follows: operating system Windows 10 (version 1909) and JDK 1.8; the data analysis program was written in Python, with Matplotlib 3.1.1 and NumPy 1.16.5 as the analysis tools. In the simulation experiments, the CPU utilization of a single pod was simulated using the sin function as the base data, multiplied by a corresponding factor to simulate CPU utilization under different pressures. Ten experiments were performed for each of the three cases, and the average of the experimental data was taken as the sample value.
4.1. Analysis of Experimental Data under Low Load Conditions. This experiment compares the DHPA algorithm with Kubernetes' own HPA algorithm under low load by simulation, with 4 nodes, 4 CPU cores per node, single-core processing power of 2,252 MIPS, 16 GB of RAM per node, 1 TB of disk capacity, and 1,000 MB/s of bandwidth. In this experiment, the CPU utilization ranges between 0% and 200%. We set the CPU utilization of the pod at every second to
$$U_t = 200 \times \sin(t),$$
(8)
where \( t \) is the number of seconds, the whole experiment lasts 180 seconds, and the initial number of pods is set to 1. Part of the experimental data is shown in Table 2, where the field Time represents the time, BeforeUtil represents the real-time CPU utilization, CalPod represents the theoretically calculated number of pods, RealPod represents the actual number of pods after the algorithm's scaling, AfterUtil represents the calculated CPU utilization after scaling, and IsBreak represents whether the cluster crashes (a single pod crashes if its CPU utilization exceeds 100%).
It can be seen from Table 2 that the DHPA algorithm can perform expansion and contraction operations efficiently at all time points under low load.
The simulated experimental data for HPA are shown in Table 3. The HPA algorithm has a gap between the number of pods and the number of computed pods in most of the time points under low load and cannot promptly perform the scaling operation.
As shown in Figure 4, most of the time the actual number of pods under the HPA algorithm is lower than the number of pods required; this problem is largely due to the HPA algorithm's inadequate prediction at expansion time and its cooldown period after each scaling operation. The DHPA algorithm can expand and contract efficiently after only a short delay, staying very close to the theoretical number of pods needed by the cluster, which shows that the DHPA algorithm has a great advantage over the native HPA algorithm in low load situations.
4.2. Analysis of Experimental Data under Medium Load Conditions. In the medium load experiment, we assume that the number of nodes is 20, each node has 4 CPU cores, the single-core processing power is 2,252 MIPS, node RAM is 16 GB, disk capacity is 1 TB, and bandwidth is 1,000 MB/s. We enlarge the multiplier of the sin function to simulate the CPU utilization of the pod. The experimental CPU utilization ranges between 0% and 1,000%. We set the CPU utilization per second as follows:
\[
U_t = 1000 \times \sin(t),
\]
(9)
where \( t \) is the number of seconds, the entire experiment lasts 360 seconds, the initial pod number is set to 10, and some of the experimental data are shown in Table 4.
There is only a small difference between the number of pods scaled by the DHPA algorithm and the theoretical number of pods under medium load, proving that the DHPA algorithm also performs well under medium load. As shown in Table 5, there is a large gap between the number of pods scaled out by the HPA algorithm under medium load and the total number of pods that are needed.
From Figure 5, it can be seen that the HPA algorithm has a large gap between its number of pods and the actual number of pods needed, so several cluster crashes occurred, which shows that the HPA algorithm has a large defect in scaling up and scaling down under medium load.
4.3. Analysis of Experimental Data under High Load Conditions. In the high load experiment, we assume that the number of nodes is 40, each node has 4 CPU cores, the single-core processing power is 2,252 MIPS, node RAM is 16 GB, disk capacity is 1 TB, and bandwidth is 1,000 MB/s. We enlarge the multiplier of the sin function to simulate the CPU utilization of the pod. The experimental CPU utilization ranges between 0% and 2,000%. We set the CPU utilization per second as follows:
\[
U_t = 2000 \times \sin(t),
\]
(10)
where \( t \) is the number of seconds, the entire experiment lasts 360 seconds, the initial pod number is set to 15, and some of the experimental data are shown in Table 6.
As seen in Table 6, under high load, the DHPA algorithm falls short of the actual number of pods needed because of the container expansion time limit. However, there is a high overlap with the actual number of pods in the overall expansion and contraction trend.
Table 7 shows that the antijitter delay mechanism still constrains the HPA algorithm, and the number of pods scaled out differs significantly from the theoretical number of pods, thus leading to multiple cluster crashes.
As shown in Figure 6, on the one hand, the DHPA algorithm has a lag in the trend of scaling-down capacity compared to the theoretical pod curve, but the overall trend remains consistent. The HPA algorithm, on the other hand, always maintains a lower number of pods, much lower than the actual number of pods needed. Hence, the DHPA algorithm still has a more significant scheduling advantage over HPA in high load situations and can properly schedule the number of containers to ensure the regular operation of the cluster.
Figure 4: Comparison of the DHPA and HPA algorithms.
Table 4: Experimental data of the DHPA algorithm under medium load.
<table>
<thead>
<tr>
<th>Time (s)</th>
<th>BeforeUtil</th>
<th>CalPod</th>
<th>RealPod</th>
<th>AfterUtil</th>
<th>IsBreak</th>
</tr>
</thead>
<tbody>
<tr>
<td>60</td>
<td>866</td>
<td>18</td>
<td>16</td>
<td>54</td>
<td>False</td>
</tr>
<tr>
<td>100</td>
<td>984</td>
<td>20</td>
<td>21</td>
<td>21</td>
<td>False</td>
</tr>
<tr>
<td>160</td>
<td>342</td>
<td>7</td>
<td>11</td>
<td>31</td>
<td>False</td>
</tr>
<tr>
<td>230</td>
<td>766</td>
<td>16</td>
<td>11</td>
<td>69</td>
<td>False</td>
</tr>
<tr>
<td>280</td>
<td>984</td>
<td>20</td>
<td>21</td>
<td>46</td>
<td>False</td>
</tr>
</tbody>
</table>
Table 5: Experimental data of HPA algorithm under medium load case.
<table>
<thead>
<tr>
<th>Time (s)</th>
<th>BeforeUtil</th>
<th>CalPod</th>
<th>RealPod</th>
<th>AfterUtil</th>
<th>IsBreak</th>
</tr>
</thead>
<tbody>
<tr>
<td>60</td>
<td>866</td>
<td>18</td>
<td>9</td>
<td>96</td>
<td>False</td>
</tr>
<tr>
<td>100</td>
<td>984</td>
<td>20</td>
<td>9</td>
<td>109</td>
<td>True</td>
</tr>
<tr>
<td>160</td>
<td>342</td>
<td>7</td>
<td>9</td>
<td>38</td>
<td>False</td>
</tr>
<tr>
<td>230</td>
<td>766</td>
<td>16</td>
<td>4</td>
<td>191</td>
<td>True</td>
</tr>
<tr>
<td>280</td>
<td>984</td>
<td>20</td>
<td>4</td>
<td>246</td>
<td>True</td>
</tr>
</tbody>
</table>
Figure 5: Comparison of the DHPA and HPA algorithms.
Table 6: Experimental data of the DHPA algorithm under high load.
<table>
<thead>
<tr>
<th>Time (s)</th>
<th>BeforeUtil</th>
<th>CalPod</th>
<th>RealPod</th>
<th>AfterUtil</th>
<th>IsBreak</th>
</tr>
</thead>
<tbody>
<tr>
<td>60</td>
<td>1732</td>
<td>35</td>
<td>22</td>
<td>78</td>
<td>False</td>
</tr>
<tr>
<td>100</td>
<td>1969</td>
<td>40</td>
<td>30</td>
<td>65</td>
<td>False</td>
</tr>
<tr>
<td>160</td>
<td>684</td>
<td>14</td>
<td>30</td>
<td>22</td>
<td>False</td>
</tr>
<tr>
<td>230</td>
<td>1509</td>
<td>31</td>
<td>22</td>
<td>68</td>
<td>False</td>
</tr>
<tr>
<td>280</td>
<td>1969</td>
<td>40</td>
<td>32</td>
<td>61</td>
<td>False</td>
</tr>
</tbody>
</table>
Table 7: Experimental data of HPA algorithm under high load conditions.
<table>
<thead>
<tr>
<th>Time (s)</th>
<th>BeforeUtil</th>
<th>CalPod</th>
<th>RealPod</th>
<th>AfterUtil</th>
<th>IsBreak</th>
</tr>
</thead>
<tbody>
<tr>
<td>60</td>
<td>1732</td>
<td>35</td>
<td>10</td>
<td>173</td>
<td>True</td>
</tr>
<tr>
<td>100</td>
<td>1969</td>
<td>40</td>
<td>15</td>
<td>131</td>
<td>True</td>
</tr>
<tr>
<td>160</td>
<td>684</td>
<td>14</td>
<td>15</td>
<td>45</td>
<td>False</td>
</tr>
<tr>
<td>230</td>
<td>1509</td>
<td>31</td>
<td>7</td>
<td>215</td>
<td>True</td>
</tr>
<tr>
<td>280</td>
<td>1969</td>
<td>40</td>
<td>32</td>
<td>281</td>
<td>True</td>
</tr>
</tbody>
</table>
5. Conclusions
For highly dynamic workloads in cloud environments, this study proposes a fine-grained horizontal scaling mechanism that applies dynamic rules to automatically increase or decrease the total number of compute instances to adapt to different workloads. The expansion and contraction operations of the DHPA algorithm are in a dynamic equilibrium state. Because of the time lag of pod expansion and contraction, the queue cannot be updated in real time; each scaling action is placed into the message queue as a single task, so the number of pods dispatched by the algorithm deviates somewhat from the theoretical calculation, but the overall balance remains dynamic.
The original HPA algorithm counts the pods of the entire cluster on every cycle and decides whether to expand or shrink based on the calculated expected pod count, which consumes many system resources. By introducing the idea of granularity computing, the proposed DHPA algorithm instead decides whether to scale by calculating the growth rate of CPU utilization and checking whether the utilization exceeds the threshold. Therefore, the DHPA algorithm traverses all pods in the cluster only after determining that expansion or contraction is needed; if no scaling is needed at that point, no further operations are required, which greatly reduces the performance pressure on the cluster during each poll. Using the two metrics together to control the scaling trigger also gives better stability. The experiments show that the DHPA algorithm has better antijitter performance when scaling containers up and down, ensuring the cluster's quality of service and security. In the future, we will try to extend the proposed approach to multi-instance architectures and high-level service customization.
Data Availability
All data used during the study are available in a repository or online in accordance with funder data retention policies (https://archive.ics.uci.edu/ml/datasets.php and http://cs.uef.fi/sipu/datasets/).
Conflicts of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
This work was supported in part by the Natural Science Foundation of Heilongjiang Province (LH2020F031).
A Description of the Camellia Encryption Algorithm
Status of this Memo
This memo provides information for the Internet community. It does not specify an Internet standard of any kind. Distribution of this memo is unlimited.
Copyright Notice
Copyright (C) The Internet Society (2004). All Rights Reserved.
Abstract
This document describes the Camellia encryption algorithm. Camellia is a block cipher with 128-bit block size and 128-, 192-, and 256-bit keys. The algorithm description is presented together with key scheduling part and data randomizing part.
1. Introduction
1.1. Camellia
Camellia was jointly developed by Nippon Telegraph and Telephone Corporation and Mitsubishi Electric Corporation in 2000 [CamelliaSpec]. Camellia specifies the 128-bit block size and 128-, 192-, and 256-bit key sizes, the same interface as the Advanced Encryption Standard (AES). Camellia is characterized by its suitability for both software and hardware implementations as well as its high level of security. From a practical viewpoint, it is designed to enable flexibility in software and hardware implementations on 32-bit processors widely used over the Internet and many applications, 8-bit processors used in smart cards, cryptographic hardware, embedded systems, and so on [CamelliaTech]. Moreover, its key setup time is excellent, and its key agility is superior to that of AES.
Camellia has been scrutinized by the wide cryptographic community during several projects for evaluating crypto algorithms. In particular, Camellia was selected as a recommended cryptographic primitive by the EU NESSIE (New European Schemes for Signatures, Integrity and Encryption) project [NESSIE] and also included in the list of cryptographic techniques for Japanese e-Government systems which were selected by the Japan CRYPTREC (Cryptography Research and Evaluation Committees) [CRYPTREC].
2. Algorithm Description
Camellia can be divided into "key scheduling part" and "data randomizing part".
2.1. Terminology
The following operators are used in this document to describe the algorithm.
& bitwise AND operation.
| bitwise OR operation.
^ bitwise exclusive-OR operation.
<< logical left shift operation.
>> logical right shift operation.
<<< left rotation operation.
~y bitwise complement of y.
0x hexadecimal representation.
Note that the logical left shift operation is done with the infinite data width.
The constant values of MASK8, MASK32, MASK64, and MASK128 are defined as follows.
```
MASK8 = 0xff;
MASK32 = 0xffffffff;
MASK64 = 0xffffffffffffffff;
MASK128 = 0xffffffffffffffffffffffffffffffff;
```
2.2. Key Scheduling Part
In the key schedule part of Camellia, the 128-bit variables of KL and KR are defined as follows. For 128-bit keys, the 128-bit key K is used as KL and KR is 0. For 192-bit keys, the leftmost 128-bits of key K are used as KL and the concatenation of the rightmost 64-bits of K and the complement of the rightmost 64-bits of K are used as KR. For 256-bit keys, the leftmost 128-bits of key K are used as KL and the rightmost 128-bits of K are used as KR.
128-bit key K:
KL = K; KR = 0;
192-bit key K:
KL = K >> 64;
KR = ((K & MASK64) << 64) | (~(K & MASK64));
256-bit key K:
KL = K >> 128;
KR = K & MASK128;
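As a sketch, the three key-size cases can be written in Python with the key held as an integer (the helper name and the integer representation are illustrative, not part of the specification):

```python
MASK64 = (1 << 64) - 1
MASK128 = (1 << 128) - 1

def derive_kl_kr(key, bits):
    """Derive the 128-bit variables KL and KR from a 128/192/256-bit key."""
    if bits == 128:
        return key, 0
    if bits == 192:
        kl = key >> 64
        # rightmost 64 bits of K, concatenated with their bitwise complement
        kr = (((key & MASK64) << 64) | ((~key) & MASK64)) & MASK128
        return kl, kr
    if bits == 256:
        return key >> 128, key & MASK128
    raise ValueError("key size must be 128, 192, or 256 bits")
```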
The 128-bit variables KA and KB are generated from KL and KR as follows. Note that KB is used only if the length of the secret key is 192 or 256 bits. D1 and D2 are 64-bit temporary variables. F-function is described in Section 2.4.
D1 = (KL ^ KR) >> 64;
D2 = (KL ^ KR) & MASK64;
D2 = D2 ^ F(D1, Sigma1);
D1 = D1 ^ F(D2, Sigma2);
D1 = D1 ^ (KL >> 64);
D2 = D2 ^ (KL & MASK64);
D2 = D2 ^ F(D1, Sigma3);
D1 = D1 ^ F(D2, Sigma4);
KA = (D1 << 64) | D2;
D1 = (KA ^ KR) >> 64;
D2 = (KA ^ KR) & MASK64;
D2 = D2 ^ F(D1, Sigma5);
D1 = D1 ^ F(D2, Sigma6);
KB = (D1 << 64) | D2;
The 64-bit constants Sigma1, Sigma2, ..., Sigma6 are used as "keys" in the F-function. These constant values are, in hexadecimal notation, as follows.
Sigma1 = 0xA09E667F3BCC908B;
Sigma2 = 0xB67AE8584CAA73B2;
Sigma3 = 0xC6EF372FE94F82BE;
Sigma4 = 0x54FF53A5F1D36F1C;
Sigma5 = 0x10E527FADE682D1D;
Sigma6 = 0xB05688C2B3E6C1FD;
64-bit subkeys are generated by rotating KL, KR, KA, and KB and taking the left- or right-half of them.
For 128-bit keys, 64-bit subkeys kw1, ..., kw4, k1, ..., k18, ke1, ..., ke4 are generated as follows.

kw1 = (KL <<< 0) >> 64;
kw2 = (KL <<< 0) & MASK64;
k1 = (KA <<< 0) >> 64;
k2 = (KA <<< 0) & MASK64;
k3 = (KL <<< 15) >> 64;
k4 = (KL <<< 15) & MASK64;
k5 = (KA <<< 15) >> 64;
k6 = (KA <<< 15) & MASK64;
ke1 = (KA <<< 30) >> 64;
ke2 = (KA <<< 30) & MASK64;
k7 = (KL <<< 45) >> 64;
k8 = (KL <<< 45) & MASK64;
k9 = (KA <<< 45) >> 64;
k10 = (KL <<< 60) & MASK64;
k11 = (KA <<< 60) >> 64;
k12 = (KA <<< 60) & MASK64;
ke3 = (KL <<< 77) >> 64;
ke4 = (KL <<< 77) & MASK64;
k13 = (KL <<< 94) >> 64;
k14 = (KL <<< 94) & MASK64;
k15 = (KA <<< 94) >> 64;
k16 = (KA <<< 94) & MASK64;
k17 = (KL <<< 111) >> 64;
k18 = (KL <<< 111) & MASK64;
kw3 = (KA <<< 111) >> 64;
kw4 = (KA <<< 111) & MASK64;
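The pattern above is uniform: rotate a 128-bit variable left and take one half. A Python sketch of the two helpers (the helper names and the sample KL value are illustrative):

```python
MASK64 = (1 << 64) - 1
MASK128 = (1 << 128) - 1

def rotl128(x, n):
    """128-bit left rotation, the <<< operator used above."""
    n %= 128
    return ((x << n) | (x >> (128 - n))) & MASK128

def halves(x):
    """Split a 128-bit value into its left and right 64-bit halves."""
    return x >> 64, x & MASK64

KL = 0x0123456789ABCDEF_FEDCBA9876543210  # arbitrary example value
kw1, kw2 = halves(rotl128(KL, 0))         # kw1 = (KL <<< 0) >> 64; ...
k3, k4 = halves(rotl128(KL, 15))          # k3 = (KL <<< 15) >> 64; ...
```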
For 192- and 256-bit keys, 64-bit subkeys kw1, ..., kw4, k1, ..., k24, ke1, ..., ke6 are generated as follows.

kw1 = (KL <<< 0) >> 64;
kw2 = (KL <<< 0) & MASK64;
k1 = (KB <<< 0) >> 64;
k2 = (KB <<< 0) & MASK64;
k3 = (KR <<< 15) >> 64;
k4 = (KR <<< 15) & MASK64;
k5 = (KA <<< 15) >> 64;
k6 = (KA <<< 15) & MASK64;
ke1 = (KR <<< 30) >> 64;
ke2 = (KR <<< 30) & MASK64;
k7 = (KB <<< 30) >> 64;
k8 = (KB <<< 30) & MASK64;
k9 = (KL <<< 45) >> 64;
k10 = (KL <<< 45) & MASK64;
k11 = (KA <<< 45) >> 64;
k12 = (KA <<< 45) & MASK64;
k13 = (KR <<< 60) >> 64;
k14 = (KR <<< 60) & MASK64;
k15 = (KB <<< 60) >> 64;
k16 = (KB <<< 60) & MASK64;
ke3 = (KL <<< 77) >> 64;
ke4 = (KL <<< 77) & MASK64;
k17 = (KA <<< 77) >> 64;
k18 = (KA <<< 77) & MASK64;
k19 = (KR <<< 94) >> 64;
k20 = (KR <<< 94) & MASK64;
k21 = (KA <<< 94) >> 64;
k22 = (KA <<< 94) & MASK64;
k23 = (KL <<< 111) >> 64;
k24 = (KL <<< 111) & MASK64;
ke5 = (KR <<< 111) >> 64;
ke6 = (KR <<< 111) & MASK64;
kw3 = (KB <<< 111) >> 64;
kw4 = (KB <<< 111) & MASK64;
2.3. Data Randomizing Part
2.3.1. Encryption for 128-bit keys
128-bit plaintext \( M \) is divided into the left 64-bit \( D_1 \) and the right 64-bit \( D_2 \).
\[ D_1 = M \gg 64; \]
\[ D_2 = M \& \text{MASK64}; \]
Encryption is performed using an 18-round Feistel structure with FL- and FLINV-functions inserted every 6 rounds. F-function, FL-function, and FLINV-function are described in Section 2.4.
D1 = D1 ^ kw1; // Prewhitening
D2 = D2 ^ kw2;
D2 = D2 ^ F(D1, k1); // Round 1
D1 = D1 ^ F(D2, k2); // Round 2
D2 = D2 ^ F(D1, k3); // Round 3
D1 = D1 ^ F(D2, k4); // Round 4
D2 = D2 ^ F(D1, k5); // Round 5
D1 = D1 ^ F(D2, k6); // Round 6
D1 = FL (D1, ke1); // FL
D2 = FLINV(D2, ke2); // FLINV
D2 = D2 ^ F(D1, k7); // Round 7
D1 = D1 ^ F(D2, k8); // Round 8
D2 = D2 ^ F(D1, k9); // Round 9
D1 = D1 ^ F(D2, k10); // Round 10
D2 = D2 ^ F(D1, k11); // Round 11
D1 = D1 ^ F(D2, k12); // Round 12
D1 = FL (D1, ke3); // FL
D2 = FLINV(D2, ke4); // FLINV
D2 = D2 ^ F(D1, k13); // Round 13
D1 = D1 ^ F(D2, k14); // Round 14
D2 = D2 ^ F(D1, k15); // Round 15
D1 = D1 ^ F(D2, k16); // Round 16
D2 = D2 ^ F(D1, k17); // Round 17
D1 = D1 ^ F(D2, k18); // Round 18
D2 = D2 ^ kw3; // Postwhitening
D1 = D1 ^ kw4;
128-bit ciphertext C is constructed from D1 and D2 as follows.
C = (D2 << 64) | D1;
2.3.2. Encryption for 192- and 256-bit keys
128-bit plaintext M is divided into the left 64-bit D1 and the right 64-bit D2.
D1 = M >> 64;
D2 = M & MASK64;
Encryption is performed using a 24-round Feistel structure with FL- and FLINV-functions inserted every 6 rounds. F-function, FL-function, and FLINV-function are described in Section 2.4.
D1 = D1 ^ kw1; // Prewhitening
D2 = D2 ^ kw2;
D2 = D2 ^ F(D1, k1); // Round 1
D1 = D1 ^ F(D2, k2); // Round 2
D2 = D2 ^ F(D1, k3); // Round 3
D1 = D1 ^ F(D2, k4); // Round 4
D2 = D2 ^ F(D1, k5); // Round 5
D1 = D1 ^ F(D2, k6); // Round 6
D1 = FL (D1, ke1); // FL
D2 = FLINV(D2, ke2); // FLINV
D2 = D2 ^ F(D1, k7); // Round 7
D1 = D1 ^ F(D2, k8); // Round 8
D2 = D2 ^ F(D1, k9); // Round 9
D1 = D1 ^ F(D2, k10); // Round 10
D2 = D2 ^ F(D1, k11); // Round 11
D1 = D1 ^ F(D2, k12); // Round 12
D1 = FL (D1, ke3); // FL
D2 = FLINV(D2, ke4); // FLINV
D2 = D2 ^ F(D1, k13); // Round 13
D1 = D1 ^ F(D2, k14); // Round 14
D2 = D2 ^ F(D1, k15); // Round 15
D1 = D1 ^ F(D2, k16); // Round 16
D2 = D2 ^ F(D1, k17); // Round 17
D1 = D1 ^ F(D2, k18); // Round 18
D1 = FL (D1, ke5); // FL
D2 = FLINV(D2, ke6); // FLINV
D2 = D2 ^ F(D1, k19); // Round 19
D1 = D1 ^ F(D2, k20); // Round 20
D2 = D2 ^ F(D1, k21); // Round 21
D1 = D1 ^ F(D2, k22); // Round 22
D2 = D2 ^ F(D1, k23); // Round 23
D1 = D1 ^ F(D2, k24); // Round 24
D2 = D2 ^ kw3; // Postwhitening
D1 = D1 ^ kw4;
128-bit ciphertext C is constructed from D1 and D2 as follows.
C = (D2 << 64) | D1;
2.3.3. Decryption
The decryption procedure of Camellia can be done in the same way as the encryption procedure by reversing the order of the subkeys. That is to say:
128-bit key:
kw1 <-> kw3
kw2 <-> kw4
k1 <-> k18
k2 <-> k17
k3 <-> k16
k4 <-> k15
k5 <-> k14
k6 <-> k13
k7 <-> k12
k8 <-> k11
k9 <-> k10
ke1 <-> ke4
ke2 <-> ke3
192- or 256-bit key:
kw1 <-> kw3
kw2 <-> kw4
k1 <-> k24
k2 <-> k23
k3 <-> k22
k4 <-> k21
k5 <-> k20
k6 <-> k19
k7 <-> k18
k8 <-> k17
k9 <-> k16
k10 <-> k15
k11 <-> k14
k12 <-> k13
ke1 <-> ke6
ke2 <-> ke5
ke3 <-> ke4
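The reason this works is a general property of Feistel networks: running the same schedule with swapped whitening keys and a reversed round-key order inverts the cipher. A toy check of that property with the 18-round structure of Section 2.3.1 (FL/FLINV layers omitted and a stand-in F-function, so this is a sketch of the inversion principle, not Camellia itself):

```python
MASK64 = (1 << 64) - 1

def toy_f(d, k):
    # Stand-in for the real F-function; any 64-bit map works here,
    # since a Feistel round only XORs the F output into one half.
    return ((d ^ k) * 0x9E3779B97F4A7C15) & MASK64

def feistel18(m, kw, ks):
    """18 rounds as in Section 2.3.1, without the FL/FLINV layers."""
    d1, d2 = m >> 64, m & MASK64
    d1 ^= kw[0]                # prewhitening
    d2 ^= kw[1]
    for i, k in enumerate(ks):
        if i % 2 == 0:         # odd-numbered rounds update D2
            d2 ^= toy_f(d1, k)
        else:                  # even-numbered rounds update D1
            d1 ^= toy_f(d2, k)
    d2 ^= kw[2]                # postwhitening
    d1 ^= kw[3]
    return (d2 << 64) | d1

kw, ks = [11, 22, 33, 44], list(range(1, 19))
m = 0x0123456789ABCDEF_FEDCBA9876543210
c = feistel18(m, kw, ks)
# Decrypt: kw1<->kw3, kw2<->kw4, and k1..k18 in reverse order.
assert feistel18(c, [kw[2], kw[3], kw[0], kw[1]], ks[::-1]) == m
```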
2.4. Components of Camellia
2.4.1. F-function
F-function takes two parameters. One is 64-bit input data F_IN. The other is 64-bit subkey KE. F-function returns 64-bit data F_OUT.
F(F_IN, KE)
begin
  var x as 64-bit unsigned integer;
  var t1, t2, t3, t4, t5, t6, t7, t8 as 8-bit unsigned integer;
  var y1, y2, y3, y4, y5, y6, y7, y8 as 8-bit unsigned integer;
  x = F_IN ^ KE;
  t1 = x >> 56;
  t2 = (x >> 48) & MASK8;
  t3 = (x >> 40) & MASK8;
  t4 = (x >> 32) & MASK8;
  t5 = (x >> 24) & MASK8;
  t6 = (x >> 16) & MASK8;
  t7 = (x >> 8) & MASK8;
  t8 = x & MASK8;
  t1 = SBOX1[t1];
  t2 = SBOX2[t2];
  t3 = SBOX3[t3];
  t4 = SBOX4[t4];
  t5 = SBOX2[t5];
  t6 = SBOX3[t6];
  t7 = SBOX4[t7];
  t8 = SBOX1[t8];
  y1 = t1 ^ t3 ^ t4 ^ t6 ^ t7 ^ t8;
  y2 = t1 ^ t2 ^ t4 ^ t5 ^ t7 ^ t8;
  y3 = t1 ^ t2 ^ t3 ^ t5 ^ t6 ^ t8;
  y4 = t2 ^ t3 ^ t4 ^ t5 ^ t6 ^ t7;
  y5 = t1 ^ t2 ^ t6 ^ t7 ^ t8;
  y6 = t2 ^ t3 ^ t5 ^ t7 ^ t8;
  y7 = t3 ^ t4 ^ t5 ^ t6 ^ t8;
  y8 = t1 ^ t4 ^ t5 ^ t6 ^ t7;
  F_OUT = (y1 << 56) | (y2 << 48) | (y3 << 40) | (y4 << 32)
        | (y5 << 24) | (y6 << 16) | (y7 << 8) | y8;
  return F_OUT;
end.
SBOX1, SBOX2, SBOX3, and SBOX4 are lookup tables with 8-bit input/output data. SBOX2, SBOX3, and SBOX4 are defined using SBOX1 as follows:
SBOX2[x] = SBOX1[x] <<< 1;
SBOX3[x] = SBOX1[x] <<< 7;
SBOX4[x] = SBOX1[x <<< 1];
SBOX1 is defined by the following table. For example, SBOX1[0x3d] equals 86.
SBOX1 (row labels give the high nibble of the input, columns 0..f the low nibble):
      0   1   2   3   4   5   6   7   8   9   a   b   c   d   e   f
00: 112 130 44 236 179 39 192 229 228 133 87 53 234 12 174 65
10: 35 239 107 147 69 25 165 33 237 14 79 78 29 101 146 189
20: 134 184 175 143 124 235 31 206 62 48 220 95 94 197 11 26
30: 166 225 57 202 213 71 93 61 217 1 90 214 81 86 108 77
40: 139 13 154 102 251 204 176 45 116 18 43 32 240 177 132 153
50: 223 76 203 194 52 126 118 5 109 183 169 49 209 23 4 215
60: 20 88 58 97 222 27 17 28 50 15 156 22 83 24 242 34
70: 254 68 207 178 195 181 122 145 36 8 232 168 96 252 105 80
80: 170 208 160 125 161 137 98 151 84 91 30 149 224 255 100 210
90: 16 196 0 72 163 247 117 219 138 3 230 218 9 63 221 148
a0: 135 92 131 2 205 74 144 51 115 103 246 243 157 127 191 226
b0: 82 155 216 38 200 55 198 59 129 150 111 75 19 190 99 46
c0: 233 121 167 140 159 110 188 142 41 245 249 182 47 253 180 89
d0: 120 152 6 106 231 70 113 186 212 37 171 66 136 162 141 250
e0: 114 7 185 85 248 238 172 10 54 73 42 104 60 56 241 164
f0: 64 40 211 123 187 201 67 193 21 227 173 244 119 199 128 158
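The rotation rules for the derived S-boxes can be checked mechanically. A short Python sketch (the helper name `rotl8` is ours; the sample value SBOX1[0x00] = 112 comes from the table above):

```python
def rotl8(x, n):
    """Rotate an 8-bit value left by n bits (n in 1..7)."""
    return ((x << n) | (x >> (8 - n))) & 0xff

def sbox2_from_sbox1(sbox1, x):
    # SBOX2[x] = SBOX1[x] <<< 1  -- the *output* byte is rotated
    return rotl8(sbox1[x], 1)

def sbox3_from_sbox1(sbox1, x):
    # SBOX3[x] = SBOX1[x] <<< 7
    return rotl8(sbox1[x], 7)

def sbox4_from_sbox1(sbox1, x):
    # SBOX4[x] = SBOX1[x <<< 1]  -- here the *input* index is rotated
    return sbox1[rotl8(x, 1)]
```

With SBOX1[0x00] = 112 (binary 01110000), this gives SBOX2[0x00] = 224 and SBOX3[0x00] = 56.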
2.4.2. FL- and FLINV-functions
FL-function takes two parameters. One is 64-bit input data FL_IN. The other is 64-bit subkey KE. FL-function returns 64-bit data FL_OUT.
FL(FL_IN, KE)
begin
var x1, x2 as 32-bit unsigned integer;
var k1, k2 as 32-bit unsigned integer;
x1 = FL_IN >> 32;
x2 = FL_IN & MASK32;
k1 = KE >> 32;
k2 = KE & MASK32;
x2 = x2 ^ ((x1 & k1) <<< 1);
x1 = x1 ^ (x2 | k2);
FL_OUT = (x1 << 32) | x2;
end.
FLINV-function is the inverse function of the FL-function.
FLINV(FLINV_IN, KE)
begin
var y1, y2 as 32-bit unsigned integer;
var k1, k2 as 32-bit unsigned integer;
y1 = FLINV_IN >> 32;
y2 = FLINV_IN & MASK32;
k1 = KE >> 32;
k2 = KE & MASK32;
y1 = y1 ^ (y2 | k2);
y2 = y2 ^ ((y1 & k1) <<< 1);
FLINV_OUT = (y1 << 32) | y2;
end.
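Since FL and FLINV use no S-boxes, they transcribe directly. A Python sketch of both (a direct reading of the pseudocode above); note that FLINV simply applies the two update steps in reverse order, which is why FLINV(FL(x, KE), KE) = x:

```python
MASK32 = 0xffffffff

def rotl32(x, n):
    """Rotate a 32-bit value left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK32

def fl(fl_in, ke):
    """FL-function: 64-bit input and 64-bit subkey -> 64-bit output."""
    x1, x2 = fl_in >> 32, fl_in & MASK32
    k1, k2 = ke >> 32, ke & MASK32
    x2 ^= rotl32(x1 & k1, 1)
    x1 ^= (x2 | k2)
    return (x1 << 32) | x2

def flinv(flinv_in, ke):
    """Inverse of FL: undoes the two steps of fl() in reverse order."""
    y1, y2 = flinv_in >> 32, flinv_in & MASK32
    k1, k2 = ke >> 32, ke & MASK32
    y1 ^= (y2 | k2)
    y2 ^= rotl32(y1 & k1, 1)
    return (y1 << 32) | y2
```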
3. Object Identifiers
The Object Identifier for Camellia with 128-bit key in Cipher Block Chaining (CBC) mode is as follows:
```
id-camellia128-cbc OBJECT IDENTIFIER ::=
{ iso(1) member-body(2) 392 200011 61 security(1)
algorithm(1) symmetric-encryption-algorithm(1)
camellia128-cbc(2) }
```
The Object Identifier for Camellia with 192-bit key in Cipher Block Chaining (CBC) mode is as follows:
```
id-camellia192-cbc OBJECT IDENTIFIER ::=
{ iso(1) member-body(2) 392 200011 61 security(1)
algorithm(1) symmetric-encryption-algorithm(1)
camellia192-cbc(3) }
```
The Object Identifier for Camellia with 256-bit key in Cipher Block Chaining (CBC) mode is as follows:
```
id-camellia256-cbc OBJECT IDENTIFIER ::=
{ iso(1) member-body(2) 392 200011 61 security(1)
algorithm(1) symmetric-encryption-algorithm(1)
camellia256-cbc(4) }
```
The above algorithms need Initialization Vector (IV). To determine the value of IV, the above algorithms take parameters as follows:
CamelliaCBCParameter ::= CamelliaIV -- Initialization Vector
CamelliaIV ::= OCTET STRING (SIZE(16))
When these object identifiers are used, plaintext is padded before encryption according to RFC2315 [RFC2315].
4. Security Considerations
The recent advances in cryptanalytic techniques are remarkable. A quantitative evaluation of security against powerful cryptanalytic techniques such as differential cryptanalysis and linear cryptanalysis is considered to be essential in designing any new block cipher. We evaluated the security of Camellia by utilizing state-of-the-art cryptanalytic techniques. We confirmed that Camellia has no differential and linear characteristics that hold with probability more than $2^{-128}$, which means that it is extremely unlikely that differential and linear attacks will succeed against the full 18-round Camellia. Moreover, Camellia was designed to offer security against other advanced cryptanalytic attacks including higher order differential attacks, interpolation attacks, related-key attacks, truncated differential attacks, and so on [Camellia].
5. Informative References
[Camellia]  Aoki, K., Ichikawa, T., Kanda, M., Matsui, M., Moriai, S., Nakajima, J., and T. Tokita, "Camellia: A 128-Bit Block Cipher Suitable for Multiple Platforms - Design and Analysis -", Selected Areas in Cryptography (SAC 2000), 2000. http://info.isl.ntt.co.jp/camellia/
[RFC2315]   Kaliski, B., "PKCS #7: Cryptographic Message Syntax Version 1.5", RFC 2315, March 1998.
Appendix A. Example Data of Camellia
Here are test data for Camellia in hexadecimal form.
128-bit key
Key : 01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Plaintext: 01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Ciphertext: 67 67 31 38 54 96 69 73 08 57 06 56 48 ea be 43
192-bit key
Key : 01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
: 00 11 22 33 44 55 66 77
Plaintext: 01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Ciphertext: b4 99 34 01 b3 e9 96 f8 4e e5 ce e7 d7 9b 09 b9
256-bit key
Key : 01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
: 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff
Plaintext: 01 23 45 67 89 ab cd ef fe dc ba 98 76 54 32 10
Ciphertext: 9a cc 23 7d ff 16 d7 6c 20 ef 7c 91 9e 3a 75 09
Acknowledgements
Shiho Moriai worked for NTT when this document was developed.
Authors’ Addresses
Mitsuru Matsui
Mitsubishi Electric Corporation
Information Technology R&D Center
5-1-1 Ofuna, Kamakura
Kanagawa 247-8501, Japan
Phone: +81-467-41-2190
Fax: +81-467-41-2185
EMail: matsui@iss.isl.melco.co.jp
Junko Nakajima
Mitsubishi Electric Corporation
Information Technology R&D Center
5-1-1 Ofuna, Kamakura
Kanagawa 247-8501, Japan
Phone: +81-467-41-2190
Fax: +81-467-41-2185
EMail: june15@iss.isl.melco.co.jp
Shiho Moriai
Sony Computer Entertainment Inc.
Phone: +81-3-6438-7523
Fax: +81-3-6438-8629
EMail: shiho@rd.scei.sony.co.jp
camellia@isl.ntt.co.jp (Camellia team)
Full Copyright Statement
Copyright (C) The Internet Society (2004). This document is subject to the rights, licenses and restrictions contained in BCP 78 and except as set forth therein, the authors retain all their rights.
This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Intellectual Property
The IETF takes no position regarding the validity or scope of any Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in this document or the extent to which any license under such rights might or might not be available; nor does it represent that it has made any independent effort to identify any such rights. Information on the procedures with respect to rights in RFC documents can be found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any assurances of licenses to be made available, or the result of an attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any copyrights, patents or patent applications, or other proprietary rights that may cover technology that may be required to implement this standard. Please address the information to the IETF at ietf-ipr@ietf.org.
Acknowledgement
Funding for the RFC Editor function is currently provided by the Internet Society.
Lecture 6:
Query Execution and Optimization
Parallel Data processing
Announcements
• HW5 due today
• HW6 released
– Please start early! You need to apply for credits from Amazon
• Two lectures this week (tonight and Thurs)
– Query optimization
– Parallel data processing
– Conceptual design
• No reading assignment for conceptual design
• OH change this week to Thursday
Query Execution and Optimization
Class overview
• Data models
– Relational: SQL, RA, and Datalog
– NoSQL: SQL++
• RDBMS internals
– Query processing and optimization
– Physical design
• Parallel query processing
– Spark and Hadoop
• Conceptual design
– E/R diagrams
– Schema normalization
• Transactions
– Locking and schedules
– Writing DB applications
Data models
Query Processing
Using DBMS
Query Evaluation Steps Review
1. Parse & Rewrite Query
2. Select Logical Plan
3. Select Physical Plan
4. Query Execution
Implementing Query Operators with the Iterator Interface
interface Operator {
// initializes operator state
// and sets parameters
void open (...);
// calls next() on its inputs
// processes an input tuple
// produces output tuple(s)
// returns null when done
Tuple next () ;
// cleans up (if any)
void close () ;
}
class Select implements Operator {
void open (Predicate p, Operator child) {
this.p = p; this.child = child;
}
Tuple next () {
boolean found = false;
Tuple r = null;
while (!found) {
r = child.next();
if (r == null) break;
found = p(r);
}
return r;
}
void close () { child.close(); }
}
Example “on the fly” selection operator
Implementing Query Operators with the Iterator Interface
```java
interface Operator {
// initializes operator state
// and sets parameters
void open (...);
// calls next() on its inputs
// processes an input tuple
// produces output tuple(s)
// returns null when done
Tuple next ();
// cleans up (if any)
void close ();
}
Query plan execution
Operator q = parse("SELECT ...");
q = optimize(q);
q.open();
while (true) {
Tuple t = q.next();
if (t == null) break;
else printOnScreen(t);
}
q.close();
```
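The same open/next/close contract can be sketched outside Java. A toy Python rendering (operator and helper names are illustrative, not from the lecture):

```python
class Scan:
    """Leaf operator: iterates over an in-memory list of tuples."""
    def __init__(self, tuples):
        self.tuples = tuples
    def open(self):
        self.i = 0
    def next(self):
        if self.i >= len(self.tuples):
            return None           # signal "done", like null in the Java version
        t = self.tuples[self.i]
        self.i += 1
        return t
    def close(self):
        pass

class Select:
    """On-the-fly selection: pulls from its child until the predicate holds."""
    def __init__(self, p, child):
        self.p, self.child = p, child
    def open(self):
        self.child.open()
    def next(self):
        while True:
            t = self.child.next()
            if t is None or self.p(t):
                return t
    def close(self):
        self.child.close()

def run(q):
    """The driver loop from the slide: open, pull until None, close."""
    q.open()
    out = []
    while (t := q.next()) is not None:
        out.append(t)
    q.close()
    return out
```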
Pipelining

Supplier(sid, sname, scity, sstate)
Supply(sid, pno, quantity)

[Query plan diagram: π_sname (on the fly) over σ_{scity='Seattle' and sstate='WA' and pno=2} (on the fly) over a nested loop join on sid = sid, with Suppliers (file scan) and Supplies (file scan) as inputs]

Discuss: open/next/close for nested loop join
Recall: Physical Data Independence
- Applications are insulated from changes in physical storage details
- SQL and relational algebra facilitate physical data independence
- Both languages input and output relations
- Can choose different implementations for operators
Class overview
• Data models
– Relational: SQL, RA, and Datalog
– NoSQL: SQL++
• RDBMS internals
– Query processing and optimization
– Physical design
• Parallel query processing
– Spark and Hadoop
• Conceptual design
– E/R diagrams
– Schema normalization
• Transactions
– Locking and schedules
– Writing DB applications
Hash table example
Index **Student_ID** on **Student.ID**
Data File **Student**
<table>
<thead>
<tr>
<th>ID</th>
<th>fName</th>
<th>lName</th>
</tr>
</thead>
<tbody>
<tr>
<td>10</td>
<td>Tom</td>
<td>Hanks</td>
</tr>
<tr>
<td>20</td>
<td>Amy</td>
<td>Hanks</td>
</tr>
</tbody>
</table>
*Index File (in memory)*
*Data file (on disk)*
CSEP 544 - Fall 2017
B+ Tree Index by Example
d = 2
Find the key 40
Basic Index Selection Guidelines
• Consider queries in workload in order of importance
• Consider relations accessed by query
– No point indexing other relations
• Look at WHERE clause for possible search key
• Try to choose indexes that speed-up multiple queries
Cost of Reading Data From Disk
Cost Parameters
- Cost = I/O + CPU + Network BW
- We will focus on I/O in this class
- Parameters:
- $B(R) = \# \text{ of blocks (i.e., pages) for relation } R$
- $T(R) = \# \text{ of tuples in relation } R$
- $V(R, a) = \# \text{ of distinct values of attribute } a$
- When $a$ is a key, $V(R, a) = T(R)$
- When $a$ is not a key, $V(R, a)$ can be anything $\leq T(R)$
- Where do these values come from?
- DBMS collects statistics about data on disk
Selectivity Factors for Conditions
• $A = c$ /* $\sigma_{A=c}(R)$ */
– Selectivity = $1/V(R,A)$
• $A < c$ /* $\sigma_{A<c}(R)$ */
– Selectivity = $(c - \min(R, A))/\left(\max(R,A) - \min(R,A)\right)$
• $c_1 < A < c_2$ /* $\sigma_{c_1<A<c_2}(R)$ */
– Selectivity = $(c_2 - c_1)/\left(\max(R,A) - \min(R,A)\right)$
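The three selectivity rules translate into one-line estimators. A sketch (function names are ours; the estimated output cardinality is selectivity times T(R)):

```python
def sel_eq(V_RA):
    # sigma_{A=c}(R): selectivity = 1 / V(R, A)
    return 1.0 / V_RA

def sel_lt(c, lo, hi):
    # sigma_{A<c}(R): (c - min(R,A)) / (max(R,A) - min(R,A))
    return (c - lo) / (hi - lo)

def sel_range(c1, c2, lo, hi):
    # sigma_{c1<A<c2}(R): (c2 - c1) / (max(R,A) - min(R,A))
    return (c2 - c1) / (hi - lo)

def est_tuples(T_R, sel):
    # estimated output cardinality
    return T_R * sel
```

For example, with the statistics used later in this lecture (T(Supply) = 10000, V(Supply, pno) = 2500), σ_{pno=2}(Supply) is estimated at 4 tuples.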
Cost of Executing Operators (Focus on Joins)
Join Algorithms
- Hash join
- Nested loop join
- Sort-merge join
Hash Join
Hash join: \( R \bowtie S \)
- Scan \( R \), build buckets in main memory
- Then scan \( S \) and join
- Cost: \( B(R) + B(S) \)
- Which relation to build the hash table on?
- One-pass algorithm when \( B(R) \leq M \)
- \( M = \) number of memory pages available
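The two scans of the one-pass hash join can be sketched in a few lines of Python (relation = list of dicts; names are illustrative):

```python
from collections import defaultdict

def hash_join(R, S, key_r, key_s):
    """One-pass hash join: build on R (the smaller relation), probe with S."""
    buckets = defaultdict(list)
    for r in R:                      # scan R, build in-memory buckets
        buckets[r[key_r]].append(r)
    out = []
    for s in S:                      # scan S, probe the buckets
        for r in buckets.get(s[key_s], []):
            out.append({**r, **s})   # concatenate matching tuples
    return out
```

Building on the smaller relation is what makes this a one-pass algorithm when B(R) ≤ M.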
Nested Loop Joins
• Tuple-based nested loop $R \bowtie S$
• $R$ is the outer relation, $S$ is the inner relation
\[
\begin{align*}
\text{for each tuple } t_1 \text{ in } R & \text{ do} \\
& \quad \text{for each tuple } t_2 \text{ in } S \text{ do} \\
& \quad \quad \text{if } t_1 \text{ and } t_2 \text{ join then output } (t_1, t_2)
\end{align*}
\]
• Cost: $B(R) + T(R)B(S)$
• Multiple-pass since $S$ is read many times
Block-Nested-Loop Refinement
for each group of M-1 pages r in R do
for each page of tuples s in S do
for all pairs of tuples t₁ in r, t₂ in s
if t₁ and t₂ join then output (t₁, t₂)
• Cost: B(R) + B(R)B(S)/(M-1)
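The two nested-loop cost formulas above can be packaged as small helpers (a sketch, using the slide formulas verbatim):

```python
def tuple_nlj_cost(B_R, T_R, B_S):
    # tuple-based nested loop: B(R) + T(R) * B(S)
    return B_R + T_R * B_S

def block_nlj_cost(B_R, B_S, M):
    # block nested loop: B(R) + B(R) * B(S) / (M - 1)
    return B_R + B_R * B_S // (M - 1)
```

With the statistics used in this lecture (B(R) = B(S) = 100, T(R) = 1000, M = 11), the block refinement drops the cost from 100100 to 1100 I/Os.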
Sort-Merge Join
Sort-merge join: \( R \bowtie S \)
- Scan \( R \) and sort in main memory
- Scan \( S \) and sort in main memory
- Merge \( R \) and \( S \)
- Cost: \( B(R) + B(S) \)
- One pass algorithm when \( B(S) + B(R) \leq M \)
- Typically, this is NOT a one pass algorithm
Index Nested Loop Join
\[ R \bowtie S \]
- Assume S has an index on the join attribute
- Iterate over R, for each tuple fetch corresponding tuple(s) from S
- **Cost:**
- If index on S is clustered:
\[ B(R) + T(R) \times (B(S) \times \frac{1}{V(S,a)}) \]
- If index on S is unclustered:
\[ B(R) + T(R) \times (T(S) \times \frac{1}{V(S,a)}) \]
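The clustered/unclustered distinction changes only the per-lookup term. A sketch of both cases (one helper of our own naming):

```python
def index_nlj_cost(B_R, T_R, B_S, T_S, V_Sa, clustered):
    """Index nested loop join cost, per the two cases above.
    Clustered index on S.a:   B(R) + T(R) * B(S) / V(S,a)
    Unclustered index on S.a: B(R) + T(R) * T(S) / V(S,a)
    """
    per_lookup = (B_S if clustered else T_S) / V_Sa
    return B_R + T_R * per_lookup
```

With B(R) = 100, T(R) = 1000, B(S) = 100, T(S) = 10000, V(S, a) = 2500: a clustered index costs about 140 I/Os, an unclustered one 4100.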
Cost of Query Plans

Logical Query Plan 1

SELECT sname
FROM Supplier x, Supply y
WHERE x.sid = y.sid
  and y.pno = 2
  and x.scity = 'Seattle'
  and x.sstate = 'WA'

[Plan diagram: π_sname over σ_{pno=2 and scity='Seattle' and sstate='WA'} over Supplier ⋈_{sid=sid} Supply; the join produces T = 10000 tuples, the selection leaves T < 1]

Statistics:
T(Supplier) = 1000    B(Supplier) = 100
T(Supply) = 10000     B(Supply) = 100
V(Supply, pno) = 2500
V(Supplier, scity) = 20
V(Supplier, sstate) = 10
M = 11
Logical Query Plan 2

SELECT sname
FROM Supplier x, Supply y
WHERE x.sid = y.sid
  and y.pno = 2
  and x.scity = 'Seattle'
  and x.sstate = 'WA'

[Plan diagram: the selections σ_{pno=2}(Supply) and σ_{scity='Seattle' and sstate='WA'}(Supplier) are pushed below the join ⋈_{sid=sid}, followed by π_sname]

A naive estimate of the intermediate result sizes here can be very wrong. Why?

Statistics:
T(Supplier) = 1000    B(Supplier) = 100
T(Supply) = 10000     B(Supply) = 100
V(Supply, pno) = 2500
V(Supplier, scity) = 20
V(Supplier, sstate) = 10
M = 11

Supplier(sid, sname, scity, sstate)
Supply(sid, pno, quantity)
Physical Plan 1

Supplier(sid, sname, scity, sstate)
Supply(sid, pno, quantity)

[Plan diagram: π_sname and σ_{pno=2 and scity='Seattle' and sstate='WA'} on the fly, over a block nested loop join Supplier ⋈_{sid=sid} Supply, both relations read by file scan]

Block nested loop join
Total cost: 100 + 100 × 100 / 10 = 1100

Statistics: T(Supplier) = 1000, B(Supplier) = 100, T(Supply) = 10000, B(Supply) = 100, V(Supply, pno) = 2500, V(Supplier, scity) = 20, V(Supplier, sstate) = 10, M = 11
Physical Plan 2

Supplier(sid, sname, scity, sstate)
Supply(sid, pno, quantity)

[Plan diagram: σ_{pno=2} on Supply via an unclustered index on Supply(pno); σ_{scity='Seattle' and sstate='WA'} on Supplier via an unclustered index on Supplier(scity); the two small results are joined, then π_sname]

Cost of Supply(pno) = 4
Cost of Supplier(scity) = 50
Total cost: 54

Statistics: T(Supplier) = 1000, B(Supplier) = 100, T(Supply) = 10000, B(Supply) = 100, V(Supply, pno) = 2500, V(Supplier, scity) = 20, V(Supplier, sstate) = 10, M = 11
Physical Plan 3

Supplier(sid, sname, scity, sstate)
Supply(sid, pno, quantity)

[Plan diagram: σ_{pno=2} on Supply via an unclustered index lookup on Supply(pno); the matching tuples feed a clustered index join on sid into Supplier; σ_{scity='Seattle' and sstate='WA'} and π_sname applied on the fly]

Cost of Supply(pno) = 4
Cost of index join = 4
Total cost: 8

Statistics: T(Supplier) = 1000, B(Supplier) = 100, T(Supply) = 10000, B(Supply) = 100, V(Supply, pno) = 2500, V(Supplier, scity) = 20, V(Supplier, sstate) = 10, M = 11
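The three totals follow from the statistics. A sketch reproducing the arithmetic (the per-lookup assumptions — one page read per matching tuple for an unclustered index, one page per probe for the clustered index join — are our reading of the slides):

```python
# Statistics from the slides
T_Supplier, B_Supplier = 1000, 100
T_Supply, B_Supply = 10000, 100
V_Supply_pno, V_Supplier_scity = 2500, 20
M = 11

# Plan 1: block nested loop join (Supplier outer), selections on the fly
plan1 = B_Supplier + B_Supplier * B_Supply // (M - 1)

# Plan 2: unclustered index lookups on Supply(pno) and Supplier(scity);
# one page read per matching tuple, join done in memory
matching_supply = T_Supply // V_Supply_pno          # 4 tuples with pno = 2
matching_supplier = T_Supplier // V_Supplier_scity  # 50 tuples with scity = 'Seattle'
plan2 = matching_supply + matching_supplier

# Plan 3: same Supply(pno) lookup, then a clustered index join on
# Supplier.sid -- one page per probe for each of the 4 outer tuples
plan3 = matching_supply + matching_supply * 1
```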
Query Optimizer
```java
lowestCost = INFINITY;
bestPlan = null;
for (p : physicalPlans(q)) {
    if (cost(p) < lowestCost) {
        lowestCost = cost(p);  // remember the cheapest cost seen so far
        bestPlan = p;
    }
}
return bestPlan;
```
• This never works
• Way too many plans to consider!
• Typical query optimizer:
• Construct logical plan \( p \)
• Apply heuristic rules to transform \( p \)
(e.g., do selection as early as possible)
• Go through each operator \( \text{op} \) in bottom up manner
• Choose an implementation for \( \text{op} \) to construct the physical plan
(why does this not always return the best plan?)
The System R Optimizer
A Case Study
Two Types of Plan Enumeration Algorithms
- Dynamic programming
- Based on System R (aka Selinger) style optimizer [1979]
- Limited to joins: *join reordering algorithm*
- Bottom-up
- Rule-based algorithm *(will not discuss)*
- Database of rules (=algebraic laws)
- Usually: dynamic programming
- Usually: *top-down*
System R Search Space
- Only left-deep plans
- Enable dynamic programming for enumeration
- Facilitate tuple pipelining from outer relation
- Consider plans with all “interesting orders”
- Perform cross-products after all other joins (heuristic)
- Only consider nested loop & sort-merge joins
- Consider both file scan and indexes
- Try to evaluate predicates early
Plan Enumeration Algorithm
• Idea: use dynamic programming
• For each subset of \{R_1, \ldots, R_n\}, compute the best plan for that subset
• In increasing order of set cardinality:
– Step 1: for \{R_1\}, \{R_2\}, \ldots, \{R_n\}
– Step 2: for \{R_1,R_2\}, \{R_1,R_3\}, \ldots, \{R_{n-1}, R_n\}
– ...
– Step n: for \{R_1, \ldots, R_n\}
• It is a bottom-up strategy
• A subset of \{R_1, \ldots, R_n\} is also called a subquery
Dynamic Programming Algo.
• For each subquery $Q \subseteq \{R_1, \ldots, R_n\}$ compute the following:
– Size($Q$)
– A best plan for $Q$: Plan($Q$)
– The cost of that plan: Cost($Q$)
Dynamic Programming Algo.
• **Step 1:** Enumerate all single-relation plans
– Consider selections on attributes of relation
– Consider all possible access paths
– Consider attributes that are not needed
– Compute cost for each plan
– Keep cheapest plan per “interesting” output order
Dynamic Programming Algo.
• **Step 2:** Generate all two-relation plans
- For each single-relation plan from step 1
- Consider that plan as outer relation
- Consider every other relation as inner relation
- Compute cost for each plan
- Keep cheapest plan per “interesting” output order
Dynamic Programming Algo.
• **Step 3**: Generate all three-relation plans
- For each two-relation plan from step 2
- Consider that plan as outer relation
- Consider every other relation as inner relation
- Compute cost for each plan
- Keep cheapest plan per “interesting” output order
• **Steps 4 through n**: repeat until plan contains all the relations in the query
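The bottom-up enumeration of steps 1 through n can be sketched as a dynamic program over subsets. This toy version considers only left-deep plans and ignores interesting orders; the cost functions are caller-supplied stand-ins for the optimizer's estimates:

```python
from itertools import combinations

def best_join_order(relations, scan_cost, join_cost):
    """Selinger-style DP: best[S] holds (cost, left-deep join order) for
    each subquery S. scan_cost(R) estimates the best single-relation plan;
    join_cost(plan_cost, R) estimates joining R as the inner relation."""
    best = {frozenset([R]): (scan_cost(R), (R,)) for R in relations}
    for k in range(2, len(relations) + 1):          # increasing cardinality
        for subset in combinations(relations, k):
            key, choices = frozenset(subset), []
            for R in subset:                        # R joins as inner relation
                rest = key - {R}
                c, order = best[rest]
                choices.append((join_cost(c, R), order + (R,)))
            best[key] = min(choices)                # keep the cheapest plan
    return best[frozenset(relations)]
```

With real estimates, `join_cost` would also depend on which relations are already joined; the skeleton above shows only the subset-enumeration structure.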
Query Optimizer Summary
• Input: A logical query plan
• Output: A good physical query plan
• Basic query optimization algorithm
– Enumerate alternative plans (logical and physical)
– Compute estimated cost of each plan
– Choose plan with lowest cost
• This is called cost-based optimization
Parallel Data Processing
Class overview
• Data models
– Relational: SQL, RA, and Datalog
– NoSQL: SQL++
• RDBMS internals
– Query processing and optimization
– Physical design
• Parallel query processing
– Spark and Hadoop
• Conceptual design
– E/R diagrams
– Schema normalization
• Transactions
– Locking and schedules
– Writing DB applications
Why compute in parallel?
• Multi-cores:
– Most processors have multiple cores
– This trend will likely increase in the future
• Big data: too large to fit in main memory
– Distributed query processing on 100x-1000x servers
– Widely available now using cloud services
– Recall HW3 and HW6
Performance Metrics for Parallel DBMSs
Nodes = processors, computers
- **Speedup:**
- More nodes, same data \(\rightarrow\) higher speed
- **Scaleup:**
- More nodes, more data \(\rightarrow\) same speed
Linear vs. Non-linear Speedup

[Figure: speedup as a function of the number of nodes P (×1, ×5, ×10, ×15), with the ideal linear speedup line shown]

Linear vs. Non-linear Scaleup

[Figure: batch scaleup as a function of the number of nodes P and the data size (×1, ×5, ×10, ×15), with the ideal flat line shown]
Why Sub-linear Speedup and Scaleup?
• **Startup cost**
– Cost of starting an operation on many nodes
• **Interference**
– Contention for resources between nodes
• **Skew**
– Slowest node becomes the bottleneck
Architectures for Parallel Databases
- Shared memory
- Shared disk
- Shared nothing
Shared Memory
- Nodes share both RAM and disk
- Dozens to hundreds of processors
Example: SQL Server runs on a single machine and can leverage many threads to speed up a query
- check your HW3 query plans
- Easy to use and program
- Expensive to scale
- last remaining cash cows in the hardware industry
Shared Disk
- All nodes access the same disks
- Found in the largest "single-box" (non-cluster) multiprocessors
Example: Oracle
- No need to worry about shared memory
- Hard to scale: existing deployments typically have fewer than 10 machines
Shared Nothing
- Cluster of commodity machines on high-speed network
- Called "clusters" or "blade servers"
- Each machine has its own memory and disk: lowest contention.
Example: Google
Because all machines today have many cores and many disks, shared-nothing systems typically run many "nodes" on a single physical machine.
- Easy to maintain and scale
- Most difficult to administer and tune.
We discuss only Shared Nothing in class
Parallel Data Processing @ 1990
Approaches to Parallel Query Evaluation
- **Inter-query parallelism**
- Transaction per node
- Good for transactional workloads
- **Inter-operator parallelism**
- Operator per node
- Good for analytical workloads
- **Intra-operator parallelism**
- Operator on multiple nodes
- Good for both?
We study only intra-operator parallelism: most scalable
Single Node Query Processing (Review)
Given relations R(A,B) and S(B, C), no indexes:
• **Selection**: \( \sigma_{A=123}(R) \)
– Scan file R, select records with A=123
• **Group-by**: \( \gamma_{A,\text{sum}(B)}(R) \)
– Scan file R, insert into a hash table using A as key
– When a new key is equal to an existing one, add B to the value
• **Join**: \( R \bowtie S \)
– Scan file S, insert into a hash table using B as key
– Scan file R, probe the hash table using B
Distributed Query Processing
• Data is horizontally partitioned on many servers
• Operators may require data reshuffling
• First let’s discuss how to distribute data across multiple nodes / servers
Horizontal Data Partitioning
[Figure: a single logical table R(K, A, B) is split horizontally across servers 1, 2, ..., P.]
Horizontal Data Partitioning
[Figure: each of servers 1, 2, ..., P holds its own fragment of the table, with the same schema (K, A, B).]
Which tuples go to what server?
Horizontal Data Partitioning
• **Block Partition:**
– Partition tuples arbitrarily s.t. \( \text{size}(R_1) \approx \ldots \approx \text{size}(R_P) \)
• **Hash partitioned on attribute A:**
– Tuple \( t \) goes to chunk \( i \), where \( i = h(t.A) \mod P + 1 \)
– Recall: calling hash fn’s is free in this class
• **Range partitioned on attribute A:**
– Partition the range of \( A \) into \( -\infty = v_0 < v_1 < \ldots < v_P = \infty \)
– Tuple \( t \) goes to chunk \( i \), if \( v_{i-1} < t.A < v_i \)
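The hash and range assignment rules above can be sketched in Java. This is an illustrative sketch, not course code; the hash function and the boundary values are stand-ins, and the range check uses half-open intervals so every value gets a chunk:

```java
// Illustrative sketch (not from the slides) of the two partitioning rules.
public class Partitioning {
    // Hash partition: tuple with A = a goes to chunk h(a) mod P + 1 (chunks 1..P).
    static int hashPartition(int a, int P) {
        return Math.floorMod(Integer.hashCode(a), P) + 1; // stand-in hash function
    }

    // Range partition with boundaries v[0] < v[1] < ... < v[P]
    // (v[0] = -infinity, v[P] = +infinity); tuple goes to chunk i when
    // v[i-1] < a <= v[i] (half-open ranges, so every value has a home).
    static int rangePartition(int a, int[] v) {
        for (int i = 1; i < v.length; i++)
            if (v[i - 1] < a && a <= v[i]) return i;
        throw new IllegalArgumentException("no chunk for " + a);
    }

    public static void main(String[] args) {
        int[] v = {Integer.MIN_VALUE, 10, 20, Integer.MAX_VALUE}; // 3 chunks
        System.out.println(hashPartition(5, 4));   // chunk 2: floorMod(5, 4) + 1
        System.out.println(rangePartition(15, v)); // chunk 2: 10 < 15 <= 20
    }
}
```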
Uniform Data vs. Skewed Data
• Let R(K,A,B,C); which of the following partition methods may result in skewed partitions?
• Block partition
  – Uniform
• Hash-partition
  – On the key K: uniform (assuming a good hash function)
  – On the attribute A: may be skewed
E.g. when all records have the same value of the attribute A, then all records end up in the same partition
Keep this in mind in the next few slides
Parallel Execution of RA Operators: Grouping
Data: $R(K, A, B, C)$
Query: $\gamma_{A, \text{sum}(C)}(R)$
How to compute group by if:
- $R$ is hash-partitioned on $A$?
- $R$ is block-partitioned?
- $R$ is hash-partitioned on $K$?
Parallel Execution of RA Operators: Grouping
**Data:** $R(K, A, B, C)$
**Query:** $\gamma_{A, \text{sum}(C)}(R)$
- $R$ is block-partitioned or hash-partitioned on $K$
Reshuffle $R$ on attribute $A$
Run grouping on reshuffled partitions
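The two-phase plan above (reshuffle on A, then group locally) can be simulated in a single Java process. This is a hypothetical sketch: the `Tuple` record, the placement function, and the data are invented for illustration, and attribute B is omitted since the query does not use it:

```java
import java.util.*;

// Hypothetical single-process simulation of the two-phase plan:
// 1) reshuffle R on attribute A so equal A values land on the same node,
// 2) each node computes sum(C) per group locally; results are disjoint,
//    so merging them is trivial.
public class ParallelGroupBy {
    record Tuple(int k, int a, int c) {}   // R(K, A, B, C) with B omitted

    static Map<Integer, Integer> run(List<Tuple> r, int P) {
        // Phase 1: reshuffle on A (node = h(A) mod P, with a stand-in hash)
        List<List<Tuple>> nodes = new ArrayList<>();
        for (int i = 0; i < P; i++) nodes.add(new ArrayList<>());
        for (Tuple t : r) nodes.get(Math.floorMod(t.a(), P)).add(t);
        // Phase 2: local grouping on each node
        Map<Integer, Integer> out = new TreeMap<>();
        for (List<Tuple> node : nodes)
            for (Tuple t : node)
                out.merge(t.a(), t.c(), Integer::sum);
        return out;
    }

    public static void main(String[] args) {
        List<Tuple> r = List.of(new Tuple(1, 7, 10), new Tuple(2, 7, 5), new Tuple(3, 8, 1));
        System.out.println(run(r, 3));   // {7=15, 8=1}
    }
}
```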
Speedup and Scaleup
• Consider:
– Query: \( \gamma_{A,\text{sum}(C)}(R) \)
– Runtime: only consider I/O costs
• If we double the number of nodes \( P \), what is the new running time?
– Half (each server holds \( \frac{1}{2} \) as many chunks)
• If we double both \( P \) and the size of \( R \), what is the new running time?
– Same (each server holds the same \# of chunks)
Parallel Execution of RA Operators: Partitioned Hash-Join
- **Data**: \( R(K_1, A, B), S(K_2, B, C) \)
- **Query**: \( R(K_1, A, B) \bowtie S(K_2, B, C) \)
- Initially, both \( R \) and \( S \) are partitioned on \( K_1 \) and \( K_2 \)
- Reshuffle \( R \) on \( R.B \) and \( S \) on \( S.B \)
- Each server computes the join locally
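The three steps above can be simulated in plain Java. This is an illustrative sketch (records, hash placement, and data are invented): tuples of both relations are routed by h(B), then each node builds a hash table on its S fragment and probes it with its R fragment:

```java
import java.util.*;

// Hypothetical sketch of the partitioned hash-join: reshuffle R and S on the
// join attribute B, then run a local hash join on each node.
public class PartitionedHashJoin {
    record R(int k1, int a, int b) {}
    record S(int k2, int b, int c) {}

    static List<int[]> join(List<R> rs, List<S> ss, int P) {
        List<int[]> out = new ArrayList<>();
        for (int node = 0; node < P; node++) {
            // Build phase: hash table on S.B for tuples shuffled to this node
            Map<Integer, List<S>> table = new HashMap<>();
            for (S s : ss)
                if (Math.floorMod(s.b(), P) == node)
                    table.computeIfAbsent(s.b(), x -> new ArrayList<>()).add(s);
            // Probe phase: R tuples shuffled to the same node by B
            for (R r : rs)
                if (Math.floorMod(r.b(), P) == node)
                    for (S s : table.getOrDefault(r.b(), List.of()))
                        out.add(new int[]{r.k1(), r.a(), r.b(), s.k2(), s.c()});
        }
        return out;
    }

    public static void main(String[] args) {
        List<R> rs = List.of(new R(1, 10, 5), new R(2, 20, 6));
        List<S> ss = List.of(new S(9, 5, 100), new S(8, 7, 200));
        System.out.println(join(rs, ss, 3).size()); // 1 matching pair (B = 5)
    }
}
```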
Data: R(K1, A, B), S(K2, B, C)
Query: R(K1, A, B) \bowtie S(K2, B, C)
Parallel Join Illustration
[Figure: Partition → Shuffle on B → Local Join]
**Data:** $R(A, B), S(C, D)$
**Query:** $R(A, B) \bowtie_{B=C} S(C, D)$
### Broadcast Join
Reshuffle $R$ on $R.B$
Broadcast $S$
Why would you want to do this?
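As one answer to the question: when S is small, broadcasting it means the large relation R never has to be reshuffled at all. The sketch below is hypothetical (records and data invented); every node receives a full copy of S and joins it against whatever fragment of R it already holds:

```java
import java.util.*;

// Hypothetical sketch of a broadcast join for R(A,B) join S(C,D) on B = C:
// S is small, so every node gets a full copy of S and joins it locally
// against its existing R fragment (no reshuffle of the big relation).
public class BroadcastJoin {
    record R(int a, int b) {}
    record S(int c, int d) {}

    static List<int[]> join(List<List<R>> rFragments, List<S> sSmall) {
        List<int[]> out = new ArrayList<>();
        for (List<R> fragment : rFragments)       // one iteration per node
            for (R r : fragment)
                for (S s : sSmall)                // full copy of S at every node
                    if (r.b() == s.c())
                        out.add(new int[]{r.a(), r.b(), s.d()});
        return out;
    }

    public static void main(String[] args) {
        List<List<R>> rFragments = List.of(
            List.of(new R(1, 5)), List.of(new R(2, 5), new R(3, 9)));
        List<S> sSmall = List.of(new S(5, 100));
        System.out.println(join(rFragments, sSmall).size()); // 2 matches on B = C = 5
    }
}
```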
A Challenge
• Have P number of servers (say P=27 or P=1000)
• How do we compute this Datalog query in one step?
• \( Q(x, y, z) \) :- \( R(x, y), S(y, z), T(z, x) \)
HyperCube Join
• Have P number of servers (say P=27 or P=1000)
• How do we compute this Datalog query in one step? $Q(x,y,z) = R(x,y), S(y,z), T(z,x)$
• Organize the P servers into a cube with side $P^{\frac{1}{3}}$
– Thus, each server is uniquely identified by $(i,j,k)$, $i,j,k \leq P^{\frac{1}{3}}$
• Step 1:
– Each server sends $R(x,y)$ to all servers $(h(x), h(y), *)$
– Each server sends $S(y,z)$ to all servers $(*, h(y), h(z))$
– Each server sends $T(z,x)$ to all servers $(h(x), *, h(z))$
• Final output:
– Each server $(i,j,k)$ computes the query $R(x,y), S(y,z), T(z,x)$ locally
• Analysis: each tuple $R(x,y)$ is replicated at most $P^{\frac{1}{3}}$ times
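The routing rule in Step 1 can be sketched as follows. This is illustrative Java with a stand-in hash function; server coordinates are 0-based here, and each relation is replicated along its one unbound dimension:

```java
import java.util.*;

// Hypothetical sketch of hypercube routing for Q(x,y,z) :- R(x,y), S(y,z), T(z,x):
// with P = s^3 servers addressed as (i,j,k), 0 <= i,j,k < s, each R tuple is
// replicated along the unbound z dimension, S along x, and T along y.
public class HypercubeRouting {
    static int h(int v, int s) { return Math.floorMod(v, s); } // stand-in hash

    static List<int[]> routeR(int x, int y, int s) {
        List<int[]> targets = new ArrayList<>();
        for (int k = 0; k < s; k++) targets.add(new int[]{h(x, s), h(y, s), k});
        return targets;
    }
    static List<int[]> routeS(int y, int z, int s) {
        List<int[]> targets = new ArrayList<>();
        for (int i = 0; i < s; i++) targets.add(new int[]{i, h(y, s), h(z, s)});
        return targets;
    }
    static List<int[]> routeT(int z, int x, int s) {
        List<int[]> targets = new ArrayList<>();
        for (int j = 0; j < s; j++) targets.add(new int[]{h(x, s), j, h(z, s)});
        return targets;
    }

    public static void main(String[] args) {
        int s = 3; // P = 27 servers
        // Each tuple is replicated exactly s = P^(1/3) times, matching the analysis
        System.out.println(routeR(1, 2, s).size()); // 3
    }
}
```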
\[ Q(x,y,z) = R(x,y), S(y,z), T(z,x) \]
**Hypercube join**
[Figure: the tuples of R(x,y), S(y,z), and T(z,x) start out partitioned across three servers; the hypercube shuffle sends each tuple to the servers determined by h(x), h(y), and h(z); each server then joins its fragments locally. Steps: **Partition** → **Shuffle** → **Local Join**.]
Local join output: **P1:** \( (1, 2, 7) \), **P2:** \( (1, 2, 3) \), **P3:** \( (3, 2, 3) \)
What if \( h(x): h(1) = h(3) \)?
Putting it Together: Example Parallel Query Plan
Find all orders from today, along with the items ordered
SELECT *
FROM Order o, Line i
WHERE o.item = i.item
AND o.date = today()
Example Parallel Query Plan
Order(oid, item, date), Line(item, ...)
Node 1
- scan Line i
- hash h(i.item)
Node 2
- scan Line i
- hash h(i.item)
Node 3
- scan Line i
- hash h(i.item)
Join
- o.item = i.item
- date = today()
Example Parallel Query Plan
Node 1
- join: o.item = i.item
- contains all orders and all lines where hash(item) = 1
Node 2
- join: o.item = i.item
- contains all orders and all lines where hash(item) = 2
Node 3
- join: o.item = i.item
- contains all orders and all lines where hash(item) = 3
The MapReduce Programming Paradigm
Parallel Data Processing @ 2000
Optional Reading
- Original paper: [https://www.usenix.org/legacy/events/osdi04/tech/dean.html](https://www.usenix.org/legacy/events/osdi04/tech/dean.html)
- Rebuttal to a comparison with parallel DBs: [http://dl.acm.org/citation.cfm?doid=1629175.1629198](http://dl.acm.org/citation.cfm?doid=1629175.1629198)
- Chapter 2 (Sections 1, 2, 3 only) of Mining of Massive Datasets, by Rajaraman and Ullman [http://i.stanford.edu/~ullman/mmds.html](http://i.stanford.edu/~ullman/mmds.html)
Motivation
• We learned how to parallelize relational database systems
• While useful, it might incur too much overhead if our query plans consist of simple operations
• MapReduce is a programming model for such computation
• First, let’s study how data is stored in such systems
Distributed File System (DFS)
- For very large files: TBs, PBs
- Each file is partitioned into *chunks*, typically 64MB
- Each chunk is replicated several times (≥3), on different racks, for fault tolerance
- Implementations:
- Google’s DFS: GFS, proprietary
- Hadoop’s DFS: HDFS, open source
MapReduce
• Google: paper published 2004
• Free variant: Hadoop
• MapReduce = high-level programming model and implementation for large-scale parallel data processing
Typical Problems Solved by MR
- Read a lot of data
- **Map**: extract something you care about from each record
- Shuffle and Sort
- **Reduce**: aggregate, summarize, filter, transform
- Write the results
Paradigm stays the same, change map and reduce functions for different problems
slide source: Jeff Dean
Data Model
Files!
A file = a bag of \((\text{key}, \text{value})\) pairs
A MapReduce program:
• Input: a bag of \((\text{inputkey}, \text{value})\) pairs
• Output: a bag of \((\text{outputkey}, \text{value})\) pairs
Step 1: the **MAP** Phase
User provides the **MAP**-function:
- Input: \((\text{input key, value})\)
- Output: bag of \((\text{intermediate key, value})\)
System applies the map function in parallel to all \((\text{input key, value})\) pairs in the input file
Step 2: the REDUCE Phase
User provides the REDUCE function:
• Input: \((\text{intermediate key}, \text{bag of values})\)
• Output: bag of output \((\text{values})\)
System groups all pairs with the same intermediate key, and passes the bag of values to the REDUCE function.
Example
• Counting the number of occurrences of each word in a large collection of documents
• Each Document
– The key = document id (did)
– The value = set of words (word)
map(String key, String value):
    // key: document name
    // value: document contents
    for each word w in value:
        EmitIntermediate(w, "1");

reduce(String key, Iterator values):
    // key: a word
    // values: a list of counts
    int result = 0;
    for each v in values:
        result += parseInt(v);
    Emit(AsString(result));
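The same job can be mirrored as an in-memory Java simulation. This is illustrative only; a real MapReduce system runs map and reduce tasks on many workers, while here the three phases run sequentially in one process:

```java
import java.util.*;

// Hypothetical in-memory simulation of the word-count job above: "map" emits
// (word, 1) pairs, "shuffle" groups values by intermediate key, and "reduce"
// sums each group.
public class WordCountSim {
    static Map<String, Integer> run(List<String> documents) {
        // MAP: emit (word, 1) for every word in every document
        List<Map.Entry<String, Integer>> intermediate = new ArrayList<>();
        for (String doc : documents)
            for (String w : doc.split("\\s+"))
                if (!w.isEmpty())
                    intermediate.add(Map.entry(w, 1));
        // SHUFFLE: group values by intermediate key
        Map<String, List<Integer>> groups = new TreeMap<>();
        for (var e : intermediate)
            groups.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
        // REDUCE: sum the counts for each word
        Map<String, Integer> out = new TreeMap<>();
        groups.forEach((word, counts) ->
            out.put(word, counts.stream().mapToInt(Integer::intValue).sum()));
        return out;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("the quick fox", "the lazy dog")));
        // {dog=1, fox=1, lazy=1, quick=1, the=2}
    }
}
```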
Jobs vs. Tasks
• A MapReduce Job
– One single “query”, e.g. count the words in all docs
– More complex queries may consist of multiple jobs
• A Map Task, or a Reduce Task
– A group of instantiations of the map-, or reduce-function, which are scheduled on a single worker
Workers
• A *worker* is a process that executes one task at a time
• Typically there is one worker per processor, hence 4 or 8 per node
Fault Tolerance
• If one server fails once every year…
... then a job with 10,000 servers will fail in less than one hour
• MapReduce handles fault tolerance by writing intermediate files to disk:
– Mappers write file to local disk
– Reducers read the files (=reshuffling); if the server fails, the reduce task is restarted on another server
[Diagram: MAP tasks feed REDUCE tasks through a shuffle phase.]
CSEP 544 - Fall 2017
MapReduce Execution Details
[Diagram: Map tasks read input chunks from the file system (GFS or HDFS; data not necessarily local) and write intermediate data to local disk as $M \times R$ files (why?). Reduce tasks fetch those files during the shuffle and write their output back to the distributed file system, replicated in the cluster.]
MapReduce Phases
Map Task:
1. Split
2. Record Reader
3. Map
4. Combine
Reduce Task:
1. Copy
2. Sort
3. Reduce
(Map-side output goes to local storage; final reduce output goes to HDFS.)
Implementation
• There is one master node
• Master partitions input file into $M$ splits, by key
• Master assigns workers (=servers) to the $M$ map tasks, keeps track of their progress
• Workers write their output to local disk, partition into $R$ regions
• Master assigns workers to the $R$ reduce tasks
• Reduce workers read regions from the map workers’ local disks
Interesting Implementation Details
Worker failure:
• Master pings workers periodically,
• If down then reassigns the task to another worker
Interesting Implementation Details
Backup tasks:
• **Straggler** = a machine that takes unusually long time to complete one of the last tasks. E.g.:
– Bad disk forces frequent correctable errors (30MB/s → 1MB/s)
– The cluster scheduler has scheduled other tasks on that machine
• Stragglers are a main reason for slowdown
• Solution: *pre-emptive backup execution of the last few remaining in-progress tasks*
Straggler Example
[Timeline: Workers 1, 2, and 3 each run tasks; Worker 3's task is a straggler. A backup execution of that task is launched on another worker; whichever copy finishes first is kept and the other is killed.]
Chapter outline
- background
- categories of employees
- relationships and hierarchies
- inheritance programming
- creating subclasses
- overriding behavior
- multiple levels of inheritance
- interacting with the superclass using the `super` keyword
- inheritance and design
- polymorphism
- "polymorphism mystery" problems
- interfaces
The software crisis
- software engineering: The practice of conceptualizing, designing, developing, documenting, and testing large-scale computer programs.
- Large-scale projects face many issues:
- getting many programmers to work together
- getting code finished on time
- avoiding redundant code
- finding and fixing bugs
- maintaining, improving, and reusing existing code
- code reuse: The practice of writing program code once and using it in many contexts.
Employee analogy
Consider a law firm with many types of employees.
- common rules: hours, vacation time, benefits, regulations, ...
- all employees attend common orientation to learn general rules
- each employee receives 20-page manual of the common rules
- each subdivision also has specific rules
- employee attends a subdivision-specific orientation to learn them
- employee receives a smaller (1-3 page) manual of these rules
- smaller manual adds some rules and also changes some rules from
the large manual ("use the pink form instead of yellow form"...)
Separating behavior
- Why not just have a 22 page Lawyer manual, a 21-page Secretary manual, a 23-page Marketer manual, etc.?
- Some advantages of the separate manuals:
- maintenance: If a common rule changes, we'll need to update only the common manual.
- locality: A person can look at the lawyer manual and quickly discover all rules that are specific to lawyers.
- Some key ideas from this example:
- It's useful to be able to describe general rules that will apply to many groups (the 20-page manual).
- It's also useful for a group to specify a smaller set of rules for itself, including being able to replace rules from the overall set.
Is-a relationships, hierarchies
- is-a relationship: A hierarchical connection where one category can be treated as a specialized version of another.
- every marketer is an employee
- every legal secretary is a secretary
- inheritance hierarchy: A set of classes connected by is-a relationships that can share common code.
- Often drawn as a downward tree of connected boxes or ovals representing classes:
Employee regulations
- Consider the following employee regulations:
- Employees work 40 hours per week.
- Employees make $40,000 per year, except legal secretaries who make $5,000 extra per year ($45,000 total), and marketers who make $10,000 extra per year ($50,000 total).
- Employees have 2 weeks of paid vacation leave per year, except lawyers who get an extra week (a total of 3).
- Employees should use a yellow form to apply for leave, except for lawyers who use a pink form.
- Each type of employee has some unique behavior:
- Lawyers know how to sue.
- Marketers know how to advertise.
- Secretaries know how to take dictation.
- Legal secretaries know how to prepare legal documents.
General employee code
```java
// A class to represent employees in general (20-page manual).
public class Employee {
    public int getHours() {
        return 40;          // works 40 hours / week
    }
    public double getSalary() {
        return 40000.0;     // $40,000.00 / year
    }
    public int getVacationDays() {
        return 10;          // 2 weeks' paid vacation
    }
    public String getVacationForm() {
        return "yellow";    // use the yellow form
    }
}
```
Exercise: Implement class Secretary, based on the previous employee regulations.
Desire for code-sharing
- The `takeDictation` method is the only unique behavior in the Secretary class.
- We'd like to be able to say the following:
```java
// A class to represent secretaries.
public class Secretary {
    <copy all the contents from Employee class.>

    public void takeDictation(String text) {
        System.out.println("Taking dictation of text: " + text);
    }
}
```
Redundant secretary code
```java
// A redundant class to represent secretaries.
public class Secretary {
    public int getHours() {
        return 40;          // works 40 hours / week
    }
    public double getSalary() {
        return 40000.0;     // $40,000.00 / year
    }
    public int getVacationDays() {
        return 10;          // 2 weeks' paid vacation
    }
    public String getVacationForm() {
        return "yellow";    // use the yellow form
    }
    public void takeDictation(String text) {
        System.out.println("Taking dictation of text: " + text);
    }
}
```
Inheritance
- inheritance: A way to form new classes based on existing classes, taking on their attributes/behavior.
- a way to group related classes
- a way to share code between two or more classes
- We say that one class can extend another by absorbing its state and behavior.
- superclass: The parent class that is being extended.
- subclass: The child class that extends the superclass and inherits its behavior.
- The subclass receives a copy of every field and method from its superclass.
**Inheritance syntax**
- Creating a subclass, general syntax:
```java
public class <name> extends <superclass name> {
...
}
```
- Example:
```java
public class Secretary extends Employee {
...
}
```
- By extending Employee, each Secretary object now:
- receives a getHours, getSalary, getVacationDays, and getVacationForm method automatically
- can be treated as an Employee by any other code (seen later)
(e.g. a Secretary could be stored in a variable of type Employee or stored as an element of an Employee[])
**Improved secretary code**
```java
// A class to represent secretaries.
public class Secretary extends Employee {
    public void takeDictation(String text) {
        System.out.println("Taking dictation of text: " + text);
    }
}
```
- Now we only have to write the portions that are unique to each type.
- Secretary inherits getHours, getSalary, getVacationDays, and getVacationForm methods from Employee.
- Secretary adds the takeDictation method.
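As a quick sanity check, here is a minimal, self-contained sketch of the two classes above (with the Employee trimmed to two methods for brevity), showing that a Secretary object really does respond to the inherited Employee methods:

```java
// Simplified versions of the classes above, in one file for easy testing.
class Employee {
    public int getHours() { return 40; }          // works 40 hours / week
    public double getSalary() { return 40000.0; } // $40,000.00 / year
}

class Secretary extends Employee {
    public void takeDictation(String text) {
        System.out.println("Taking dictation of text: " + text);
    }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        Secretary steve = new Secretary();
        System.out.println(steve.getHours());  // inherited from Employee: 40
        System.out.println(steve.getSalary()); // inherited from Employee: 40000.0
        steve.takeDictation("Budget meeting at noon"); // defined in Secretary
    }
}
```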
---
**Implementing Lawyer**
- Let’s implement a Lawyer class.
- Consider the following employee regulations:
- Lawyers get an extra week of paid vacation (a total of 3).
- Lawyers use a pink form when applying for vacation leave.
- Lawyers have some unique behavior: they know how to sue.
- The problem: We want lawyers to inherit most of the behavior of the general employee, but we want to replace certain parts with new behavior.
**Overriding methods**
- **override**: To write a new version of a method in a subclass that replaces the superclass's version.
- The new method must have the same signature as the parent's method, but can have a different body
- The type of the object executing the method determines which version of the method is invoked
- There is no special syntax for overriding.
- To override a superclass method, just write a new version of it in the subclass. This will replace the inherited version.
- Example:
```java
public class Lawyer extends Employee {
// overrides getVacationForm method in Employee class
public String getVacationForm() {
return "pink";
}
...
}
```
- Exercise: Complete the Lawyer class.
Overriding methods
- A method in the parent class can be invoked explicitly using the super reference
- If a method is declared with the `final` modifier, it cannot be overridden
- The concept of overriding can be applied to data and is called *shadowing variables*
- Shadowing variables should be avoided because it tends to cause unnecessarily confusing code
Overloading vs. Overriding
- Overloading deals with multiple methods with the same name in the same class, but with different signatures
- Overloading lets you define a similar operation in different ways for different parameters
- Overriding deals with two methods, one in a parent class and one in a child class, that have the same signature
- Overriding lets you define a similar operation in different ways for different object types
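The distinction can be made concrete with a small, hypothetical pair of classes: the two `format` methods in `Printer` are overloads (same name, different signatures, same class), while `FancyPrinter`'s `format(int)` overrides the inherited version (same signature, new body).

```java
class Printer {
    // Overloading: two methods with the same name but different signatures.
    public String format(int n)    { return "int: " + n; }
    public String format(double d) { return "double: " + d; }
}

class FancyPrinter extends Printer {
    // Overriding: same signature as the parent's method, new body.
    @Override
    public String format(int n) { return "*** int: " + n + " ***"; }
}

public class OverloadOverrideDemo {
    public static void main(String[] args) {
        Printer p = new FancyPrinter();
        System.out.println(p.format(3));   // overriding: the object's type decides
        System.out.println(p.format(3.0)); // overloading: the parameter type decides
    }
}
```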
Complete Lawyer class
```java
// A class to represent lawyers.
public class Lawyer extends Employee {
// overrides getVacationForm from Employee class
public String getVacationForm() {
return "pink";
}
// overrides getVacationDays from Employee class
public int getVacationDays() {
return 15; // 3 weeks' vacation
}
public void sue() {
System.out.println("I'll see you in court!");
}
}
```
Exercise: Now complete the `Marketer` class. Marketers make $10,000 extra ($50,000 total) and know how to advertise.
Levels of inheritance
- Deep hierarchies can be created by multiple levels of subclassing.
- Example: The legal secretary is the same as a regular secretary except for making more money ($45,000) and being able to file legal briefs.
```java
public class LegalSecretary extends Secretary {
public void fileLegalBriefs() {
System.out.println("I could file all day!");
}
public double getSalary() {
return 45000.0; // $45,000.00 / year
}
}
```
- Exercise: Complete the `LegalSecretary` class.
Complete LegalSecretary class
```java
// A class to represent legal secretaries.
public class LegalSecretary extends Secretary {
    public void fileLegalBriefs() {
        System.out.println("I could file all day!");
    }
    public double getSalary() {
        return 45000.0; // $45,000.00 / year
    }
}
```
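Because inheritance chains through every level, a LegalSecretary picks up members from both Secretary and Employee. A condensed sketch (the earlier classes trimmed to the relevant methods):

```java
class Employee {
    public double getSalary() { return 40000.0; }
    public String getVacationForm() { return "yellow"; }
}

class Secretary extends Employee {
    public void takeDictation(String text) {
        System.out.println("Taking dictation of text: " + text);
    }
}

class LegalSecretary extends Secretary {
    public double getSalary() { return 45000.0; } // overrides Employee's version
}

public class LevelsDemo {
    public static void main(String[] args) {
        LegalSecretary ls = new LegalSecretary();
        System.out.println(ls.getSalary());       // 45000.0 (its own override)
        System.out.println(ls.getVacationForm()); // "yellow" (from Employee, two levels up)
        ls.takeDictation("brief");                // from Secretary, one level up
    }
}
```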
Changes to common behavior
- Imagine that a company-wide change occurs that affects all employees.
- Example: Because of inflation, everyone is given a $10,000 raise.
- The base employee salary is now $50,000.
- Legal secretaries now make $55,000.
- Marketers now make $60,000.
- We must modify our code to reflect this policy change.
Modifying the superclass
- This modified Employee class handles the new raise:
```java
public class Employee {
public int getHours() { return 40; } // works 40 hours / week
public double getSalary() { return 50000.0; } // $50,000.00 / year
}
```
- What problem now exists in the code?
- The Employee subclasses are now incorrect.
- They have overridden the getSalary method to return other values, such as 45,000 and 50,000, which also need to be changed.
The super Reference
- A child’s constructor is responsible for calling the parent’s constructor
- The first line of a child’s constructor should use the super reference to call the parent’s constructor
- The super reference can also be used to refer to other variables and methods defined in the parent’s class
Calling overridden methods
- A subclass can call an overridden method with the super keyword.
Calling an overridden method, syntax:
```java
super.<method name>(<parameter(s)>);
```
- Example:
```java
public class LegalSecretary extends Secretary {
public double getSalary() {
return super.getSalary() + 5000.0;
}
}
```
- Exercise: Modify the Lawyer and Marketer classes to also use the super keyword.
### Improved subclasses
```java
public class Lawyer extends Employee {
public String getVacationForm() { return "pink"; }
public int getVacationDays() {
return super.getVacationDays() + 5;
}
public void sue() {
System.out.println("I'll see you in court!");
}
}
public class Marketer extends Employee {
public void advertise() {
System.out.println("Act now while supplies last!");
}
public double getSalary() {
return super.getSalary() + 10000.0;
}
}
```
### Inheritance and constructors
- Imagine that we want to give employees more vacation days the longer they've been with the company.
- For each year worked, we'll award 2 additional vacation days.
- When an Employee object is constructed, we'll pass in the number of years the person has been with the company.
- This will require us to modify our Employee class and add some new state and behavior.
**Exercise:** Make the necessary modifications to the Employee class.
### Modified Employee class
```java
public class Employee {
private int years;
public Employee(int years) {
this.years = years;
}
public int getHours() { return 40; }
public double getSalary() { return 50000.0; }
public int getVacationDays() { return 10 + 2 * years; }
public String getVacationForm() { return "yellow"; }
}
```
### Problem with constructors
- Now that we've added the constructor to the Employee class, our subclasses do not compile. The error:
```
Lawyer.java:2: cannot find symbol
symbol  : constructor Employee()
location: class Employee
public class Lawyer extends Employee {
                    ^
```
- The short explanation: Once we write a constructor (that requires parameters) in the superclass, we must now write constructors for our employee subclasses as well.
- The long explanation: (next slide)
The detailed explanation
- Constructors aren't inherited.
- The Employee subclasses don't inherit the public Employee(int years) constructor.
- Since our subclasses don't have constructors, they receive a default parameterless constructor that contains the following:
```java
public Lawyer() {
    super(); // calls the Employee() constructor
}
```
- But our public Employee(int years) replaces the default Employee constructor.
- Therefore all the subclasses' default constructors are now trying to call a non-existent default superclass constructor.
Calling superclass constructor
- Syntax for calling the superclass's constructor:
```java
super(<parameter(s)>);
```
- Example:
```java
public class Lawyer extends Employee {
    public Lawyer(int years) {
        super(years); // call Employee constructor
    }
    ...
}
```
- The call to the superclass constructor must be the first statement in the subclass constructor.
- Exercise: Make a similar modification to the Marketer class.
Modified Marketer class
```java
// A class to represent marketers.
public class Marketer extends Employee {
    public Marketer(int years) {
        super(years);
    }
    public void advertise() {
        System.out.println("Act now while supplies last!");
    }
    public double getSalary() {
        return super.getSalary() + 10000.0;
    }
}
```
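Putting the pieces together, a condensed Employee/Marketer pair shows the constructor chain and the super call at work (the numbers follow the post-raise regulations above):

```java
class Employee {
    private int years;
    public Employee(int years) { this.years = years; }
    public double getSalary() { return 50000.0; }
    public int getVacationDays() { return 10 + 2 * years; }
}

class Marketer extends Employee {
    public Marketer(int years) {
        super(years); // must be the first statement: chains to Employee(int)
    }
    public double getSalary() {
        return super.getSalary() + 10000.0;
    }
}

public class ConstructorDemo {
    public static void main(String[] args) {
        Marketer m = new Marketer(4);
        System.out.println(m.getSalary());       // 60000.0
        System.out.println(m.getVacationDays()); // 18 = 10 + 2 * 4
    }
}
```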
- Exercise: Modify the Secretary subclass to make it compile:
- Secretaries' years of employment are not tracked and they do not earn extra vacation for them.
- Secretary objects are also constructed without a years parameter.
Modified Secretary class
```java
// A class to represent secretaries.
public class Secretary extends Employee {
    public Secretary() {
        super(0);
    }
    public void takeDictation(String text) {
        System.out.println("Taking dictation of text: " + text);
    }
    ...
}
```
- Note that since the Secretary doesn't require any parameters to its constructor, the LegalSecretary now compiles without a constructor (its default constructor calls the parameterless Secretary constructor).
- This isn't the best solution; it isn't that Secretaries work for 0 years, it's that they don't receive a bonus. How can we fix it?
Suppose that we want to give lawyers a $5000 raise for each year they've been with the company.
The following modification doesn't work:
```java
public class Lawyer extends Employee {
public Lawyer(int years) {
super(years);
}
public double getSalary() {
return super.getSalary() + 5000 * years;
}
...
}
```
The error is the following:
```
Lawyer.java:7: years has private access in Employee
return super.getSalary() + 5000 * years;
                                   ^
```
Private access limitations
Private fields cannot be directly accessed from other classes, not even subclasses.
One reason for this is to prevent malicious programmers from using subclassing to circumvent encapsulation.
How can we get around this limitation?
Improved Employee code
Add an accessor for any field needed by the superclass.
```java
public class Employee {
private int years;
public Employee(int years) {
this.years = years;
}
public int getYears() {
return years;
}
...
}
public class Lawyer extends Employee {
public Lawyer(int years) {
super(years);
}
public double getSalary() {
return super.getSalary() + 5000 * getYears();
}
...
}
```
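With the getYears accessor in place, the subclass compiles and the per-year raise works. A condensed check:

```java
class Employee {
    private int years;
    public Employee(int years) { this.years = years; }
    public int getYears() { return years; }
    public double getSalary() { return 50000.0; }
}

class Lawyer extends Employee {
    public Lawyer(int years) { super(years); }
    public double getSalary() {
        // reads the private field indirectly, through the public accessor
        return super.getSalary() + 5000 * getYears();
    }
}

public class AccessorDemo {
    public static void main(String[] args) {
        System.out.println(new Lawyer(3).getSalary()); // 65000.0 = 50000 + 5000 * 3
    }
}
```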
Revisiting Secretary
The Secretary class currently has a poor solution.
- We set all Secretaries to 0 years because they do not get a vacation bonus for their service.
- If we call `getYears` on a Secretary object, we'll always get 0.
- This isn't a good solution; what if we wanted to give some other reward to all employees based on years of service?
Let's redesign our Employee class a bit to allow for a better solution.
Improved Employee code
Let’s separate the standard 10 vacation days from those that are awarded based on seniority.
```java
public class Employee {
private int years;
public Employee(int years) {
this.years = years;
}
public int getVacationDays() {
return 10 + getSeniorityBonus();
}
// vacation days given for each year in the company
public int getSeniorityBonus() {
return 2 * years;
}
...
}
```
How does this help us improve the Secretary?
Improved Secretary code
The Secretary can selectively override the getSeniorityBonus method, so that when it runs its getVacationDays method, it will use this new version as part of the computation.
- Choosing a method at runtime like this is called dynamic binding.
```java
public class Secretary extends Employee {
public Secretary(int years) {
super(years);
}
// Secretaries don’t get a bonus for their years of service.
public int getSeniorityBonus() {
return 0;
}
public void takeDictation(String text) {
System.out.println("Taking dictation of text: " + text);
}
}
```
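A condensed version of the two classes shows dynamic binding in action: Employee's getVacationDays calls whichever getSeniorityBonus belongs to the actual object.

```java
class Employee {
    private int years;
    public Employee(int years) { this.years = years; }
    public int getVacationDays() { return 10 + getSeniorityBonus(); }
    public int getSeniorityBonus() { return 2 * years; }
}

class Secretary extends Employee {
    public Secretary(int years) { super(years); }
    // No bonus for years of service.
    public int getSeniorityBonus() { return 0; }
}

public class BindingDemo {
    public static void main(String[] args) {
        System.out.println(new Employee(5).getVacationDays());  // 20 = 10 + 2 * 5
        System.out.println(new Secretary(5).getVacationDays()); // 10: the override wins
    }
}
```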
Class Hierarchies
- A child class of one parent can be the parent of another child, forming a class hierarchy
```
                    Business
                   /        \
      RetailBusiness        ServiceBusiness
         /       \             /       \
    Carrefour   Migros       THY      Varan
```
Class Hierarchies
- Two children of the same parent are called siblings.
- Common features should be put as high in the hierarchy as is reasonable.
- An inherited member is passed continually down the line.
- Therefore, a child class inherits from all its ancestor classes.
- There is no single class hierarchy that is appropriate for all situations.
The Object Class
- A class called Object is defined in the java.lang package of the Java standard class library.
- All classes are derived from the Object class.
- If a class is not explicitly defined to be the child of an existing class, it is assumed to be the child of the Object class.
- Therefore, the Object class is the ultimate root of all class hierarchies.
The Object Class
- The Object class contains a few useful methods, which are inherited by all classes.
- For example, the toString method is defined in the Object class.
- Every time we define the toString method, we are actually overriding an inherited definition.
- The toString method in the Object class is defined to return a string that contains the name of the object’s class along with some other information.
The Object Class
- The equals method of the Object class returns true if two references are aliases.
- We can override equals in any class to define equality in some more appropriate way.
- As we’ve seen, the String class defines the equals method to return true if two String objects contain the same characters.
- The designers of the String class have overridden the equals method inherited from Object in favor of a more useful version.
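The same idea applies to any class we write. For instance, a hypothetical Point class can override equals to compare coordinates instead of references (overriding equals usually means overriding hashCode too, so that equal objects hash alike):

```java
class Point {
    private final int x;
    private final int y;

    Point(int x, int y) { this.x = x; this.y = y; }

    // Overrides Object's equals: two Points are equal if their coordinates match.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    // Keep the equals/hashCode contract: equal objects must have equal hash codes.
    @Override
    public int hashCode() { return 31 * x + y; }
}

public class EqualsDemo {
    public static void main(String[] args) {
        System.out.println(new Point(1, 2).equals(new Point(1, 2))); // true
        System.out.println(new Point(1, 2).equals(new Point(2, 1))); // false
    }
}
```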
Multiple Inheritance
- Java supports single inheritance, meaning that a derived class can have only one parent class.
- Multiple inheritance allows a class to be derived from two or more classes, inheriting the members of all parents.
- Collisions, such as the same variable name in two parents, have to be resolved.
- Java does not support multiple inheritance.
- In most cases, the use of interfaces gives us aspects of multiple inheritance without the overhead.
The protected Modifier
- Visibility modifiers affect the way that class members can be used in a child class.
- Variables and methods declared with private visibility cannot be referenced by name in a child class.
- They can be referenced in the child class if they are declared with public visibility -- but public variables violate the principle of encapsulation.
- There is a third visibility modifier that helps in inheritance situations: protected.
The protected Modifier
- The protected modifier allows a child class to reference a variable or method directly in the child class.
- It provides more encapsulation than public visibility, but is not as tightly encapsulated as private visibility.
- A protected variable is visible to any class in the same package as the parent class.
Controlling Access
- There are two levels of access control:
- At the top level: public, or package-private (no explicit modifier).
- At the member level: public, private, protected, or package-private (no explicit modifier).
Controlling Access: At the top level
- A class may be declared:
- **With the modifier public**: That class is visible to all classes everywhere
- **With no modifier (a.k.a. package-private)**: It is visible only within its own package
Controlling Access: At the member level
- A member may be declared:
- **With the modifier public**: the member is visible to all classes everywhere
- **With no modifier (package-private)**: it is visible only within its own package
- **With the modifier private**: it can only be accessed in its own class
- **With the modifier protected**: it can be accessed within its own package (as with package-private) and, in addition, by a subclass of its class in another package
Controlling Access
<table>
<thead>
<tr>
<th>Modifier</th>
<th>Class</th>
<th>Package</th>
<th>Subclass</th>
<th>World</th>
</tr>
</thead>
<tbody>
<tr>
<td>public</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
</tr>
<tr>
<td>protected</td>
<td>Y</td>
<td>Y</td>
<td>Y</td>
<td>N</td>
</tr>
<tr>
<td>no modifier</td>
<td>Y</td>
<td>Y</td>
<td>N</td>
<td>N</td>
</tr>
<tr>
<td>private</td>
<td>Y</td>
<td>N</td>
<td>N</td>
<td>N</td>
</tr>
</tbody>
</table>
Polymorphism reading: 9.2
Polymorphism
- **polymorphism**: The ability for the same code to be used with several different types of objects and behave differently depending on the type of object used.
- A reference variable of type T can legally refer to an object of any subclass of T, e.g. `Employee person = new Lawyer();`
- You can call any method declared in Employee on the variable `person`, but not methods specific to Lawyer (such as `sue`).
- Once a method is called on the object, it behaves in its normal way (as a Lawyer, not as a generic Employee).
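A condensed sketch of the rule, using a simplified Lawyer:

```java
class Employee {
    public double getSalary() { return 50000.0; }
}

class Lawyer extends Employee {
    public double getSalary() { return 55000.0; } // overridden version
    public void sue() { System.out.println("I'll see you in court!"); }
}

public class ReferenceDemo {
    public static void main(String[] args) {
        Employee person = new Lawyer();         // legal: every Lawyer is-a Employee
        System.out.println(person.getSalary()); // 55000.0: Lawyer's version runs
        // person.sue();                        // would not compile: Employee has no sue()
        ((Lawyer) person).sue();                // a cast restores access to Lawyer methods
    }
}
```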
Polymorphism + parameters
- You can declare methods to accept superclass types as parameters, then pass a parameter of any subtype.
```java
public class EmployeeMain {
public static void main(String[] args) {
Lawyer lisa = new Lawyer(3);
Secretary steve = new Secretary(2);
printInfo(lisa);
printInfo(steve);
}
public static void printInfo(Employee empl) {
System.out.println("salary = " + empl.getSalary());
System.out.println("days = " + empl.getVacationDays());
System.out.println("form = " + empl.getVacationForm());
System.out.println();
}
}
```
- **OUTPUT:**
- salary = 65000.0
- days = 21
- form = pink
- salary = 50000.0
- days = 10
- form = yellow
Polymorphism + arrays
- You can declare arrays of superclass types, and store objects of any subtype as elements.
```java
public class EmployeeMain2 {
public static void main(String[] args) {
Employee[] employees = {new Lawyer(3), new Secretary(2),
new Marketer(4), new LegalSecretary(1)};
for (int i = 0; i < employees.length; i++) {
System.out.println("salary = " + employees[i].getSalary());
System.out.println("days = " + employees[i].getVacationDays());
System.out.println("form = " + employees[i].getVacationForm());
System.out.println();
}
}
}
```
- **OUTPUT:**
- salary = 65000.0
- days = 21
- form = pink
- salary = 50000.0
- days = 10
- form = yellow
- salary = 60000.0
- days = 18
- form = yellow
- salary = 55000.0
- days = 10
- form = yellow
Polymorphism problems
- The textbook has several useful exercises to test your knowledge of polymorphism.
- Each exercise declares a group of four or five short classes with inheritance (is-a) relationships between them.
- Then a client program is shown that calls methods on objects of each class.
- Your task is to interpret the code and determine the output of the client program.
(Example on next slide...)
A polymorphism problem
Assume that the following four classes have been declared:
```java
public class Foo {
public void method1() {
System.out.println("foo 1");
}
public void method2() {
System.out.println("foo 2");
}
public String toString() {
return "foo";
}
}
```
```java
public class Bar extends Foo {
public void method2() {
System.out.println("bar 2");
}
}
```
```java
public class Baz extends Foo {
public void method1() {
System.out.println("baz 1");
}
public String toString() {
return "baz";
}
}
```
```java
public class Mumble extends Baz {
public void method2() {
System.out.println("mumble 2");
}
}
```
What would be the output of the following client code?
```java
Foo[] pity = {new Baz(), new Bar(), new Mumble(), new Foo()};
for (int i = 0; i < pity.length; i++) {
System.out.println(pity[i]);
pity[i].method1();
pity[i].method2();
System.out.println();
}
```
Finding output with diagrams
One way to determine the output is to diagram each class and its methods, including their output:
- Add the classes from top (superclass) to bottom (subclass).
- Include any inherited methods and their output.
Finding output with tables
Another possible technique for solving these problems is to make a table of the classes and methods, writing the output in each square.
<table>
<thead>
<tr>
<th>method</th>
<th>Foo</th>
<th>Bar</th>
<th>Baz</th>
<th>Mumble</th>
</tr>
</thead>
<tbody>
<tr>
<td>method1</td>
<td>foo 1</td>
<td>foo 1</td>
<td>baz 1</td>
<td>baz 1</td>
</tr>
<tr>
<td>method2</td>
<td>foo 2</td>
<td>bar 2</td>
<td>foo 2</td>
<td>mumble 2</td>
</tr>
<tr>
<td>toString</td>
<td>foo</td>
<td>foo</td>
<td>baz</td>
<td>baz</td>
</tr>
</tbody>
</table>
Polymorphism answer
The code produces the following output:
- baz
- baz 1
- foo 2
- foo
- foo 1
- bar 2
- baz
- baz 1
- mumble 2
- foo
- foo 1
- foo 2
Another problem
Assume the following classes have been declared. This time the class declarations are listed out of order, and the client code is different.
```java
public class Lamb extends Ham {
public void b() {
System.out.println("Lamb b");
}
}
public class Ham {
public void a() {
System.out.println("Ham a");
}
public void b() {
System.out.println("Ham b");
}
public String toString() {
return "Ham";
}
}
public class Spam extends Yam {
public void a() {
System.out.println("Spam a");
}
}
public class Yam extends Lamb {
public void a() {
System.out.println("Yam a");
}
public String toString() {
return "Yam";
}
}
```
What would be the output of the following client code?
```java
Ham[] food = {new Spam(), new Yam(), new Ham(), new Lamb()};
for (int i = 0; i < food.length; i++) {
System.out.println(food[i]);
food[i].a();
food[i].b();
System.out.println();
}
```
The class diagram
The following diagram depicts each class's behavior:
```
Ham
 └── Lamb
      └── Yam
           └── Spam
```
The table
The following table also depicts each class's behavior:
<table>
<thead>
<tr>
<th>method</th>
<th>Ham</th>
<th>Lamb</th>
<th>Yam</th>
<th>Spam</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>Ham a</td>
<td>Ham a</td>
<td>Yam a</td>
<td>Spam a</td>
</tr>
<tr>
<td>b</td>
<td>Ham b</td>
<td>Lamb b</td>
<td>Lamb b</td>
<td>Lamb b</td>
</tr>
<tr>
<td>toString</td>
<td>Ham</td>
<td>Ham</td>
<td>Yam</td>
<td>Yam</td>
</tr>
</tbody>
</table>
The answer
```java
Ham[] food = {new Spam(), new Yam(), new Ham(), new Lamb()};
for (int i = 0; i < food.length; i++) {
System.out.println(food[i]);
food[i].a();
food[i].b();
System.out.println();
}
```
The code produces the following output:
Yam
Spam a
Lamb b
Yam
Yam a
Lamb b
Ham
Ham a
Ham b
Ham
Ham a
Lamb b
Relatedness of types
- Consider the task of writing classes to represent 2D shapes such as Circle, Rectangle, and Triangle.
- There are certain attributes or operations that are common to all shapes.
- perimeter - distance around the outside of the shape
- area - amount of 2D space occupied by the shape
- Every shape has these attributes, but each computes them differently.
Shape area, perimeter
- Rectangle (as defined by width $w$ and height $h$):
- area $= w \times h$
- perimeter $= 2w + 2h$
- Circle (as defined by radius $r$):
- area $= \pi r^2$
- perimeter $= 2 \pi r$
- Triangle (as defined by side lengths $a$, $b$, and $c$)
- area $= \sqrt{s(s-a)(s-b)(s-c)}$
where $s = \frac{1}{2}(a + b + c)$
- perimeter $= a + b + c$
Common behavior
- Let’s write shape classes with methods named `perimeter` and `area`.
- We’d like to be able to write client code that treats different shape objects in the same way, insofar as they share common behavior, such as:
- Write a method that prints any shape’s area and perimeter.
- Create an array of shapes that could hold a mixture of the various shape objects.
- Write a method that could return a rectangle, a circle, a triangle, or any other shape we’ve written.
- Make a `DrawingPanel` display many shapes on screen.
Interfaces
- **interface**: A list of methods that classes can promise to implement.
- Inheritance gives you an is-a relationship and code-sharing.
- A Lawyer object can be treated as an Employee, and Lawyer inherits Employee's code.
- Interfaces give you an is-a relationship without code sharing.
- A Rectangle object can be treated as a Shape.
- Analogous to non-programming idea of roles or certifications
- "I'm certified as a CPA accountant. The certification assures you that I know how to do taxes, perform audits, and do management consulting."
- "I'm certified as a Shape. That means you can be sure that I know how to compute my area and perimeter."
Interface syntax
- Interface declaration, general syntax:
```java
public interface <name> {
public <type> <name>(...);
...
public <type> <name>(...);
}
```
- Example:
```java
public interface Vehicle {
public double getSpeed();
public void setDirection(int direction);
}
```
- **abstract method**: A method header without an implementation.
- The actual bodies of the methods are not specified, because we want to allow each class to implement the behavior in its own way.
- Exercise: Write an interface for shapes.
Implementing an interface
- A class can declare that it *implements* an interface.
- This means the class contains an implementation for each of the abstract methods in that interface.
(Otherwise, the class will fail to compile.)
- Implementing an interface, general syntax:
```java
public class <name> implements <interface name> {
...
}
```
- Example:
```java
public class Bicycle implements Vehicle {
...
}
```
(What must be true about the Bicycle class for it to compile?)
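One possible answer, sketched out: Bicycle must supply a body for every abstract method listed in Vehicle, or it will not compile. A minimal version (the field names here are illustrative assumptions, not part of the original slides):

```java
interface Vehicle {
    double getSpeed();
    void setDirection(int direction);
}

// Compiles only because both abstract Vehicle methods get bodies here.
class Bicycle implements Vehicle {
    private double speed;   // starts at 0.0
    private int direction;  // starts at 0

    public double getSpeed() { return speed; }
    public void setDirection(int direction) { this.direction = direction; }
}

public class VehicleDemo {
    public static void main(String[] args) {
        Vehicle v = new Bicycle(); // interface type, concrete object
        v.setDirection(90);
        System.out.println(v.getSpeed()); // 0.0
    }
}
```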
Shape interface
- An interface for shapes:
```java
public interface Shape {
public double area();
public double perimeter();
}
```
- This interface describes the features common to all shapes. (Every shape has an area and perimeter.)
Interface requirements
- If we write a class that claims to be a Shape but doesn’t implement the area and perimeter methods, it will not compile.
Example:
```java
public class Banana implements Shape {
...
}
```
The compiler error message:
```java
Banana.java:1: Banana is not abstract and does not override abstract method area() in Shape
public class Banana implements Shape {
^
```
Complete Circle class
```java
// Represents circles.
public class Circle implements Shape {
    private double radius;

    // Constructs a new circle with the given radius.
    public Circle(double radius) {
        this.radius = radius;
    }

    // Returns the area of this circle.
    public double area() {
        return Math.PI * radius * radius;
    }

    // Returns the perimeter of this circle.
    public double perimeter() {
        return 2.0 * Math.PI * radius;
    }
}
```
Complete Rectangle class
```java
// Represents rectangles.
public class Rectangle implements Shape {
    private double width;
    private double height;

    // Constructs a new rectangle with the given dimensions.
    public Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    // Returns the area of this rectangle.
    public double area() {
        return width * height;
    }

    // Returns the perimeter of this rectangle.
    public double perimeter() {
        return 2.0 * (width + height);
    }
}
```
Diagrams of interfaces
- We draw arrows upward from the classes to the interface(s) they implement.
- There is a supertype-subtype relationship here; e.g., all Circles are Shapes, but not all Shapes are Circles.
- This kind of picture is also called a UML class diagram.
- Exercise: Implement the Circle, Rectangle, and Triangle classes.
### Complete Triangle class
```java
// Represents triangles.
public class Triangle implements Shape {
private double a;
private double b;
private double c;
// Constructs a new Triangle given side lengths.
public Triangle(double a, double b, double c) {
this.a = a;
this.b = b;
this.c = c;
}
// Returns this triangle's area using Heron's formula.
public double area() {
double s = (a + b + c) / 2.0;
return Math.sqrt(s * (s - a) * (s - b) * (s - c));
}
// Returns the perimeter of this triangle.
public double perimeter() {
return a + b + c;
}
}
```
### Interfaces and polymorphism
- **Using interfaces doesn't benefit the class author so much as the client code author.**
- The is-a relationship provided by the interface means that the client can take advantage of polymorphism.
- Example:
```java
public static void printInfo(Shape s) {
System.out.println("The shape: " + s);
System.out.println("area : " + s.area());
System.out.println("perim: " + s.perimeter());
System.out.println();
}
```
- Any object that implements the interface may be passed as the parameter to the above method.
```java
Circle circ = new Circle(12.0);
Triangle tri = new Triangle(5, 12, 13);
printInfo(circ);
printInfo(tri);
```
- **Arrays of interface type**
- We can create an array of an interface type, and store any object implementing that interface as an element.
```java
Circle circ = new Circle(12.0);
Rectangle rect = new Rectangle(4, 7);
Triangle tri = new Triangle(5, 12, 13);
Shape[] shapes = {circ, tri, rect};
for (int i = 0; i < shapes.length; i++) {
printInfo(shapes[i]);
}
```
- Each element of the array executes the appropriate behavior for its object when it is passed to the `printInfo` method, or when `area` or `perimeter` is called on it.
---
Certification of Safety-Critical Software Under DO-178C and DO-278A
Stephen A. Jacklin
NASA Ames Research Center, Moffett Field, CA, 94035
The RTCA has recently released DO-178C and DO-278A as new certification guidance for the production of airborne and ground-based air traffic management software, respectively. Additionally, RTCA special committee SC-205 has also produced, at the same time, five other companion documents. These documents are RTCA DO-248C, DO-330, DO-331, DO-332, and DO-333. These supplements address frequently asked questions about software certification, provide guidance on tool qualification requirements, and illustrate the modifications recommended to DO-178C when using model-based software design, object oriented programming, and formal methods. The objective of this paper is to first explain the relationship of DO-178C to the former DO-178B in order to give those familiar with DO-178B an indication of what has been changed and what has not been changed. With this background, the relationship of DO-178C and DO-278 to the new DO-278A document for ground-based software development is shown. Last, an overview of the new guidance contained in the tool qualification document and the three new supplements to DO-178C and DO-278A is presented. For those unfamiliar with DO-178B, this paper serves to provide an entry point to this new certification guidance for airborne and ground-based CNS/ATM software certification.
I. Introduction
RTCA DO-178B\(^1\) has long been regarded as a document providing the premier means or path to obtain FAA certification of software to be used in airborne systems. DO-178B was not intended to be a process guide for software certification, but rather a description of what high-quality software development processes should be put in place to create airborne software that performs its desired function. In principle, if life cycle evidence can be produced to demonstrate that these processes have been correctly and appropriately implemented, then such software should be certifiable. The document is maintained by the RTCA (Radio Technical Commission for Aeronautics, established 1935), which is a private association of over 250 aeronautical organizations, including the FAA, NASA, DOD, other government agencies, aircraft manufacturers, airline operators, aircraft equipment suppliers, and various pilot associations.
Seven years ago, the RTCA created special committee 205 (SC-205) to produce a revision of DO-178B to account for new software development and verification technologies that were deemed immature at the time DO-178B was written. The new version, DO-178C “Software Considerations in Airborne Systems and Equipment Certification”\(^2\) was released in December 2011. Rather than placing all of the new guidance in DO-178C, the special committee decided to place the vast majority of the new guidance in six other documents. These documents were released together with DO-178C. They are:
- RTCA DO-278A\(^3\): Software Integrity Assurance Considerations for Communication, Navigation, Surveillance and Air Traffic Management (CNS/ATM) Systems
- RTCA DO-248C\(^4\): Supporting Information for DO-178C and DO-278A
- RTCA DO-330\(^5\): Software Tool Qualification Considerations
- RTCA DO-331\(^6\): Model-Based Development and Verification Supplement to DO-178C and DO-278A
- RTCA DO-332\(^7\): Object-Oriented Technology and Related Techniques Supplement to DO-178C and DO-278A
- RTCA DO-333\(^8\): Formal Methods Supplement to DO-178C and DO-278A
\(^1\) Aerospace Engineer, Intelligent Systems Division, Mail Stop 269-2, Senior Member AIAA
American Institute of Aeronautics and Astronautics
Figure 1. RTCA airborne and CNS/ATM software certification-related documents published prior to December 2011. Dashed lines indicate supplements.
Figure 1 illustrates the functional relationship of airborne and CNS/ATM software certification-related documents published by RTCA prior to December 2011. DO-178B was a derivative product of DO-178A, DO-178, and other documents and was released in December 1992. The guidance contained in DO-178B was intended to be applicable to both airborne and ground-based software development. DO-278 was intended to be a supplemental document to modify the guidance of DO-178B for CNS/ATM software. Hence, both DO-178B and DO-278 together were to be referenced for the ground side. DO-248B was an additional supplement that provided no additional certification guidance, but contained an appendix of frequently asked software certification questions, several discussion papers of key DO-178B concepts, and the rationale used to create the DO-178B and DO-278 documents. The boxes in dashed lines indicate supplemental documents that were not intended to be complete in themselves.
Figure 2 illustrates how the new documents introduced by RTCA in December 2011 for DO-178C and DO-278A relate to each other. In this diagram, the dashed boxes indicate supplemental documents that are not intended to be used on their own. The supplemental documents modify the guidance contained in DO-178C and DO-278A. On the airborne side, DO-178C is the key document and it is a direct derivative of DO-178B. On the ground side, DO-278A is the key document, but it is not a direct derivative of DO-178B. Rather, DO-278A combines the guidance of DO-178C and DO-278 to produce a stand-alone reference for ground-based software verification. For both airborne and ground-based software, DO-331, DO-332, and DO-333 provide additional guidance for software using model-based development, object-oriented programming, and formal methods, respectively. These supplements provide additional guidance for both DO-178C and DO-278A, but need not be used if not applicable. DO-330 is a stand-alone document containing guidance for tool qualification and is intended to be used not only by persons using tools to verify software or auto-generate code, but also by persons developing the software tools. The tool developers need not read DO-178C or DO-278A because the guidance contained in those documents that is relevant for tool software development is repeated in DO-330.
The purpose of this paper is to provide an overview of the new guidance for safety-critical airborne and ground-based CNS/ATM software contained in DO-178C, DO-278A, and the other documents. In section II, the similarities of DO-178C to DO-178B will be presented by reviewing the basics of the DO-178B verification philosophy. In section III, an overview of the major new guidance contained in DO-178C is presented to highlight what has been changed. Section IV discusses the relationship of DO-278A, developed for ground-based CNS/ATM software, to the guidance presented in DO-178C for airborne software. The remaining sections of the paper discuss the new guidance contained in the other documents: section V for DO-248C, section VI for DO-330, section VII for DO-331, section VIII for DO-332, and section IX for DO-333.
Within the scope of this paper it is not possible to cite all or even most of the guidance contained in the new DO-178C document set from RTCA. Taken as a whole, the new documents comprise over 1000 pages of new documentation. The interested reader must download these documents from RTCA in order to fully appreciate and apply the new guidance. This paper provides an entry point for those interested in understanding the scope of these publications.
II. The A-B-Cs of the DO-178C Software Verification Philosophy
The purpose of this section is to identify the similarities of the guidance contained in DO-178C to past versions of the document. DO-178C is built on the principles established by its predecessor documents, DO-178, DO-178A, and DO-178B. Since testing can never prove the absence of software errors, the primary DO-178C philosophy is to demonstrate the quality of the software development process from beginning to end in an effort to minimize the creation of errors. DO-178C, like DO-178B, calls for an extensive amount of requirements-based software testing to be performed, but equally important is the emphasis placed on system safety analyses, software analyses, software reviews, and formal proofs used to augment and support the development process. The subsections below identify the guidance presented in DO-178B that is retained in DO-178C.
A. Software Levels and Coverage.
DO-178C (section 2) uses the same software level categories (SL-A to SL-E) as are used in DO-178B. The meaning of these categories is unchanged from their meaning in DO-178B. Level A is the highest level of software criticality. Like DO-178B, DO-178C (section 6) requires extensive verification coverage testing for level A and B software. Coverage refers to the degree to which it can be proved that the verification activities cover all requirements, code structure, and object code. DO-178C divides coverage into two types, requirements-based coverage and structural coverage. Requirements-based coverage analyzes the software test cases to confirm that they provide proof that all requirements have been satisfied. Structural coverage is met by proving that the test cases execute all code statements and that all data coupling and control paths have been tested.
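As a loose illustration of the statement-coverage idea (a sketch, not DO-178C guidance), structural coverage can be thought of as recording which statements of a unit under test actually execute while the requirements-based tests run; coverage is complete when every probe has fired. The `clamp` function, its probe names, and the test cases below are all hypothetical.

```python
# Hedged sketch: statement coverage via manual instrumentation.
# Each branch of the hypothetical unit under test records its own
# execution; coverage is complete when every probe has fired.
executed = set()

def clamp(x, lo, hi):
    executed.add("check_lo")
    if x < lo:
        executed.add("ret_lo")
        return lo
    executed.add("check_hi")
    if x > hi:
        executed.add("ret_hi")
        return hi
    executed.add("ret_x")
    return x

# Every probe that must fire for full statement coverage.
ALL_PROBES = {"check_lo", "ret_lo", "check_hi", "ret_hi", "ret_x"}

# Requirements-based test cases, one per behavior of the requirement.
assert clamp(-5, 0, 10) == 0   # below range
assert clamp(15, 0, 10) == 10  # above range
assert clamp(5, 0, 10) == 5    # within range

print(sorted(ALL_PROBES - executed))  # → [] : full statement coverage
```

In practice a coverage tool performs the instrumentation automatically; the point here is only the analysis step of comparing executed statements against the full set.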
B. Software Development Plan.
The Software Development Plan identifies the method of software development. It specifies the coding standards, programming languages, software testing, debugging tools, software development procedures, and the hardware used to develop and execute the software. As stated by DO-178C (section 4), the purpose is “to choose requirements development and design methods, tools, and programming languages that limit the opportunity for introducing errors, and verification methods that ensure that errors introduced are detected”. Tools include things like compilers, linkers, and V&V tools. Reviews are to be regularly conducted throughout the software development process to ensure that the Software Development Plan is being followed.
C. Software Development Process.
As was true in past versions of DO-178, DO-178C (section 5) views the software development process as a life cycle that starts with the planning and development of software requirements, continues through the software development and testing, and ends with the deployment and maintenance of the software. The software development process begins with recognition that the software is part of a larger system (e.g., an aircraft). The system defines the requirements of the software. In DO-178C, these are referred to as the system requirements allocated to software. The list of requirements includes not only the performance specifications of what the software is supposed to do, but also includes requirements derived from the System Safety Assessment and other documents. The decomposition of system requirements into software requirements remains a key step in the software development process. The incomplete or incorrect formulation of the software requirements will produce validation failures during software integration testing. Validation is the process of determining that the software requirements are correct and complete. DO-178C does not provide guidance for software validation testing because it is reasoned that software that is verified to be correct should, in theory, have no validation problems during software integration testing, unless the software requirements are incomplete or incorrect.
The software development process transforms high-level and derived high-level software requirements into code via a sequential process. In the first step, the high-level requirements are used to develop the software architecture. From the software architecture, low-level and derived low-level requirements are developed. A derived requirement is anything that the software must do to function properly, yet is not stated as part of the software performance or safety requirements. Low-level requirements are used to produce the source code, and the source code is used to generate the object code for the target computer. It sometimes happens that high-level requirements are used to generate source code directly, in which case those high-level requirements are also considered to be low-level requirements. Like DO-178B, DO-178C requires traceability from requirements to code.
D. Software Verification Process.
The software verification process is aimed at showing the correctness of the software. It consists of requirement reviews, code reviews, analyses, and testing. All steps in the decomposition of high-level system requirements to object code are considered in this process. DO-178B and DO-178C require examination of the output of all processes to check for software correctness and to find errors. DO-178C (section 6) requires that: 1) the high-level software requirements are correctly and completely formed from the system requirements, 2) the high-level requirements are complete and consistent, 3) the software architecture correctly and completely meets all high-level requirements, 4) the low-level requirements correctly and completely fulfill the software architecture, 5) the low-level requirements are consistent and correct, 6) the software code correctly satisfies all low-level requirements, 7) all code is traceable to one or more low-level requirements, and finally, 8) the object code correctly implements the software on the target computer, and it is traceable and complies with all low-level and high-level requirements.
E. Certification.
The certification process described in DO-178C (section 10) is the same as that presented in DO-178B. Software certification (technically, “approval”) is obtained as the result of the certification authority agreeing that the Plan for Software Aspects of Certification (PSAC) has been fulfilled. In the United States, the authority responsible for certifying aircraft flight software is the Federal Aviation Administration (FAA). The PSAC is developed in collaboration between the software developer’s Designated Engineering Representative (DER) and the FAA. The same certification liaison process presented in DO-178B is also contained in DO-178C (section 9).
F. Other Similarities.
The sections contained in DO-178C describing the software life cycle (section 3), the software configuration management plan (section 7), the software quality assurance plan (section 8), the software development standards
(section 4.5), the software design standards (section 11), and the overall verification activities to be performed are generally the same as those presented in DO-178B.
III. What’s New in DO-178C?
Since DO-178C was seven years in the making, one might assume that the document has been substantially changed from DO-178B, but this is not true. Although DO-178C has many minor changes, these are mostly either editorial in nature or are clarifications made to help readers better understand key DO-178B concepts. These changes are frequent and helpful. However, the core structure and content of DO-178C are essentially the same as those of DO-178B. The benefit of this similarity is that DO-178C is backward compatible with DO-178B by design: existing software previously approved under DO-178B is also approvable under DO-178C.
The major new guidance, additions, and modifications introduced in the DO-178C documentation set are contained in the supplemental documents and will be discussed in later sections of this paper. The subsections below highlight some of the new guidance contained in the DO-178C core document.
A. Activities, Guidance, and Guidelines.
One of the most frequent changes seen in DO-178C is that the word “activities” replaces the word “guidance” found in DO-178B, and wherever “guidelines” appeared in DO-178B, DO-178C uses the word “guidance”. The reason for these changes is that DO-178B described processes (e.g., the software development process) in terms of a list of activities the software developer was to perform, so it made sense to call them activities, not guidance. Commensurate with this change, the tables in Annex A have been modified to include a new “activity” column that lists the activities supporting each objective. DO-178C reserves the word guidance for the steps most important to certification authorities. The word guideline is still used in some places, but its meaning is intended to indicate supporting information.
B. Parameter Data Item Files.
DO-178C treats parameter data items in the same manner as executable object code. DO-178C defines parameter data items as data that influences the behavior of the software without modifying the executable object code. The parameter data file is directly usable by the processing unit of the target computer. Parameter data file items can be used to configure the software or provide database information that the executable code can use in execution. In nearly all instances, DO-178C replaces the phrase “executable object code” used in DO-178B with “executable object code and parameter data items”. In making this change, DO-178C calls for the same verification process to be followed for parameter data file items as that done for executable object code.
C. Bi-directional Software Traceability.
DO-178C emphasizes that two-way or bi-directional traceability is required between 1) system requirements (allocated to software) and high-level requirements, 2) high-level requirements and low-level requirements, 3) low-level requirements and source code, 4) software requirements and test cases, 5) test cases and procedures, and 6) test procedures and test results. Although DO-178B probably had this intent, the actual wording implied traceability only in the decomposition direction from high-level requirements to source code. DO-178C makes it clear that traceability needs to be verifiable in both directions and that verifiable trace data between these entities must exist. This assures that orphan source code and dead source code are not inadvertently produced.
DO-178C allows an exception to requirements traceability for compilers that produce object code that is not directly traceable to source code. However, the software planning process must provide a means to detect this code and ensure its verification.
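The bi-directional trace check described above can be sketched as a simple set analysis: every requirement must trace to at least one code unit, every code unit must trace back to at least one requirement (otherwise it is orphan code), and each forward link must have a matching backward link. The requirement and module names below are hypothetical, and real trace data would of course include the further levels DO-178C enumerates.

```python
# Hedged sketch: a bi-directional traceability check between
# requirements and source modules. Names are hypothetical.

def check_traceability(req_to_code, code_to_req):
    """Return (orphan_code, untraced_reqs, inconsistent_links)."""
    traced_code = {c for codes in req_to_code.values() for c in codes}
    orphan_code = set(code_to_req) - traced_code   # code with no requirement
    untraced_reqs = {r for r, codes in req_to_code.items() if not codes}
    # Consistency: each forward link must have a matching backward link.
    inconsistent = {
        (r, c)
        for r, codes in req_to_code.items()
        for c in codes
        if r not in code_to_req.get(c, set())
    }
    return orphan_code, untraced_reqs, inconsistent

# Hypothetical trace data: HLR-2 traces to nothing; mod_b is orphan code.
req_to_code = {"HLR-1": {"mod_a"}, "HLR-2": set()}
code_to_req = {"mod_a": {"HLR-1"}, "mod_b": set()}
orphans, untraced, bad = check_traceability(req_to_code, code_to_req)
print(orphans, untraced, bad)  # → {'mod_b'} {'HLR-2'} set()
```

A real trace matrix spans all six link types listed above; the set logic, however, is the same at each level.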
D. Product Service History.
Under alternative methods (section 12.3), DO-178C provides expanded guidance for using product service history as a means to gain certification credit. Software that has been in service for a length of time and whose executable object code has not been modified in an uncontrolled manner may be given certification credit. DO-178C states that the software developer should propose the amount of certification credit sought in the Plan for Software Aspects of Certification (PSAC). Product service history is identified as part of the PSAC in DO-178C.
E. Tool Qualification Levels.
DO-178C states that the tools used to generate software or to verify software must themselves be verified to be correct. This tool verification process is called qualification. Moreover, a tool such as a compiler qualified for one project is not necessarily qualified for a different project. DO-178C distinguishes tools used to automatically generate software from tools used to automate some portion of the verification process. In general, DO-178C requires greater scrutiny of software generation tools. DO-178C sets forth five tool qualification levels (TQL-1 through TQL-5) based on the software level and whether the tool is used to generate software, to verify software, or to detect software errors. DO-178C refers the reader to the tool qualification supplement (DO-330) for specific guidance.
F. Formal Methods and Assurance Cases.
DO-178C removes explicit reference to the use of formal methods as an alternative method to satisfy DO-178C objectives, but instead cites the use of assurance cases to provide adequate confidence and evidence that a product or process satisfies its requirements. DO-178C defines an assurance case as a technique in which arguments are explicitly given to link the evidence to the claims of compliance with the system safety objectives. A rationale for an assurance case may be included in the software plans if that means of verification is planned. Guidance on the use of formal methods is presented in the DO-333 supplement on Formal Methods.
IV. RTCA DO-278A: Software Integrity Assurance Considerations for CNS/ATM Systems
DO-278A provides guidance for the production of ground-based, Communication, Navigation, Surveillance, and Air Traffic Management (CNS/ATM) software, just as DO-178C provides guidance for the production of airborne software. Because the former DO-278 was intended to be used as a supplement to DO-178B (see Fig. 1), CNS/ATM software developers were required to be familiar with DO-178B. DO-278 described the additions, deletions, and modifications to DO-178B that applied to the verification of ground-side software.
In contrast, DO-278A was created by combining DO-178C and DO-278 to make a single, stand-alone document. As a result, DO-278A may be used without reference to DO-178C. Both DO-178C and DO-278A have the same section names and use the same section numbers; the differences are that some subsections have been added to DO-278A. Many of the differences between DO-178C and DO-278A are produced by changes in terminology, for example:
- “Software level” in DO-178C was replaced with “assurance level” in DO-278A
- “Certification” authority in DO-178C was replaced with “approval” authority in DO-278A
- “Aircraft” or “airborne system” in DO-178C was replaced with “CNS/ATM system” in DO-278A
- “Adaptation” data in DO-178C was replaced with “parameter” data in DO-278A
- The “Plan for Software Aspects of Certification (PSAC)” in DO-178C is referred to as “Plan for Software Aspects of Approval (PSAA)” in DO-278A
A. Assurance Level Definitions.
Whereas DO-178C defines five software levels (A-E) to categorize the effect of software failure in airborne systems, DO-278A defines six assurance levels (AL-1 through AL-6). DO-178C abbreviates software level with SL, whereas DO-278A uses AL for assurance level. Table 1 compares the DO-178C and DO-278A software level classification schemes. As cited in DO-278 (but removed from DO-278A), assurance level 4 (or AL-4) was developed to account for “certain CNS/ATM systems where AL-3 was too stringent and AL-5 was too lenient”. DO-278A includes a column in the Process Objectives and Output Tables of Annex A to indicate the objectives that specifically apply to AL-4.
B. Tool Qualification.
DO-278A contains essentially the same tool qualification guidance contained in DO-178C. Like DO-178C, DO-278A requires that software development and verification tools be qualified when the processes used in DO-278A are eliminated, reduced, or automated by the use of software tools. The main difference is that DO-278A takes into account the additional assurance level used for CNS/ATM systems. DO-278A refers the reader to DO-330 for an in-depth discussion of the activities and guidance for tool qualification.
C. Service Experience.
The Product Service History section from DO-178C was expanded and added to DO-278A as the Service Experience section. This section describes how previous usage of a software product can be counted toward
approval credit. DO-278A specifies the requirements for receiving credit for product service history. The main objective is to verify that an equivalent level of safety is provided by the service experience history as would be otherwise obtained by following the standard DO-278A guidance. Whereas DO-178C identifies flight-hours as a useful metric for airborne software service experience, DO-278A cites in-service hours as an appropriate metric for CNS/ATM systems. DO-278A also provides guidance for systems having deactivated code and for those systems using recovery methods to recover from software or system errors.
D. COTS Software.
DO-278A includes an extensive section on the use of Commercial Off-The-Shelf (COTS) software in CNS/ATM systems. This section expands the COTS material presented in DO-278. In DO-278A, COTS software is software that is sold by commercial vendors without modification or development of the software required. Any software needed to integrate COTS software into a CNS/ATM system (e.g., wrapper code) is approvable only if it is developed in a manner that fulfills all the objectives of DO-278A.
The guidance provided by DO-278A for COTS software aims to ensure that the level of confidence in COTS software is the same as that for software developed according to the standard guidance provided in DO-278A. In order to identify any software development weaknesses of COTS software, DO-278A recommends that a gap analysis be performed to identify the extent to which the objectives of DO-278A can be demonstrated to have been achieved by the COTS software. An assurance plan should be developed to specify how the gaps will be satisfied for the assurance level sought. DO-278A recommends that a COTS software integrity assurance case be developed that provides a rationale for demonstrating that the software meets its requirements through a rigorous presentation of claims, arguments, evidence, assumptions, justifications, and strategies. As such, COTS software must essentially be shown to meet all the objectives of DO-278A. DO-278A presents an extensive explanation of the software planning, objectives, activities, acquisition, verification, configuration management and quality assurance processes and objectives in Section 12 and in the tables of Annex A.
E. Additional System Considerations.
DO-278A addresses additional topics for ground software verification not considered in DO-178C. These are software communication, security, adaptability, and cutover (or hot-swapping). More than airborne software, ground software is composed of many distributed system components. System communication guidance is provided for ground software systems that are coupled together. The main concern when coupling systems is that software approved to a lower assurance level might potentially corrupt software approved to a higher level. The general fix for this situation is to specify further verification activities to increase the assurance level of the lower-level software. A section on hot-swapping specifies additional considerations to ensure software integrity for systems that are in use 24 hours a day and require real-time software and hardware updates.
V. DO-248C: Supporting Information for DO-178C and DO-278A
DO-248C provides a list of frequently asked questions, discussion papers, and rationale for DO-178C and DO-278A. It is not intended that this supplement be read from front to back, but rather topically. A list of searchable key words is provided to help the reader find the most pertinent material for a topic of interest.
DO-248C provides a wealth of discussion papers that contain explanatory information supporting the guidance found in DO-178C and DO-278A. Those who worked on SC-205 will recognize that these discussion papers
encapsulate the great debates held during the formulation of DO-178B and DO-178C. Discussion papers were the primary means SC-205 members used to facilitate discussion of proposed changes to DO-178B. Most are short (1-2 page) documents that describe the supporting rationale for a proposed change. Literally hundreds of discussion papers were written over the course of the project. DO-248C also presents an appendix of 84 frequently asked questions and answers. Examples are:
- Does the DO-178C/DO-278A definition of COTS software include option-selectable software?
- What needs to be considered when performing structural coverage at the object code level?
- How can all Level D (AL-5) objectives be met if low-level requirements and source code are not required?
The last section of DO-248C presents 11 rationale arguments, one discussing the intended use of each section of DO-178C (sections 2 through 12), plus a rationale for the creation of the supplements to DO-178C and DO-278A.
It is important to note that while DO-248C is interesting and useful, it does not provide any additional certification or approval guidance for airborne or ground-based CNS/ATM software. It provides a large quantity of explanatory material and a record of the great arguments and rationale developed while writing the new guidance.
VI. DO-330: Software Tool Qualification Considerations
DO-330 is a stand-alone document that provides guidelines to judge when tool qualification is necessary and, if so, what verification activities are recommended. DO-330 presents guidance for both tools used to create software and tools used to verify software. The goal of the guidance is to ensure that these tools are developed to the same software assurance level as the software they produce or verify. DO-330 repeats much of the same guidance contained in DO-178C because it is intended to be used by audiences that are not familiar with DO-178C or DO-278A. Hence, DO-330 is very similar in appearance and content to DO-178C and has the same document organization. Developers of software generation and verification tools, therefore, need not consult DO-178C. The subsections below highlight some of the tool qualification guidance provided by DO-330.
A. Tool Qualification Levels.
DO-330 defines five tool qualification levels (TQLs). Tool qualification level 1 (or TQL-1) is the highest qualification level and has the most objectives and verification activities. TQL-1 is required for software tools that are used to generate either DO-178C software level A (SL-A) software or DO-278A assurance level 1 (AL-1) software. TQL-2 is required for software tools that are used to generate either SL-B or AL-2 software. TQL-3 is required for software tools that are used to generate either SL-C or AL-3 software. TQL-4 and TQL-5 are required for software tools that are used to verify software (not generate it). DO-330 places more stringent verification requirements on tools used to generate code than tools used to verify code. This distinction is the same as that defined in section 12.2 of DO-178C and DO-278A.
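The TQL selection rule described above can be sketched as a simple lookup. This is an illustrative simplification, not the normative DO-330 criteria table: in particular, the split between TQL-4 and TQL-5 for verification tools depends on qualification criteria that are reduced here to software level alone, and the function name is hypothetical.

```python
# Hedged sketch of the TQL selection rule described in the text:
# the required tool qualification level depends on the software
# (or assurance) level and on whether the tool generates code
# or only verifies it. Simplified for illustration.

# Generation tools: SL-A/AL-1 -> TQL-1, SL-B/AL-2 -> TQL-2, SL-C/AL-3 -> TQL-3.
GENERATION_TQL = {"A": 1, "B": 2, "C": 3}

def required_tql(software_level, generates_code):
    if generates_code:
        # Generation tools at the lowest levels fall to TQL-5
        # (simplified here; DO-330 states the full criteria).
        return GENERATION_TQL.get(software_level, 5)
    # Verification-only tools are assigned TQL-4 or TQL-5; the
    # split is reduced to software level here for illustration.
    return 4 if software_level in ("A", "B") else 5

print(required_tql("A", True))   # → 1 : TQL-1 to generate level A software
print(required_tql("C", False))  # → 5 : a verification tool at level C
```

The actual assignment in DO-178C section 12.2 and DO-330 is driven by qualification criteria (what the tool could insert or fail to detect), so this lookup should be read only as a memory aid for the level pairings stated above.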
B. Tool Verification Activities.
Annex A of DO-330 presents the Tool Qualification Objective Tables to indicate which objectives, processes, and activities need to be satisfied as a function of the TQL. TQL-1, TQL-2, and TQL-3 (the higher levels) generally require the satisfaction of every process, every activity, and every output verification step listed in the tables. TQL-1 also has the most objectives requiring satisfaction with independence. TQL-2 has fewer objectives requiring independent satisfaction than TQL-1, but requires almost the same activities. TQL-3 has fewer activities than TQL-2 and does not require independence of verification except for the quality assurance activities. TQL-4 (the special category that has no equivalence to a DO-178C software level) provides less stringent satisfaction of the tool planning process, the tool development process, and the verification of outputs. TQL-5 (equivalent to SL-D) further relaxes the requirements by requiring no tool development process, verification of the tool requirements process, integration process, or testing of tool outputs. TQL-6 is not addressed by the Annex of DO-330.
C. Tool Development Process.
DO-330 provides essentially the same software development guidance for tool qualification as that presented in DO-178C for software verification. An important distinction, however, is that DO-330 recognizes that there are two life cycles to consider. There is the life cycle associated with the development of the software tool, and there is the life cycle of the software on which the tool will find application. DO-330 provides guidance for both.
DO-330 requires that a tool operational requirements (TOR) document be written to specify the requirements of how the tool will be used within the software life cycle. The requirements set forth in the TOR are required to be verified and validated. DO-330 recommends the same software development life cycle processes as those specified in DO-178C or DO-278A, but references the TOR rather than system requirements. The tool requirements are, in turn, developed from the TOR and are then used to develop the tool architecture and low-level requirements. The verification of the tool design process, the tool coding process, and the tool integration process are similar to those presented in DO-178C and DO-278A. Like DO-178C, DO-330 requires bi-directional traceability between requirements and code.
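The bi-directional traceability requirement lends itself to a simple mechanical check: every requirement must trace to code, and every code unit must trace back to some requirement. A minimal sketch follows (the data layout and identifiers are assumptions of this example, not DO-330 artifacts):

```python
def trace_gaps(req_to_code, all_reqs, all_units):
    """Return (reqs_without_code, code_without_reqs).

    req_to_code maps a requirement id to the set of code units that
    implement it; both directions must be complete for bi-directional
    traceability to hold."""
    traced_units = set()
    reqs_without_code = set()
    for req in all_reqs:
        units = req_to_code.get(req, set())
        if not units:
            reqs_without_code.add(req)
        traced_units |= set(units)
    code_without_reqs = set(all_units) - traced_units
    return reqs_without_code, code_without_reqs

gaps = trace_gaps({"TR-1": {"parse.c"}, "TR-2": set()},
                  {"TR-1", "TR-2"}, {"parse.c", "emit.c"})
# gaps == ({"TR-2"}, {"emit.c"}): one untraced requirement, one orphan unit
```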
D. Tool Verification Process.
The tool verification process recommended by DO-330 consists of two parts. The first part is comprised of a combination of reviews, analyses, and test cases to verify that the requirements are correct, that the tool architecture meets the requirements, that the low-level requirements satisfy the software architecture requirements, and that the source code fulfills the high and low-level requirements.
The second part of tool verification ensures that the software tool meets its intended requirements as a software development or verification tool. The tool operational verification process is performed to provide confidence that the outputs and functionality of the tool comply with the tool operational requirements in the intended operational environment. The operational verification process consists of a combination of reviews, analyses, and tests to demonstrate full coverage of the software life cycle activities intended to be eliminated, reduced, or automated by use of the tool. The test cases and procedures used to show TOR satisfaction should be requirements-based tests. The test procedures should include tests of the object code in its operational environment. The set of inputs used in the test cases should represent those found in actual tool use.
VII. RTCA DO-331: Model-Based Development and Verification Supplement
The DO-331 supplement is intended to augment DO-178C and DO-278A when model-based development and verification are used as part of the software life cycle. The supplement does not provide guidance on the use of models for verification purposes. It does not discuss how model-based development may be used to support automated code generation, automated test generation, or the automated verification of software requirements and architectures. Rather, the aim of the supplement is to identify how the guidance in DO-178C and DO-278A may be modified when software is developed using model-based methods.
In DO-331, a model can be an abstraction of actual software code or a portion of the verification process. The model may also contain requirements and/or definition of the software architecture so that it may be used for direct analysis of the software behavior.
DO-331 repeats some of the guidance contained in DO-178C to show precisely where modifications and additions for model-based design apply. The following subsections provide a summary of the major new guidance contained in DO-331.
A. Model Requirements.
DO-331 makes a distinction between requirements for specification models and requirements for design models. Specification models use high-level software requirements to state model functional, performance, interface, and safety characteristics. Design models use primarily low-level requirements and software architecture specifications to represent internal data structures, data flow, and control flow. In either case, DO-331 requires that the models specify the configuration items, modeling techniques, model element libraries, interface descriptions, and model development environment. Traceability between the design model, the low-level requirements, and the model code is required. There should be no model code that cannot be traced back to low-level requirements.
B. Model Coverage Analysis.
The objective of model coverage analysis is to discover requirements contained in the model that were not exercised by verification test cases. DO-331 recommends model coverage be shown for all state machine transitions, logic equation decisions, numeric data equivalence classes (and boundary data), and all derived requirements. Model coverage analysis should be performed using requirements-based verification test cases.
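For the state-machine portion of this analysis, coverage reduces to a set difference between the model's transitions and those exercised by the requirements-based tests. A toy sketch, with an invented transition representation:

```python
def uncovered_transitions(model_transitions, test_traces):
    """model_transitions: set of (state, event, next_state) triples.
    test_traces: iterable of executed transition sequences.
    Returns the model transitions never exercised by any test."""
    exercised = set()
    for trace in test_traces:
        exercised.update(trace)
    return set(model_transitions) - exercised

# Toy model: the fault transition is never exercised by the test trace.
model = {("Off", "power", "On"), ("On", "power", "Off"),
         ("On", "fault", "Safe")}
missing = uncovered_transitions(
    model, [[("Off", "power", "On"), ("On", "power", "Off")]])
# missing == {("On", "fault", "Safe")}
```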
C. Model Simulation.
In order to obtain certification credit for simulation, DO-331 requires that the applicant clearly show what reviews and analyses are needed to satisfy the model verification objectives. The analyses must show that simulation cases exist for each model requirement and that these simulations address both normal range and robustness test inputs. Verification of the executable object code is encouraged to be primarily performed by testing in the target computer environment. The objective of model simulation is to verify that the model satisfies the requirements used to create it and to gather evidence that the model is accurate, consistent, and compatible with all system-level, high-level, and low-level requirements.
D. Software Model Standards.
DO-331 recommends that software models be developed to standards that define the modeling techniques used to build the models. These software model standards specify the methods and tools used to develop the models, including the modeling language. The software model standards should include guidance on programming styles such as naming conventions, diagram layouts, allowable elements, and the number of nesting levels or architectural layers. The software model standards should state the level of traceability between requirements and other life cycle data. The software model standards should also specify any constraints on the use of modeling tools and model element libraries.
VIII. RTCA DO-332: Object Oriented Technology and Related Techniques Supplement
RTCA DO-332 was written to provide guidance on the use of object-oriented programming languages that use concepts such as inheritance, polymorphism, overloading, type conversions, exception management, dynamic memory management, virtualization, and other concepts not in common usage at the time DO-178B was written. The DO-332 supplement is very well written and includes much explanatory text concerning the basic features of object-oriented programming.
DO-332 identifies the additions, modifications, and deletions to be made to the DO-178C (or DO-278A) objectives and activities when object-oriented techniques are used in airborne or ground-based software. Annex A of DO-332 contains modifications to the verification activities specified in DO-178C, while Annex C presents modifications to the verification activities specified in DO-278A. Annex D provides a discussion of vulnerabilities associated with the use of object-oriented technologies. The highlights of the new guidance are presented below.
A. Software Development Process.
DO-332 recommends that the class hierarchy used should be based on high-level requirements and verified for consistency. A locally type-consistent class hierarchy should be developed with associated low-level requirements wherever substitution of types happens. Local type consistency is required to be verified. DO-332 also calls for the software development process to include a plan for dynamic memory management. A strategy for exception management should also be included. DO-332 states that bi-directional trace data should exist to show traceability between requirements and methods, since methods are used to implement all functionality in object-oriented programs. In addition, a requirement which traces to a method in a class should also trace to the method in its subclasses.
B. Software Verification Process.
DO-332 states that test cases should ensure that class constructors properly initialize the state of their objects. In cases where inheritance with method overriding and dynamic dispatch are used, verification activities should be done to ensure that all type substitutions are safe and that each class passes the same verification testing as its parent types.
DO-332 stresses the importance of having verification activities to verify that dynamic memory allocation is done correctly. Verification activities must show that the dynamic memory management is robust to reference ambiguity, fragmentation starvation, de-allocation starvation, memory exhaustion, premature de-allocation, lost updates, stale references, and unbounded allocation or de-allocation times. It should be verified that there is sufficient memory to accommodate the maximum storage required. The memory manager must be verified to successfully allocate memory for every request as long as there is sufficient free memory. The means of calculating the amount of free memory remaining should also be verified to be accurate and free from leakage problems whereby memory which is no longer needed fails to be de-allocated.
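Two of the failure modes above, invalid or premature de-allocation and leaks, can be caught by auditing an allocation log. The toy tracker below, with an invented interface, is in the spirit of that verification activity:

```python
class AllocationAudit:
    """Replays alloc/free events and flags invalid frees and leaks."""

    def __init__(self):
        self.live = set()
        self.invalid_frees = []

    def alloc(self, block_id):
        self.live.add(block_id)

    def free(self, block_id):
        if block_id not in self.live:
            # freeing memory that was never allocated, or freed twice
            self.invalid_frees.append(block_id)
        else:
            self.live.discard(block_id)

    def leaks(self):
        # blocks never de-allocated: memory that "fails to be de-allocated"
        return set(self.live)

audit = AllocationAudit()
audit.alloc("a")
audit.alloc("b")
audit.free("a")
audit.free("a")   # second free of "a" is invalid; "b" is leaked
```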
C. Vulnerability Analysis.
DO-332 presents a vulnerability analysis discussion in Annex D. The purpose is to describe the complications that may arise with the use of object-oriented technologies. These special problems are associated with inheritance, parametric polymorphism, overloading, type conversion, exception management, dynamic memory management, and virtualization. Examples of the extensive verification guidance provided by the annex include:
• With regard to inheritance, a demonstration of type consistency by verifying that each subclass is substitutable for its superclass is recommended. Every verification activity performed on the superclass should also be performed on the subclass.
• For software having parametric polymorphism, verification activities should show that operations acting on substituted parameters implement the intended semantic behavior. Each unique instantiation of a parameterized type or combination of types should be verified.
• To minimize the problems associated with overloading, explicit type conversion should be used to reduce overloading ambiguity. Verification activities should ensure that type conversions (implicit and explicit) are safe and that all implications are understood.
• It is recommended that a strategy be developed to handle all exceptions such as range checks, bounds checks, divide-by-zero checks, or checks on post-conditions. It is desired that all code modules handle exceptions in the same way.
• It is recommended that an automatic method of memory reclamation be provided instead of relying on the correct use of malloc() and free() for dynamic memory management.
• It is advised that worst-case execution timing be performed considering all in-code dynamic memory operations. Separate threads used for memory management (e.g., garbage collection) should be considered as part of the task load.
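The substitutability check in the first bullet, that every verification activity performed on the superclass is also performed on the subclass, can be mechanized by re-running the parent's requirements-based tests against each subclass. The classes and the test below are invented examples:

```python
class Sensor:
    def read(self):
        return 0.0

class FilteredSensor(Sensor):   # overrides read(); must still conform
    def read(self):
        return 1.5

def sensor_requirements_tests(s):
    # Requirements-based tests written against the Sensor contract.
    value = s.read()
    assert isinstance(value, float)

def verify_substitutability(base_tests, subclasses):
    """Run the superclass's test suite against every subclass."""
    for cls in subclasses:
        base_tests(cls())
    return True

ok = verify_substitutability(sensor_requirements_tests,
                             [Sensor, FilteredSensor])
```

A subclass that breaks the parent's contract (say, returning a string from `read()`) would fail the same suite, surfacing the unsafe substitution.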
Annex D also discusses activities for verification of traceability, structural coverage, component-based development, memory management, and timing for object-oriented programs. Though procedural programming techniques require verification of these as well, object-oriented programming requires additional analyses. DO-332 recommends verifying traceability between the requirements of a subclass and the requirements of all of its superclasses to ensure type substitutability. Traceability should be shown between object code and the source code if multiple dynamic dispatches are possible through a call point. Detailed discussion of these points and many others are presented in the annex of DO-332.
IX. RTCA DO-333: Formal Methods Supplement to DO-178C and DO-278A
RTCA DO-333 states that formal methods are mathematically based techniques for the specification, development, and verification of software aspects of digital systems. The objective of the supplement is to provide additions to and modification of the DO-178C objectives, activities, explanatory text, and software life cycle data that apply when formal methods are used as part of the software life cycle. The supplement makes clear that formal methods may be used for all or just a small part of the verification process and may supplement other verification methods. In addition to the modifications of DO-178C, the supplement also provides clarifications on the use of formal methods through discussion papers contained in Appendix B.
The supplement requires that if formal methods are used to verify some aspect of the software development process, then the software plans must explain the intended usage. The Plan for Software Aspects of Certification (PSAC) should provide an overview of how formal methods will be used and what evidence those methods will provide. The Software Development Plan should provide details on the specific use of formal methods. Additions required to the Software Verification Plan (SVP) are especially important. All assumptions related to the use of formal analysis to detect errors and to verify functionality should be justified therein. The SVP should show that there are no verification gaps produced by the combination of formal and procedural analyses.
A. Formal Models.
DO-333 considers a formal model to be an abstract representation of certain aspects of a system (or code) for which the model notation uses precise, unambiguous, and mathematically defined syntax and semantics. The models may be graphical (e.g., state machine diagrams), differential equations, or computer languages. Because formal notations are precise and unambiguous, they can be used to assist verification by helping show accuracy and consistency in the representation of requirements and life cycle data. DO-333 does not require all of the requirements of a formal model to be formally expressed. However, if the high-level requirements and low-level requirements are both formally modeled, then formal analysis can be used to show compliance. DO-333 defines formal analysis as the use of mathematical reasoning to guarantee that properties are always satisfied by a formal model.
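As a minimal concrete instance of "mathematical reasoning to guarantee that properties are always satisfied," one can exhaustively explore a finite model and check an invariant in every reachable state. The model and names below are illustrative only, not drawn from DO-333:

```python
def invariant_holds(initial, successors, invariant):
    """Explore all reachable states of a finite transition system;
    return (True, None) if the invariant holds in every reachable
    state, otherwise (False, counterexample_state)."""
    seen, frontier = set(), [initial]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        if not invariant(state):
            return False, state
        frontier.extend(successors(state))
    return True, None

# Toy model: a counter mod 4; the invariant 0 <= s < 4 holds everywhere.
ok, bad_state = invariant_holds(0, lambda s: [(s + 1) % 4],
                                lambda s: 0 <= s < 4)
```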
B. Automated Reviews and Analyses.
A substantial amount of the guidance provided by the Formal Methods Supplement consists of defining what DO-178C-required reviews and analyses can be augmented or replaced by formal analysis. DO-333 allows the use of formal analysis to show satisfaction of the objectives for software output review and analysis. Formal methods may be used to optionally demonstrate the compliance of outputs to inputs, the accuracy of software representation, the consistency of requirements, the conformance to standards, the traceability of requirements to code, and/or the algorithmic correctness. Similarly, for both high-level and low-level requirements, formal analysis is an acceptable means of showing accuracy, consistency, verifiability, traceability, and conformance to standards. DO-333 also presents the specific aspects of software architecture and source code review and analysis that may be satisfied by formal analysis. The precise and unambiguous language of formal models helps reviews and analyses to also be precise and unambiguous.
C. Verification of Source Code and Executable Code.
DO-333 states that formal methods can be used in the verification of the source code or object code (or both) if the requirements for each code are formally expressed, if formal models of the codes exist, and if formal evidence demonstrates that the formal models of the codes satisfy the requirements. If formal models exist for both the source and the object codes, the verification of property preservation between source code and object code is allowed using formal analysis.
D. Coverage Analysis.
DO-333 discusses the ways in which formal analysis may be used to satisfy the coverage requirements of DO-178C and DO-278A. This guidance states that when any low-level testing is used to verify that low-level requirements for a software component are satisfied, then the DO-178C guidance for structural coverage analysis should be followed. When only formal methods are used to verify that low-level requirements are satisfied, then the guidance in DO-333 (section 6) applies. The supplement states that although it is possible to use a mixture of testing and formal analysis to show that verification evidence exists for all high-level and low-level requirements, no known test cases exist. In this case, the supplement permits certification authorities to approve software coverage if it can be demonstrated by a combination of methods that structural and requirements-based coverage have been achieved.
DO-333 requires that all assumptions made during the formal analysis are verified. It should be demonstrated that for all input conditions, the required output has been specified; and likewise, for all outputs, the required input conditions have been specified. Analysis test cases should provide evidence that the formal analysis achieves the required coverage level. All code structures must be shown to be covered by either formal or procedural analyses. DO-333 states that when a combination of formal methods and testing are used to assess coverage, functional (requirements-based) tests executed on the target hardware should always be done to ensure that the software in the target computer will satisfy the high-level requirements.
X. Conclusion
This paper provided an overview of the new certification guidance contained in RTCA DO-178C, DO-278A, DO-330, and the related supplemental documents created by RTCA SC-205 for the development of safety-critical airborne and CNS/ATM ground-based software. The objective of this paper was to help those not familiar with the new DO-178C documentation set to gain an appreciation for the scope of the information contained in the nearly 1000 pages of new guidance material from RTCA. A review of the DO-178B software verification guidance was presented prior to discussing the new material introduced in DO-178C and DO-278A. Following this, an overview of the new content contained in DO-178C for airborne software verification and in DO-278A for ground-based CNS/ATM software verification was discussed. Then the highlights of new guidance contained in the other documents supporting DO-178C and DO-278A were presented in subsequent sections. These other documents are:
- RTCA DO-248C: Supporting Information for DO-178C and DO-278A
- RTCA DO-330: Software Tool Qualification Considerations
- RTCA DO-331: Model-Based Development and Verification Supplement to DO-178C and DO-278A
- RTCA DO-332: Object-Oriented Technology and Related Techniques Supplement to DO-178C and DO-278A, and
- RTCA DO-333: Formal Methods Supplement to DO-178C and DO-278A.
Although within the scope of this paper it was not possible to present every detail of the new guidance, it is hoped that the summary information contained herein will stimulate interest in these publications. The reader requiring specific information must download these documents from RTCA in order to fully appreciate and apply the new guidance.
Acknowledgments
The new DO-178C, DO-278A, and companion documents are the work of RTCA Special Committee 205 (SC-205). Although the first credit goes to the RTCA staff, the organizing work of the SC-205 leadership team also deserves special recognition. The author especially wishes to acknowledge the effort and devotion of SC-205 chairs, Jim Krodel (Pratt & Whitney) and Gerard Ladier (Airbus), and executive committee members Barbara Lingberg (FAA-CAST Chair), Mike DeWalt (FAA), Leslie Alford (Boeing), Ross Hannan (Sigma Associates), Jean-Luc Delamaide (EASA), John Coleman (Dawson Consulting), Matt Jaffe (ERAU), and Todd White (L-3 Communications/Qualtech). Also deserving special recognition are the leads and co-leads of the subgroups: Ron Ashpole (Silver Atena), Ross Hannan (Sigma Associates), Frederic Pothon (ACG Solutions), Leanna Rierson (Digital Safety Consulting), Pierre Lionne (EADS APSYS), Mark Lillis (Goodrich GPECS), Herve Delseny (Airbus), Jim Chelinni (Verocel), Duncan Brown (Rolls-Royce), Kelly Hayhurst (NASA), David Hawkens (NATS), and Don Heck (Boeing). Several others chaired these committees prior to publication of DO-178C; they, together with the SC-205 committee members themselves, whose names are far too numerous to mention here, are cited in Appendix A of DO-178C (including the author of this paper).
The author’s support of RTCA SC-205 was provided by the NASA Aviation Safety Program, the NASA Intelligent Resilient Aircraft Control (IRAC) Project, and the NASA System-Wide Safety and Assurance Technologies (SSAT) Project.
$\lambda dB$: Blame tracking at higher fidelity
Document Version: Peer reviewed version
This paper introduces $\lambda dB$, a blame calculus with dependent types. It supports dependent functions, predicate refinement at all types, the dynamic type, and full blame tracking. It is inspired by and extends previous work on hybrid types and Sage, by Flanagan and others; manifest contracts, by Greenberg, Pierce, and Weirich; and blame calculus, by Wadler and Findler. While previous work only allows refinement over base types, $\lambda dB$ supports refinement over any type. We introduce novel techniques in order to prove blame safety for this language, including a careful analysis that reduces open judgments on terms to closed ones on values, and the idea of 'subtyping with a witness', which fixes flaws in the previous work of Wadler and Findler. These technical contributions mean that we can achieve a completely operational account of the metatheory of our language, and thereby avoid the need to intertwine operational and semantic models which bedevils the work on hybrid types and manifest contracts.
1 INTRODUCTION
Today half the research community is attempting to make typing more precise, via dependent types, while the other half is attempting to make typing less precise, via gradual types. Our concern here is with gradual dependent types, which aim to achieve both.
Our concerns are not merely academic. Developers are paying increasing attention to dependently-typed systems such as Coq, Agda, Idris, and $F^*$, while vendors are rolling out gradually-typed languages such as Microsoft’s TypeScript, Facebook’s Hack, and Google’s Dart, and established languages such as Racket, C#, and Python are adding features for gradual typing.
A long line of work addresses gradual dependent types.
Findler and Felleisen [2002] introduced contracts for higher-order programming languages. A flat contract ensures that a value satisfies a predicate, and a function contract ensures that its argument and result each satisfy a contract. They also introduced blame tracking, where blame can fall either on the term contained in the contract (positive blame) or the context containing the contract (negative blame). Extension of blame tracking to higher-order functions, where blame behaves covariantly on the range and contravariantly on the domain, was one of their key insights. They also consider dependent function contracts, where the contract for the result depends upon the value of the argument.
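The key insight, blame flipping contravariantly on the domain, is easy to see in code. The sketch below is a toy rendering of Findler–Felleisen-style higher-order contracts; the names and representation are our own, not from their paper:

```python
class Blame(Exception):
    def __init__(self, label, side):
        super().__init__(f"{side} blame on contract {label}")
        self.label, self.side = label, side

def flat(pred):
    """A flat contract checks a value against a predicate."""
    def monitor(value, label, side):
        if not pred(value):
            raise Blame(label, side)
        return value
    return monitor

def func(dom, rng):
    """A function contract: the range is checked with the same blame
    side (covariant), the domain with the side flipped (contravariant)."""
    def monitor(f, label, side):
        flipped = "negative" if side == "positive" else "positive"
        def wrapped(x):
            return rng(f(dom(x, label, flipped)), label, side)
        return wrapped
    return monitor

positive = flat(lambda n: isinstance(n, int) and n > 0)
inc = func(positive, positive)(lambda n: n + 1, "inc", "positive")
```

Calling `inc(-1)` raises negative blame (the context supplied a bad argument), while wrapping a function that returns a non-positive result raises positive blame (the term broke its promise).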
Tobin-Hochstadt and Felleisen [2006] and Matthews and Findler [2007] both apply contracts to integrate typed and untyped code, and both show blame safety: if a contract fails, blame must lie with the untyped code. Each requires a sophisticated proof based on operational equivalence.
Supported by EPSRC grants EP/L01503X/1 (CDT in Pervasive Parallelism) and EP/K034413/1 (ABCD: A Basis for Concurrency and Distribution).
Siek and Taha [2006] applied contracts to integrate static and dynamic typing, coining the phrase “gradual typing” to describe such systems.
Ou et al. [2004] applied contracts to integrate dependent typing with ordinary typing, performing dynamic checks to ensure an ordinary typed value conforms to the more precise dependent type. They required that the validity of every predicate in a refinement type can be decided at compile time.
Flanagan [2006] applied contracts to allow more flexible dependent types, called hybrid types. Rather than requiring that every predicate in a dependent type can be decided at compile time, they mix static verification and dynamic checking to support arbitrarily expressive predicates. Knowles et al. [2006] describe Sage, a language that supports both hybrid and gradual types. Flanagan [2006] supports subsumption, forcing the definitions of typing and subtyping judgements to be mutually dependent. Knowles and Flanagan [2010] observe that Flanagan [2006] is ill-defined, due to a typing judgement appearing to the left of an implication in a subtyping judgement, violating monotonicity, and resolve the problem by relying on a denotational semantics for types to ensure their definitions are well founded.
Greenberg et al. [2010, 2012] introduce the names latent and manifest to distinguish contracts from hybrid types, and observe there are two possible formulations of latent systems that they dub lax and picky. Like Knowles and Flanagan [2010] they support subsumption, and rely on a denotational semantics of types to ensure their definitions are well founded. They give translations between the lax and picky latent systems and the manifest system, showing that some translations can be exact while others will overapproximate.
Siek and Taha [2006], Ou et al. [2004], and Flanagan [2006] all feature a similar translation from a surface language to a core calculus with casts, where casts act as the analogue of contracts. None of them features blame tracking; in Siek and Taha [2006] and Ou et al. [2004] casts are unlabelled, while in Flanagan [2006] casts are labelled but there is no notion of positive and negative blame, which means they cannot pin a failure to one side of a cast. As a result, all three only characterise correctness globally: if all the casts are from subtypes to supertypes, then the program never fails. But this characterisation is too strict for the most common use case of all three systems, since if any dynamic check may fail then no formal guarantees apply.
Wadler and Findler [2009] defined a calculus similar to the core language of the three preceding papers. They added blame tracking, and proved a more forgiving form of blame safety: if a cast fails, blame must lie with the less-precisely-typed side of the cast. As opposed to the global characterisation of the three preceding papers, blame safety can be established on a cast-by-cast basis. Like Ou et al. [2004] and Flanagan [2006] they only permit refinement over base types, though they are even more restrictive in that they do not support dependent function types. Whereas Tobin-Hochstadt and Felleisen [2006] and Matthews and Findler [2007] provide sophisticated proofs of blame safety based on operational equivalence, Wadler and Findler [2009] provides a simple proof of blame safety based on preservation and progress.
Our goal here is to combine the dependent function types studied by Flanagan [2006] and Greenberg et al. [2010, 2012] with the blame tracking of Wadler and Findler [2009], allowing us to establish blame safety on a cast-by-cast basis while supporting dependent function types. Unlike these previous works, which only permit refinements over base types, we permit refinement over any type. Whereas Knowles and Flanagan [2010] and Greenberg et al. [2012] support subsumption and rely on a denotational semantics of types to ensure their definitions are well founded, we avoid subsumption, permitting us to disentangle the definitions of typing and subtyping judgements and to avoid reliance on a semantics of types.
Flanagan [2006] and Greenberg et al. [2010, 2012] use name-dependent typing, where types may depend upon arbitrary terms and call-by-name evaluation is used. Here we use value-dependent typing, where types only depend upon values, and call-by-value evaluation is used. We follow the development standard in other works, such as [Swamy et al. 2013]. Value-dependent typing fixes the order of evaluation, facilitating reasoning about blame as an effect, as well as reasoning about other effects.
Like Wadler and Findler [2009] and Greenberg et al. [2012], we have no way of checking at compile-time that one refinement implies another, limiting such (potentially undecidable) judgements to the run-time type system. Also like Wadler and Findler [2009], all casts are explicit in the source language, and at run-time all values of refinement type are explicitly labeled. This gives us a simple system, suitable as a core calculus, which has a clean metatheory with properties such as unicity of types. In practice, one would probably also want a translation from a surface language into this core calculus, similar to those discussed by Ou et al. [2004], Siek and Taha [2006], and Flanagan [2006]. We believe such a translation would be the correct place to add support for subsumption and compile-time validation that a value satisfies a refinement, but we leave this for future work.
We have mentioned above the main influences on our development. We began this work several years ago; since then, there have been additional relevant works. Lehmann and Tanter [2017] introduces refinement types with support for unknown refinements, and again is restricted to refinement of base types. Tanter and Tabareau [2015] and Dagand et al. [2016, 2018] present an interoperability framework added as a library to Coq, and Eremondi et al. [2019] present a gradual type theory with both unknown values and unknown types. These latter systems do not support refinement types per se, but do support the related notion of sigma types and are not restricted to base types. None of these systems supports blame.
It turns out there are two flaws in the development of Wadler and Findler [2009], which we describe and correct here. First, their proof of blame safety is incorrect, as there are counter-examples to its claim that reduction preserves blame safety. Second, they fail to correctly define blame safety for open terms. We detail examples of these flaws in Section 2. The development given here avoids these flaws, by introducing a careful analysis that reduces open judgements on terms to closed judgements on values, and by a novel characterisation of blame safety. Whereas the earlier work uses a subtyping relation over two types, here we use a three-place relation between a closed value and two closed types, and a four-place relation between an open term and two open types in a given type environment; the relations are defined by mutual recursion. We dub these relations ‘subtyping with a witness’.
This paper represents preliminary work. Many of the proofs have not been carried out in detail, and accordingly we often label results as conjectures rather than propositions.
Summary of contributions.
- Whereas previous gradual type systems support either dependent types or blame safety, we present a system that supports both.
- Whereas previous systems only support refinement over base types, we support refinement over any type.
- Whereas previous systems support name-dependent typing, we support value-dependent typing, which is better suited to programs with computational effects such as blame.
- Whereas previous systems support subsumption and require denotational semantics to break circularity in the definition, we eschew subsumption and our semantics is purely operational.
- We reveal flaws in Wadler and Findler [2009], which are corrected in our system by introducing an analysis that reduces open judgements on terms to closed ones on values and novel three-place and four-place subtyping relations.
The outline of the remainder of this paper is as follows. Section 2 highlights the problems in the original formulation of the Blame Theorem of Wadler and Findler [2009]. Section 3 introduces λdB syntax, types, and reductions. Section 4 introduces our formulation of type safety for λdB. Section 5 introduces our formulation of subtyping and blame safety. Section 6 describes related work.
2 OVERVIEW
Let’s consider the type of positive integers and the type of natural numbers. Adopting the syntax of Swamy et al. [2013], we define the type of positive numbers and the type of natural numbers as follows:
\[
\begin{align*}
\text{Pos} & \overset{\text{def}}{=} (x : \text{int})\{x > 0\} \\
\text{Nat} & \overset{\text{def}}{=} (x : \text{int})\{x \geq 0\}
\end{align*}
\]
We cast the integer 2 to the type Pos as follows:
\[
2 : \text{int} \xRightarrow{\text{p}} \text{Pos}
\]
At runtime, the cast evaluates the predicate \(x > 0\) with \(x\) instantiated to 2, and it raises blame \(\text{p}\) if the predicate evaluates to false. In this case, the predicate evaluates to true, so the cast returns the following value:
\[
2 : \text{int} \xRightarrow{} \text{Pos}
\]
Here we write \(\xRightarrow{}\) in place of \(\xRightarrow{\text{p}}\); the blame label is omitted because the predicate is now verified and cannot fail. The value type-checks only if the predicate \(x > 0\) evaluates to true when \(x\) is 2. (In Wadler and Findler [2009], the two terms above are written as \(\langle \text{Pos} \leftarrow \text{p} \text{ int} \rangle 2\) and \(2_{\text{Pos}}\), respectively.)
Since predicates may be arbitrarily complicated, type checking for the \(\xRightarrow{}\) construct is undecidable. To deal with this issue, we partition our language into a compile-time subset with decidable type checking (which includes \(\xRightarrow{\text{p}}\)) and a runtime superset with undecidable type checking (which adds \(\xRightarrow{}\)). Undecidable type checking of the runtime language is not a serious issue, since the compile-time language is decidable, translation to the runtime language preserves typing, and reduction in the runtime language also preserves typing.
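The run-time behaviour of refinement casts can be illustrated with a small executable sketch (a toy illustration in Python with hypothetical names, not the formal semantics of λdB): casting a value into a refinement type evaluates the predicate with the bound variable instantiated to that value, and raises blame with the cast's label if the predicate fails.

```python
# Toy illustration (not the paper's formal semantics) of run-time
# refinement casts. All names here are hypothetical.

class Blame(Exception):
    """Raised when a cast with label `label` fails at run time."""
    def __init__(self, label):
        super().__init__(f"blame {label}")
        self.label = label

def cast_refinement(value, predicate, label):
    """Cast `value` into a refinement type with predicate `predicate`.

    Evaluate the predicate with the bound variable instantiated to
    `value`; raise blame `label` if it evaluates to false, otherwise
    return the (now verified) value.
    """
    if predicate(value):
        return value
    raise Blame(label)

# Pos = (x : int){x > 0} and Nat = (x : int){x >= 0}
pos = lambda x: x > 0
nat = lambda x: x >= 0

cast_refinement(2, pos, "+l")       # succeeds: 2 : int => Pos
try:
    cast_refinement(-1, nat, "+l")  # casting -1 to Nat fails
except Blame as b:
    print(b)                        # prints: blame +l
```

The label threaded through the cast corresponds to the blame label $p$ above; in the calculus the check is performed by reduction rules rather than a host-language exception.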
We may now take our value of type Pos and cast it to type Nat:
\[
(2 : \text{int} \xRightarrow{} \text{Pos}) : \text{Pos} \xRightarrow{\text{p}} \text{Nat}
\]
We abbreviate the above as follows:
\[
2 : \text{int} \xRightarrow{} \text{Pos} \xRightarrow{\text{p}} \text{Nat}
\]
In general, we often collapse sequences of casts (either \(\xRightarrow{}\) or \(\xRightarrow{\text{p}}\) or both) in this way. Again the cast succeeds, resulting in
\[
2 : \text{int} \xRightarrow{} \text{Nat}
\]
Indeed, any cast from type Pos to type Nat must succeed, since \(x > 0\) implies \(x \geq 0\) for any integer \(x\). Again, this is undecidable, but this property is used for reasoning about programs rather than compiling.
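By contrast, a cast whose predicate fails raises blame with the cast's label; for example, casting 0 into Pos fails, since \(0 > 0\) evaluates to false:

\[
0 : \text{int} \xRightarrow{\text{p}} \text{Pos} \;\longrightarrow^{*}\; \text{blame } \text{p}
\]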
In Wadler and Findler [2009], the fact that the cast always succeeds is indicated by writing:
\[
\text{Pos} <: \text{Nat}
\]
Indeed, they introduce four related notions:
\[
A <: B \quad A <:^+ B \quad A <:^- B \quad A <:_n B
\]
Consider a cast from $A$ to $B$ with label $p$.
- The first, ordinary subtyping, holds if the cast never raises blame.
- The second, positive subtyping, holds if the cast never raises blame $p$.
- The third, negative subtyping, holds if the cast never raises blame $-p$.
- The fourth, naive subtyping, holds if type $A$ is more precise than type $B$.
A term is said to be \textit{safe for} $p$ if every subterm with a cast from $A$ to $B$ satisfies $A <:^{+} B$ if it has label $p$, and $A <:^{-} B$ if it has label $-p$. (In Wadler and Findler [2009], negative blame $-p$ is written as $\bar{p}$.)
To ensure the above properties, Wadler and Findler [2009] claim blame preservation: if a term is safe for $p$ and it reduces to another term, then the new term is also safe for $p$. One of the reduction rules in that paper is (in the notation of this paper) as follows:
\[(V : A \Rightarrow (x : A)\{P[x]\} \xRightarrow{p} B) \longrightarrow (V : A \xRightarrow{p} B)\]
In our case, this means we have the reduction:
\[(2 : \text{int} \xRightarrow{} \text{Pos} \xRightarrow{p} \text{Nat}) \longrightarrow (2 : \text{int} \xRightarrow{p} \text{Nat})\]
But while $\text{Pos} <: \text{Nat}$ clearly holds, $\text{int} <: \text{Nat}$ clearly does not hold; casting any negative integer to the natural type will fail. So even though the reduction yields a term that will not raise blame, it does not yield a term that is safe for blame. The claim that reduction preserves blame safety is flawed.
Here we fix the claim by moving from a two-place relation $A <: B$ where $A$ and $B$ are types, to a three-place relation $V : A <: B$ where $V$ is a value and $A$ and $B$ are types. Although $\text{int} <: \text{Nat}$ does not hold, it will turn out that $2 : \text{int} <: \text{Nat}$ does hold, allowing us to present a correct proof of blame safety.
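Anticipating the formal definition in Section 5, the value acts as a witness that this particular cast cannot fail. For instance, $2 : \text{int} <: \text{Nat}$ holds because the cast $2 : \text{int} \xRightarrow{p} \text{Nat}$ can never raise blame ($2 \geq 0$ evaluates to true), whereas $-1 : \text{int} <: \text{Nat}$ does not hold, since casting $-1$ into Nat raises blame ($-1 \geq 0$ evaluates to false).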
Another issue with Wadler and Findler [2009] is that if one looks closely at the definition of the relations $<:$, $<:^{+}$, and $<:^{-}$ it becomes clear that they only make sense for closed types, while the notion of blame safety is defined on open types. In particular, that paper defines $A <: (x : B)\{P[x]\}$ to hold only if
\[\text{for every } V : A, \text{ if } (V : A \Rightarrow B) \longrightarrow^{*} W \text{ then } P[W] \longrightarrow^{*} \text{true}\]
(see Figure 5 of that paper). But this definition does not consider any free variables other than $x$ that might appear in $P$ (or $A$ or $B$). However, a lambda abstraction is deemed safe for $p$ only when its body is safe for $p$ (see Figure 7 of that paper), and the bound variable of the lambda abstraction appears free in its body.
Here we fix the issue by also defining a four-place relation
\[\Delta \vdash M : A <: B\]
that holds if and only if
\[\text{for every closing substitution } \eta \text{ of } \Delta, \text{ if } \eta(M) \longrightarrow^{*} V \text{ then } V : A <: B.\]
It will turn out that the three- and four-place relations are mutually recursive, as the four-place relation will prove instrumental in properly defining the three-place relation for function types.
Wadler and Findler [2009] only considered refinements over base types. This is because if refinements were permitted over function types then the subtyping relation defined in that paper...
### Base Types
\( \mathbb{I} ::= \) bool \( \mid \) int
### Types
\( A, B, C, D ::= \mathbb{I} \mid \star \mid (x : A)\{P\} \mid (x : A) \to B \)
### Values
\( V, W ::= c \mid x \mid \lambda x : A. N \)
### Terms
\( L, M, N, P, Q ::= V \mid op(M) \mid L\ V \mid \text{let } x = M \text{ in } N \)
\( \mid \text{ if } P \text{ then } M \text{ else } N \mid M : A \Rightarrow B \)
### Labels
\( p, q ::= +\ell \mid -\ell \)
### Environments
\( \Gamma ::= \cdot \mid \Gamma, x : A \)
---
**Fig. 1.** Compile-time language syntax.
would not be transitive. The counter-example was discovered by Greenberg and Pierce (personal communication):
\[
\begin{align*}
\text{Pos} \rightarrow \text{Pos} & <:^{+} \text{ Nat} \rightarrow \text{Nat} \\
\text{Nat} \rightarrow \text{Nat} & <:^{+} (f : \text{int} \rightarrow \text{int})\{f \ 0 \geq 0\} \\
\text{Pos} \rightarrow \text{Pos} & \not<:^{+} (f : \text{int} \rightarrow \text{int})\{f \ 0 \geq 0\}
\end{align*}
\]
The final subtyping relation does not hold, as the predicate will not hold, since \( f \ 0 \) always reduces to blame when \( f \) has type \( \text{Pos} \rightarrow \text{Pos} \). It will turn out that our new definitions fix the problem, partly because we replace \( P[V] \rightarrow^* \text{true} \) by \( P[V] \not\rightarrow^* \text{false} \), and partly because our definition of closing substitution excludes from consideration terms in the environment that raise blame rather than reducing to a value.
We can now proceed with our formal development.
### 3 DEPENDENT BLAME CALCULUS (\( \lambda dB \))
We present the dependently-typed blame calculus, \( \lambda dB \), which integrates dependently-typed code and simply-typed code using casts, and incorporates refinements over arbitrary types and dependent function types. The language is purely functional, with no mutable state; non-termination and raising blame are the only computational effects.
We present \( \lambda dB \) factored into a compile-time language (Figure 1) with decidable type-checking and a run-time language (Figure 3) with explicitly tagged values of refinement type, where checking that a value satisfies a predicate may be undecidable. The run-time language is a strict superset of the compile-time language.
#### 3.1 Compile-Time Language
Our language is influenced by \( \lambda H \) of Knowles and Flanagan [2010], blame calculus of Wadler and Findler [2009], and value-dependency and syntax of refinements of Swamy et al. [2013].
Our language differs from \( \lambda H \) in its treatment of casts. First and foremost, we keep the blame labels on casts in order to track blame and prove the blame theorem. Second, instead of relying on subsumption and automatic insertion of casts, we forego subsumption and require all casts to be explicit. This decision allows us to achieve decidable type-checking of the compile-time language and provide a simpler formalism, as we explain in Section 3.8.
We let \( \mathbb{I} \) range over base types, which are either integers or booleans.
We let \( A, B, C, D \) range over types: a type is either a base type \( \mathbb{I} \), the dynamic type \( \star \), a refinement \((x : A)\{P\}\) of type \( A \) with a predicate \( P \) acting on \( x \), or a dependent function \((x : A) \rightarrow B\), where \( x \) of type \( A \) is bound in \( B \).
We let $V, W$ range over values, $L, M, N$ range over terms, and $P, Q$ range over terms that occur as predicates.
A value is a constant $c$, a variable $x$, or a dependent lambda abstraction $\lambda x : A.\, N$. Since the language is call-by-value, a variable is always bound to a value, and evaluating a variable can have no computational effect (such as raising blame).
A term is either a value, an application of a built-in operator $op(M)$, an application of a dependent function to a value $L\ V$, a non-dependent let binding $\text{let } x = M \text{ in } N$, a conditional $\text{if } P \text{ then } M \text{ else } N$, or a cast $M : A \Rightarrow B$. Following the approach to value-dependency of Swamy et al. [2013], we restrict applications of functions to values.
We let $p, q$ range over blame labels, and $\ell$ range over locations. A blame label is either positive $+\ell$ or negative $-\ell$. We define an involutive negation on blame labels $-p$ as follows
$- (+\ell) = -\ell$
$- (-\ell) = +\ell$.
The mapping $|\cdot|$ from blame labels to locations is given by $|{+}\ell| = \ell = |{-}\ell|$.
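The label structure admits a quick executable sanity check (a sketch with assumed names; labels are modelled as sign–location pairs):

```python
# Blame labels as signed locations (hypothetical encoding, for
# illustration): a label is a pair (sign, location).

def negate(p):
    """Involutive negation on blame labels: -(+l) = -l and -(-l) = +l."""
    sign, loc = p
    return ('-', loc) if sign == '+' else ('+', loc)

def location(p):
    """The mapping |.| forgets the sign: |+l| = l = |-l|."""
    return p[1]
```

Involutivity means negating twice is the identity, and negation leaves the underlying location unchanged.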
We let $\Gamma$ range over environments, which associate variables with types.
### 3.2 Type System for Compile-Time Language
Each constant $c$ and built-in operator $\text{op}$ has a type defined by $\text{type}(c)$ and $\text{type}(\text{op})$. Each built-in operator $\text{op}$ is specified by a total meaning function $\llbracket \text{op} \rrbracket$.
For variables, we define a context lookup relation, $\Gamma \ni x : A$, which is standard.
Lambda abstractions are dependently-typed and the argument is bound in the result type.
We restrict our applications to values as value-dependency is a well-understood technique to reason about side-effects such as raising blame [Swamy et al. 2013]; since the function argument $x$ is bound in the result type $B$ we substitute the argument $V$ for $x$ in the result type $B[x := V]$ (sometimes writing $B[V]$ where this is unambiguous).
We restrict our let-expressions to be non-dependent via the additional clause $\Gamma \vdash_{rct} B : \text{tp}$ which ensures that $B$ does not depend on $x$. By restricting our function applications to be value-dependent and by restricting our let-expressions to be non-dependent we fix the order of evaluation on the term level and on the type level to standard call-by-value evaluation strategy which simplifies reasoning in our language.
We use standard conditional expressions for the compile-time language. We will extend let and conditional expressions in the run-time language of the next section.
Casts must be between compatible types $A$ and $B$, which we write as $A \sim B$. We extend the standard notion of compatibility to include dependent functions. It is straightforward to show that compatibility is closed under value substitution. We further restrict our casts to be defined between well-formed types, via the additional clause $\Gamma \vdash_{rct} B : \text{tp}$.
### 3.3 Run-Time Language
We let $\Delta$ range over run-time environments. These include not only the variable bindings $x : A$ as in the compile-time environments, but also let bindings $x : A = M$, and predicate bindings $P$. These last two track terms bound by let-expressions and predicates tested by conditional expressions. Our environments are similar to those of Knowles et al. [2006], but we permit let bindings to arbitrary terms, whereas Knowles et al. restrict their bindings to values. Predicate bindings mirror those of Ou et al. [2004]. We let $\Xi$ range over closed environments, which only contain let bindings and predicate bindings, and $\rho, \sigma, \eta$ range over substitutions, which map variables to values. Our environments form the following hierarchy
$\Gamma \subset \Delta \subset \Xi \subset \sigma$.
We include substitutions in the environment hierarchy, as we later show that we can evaluate closed environments to closing substitutions.
For the run-time language we extend the syntax with tagged dynamically-typed values $V : G \Rightarrow \star$, tagged dependently-typed values $V : A \Rightarrow (x : A)\{P\}$, and blame terms $\text{blame} \ p$. These three term forms may be introduced by reductions.
The value $V : G \Rightarrow \star$ represents the injection of a value $V$ with type $G$ into the dynamic type $\star$. The value $V : A \Rightarrow (x : A)\{P\}$ represents a dependently-typed value with type $(x : A)\{P\}$, where the underlying value $V$ has type $A$. We prohibit these two forms of values from appearing in compile-time programs. Prohibiting the latter ensures that the compile-time language benefits from decidable type-checking.
The blame expression $\text{blame } p$ results from a failed run-time cast with blame label $p$. We choose not to annotate blame expressions with their types because it simplifies the reduction rules. It would be straightforward to make the other choice, and doing so would yield a stronger unicity result.
### 3.4 Run-Time Typing
Typing judgements for the run-time system are shown in Figure 4. The three occurrences of ellipses are to be replaced by corresponding rules from the compile-time system of Figure 2, save that occurrences of $\Gamma$ are replaced by $\Delta$.
The difference between let-expressions and dependent functions is also reflected in the structure of typing environment in our system: a $\lambda$ abstraction extends the environment with a variable binding, while a let-expression extends the environment with a let binding. Conditionals are also tracked in the environment with predicate bindings. Environments with let bindings and predicate bindings will provide extra information useful when it comes to blame safety.
We extend the typing judgement with rules for tagged dynamically-typed values, tagged dependently-typed values, and blame expressions. We modify the typing rule for let expressions to track the binding via an environment extension with a term.
The typing rule for tagged dependently-typed values is similar to the one used by Wadler and Findler [2009], but we use a different notion of entailment, as described below. Both rules use closed environments $\Xi$, which is sound because these constructs only arise from reduction, and reduction always takes place on closed terms. One might expect that one could simply use the empty environment for these type rules, but we shall see that one of the reduction rules introduces a let-expression and a conditional on the right-hand side, hence the need for a closed environment rather than an empty environment.
For conditional expressions we adopt the ‘Princeton’ formulation of the typing rule [Ou et al. 2004], which explicitly tracks the predicate in $\Delta$ such that for the then branch we record that the predicate should hold, whereas for the else branch we record that the predicate should not hold. We use $\neg P$ to denote negation of a predicate, defined as
$$\neg P \overset{\text{def}}{=} \text{if } P \text{ then false else true.}$$
**Fig. 4.** Typing rules for the run-time language (judgements $\Delta \vdash_{rt} M : A$), extending the compile-time rules of Figure 2 with rules for tagged values, blame, and tracked let and predicate bindings.
While other work uses explicit constructs to represent casts in progress [Knowles and Flanagan 2010; Wadler and Findler 2009], we instead use the ‘Princeton’ conditional directly.
### 3.5 Closed Environment Evaluation
Our notion of entailment \( \Xi \models P \) relies on using a closed environment \( \Xi \), one which permits only extensions by predicates \( \Xi, P \) and let bindings \( \Xi, x : A = M \). Our notion of entailment further depends on the successful evaluation \( \Xi \rightarrow^{*} \sigma \) of a closed environment \( \Xi \) to a closing substitution \( \sigma \).

We evaluate environments to closing substitutions, by iterating reduction, and accumulating the resulting value bindings in the eventual closing substitution. For a let binding of the form $x : A = M$ we take the result $V$ of evaluating $M$, and for a predicate binding $P$ we check that the predicate evaluates to $true$ in the current environment.
### 3.6 Closed Environment Entailment
We say that closed environment $\Xi$ entails predicate $P$, written $\Xi \models P$, if for every evaluation $\Xi \longrightarrow^{*} \sigma$ to a closing substitution it is the case that $\sigma^{*}(P) \not\longrightarrow^{*} \text{false}$.
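For example, the closed environment $\Xi = (\cdot,\ x : \text{int} = 1 + 1,\ x > 0)$ evaluates to the closing substitution $\sigma = (\cdot,\ x = 2)$: the let binding reduces $1 + 1 \longrightarrow 2$, and the predicate binding checks $2 > 0 \longrightarrow^{*} \text{true}$. Since $\sigma^{*}(x \geq 0) = (2 \geq 0)$ reduces to true, and in particular does not reduce to false, we have $\Xi \models x \geq 0$.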
There is a noteworthy subtlety here. In Flanagan [2006], Knowles and Flanagan [2010], and Greenberg et al. [2012] one finds the condition $\sigma^{*}(P) \rightarrow^* \text{true}$, while in Keil and Thiemann [2015] and our work one finds $\sigma^{*}(P) \not\rightarrow^* \text{false}$. The former outlaws the case where $\sigma^{*}(P)$ reduces to blame or does not terminate, while the latter permits it. Switching from the former to the latter allowed us to remove the restriction appearing in earlier work, such as Wadler and Findler [2009], that limits refinement to base types.
### 3.7 Defining substitutions $\sigma$
The substitutions we use here, $\sigma$, are simply finite maps from names to values.
We write $\sigma^*(\Xi)$, $\sigma^*(M)$, and $\sigma^*(A)$ to denote applying substitution $\sigma$ to closed environments, terms, and types respectively, with the usual Curry-style definition (with suitable choices of sufficiently fresh names), as in Figure 5. The action of a substitution may be extended to evaluation frames, $\sigma^*(E)$, and hence redexes, in the obvious structural way, satisfying $\sigma^*(E[M]) = (\sigma^*(E))[\sigma^*(M)]$.
We write $[x := V]$ to denote a single substitution of $V$ for $x$, and we write $[x]$ to denote a single substitution of $x$ for $x$, which signifies a potential hole in the term:
$$M[x := V] \overset{\text{def}}{=} (\cdot, x = V)^*(M)$$
$$M[x] \overset{\text{def}}{=} (\cdot, x = x)^*(M)$$
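The capture-avoiding behaviour of $\sigma^{*}$ on binders, renaming the bound variable to a sufficiently fresh $y$ before descending, can be sketched as follows (a hypothetical miniature term language in Python, for illustration only):

```python
# Sketch of the Curry-style action of a substitution on terms of a
# miniature language with only variables, lambda abstractions, and
# applications. A binder is renamed to a fresh variable before the
# substitution descends under it, so capture cannot occur.

import itertools

_counter = itertools.count()

def fresh(x):
    """Return a sufficiently fresh name derived from x."""
    return f"{x}_{next(_counter)}"

# Terms: ('var', x) | ('lam', x, body) | ('app', fun, arg)
def subst(sigma, term):
    """Apply the finite map sigma (variable name -> term) to term."""
    tag = term[0]
    if tag == 'var':
        return sigma.get(term[1], term)
    if tag == 'lam':
        _, x, body = term
        y = fresh(x)
        extended = dict(sigma)
        extended[x] = ('var', y)   # rename the binder to the fresh y
        return ('lam', y, subst(extended, body))
    if tag == 'app':
        _, fun, arg = term
        return ('app', subst(sigma, fun), subst(sigma, arg))
    raise ValueError(f"unknown term: {term!r}")

# Substituting y for x under a binder named y does not capture it:
result = subst({'x': ('var', 'y')}, ('lam', 'y', ('var', 'x')))
assert result[1] != 'y' and result[2] == ('var', 'y')
```

In the calculus the same renaming discipline appears in every clause of Figure 5 marked "fresh $y$".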
### 3.8 Comparison to earlier approaches
Our definition of entailment is similar to the Implication rule of Knowles and Flanagan [2010], but thanks to the absence of subsumption we are working in a simpler setting. While Knowles and Flanagan [2010] needs to show implication between two predicates for all substitutions, we simply need to show that for all substitutions our predicate does not evaluate to false.
The initial work on hybrid type checking [Flanagan 2006] used typing judgements to define closing substitutions; however, that led to an unpermitted circularity between their typing rule, subtyping rule, and implication rule, because the implication rule refers to closing substitutions in the negative position of an implication. Subsequent works on hybrid type checking [Greenberg et al. 2012; Knowles and Flanagan 2010] break that circularity by providing a denotational semantics of refinement types and defining the closing substitution in terms of the denotational semantics.
We believe our system solves that problem by foregoing subsumption and factoring $\lambda dB$ into a compile-time and a run-time language. As a consequence, first, we achieve decidable type-checking in our compile-time language. Second, we break the circularity present in the previous systems. We avoid the occurrence of the typing judgement in a negative position of an implication by permitting tagged dependently-typed values only in closed contexts. In such a case there is only a single substitution possible, therefore there is no need to quantify over all possible well-typed substitutions available in the context.
\[
\begin{align*}
\sigma^*(\cdot) &= \cdot \\
\sigma^*(x:A=M,\Xi) &= y:\sigma^*(A) = \sigma^*(M),\ (\sigma, x = y)^*(\Xi) && \text{fresh } y \\
\sigma^*(P,\Xi) &= \sigma^*(P),\ \sigma^*(\Xi) \\[4pt]
\sigma^*(\iota) &= \iota \\
\sigma^*(\star) &= \star \\
\sigma^*((x:A)\{P\}) &= (y:\sigma^*(A))\{(\sigma, x = y)^*(P)\} && \text{fresh } y \\
\sigma^*((x:A) \to B) &= (y:\sigma^*(A)) \to (\sigma, x = y)^*(B) && \text{fresh } y \\[4pt]
\sigma^*(c) &= c \\
(\cdot)^*(x) &= x \\
(\sigma, x = V)^*(x) &= V \\
(\sigma, x = V)^*(y) &= \sigma^*(y) && x \neq y \\
\sigma^*(\lambda x:A.N) &= \lambda y:\sigma^*(A).\ (\sigma, x = y)^*(N) && \text{fresh } y \\
\sigma^*(V : G \Rightarrow \star) &= \sigma^*(V) : G \Rightarrow \star \\
\sigma^*(V : A \Rightarrow (x:A)\{P\}) &= \sigma^*(V) : \sigma^*(A) \Rightarrow (y : \sigma^*(A))\{(\sigma, x = y)^*(P)\} && \text{fresh } y \\
\sigma^*(op(M)) &= op(\sigma^*(M)) \\
\sigma^*(L\ V) &= \sigma^*(L)\ \sigma^*(V) \\
\sigma^*(\text{let } x = M \text{ in } N) &= \text{let } y = \sigma^*(M) \text{ in } (\sigma, x = y)^*(N) && \text{fresh } y \\
\sigma^*(\text{if } P \text{ then } M \text{ else } N) &= \text{if } \sigma^*(P) \text{ then } \sigma^*(M) \text{ else } \sigma^*(N) \\
\sigma^*(M : A \Rightarrow B) &= \sigma^*(M) : \sigma^*(A) \Rightarrow \sigma^*(B) \\
\sigma^*(\text{blame } p) &= \text{blame } p
\end{align*}
\]
Fig. 5. Action of substitutions on environments, types, and terms.
### 3.9 Dynamic Semantics
Figure 6 gives the reduction rules for \(\lambda dB\). We define evaluation for \(\lambda dB\) in terms of call-by-value reduction on terms (Figure 3), similar to Wadler and Findler [2009].
Our reduction rules for the evaluation of the operators, application of lambda abstractions, let bindings, and conditionals are standard.
Our rules for casts are reminiscent of Wadler and Findler [2009], with some technical differences. The earlier work had two separate reductions for casting base types and function types to \(\star\), where here we have a single rule which adds an intermediate cast to a compatible ground type. And the earlier work treats casts from functions to functions as values, where here such a cast results in a lambda abstraction, which is itself a value. Our system is shown to satisfy a property similar to the rule of the previous system:
\[
(V : (x : A) \to B \xRightarrow{p} (y : C) \to D)\ W \longrightarrow \text{let } x = (W : C \xRightarrow{-p} A) \text{ in } (V\ x : B[x] \xRightarrow{p} D[W])
\]
\[
\begin{align*}
op(\overline{V}) &\longrightarrow \llbracket op \rrbracket(\overline{V}) \\
(\lambda x : A.\ N)\ V &\longrightarrow N[x := V] \\
\text{let } x = V \text{ in } N &\longrightarrow N[x := V] \\
\text{if true then } M \text{ else } N &\longrightarrow M \\
\text{if false then } M \text{ else } N &\longrightarrow N \\
V : \iota \xRightarrow{p} \iota &\longrightarrow V \\
V : \star \xRightarrow{p} \star &\longrightarrow V \\
V : A \xRightarrow{p} \star &\longrightarrow V : A \xRightarrow{p} G \Rightarrow \star && A \sim G,\ A \neq \star,\ A \neq G \\
V : G \Rightarrow \star \xRightarrow{p} A &\longrightarrow V : G \xRightarrow{p} A && G \sim A \\
V : G \Rightarrow \star \xRightarrow{p} B &\longrightarrow \text{blame } p && G \nsim B \\
V : A \Rightarrow (x : A)\{P\} \xRightarrow{p} B &\longrightarrow V : A \xRightarrow{p} B \\
V : A \xRightarrow{p} (y : B)\{Q\} &\longrightarrow \text{let } y = (V : A \xRightarrow{p} B) \text{ in if } Q \text{ then } (y : B \Rightarrow (y : B)\{Q\}) \text{ else blame } p \\
(\lambda x : A.\ N) : (x : A) \to B \xRightarrow{p} (y : C) \to D &\longrightarrow \lambda y : C.\ \text{let } x = (y : C \xRightarrow{-p} A) \text{ in } (N : B[x] \xRightarrow{p} D[y])
\end{align*}
\]
\[
\frac{M \longrightarrow N}{\mathcal{E}[M] \longrightarrow \mathcal{E}[N]}
\qquad\qquad
\mathcal{E}[\text{blame } p] \longrightarrow \text{blame } p
\]
Fig. 6. Reduction rules for \( \lambda dB \).
We have standard rules to take the compatible closure under evaluation frames, and to propagate blame through an enclosing evaluation frame.
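As a worked example of the refinement-cast rule, casting 2 into \( \text{Pos} = (x : \text{int})\{x > 0\} \) unfolds into a let binding and a conditional, and succeeds (the intermediate identity cast on int and the let binding are collapsed under \( \longrightarrow^{*} \)):

\[
\begin{align*}
2 : \text{int} \xRightarrow{p} \text{Pos}
&\longrightarrow \text{let } x = (2 : \text{int} \xRightarrow{p} \text{int}) \text{ in if } x > 0 \text{ then } (x : \text{int} \Rightarrow \text{Pos}) \text{ else blame } p \\
&\longrightarrow^{*} \text{if true then } (2 : \text{int} \Rightarrow \text{Pos}) \text{ else blame } p \\
&\longrightarrow 2 : \text{int} \Rightarrow \text{Pos}
\end{align*}
\]

Had the underlying value been 0 instead, the conditional would have taken the else branch and the cast would have reduced to \( \text{blame } p \).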
**Conjecture 3.1 (Diamond Property).** If \( M \longrightarrow N \) and \( M \longrightarrow N' \) with \( N \neq N' \), then there is some term \( L \) such that \( N \longrightarrow L \) and \( N' \longrightarrow L \).
**Proof.** The most interesting case is given by \( M \equiv V : A \Rightarrow (x : A)\{P\} \xRightarrow{p} (y : B)\{Q\} \). This yields the following two paths:
- by matching on the refinement type on the left first
\[ V : A \Rightarrow (x : A)\{P\} \xRightarrow{p} (y : B)\{Q\} \longrightarrow V : A \xRightarrow{p} (y : B)\{Q\} \longrightarrow \text{let } y = (V : A \xRightarrow{p} B) \text{ in if } Q \text{ then } (y : B \Rightarrow (y : B)\{Q\}) \text{ else blame } p \]
- by matching on the refinement type on the right first
\[ V : A \Rightarrow (x : A)\{P\} \xRightarrow{p} (y : B)\{Q\} \longrightarrow \text{let } y = (V : A \Rightarrow (x : A)\{P\} \xRightarrow{p} B) \text{ in if } Q \text{ then } (y : B \Rightarrow (y : B)\{Q\}) \text{ else blame } p \]
which are confluent, since the remaining cast in the second path reduces by \( V : A \Rightarrow (x : A)\{P\} \xRightarrow{p} B \longrightarrow V : A \xRightarrow{p} B \). □
4 TYPE SAFETY
Every typeable term in our language has a well-formed type.
**Proposition 4.1 (Well-formed types).** If \( \Delta \vdash_{rt} M : A \) then \( \Delta \vdash_{rt} A : \text{tp} \).
**Proof.** By induction on \( \Delta \vdash_{rt} M : A \). □
Moreover, the system enjoys unicity of types.
**Proposition 4.2 (Unicity).** Let \( M \) be a term where no subterm has the form \( \text{blame } p \) for any \( p \). If \( \Delta \vdash_{rt} M : A \) and \( \Delta \vdash_{rt} M : B \) then \( A = B \).
**Proof.** For each term except blame there is a unique typing derivation. □
It would be easy to extend the result to include blame, if blame terms explicitly carried their type. We require slight adjustments to the usual canonical forms lemma.
**Lemma 4.3 (Canonical forms).** Let \( V \) be a value that is well typed in the empty context.
\[ \text{• If } \vdash \text{rt } V : \iota, \text{ then } V = c \text{ with type}(c) = \iota. \]
\[ \text{• If } \vdash \text{rt } V : \star, \text{ then } V = W : G \Rightarrow \star \text{ with } \vdash \text{rt } W : G. \]
\[ \text{• If } \vdash \text{rt } V : (x : A)\{P\}, \text{ then } V = W : A \Rightarrow (x : A)\{P\} \text{ with } \vdash \text{rt } W : A \text{ and } \models P[W]. \]
\[ \text{• If } \vdash \text{rt } V : (x : A) \rightarrow B, \text{ then } V = \lambda x : A.\, N \text{ with } x : A \vdash \text{rt } N : B. \]
**Proof.** By case analysis on the typing derivation of \( V \) in the empty context. □
Whereas traditional progress shows that a term well-typed in the empty context either is a value or takes a step, here there is a third possibility, which is that it results in blame.
**Proposition 4.4 (Progress).** If \( \vdash \text{rt } M : A \) then either:
\[ \text{• } M \text{ is a value.} \]
\[ \text{• } M \rightarrow N \text{ for some term } N. \]
\[ \text{• } M \text{ is blame } p \text{ for some blame label } p. \]
**Proof.** By induction on the typing derivation of \( M \) in the empty context. □
It is straightforward to show that reduction of closed terms preserves types.
**Conjecture 4.5 (Preservation).** If \( \vdash \text{rt } M : A \) and \( M \rightarrow N \) then \( \vdash \text{rt } N : A \).
**Proof.** By case analysis over the reduction rules. The case for a reduction with left-hand side
\[ V : A \Rightarrow (y : B)\{Q\} \]
depends crucially on the type rule for \( V : A \Rightarrow (y : A)\{Q\} \) using a closed context \( \Xi \) rather than the empty context. □
4.1 Context morphisms
We now introduce additional technical machinery that lets us describe type preservation for arbitrary as well as empty contexts, and that will prove essential in defining blame safety in the next section.
Specifically, in addition to our analysis of entailment in closed environments, we need a device that lets us pass from an arbitrary environment to such closed environments. This device plays a similar role to the ordinary notion of 'closing substitution', or to the let-bound values of Knowles et al. [2006], but here we must also pay attention to the structure of let and instantiate the variable bindings in $\Delta$ by evaluating a closed context. As shown in Figure 8, we say that $\eta$ is a closing substitution for $\Delta$, written $\eta : \Delta$, if there exists a closed context $\Xi$ such that $\rho : \Xi \rightarrow \Delta$ and $\Xi \rightarrow^* \sigma$ with $\eta = \sigma \circ \rho$.
Context Morphisms.
\[
\frac{\rho : \Xi \rightarrow \Delta \quad \Xi \vdash \text{rt } \rho^*(V) : \rho^*(A)}{(\rho, x = \rho^*(V)) : \Xi \rightarrow (\Delta, x : A)}
\qquad
\frac{\rho : \Xi \rightarrow \Delta \quad \Xi \vdash \text{rt } \rho^*(P) : \text{bool}}{\rho : (\Xi, \rho^*(P)) \rightarrow (\Delta, P)}
\]
Fig. 7. Context Morphisms
Closing Substitution.
\[
\frac{\rho : \Xi \rightarrow \Delta \quad \Xi \rightarrow^* \sigma \quad \eta = \sigma \circ \rho}{\eta : \Delta}
\]
Fig. 8. Closing Substitution
Closing substitutions preserve types.
**Conjecture 4.7 (Closing Substitution).** Assume a closing substitution \( \eta : \Delta \).
- If \( \Delta \vdash \text{rt } M : A \) then \( \cdot \vdash \text{rt } \eta^* (M) : \eta^* (A) \).
- If \( \Delta \vdash \text{rt } A : \text{tp} \) then \( \cdot \vdash \text{rt } \eta^* (A) : \text{tp} \).
We expect the proofs of the above to be routine, if intricate, inductive arguments.
Having developed the above machinery, we may now generalise type preservation from closed terms to open terms.
**Corollary 4.8 (Type safety for open terms).** Assume a closing substitution \( \eta : \Delta \). If \( \Delta \vdash \text{rt } M : A \) and \( \eta^* (M) \rightarrow^* N \), then \( \cdot \vdash \text{rt } N : \eta^* (A) \).
We will use the same machinery for defining blame safety on open terms in the next section.
5 BLAME SAFETY AND SUBTYPING
Our approach to blame safety follows that of Wadler and Findler [2009], in that we define a safety judgment and show preservation of safety under reduction. However, where that paper used subtyping relations on pairs of types, here we use novel three-place relations on closed values and four-place relations on open terms, which are defined by mutual recursion. The customary treatment of subtyping as a two-place relation between types then arises as a derived notion.
5.1 Subtyping with a Witness
Our new concept, subtyping with a witness, is defined in Figure 9. We define a three-place relation between a closed term \( V \) and two closed types \( A \) and \( B \),
\[
V : A <: B
\]
which presumes \( \cdot \vdash \text{rt } V : A, \cdot \vdash \text{rt } B : \text{tp}, \) and \( A \sim B \). We also define a four-place relation between an environment \( \Delta \), an open term \( M \), and two open types \( A \) and \( B \),
\[
\Delta \vdash M : A <: B
\]
which presumes \( \Delta \vdash \text{rt } M : A, \Delta \vdash \text{rt } B : \text{tp}, \) and \( A \sim B \).
In fact, we define four variants of subtyping:
\[
A <: B \qquad A <:^{+} B \qquad A <:^{-} B \qquad A <:_{n} B
\]
We also use \( A <:^{\pm}_{n} B \) to range over any of these four relations.
Our motivation for including a value in the subtyping judgement is that a value witnesses the possibility that we may successfully cast from the first type to the second type.
For our rule for subtyping from a refinement,
\[
\frac{V = (W : A \Rightarrow (x : A)\{P\}) \quad W : A <: B}{V : (x : A)\{P\} <: B}
\]
the Canonical Forms Lemma guarantees that a value of refinement type must have the form \( W : A \Rightarrow (x : A)\{P\} \); we then remove the tag and check that \( W : A <: B \).
Ordinary Subtyping with a witness.
\[
\frac{}{V : \iota <: \iota}
\qquad
\frac{}{V : \star <: \star}
\qquad
\frac{V : A <: G}{V : A <: \star}
\]
\[
\frac{V = (W : A \Rightarrow (x : A)\{P\}) \quad W : A <: B}{V : (x : A)\{P\} <: B}
\qquad
\frac{V : A <: B \quad y : B = (V : A \Rightarrow B) \models Q}{V : A <: (y : B)\{Q\}}
\]
\[
\frac{y : C \vdash y : C <: A \quad y : C, x : A = (y : C \Rightarrow A) \vdash (V\,x) : B <: D}{V : (x : A) \rightarrow B <: (y : C) \rightarrow D}
\]
Positive Subtyping with a witness.
\[
\frac{}{V : \iota <:^{+} \iota}
\qquad
\frac{}{V : A <:^{+} \star}
\]
\[
\frac{V = (W : A \Rightarrow (x : A)\{P\}) \quad W : A <:^{+} B}{V : (x : A)\{P\} <:^{+} B}
\qquad
\frac{V : A <:^{+} B \quad y : B = (V : A \Rightarrow B) \models Q}{V : A <:^{+} (y : B)\{Q\}}
\]
\[
\frac{y : C \vdash y : C <:^{-} A \quad y : C, x : A = (y : C \Rightarrow A) \vdash (V\,x) : B <:^{+} D}{V : (x : A) \rightarrow B <:^{+} (y : C) \rightarrow D}
\]
Negative Subtyping with a witness.
\[
\frac{}{V : \iota <:^{-} \iota}
\qquad
\frac{}{V : A <:^{-} \star}
\]
\[
\frac{V = (W : A \Rightarrow (x : A)\{P\}) \quad W : A <:^{-} B}{V : (x : A)\{P\} <:^{-} B}
\qquad
\frac{V : A <:^{-} B \quad y : B = (V : A \Rightarrow B) \models Q}{V : A <:^{-} (y : B)\{Q\}}
\]
\[
\frac{y : C \vdash y : C <:^{+} A \quad y : C, x : A = (y : C \Rightarrow A) \vdash (V\,x) : B <:^{-} D}{V : (x : A) \rightarrow B <:^{-} (y : C) \rightarrow D}
\]
Naive Subtyping with a witness.
\[
\frac{}{V : \iota <:_{n} \iota}
\qquad
\frac{}{V : A <:_{n} \star}
\]
\[
\frac{V = (W : A \Rightarrow (x : A)\{P\}) \quad W : A <:_{n} B}{V : (x : A)\{P\} <:_{n} B}
\qquad
\frac{V : A <:_{n} B \quad y : B = (V : A \Rightarrow B) \models Q}{V : A <:_{n} (y : B)\{Q\}}
\]
\[
\frac{x : A \vdash x : A <:_{n} C \quad y : C, x : A = (y : C \Rightarrow A) \vdash (V\,x) : B <:_{n} D}{V : (x : A) \rightarrow B <:_{n} (y : C) \rightarrow D}
\]
Open Subtyping with a witness.
\[
\frac{\text{for all } \eta : \Delta,\ \eta^{*}(M) \rightarrow^{*} V \text{ implies } V : \eta^{*}(A) <: \eta^{*}(B)}{\Delta \vdash M : A <: B}
\]
Fig. 9. Subtyping with a witness
For our rule for subtyping to a refinement,
\[
\frac{V : A <: B \quad y : B = (V : A \Rightarrow B) \models Q}{V : A <: (y : B)\{Q\}}
\]
we check that \(V : A <: B\), then confirm that if we cast \(V\) from type \(A\) to type \(B\) the result satisfies the predicate \(Q\). We write \(\Rightarrow\) without a blame label when the choice of label does not matter.
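As a concrete instance (our example, not from the paper, assuming a base type \( \mathsf{int} \) with the usual ordering), take \( V = 3 \), \( A = B = \mathsf{int} \), and \( Q = (y > 0) \):
\[
\frac{3 : \mathsf{int} <: \mathsf{int} \qquad y : \mathsf{int} = (3 : \mathsf{int} \Rightarrow \mathsf{int}) \models y > 0}{3 : \mathsf{int} <: (y : \mathsf{int})\{y > 0\}}
\]
The first premise holds by the base-type rule, and the entailment holds because the cast returns \( 3 \) and \( 3 > 0 \).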
For our rule for subtyping between dependent functions,
\[
\frac{y : C \vdash y : C <: A \quad y : C, x : A = (y : C \Rightarrow A) \vdash (V\,x) : B <: D}{V : (x : A) \rightarrow B <: (y : C) \rightarrow D}
\]
we check that the corresponding domains and ranges are related by subtyping, which is contravariant in the domain and covariant in the range. Since the ranges are dependent, we use the judgement for open terms, in an environment where \(y\) is any value of type \(C\), and \(x\) is the result of casting \(y\) from type \(C\) to type \(A\).
For open terms, we quantify over all closing substitutions and then check the corresponding relation on closed terms,
\[
\text{for all } \eta : \Delta, (\eta^*(M) \rightarrow^\ast V \implies V : \eta^*(A) <: \eta^*(B))
\]
**Properties of subtyping**
It is easy to show that subtyping with a witness implies compatibility and is reflexive. It is a little trickier to formulate transitivity.
**Corollary 5.1 (Properties of subtyping with a witness).**
- **Compatibility**: \(V : A <: B\) implies \(A \sim B\).
- **Reflexivity**: \(V : A <: A\).
- **Transitivity**: If \(V : A <: B\) and \(y = (V : A \Rightarrow B) \vdash y : B <: C\) then \(V : A <: C\).
Compatibility and reflexivity hold for all four relations, and transitivity holds for ordinary and naive subtyping.
We also adapt the Tangram property of Wadler and Findler [2009] to subtyping with a witness. The original Tangram property consists of two factoring lemmas:
- \(A <: B\) iff \(A <:^{+} B\) and \(A <:^{-} B\),
- \(A <:_{n} B\) iff \(A <:^{+} B\) and \(B <:^{-} A\).
In the second factoring lemma, the type on the left-hand side of the negative subtyping is swapped with the type on the right-hand side. The first of these adapts straightforwardly to witnesses, while the second uses a trick similar to that for transitivity.
**Conjecture 5.2 (Tangram with a witness).**
- \(V : A <: B\) iff \(V : A <:^{+} B\) and \(V : A <:^{-} B\).
- \(V : A <:_{n} B\) iff \(V : A <:^{+} B\) and \(y = (V : A \Rightarrow B) \vdash y : B <:^{-} A\).
### 5.2 Subtyping without a witness
We can recover the two-place subtyping of Wadler and Findler [2009] by using the four-place relation to quantify over all possible witnesses.
\[
A <: B \overset{\text{def}}{=} x : A \vdash x : A <: B.
\]
This immediately yields the following corollaries.
**Corollary 5.3 (Properties of subtyping without a witness).**
- **Compatibility:** \( A <: B \) implies \( A \sim B \).
- **Reflexivity:** \( A <: A \).
- **Transitivity:** If \( A <: B \) and \( B <: C \) then \( A <: C \).
Compatibility and reflexivity hold for all four relations, and transitivity holds for ordinary and naive subtyping.
**Corollary 5.4 (Tangram without a witness).**
- \( A <: B \) iff \( A <:^{+} B \) and \( A <:^{-} B \), and
- \( A <:_{n} B \) iff \( A <:^{+} B \) and \( B <:^{-} A \).
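As a small worked instance of the derived two-place relation (our example, not from the paper, assuming a base type \( \mathsf{int} \)): to show \( (x : \mathsf{int})\{x > 0\} <: \mathsf{int} \) we must show \( x : (x : \mathsf{int})\{x > 0\} \vdash x : (x : \mathsf{int})\{x > 0\} <: \mathsf{int} \). For any closing substitution \( \eta \), the Canonical Forms Lemma gives \( \eta(x) = (W : \mathsf{int} \Rightarrow (x : \mathsf{int})\{x > 0\}) \), so the rule for subtyping from a refinement applies:
\[
\frac{\eta(x) = (W : \mathsf{int} \Rightarrow (x : \mathsf{int})\{x > 0\}) \qquad W : \mathsf{int} <: \mathsf{int}}{\eta(x) : (x : \mathsf{int})\{x > 0\} <: \mathsf{int}}
\]
and the remaining premise holds by the base-type rule.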
### 5.3 Blame Safety
We give an inductive definition of safety with respect to a blame label in Figure 10. The key part of the definition is as follows. Assume an environment \( \Delta \). A cast
\[
M : A \Rightarrow^{p} B
\]
is safe for \( p \) if \( \Delta \vdash M : A <:^{+} B \), and a cast
\[
M : A \Rightarrow^{\neg p} B
\]
is safe for \( p \) if \( \Delta \vdash M : A <:^{-} B \), while a cast
\[
M : A \Rightarrow^{q} B
\]
with \( |q| \neq |p| \) is always safe for \( p \). We also need to check that any other types or terms appearing within the term are also safe for \( p \).
An easy induction establishes the following Lemma:
**Lemma 5.5 (Safe terms have safe types).** If \( \Delta \vdash rt M : A \) and \( \Delta \vdash M \) safe \( p \) then \( \Delta \vdash A \) safe \( p \).
In a similar manner to Wadler and Findler [2009] we state our blame safety result in terms of blame safety preservation and blame safety progress. As above, our general strategy aims to prove the result in two stages. The first shows safety preservation from open judgments to the empty context:
**Conjecture 5.6 (Blame safety for open terms).** Assume a closing substitution \( \eta : \Delta \) where every value in \( \eta \) is safe, that is, if \( \eta(x) = V \) then \( \cdot \vdash V \) safe \( p \). If \( \Delta \vdash rt M : A \) and \( \Delta \vdash M \) safe \( p \) then \( \cdot \vdash \eta^*(M) \) safe \( p \).
For closed terms, we have blame safety preservation and progress.
**Conjecture 5.7 (Blame safety preservation).** If \( \cdot \vdash rt M : A \) and \( \cdot \vdash M \) safe \( p \) and \( M \rightarrow N \) then \( \cdot \vdash N \) safe \( p \).
**Conjecture 5.8 (Blame safety progress).** If \( \cdot \vdash rt M : A \) and \( \cdot \vdash M \) safe \( p \) then \( M \neq \text{blame } p \).
From these, we can then conclude the Blame Theorem of Wadler and Findler [2009].
**Corollary 5.9 (Blame Theorem).** Let \( C \) be a context containing a hole of type \( B \), and let \( M \) be a term of type \( A \), where \( p \) and \( \neg p \) do not appear in \( C \), \( M \), \( A \), or \( B \).
- If \( A <: B \) then \( C[M : A \Rightarrow^{p} B] \not\rightarrow^{*} \text{blame } p \) and \( C[M : A \Rightarrow^{\neg p} B] \not\rightarrow^{*} \text{blame } \neg p \).
- If \( A <:_{n} B \) then \( C[M : A \Rightarrow^{p} B] \not\rightarrow^{*} \text{blame } p \).
- If \( B <:_{n} A \) then \( C[M : A \Rightarrow^{\neg p} B] \not\rightarrow^{*} \text{blame } \neg p \).
Safety for contexts.
\[
\frac{\Delta \text{ safe } p \quad \Delta \vdash A \text{ safe } p}{\Delta, x : A \text{ safe } p}
\qquad
\frac{\Delta \text{ safe } p \quad \Delta \vdash A \text{ safe } p \quad \Delta \vdash M \text{ safe } p}{\Delta, x : A = M \text{ safe } p}
\]
\[
\frac{\Delta \text{ safe } p \quad \Delta \vdash P \text{ safe } p}{\Delta, P \text{ safe } p}
\]
Safety for types.
\[
\frac{}{\Delta \vdash \iota \text{ safe } p}
\qquad
\frac{}{\Delta \vdash \star \text{ safe } p}
\]
\[
\frac{\Delta \vdash A \text{ safe } p \quad \Delta, x : A \vdash P \text{ safe } p}{\Delta \vdash (x : A)\{P\} \text{ safe } p}
\qquad
\frac{\Delta \vdash A \text{ safe } p \quad \Delta, x : A \vdash B \text{ safe } p}{\Delta \vdash ((x : A) \rightarrow B) \text{ safe } p}
\]
Safety for Terms.
\[
\frac{\Delta \text{ safe } p \quad x \in \text{dom}(\Delta)}{\Delta \vdash x \text{ safe } p}
\qquad
\frac{\Delta \text{ safe } p \quad \Delta \vdash \tilde{M} \text{ safe } p}{\Delta \vdash \text{op}(\tilde{M}) \text{ safe } p}
\qquad
\frac{\Delta \vdash A \text{ safe } p \quad \Delta, x : A \vdash N \text{ safe } p}{\Delta \vdash (\lambda x : A.\, N) \text{ safe } p}
\]
\[
\frac{\Delta \vdash L \text{ safe } p \quad \Delta \vdash V \text{ safe } p}{\Delta \vdash (L\ V) \text{ safe } p}
\qquad
\frac{\Delta \vdash M \text{ safe } p \quad \Delta, x : A = M \vdash N \text{ safe } p}{\Delta \vdash (\text{let } x = M \text{ in } N) \text{ safe } p}
\]
\[
\frac{\Delta \vdash P \text{ safe } p \quad \Delta, P \vdash M \text{ safe } p \quad \Delta, \neg P \vdash N \text{ safe } p}{\Delta \vdash (\text{if } P \text{ then } M \text{ else } N) \text{ safe } p}
\]
\[
\frac{\Delta \vdash M \text{ safe } p \quad \Delta \vdash M : A <:^{+} B}{\Delta \vdash (M : A \Rightarrow^{p} B) \text{ safe } p}
\qquad
\frac{\Delta \vdash M \text{ safe } p \quad \Delta \vdash M : A <:^{-} B}{\Delta \vdash (M : A \Rightarrow^{\neg p} B) \text{ safe } p}
\]
\[
\frac{\Delta \vdash M \text{ safe } p \quad \Delta \vdash A \text{ safe } p \quad \Delta \vdash B \text{ safe } p \quad |q| \neq |p|}{\Delta \vdash (M : A \Rightarrow^{q} B) \text{ safe } p}
\]
\[
\frac{\Xi \vdash V \text{ safe } p}{\Xi \vdash (V : G \Rightarrow \star) \text{ safe } p}
\qquad
\frac{\Xi \vdash V \text{ safe } p \quad \Xi \vdash P \text{ safe } p}{\Xi \vdash (V : A \Rightarrow (x : A)\{P\}) \text{ safe } p}
\qquad
\frac{q \neq p}{\Delta \vdash \text{blame } q \text{ safe } p}
\]
Fig. 10. Blame Safety for Contexts, Types, and Terms
6 RELATED WORK
Hybrid Type Checking. Hybrid type checking [Flanagan 2006] allows for writing dependently-typed programs whose type checking is potentially undecidable; the hybrid type checker inserts dynamic casts to ensure that the typing discipline is enforced at run time, and guarantees that if the program is indeed well-typed it will reduce successfully. However, hybrid type checking only permits refinements and dependent functions over simple types.
**Manifest Contracts.** Greenberg et al. [2010, 2012] focus on the interplay between contracts and hybrid type checking. Our language $\lambda dB$ is similar to their $\lambda H$ in that both fix the order of evaluation to be call-by-value. However, they need to define a denotational semantics of types and kinds for their closing substitutions, to break a circularity problem that would arise from the subsumption rule, whereas we forgo subsumption and define closing substitutions by composing context morphisms with substitutions that arise from evaluating closed environments.
**Gradual Refinement Types.** Lehmann and Tanter [2017] present a gradually typed language with refinement types. As in Ou et al. [2004], Flanagan [2006], and Wadler and Findler [2009], refinements range only over base types; as in Ou et al. [2004], refinements must be decidable (they use a SAT solver) and have a strongly restricted syntax rather than ranging over arbitrary terms. Their work is novel in that they consider unknown refinements and apply the methodology of AGT [Garcia et al. 2016]. They do not consider blame.
**Dependent Interoperability.** Tanter and Tabareau [2015] and Dagand et al. [2016, 2018] present an interoperability framework added as a library to Coq. They do not support refinement types per se, but do support the related notion of sigma types and are not restricted to base types. They do not consider blame.
**Approximate Normalization for Gradual Dependent Types.** Eremondi et al. [2019] present a gradual type theory. Their work is novel in that they consider unknown values as well as unknown types, and again they apply the methodology of AGT [Garcia et al. 2016]. As with the previous work, they do not support refinement types per se, but do support the related notion of sigma types and are not restricted to base types. They do not consider blame.
Systems and methods are disclosed for performing counterexample guided abstraction refinement by transforming a design into a functionally equivalent Control and Data Flow Graph (CDFG); performing a hybrid abstraction of the design; generating a hybrid abstract model; and checking the hybrid abstract model.
FIG. 1 (PRIOR ART)
FIG. 2
module main (clk);
input clk;
reg [31:0] a, b;
initial a = 1;
initial b = 0;
always @ (posedge clk)
begin
if (a < 100)
a <= b + a;
b <= a;
end
endmodule
void main ()
{
int a, b;
int a_NS;
a = a_NS = 1;
b = 0;
LOOP: //add check
if (a < 100)
a_NS = b + a;
b = a;
a = a_NS;
goto LOOP;
}
always @(posedge clk) begin
...
if (v > 1024) begin
if (...) set the error bit;
end
...
v = v + x;
end
FIG. 4
1: A = 100;
2: i = 0;
3: assume (i>=A)
4: j = 0;
5: assume (j>=B)
6: k = i + j;
7: assume (k < A+B)
1: while (R)
2: {... i=i+1;}
3: while (S)
4: {... j=j+1;}
5: F = Q //Q = R&S
6: if (F)
7: ERROR.
FIG. 5
HYBRID COUNTEREXAMPLE GUIDED ABSTRACTION REFINEMENT
[0001] The present application claims priority to Provisional Application Ser. No. 60/910,231 filed Apr. 5, 2007, the content of which is incorporated by reference.
BACKGROUND
[0002] The present invention relates to abstraction refinement of hardware or software.
[0003] Classic CEGAR (Counterexample Guided Abstraction Refinement) is typically used to check whether a "property" is satisfied by a "model" (of a hardware design or a software program); this task is called model checking. When the original model is too large, model checking cannot be used directly. In that case, one can build a simplified model by omitting some details of the original model, and hope that the simplified model is enough to prove the property. The abstract model is built in such a way that it contains all behaviors of the original model, and possibly more, if that helps reduce the size.
[0004] FIG. 1 shows the classic CEGAR process. First, an initial abstraction of the design (hardware or software) is done (10). Next, the abstraction is generated (12). The abstraction goes through a model checking process (14) and is tested for counter-examples (16). If a counter-example is not found, the property is proven on the abstraction (18). If a counter-example is found, the process checks the feasibility of the abstracted counter-example (20) and checks for concrete counter-examples (22). If a concrete counter-example is found, a real bug is reported (24); if not, the abstraction is refined using the counter-example as a guide (28).
[0005] Model checking is applied to the abstract model. If the property holds in the abstract model, it implies that the property holds in the original model. If the property does not hold, model checking produces a counterexample (CEX)—an execution trace showing how the property is violated. If that abstract counterexample corresponds to real trace in the original model (called feasible, or concretizable), then a real bug is found. Otherwise, the abstract model needs to be refined such that some of the “previously omitted details” will be added back, to make the simplified model more accurate.
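The loop described in the paragraphs above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; `abstract`, `model_check`, `is_concretizable`, and `refine` are hypothetical callbacks standing in for the real abstraction, model-checking, concretization, and refinement engines.

```python
# Minimal sketch of a CEGAR loop (illustrative only; the callbacks are
# hypothetical stand-ins for real abstraction/model-checking engines).

def cegar(design, prop, abstract, model_check, is_concretizable, refine):
    """Iterate: model-check the abstraction; stop on proof or on a
    feasible (concretizable) counterexample; otherwise refine."""
    abs_model = abstract(design)
    while True:
        cex = model_check(abs_model, prop)
        if cex is None:
            # Property holds in the over-approximate abstract model,
            # hence also in the concrete design.
            return "proved", None
        if is_concretizable(design, cex):
            # The abstract counterexample replays on the concrete
            # design: a real bug.
            return "bug", cex
        # Spurious counterexample: add back omitted detail and retry.
        abs_model = refine(abs_model, cex)
```

Termination depends on the refinement making progress; real tools ensure each refinement rules out at least the last spurious counterexample.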
[0006] Variable hiding and predicate abstraction are two frequently used abstraction techniques in CEGAR. Both methods create over-approximated models, and therefore are conservative with respect to universal properties such as LTL. Since the abstract model may have more behaviors than the concrete model, if a property holds in the abstract model, it also holds in the concrete model; however, if a property fails in the abstract model, it may still be correct in the concrete model. The abstraction refinement loop consists of three phases: abstraction, model checking, and refinement. Typically, one starts with a coarse initial abstraction and applies model checking. If the property fails in the abstract model and the model checker returns an abstract counterexample, a concretization procedure is used to check whether a concrete counterexample exists. If a concrete counterexample does not exist, the abstract counterexample is spurious. Spurious counterexamples are used during refinement to identify the needed information currently missing in the abstraction.
[0007] The variable hiding abstraction, or localization reduction, partitions the set of state variables of the model into a visible subset and an invisible subset. In the abstract model, the transition functions of visible variables are preserved as is, and the invisible variables are abstracted as pseudo-primary inputs. Since the invisible variables are left unconstrained, the abstract model has all possible execution traces of the original model, and possibly more. The cone-of-influence (COI) reduction can be regarded as a special case of variable hiding abstraction, wherein variables in the transitive fan-in of the property variables are marked as visible. Program slicing in software analysis is similar to COI reduction, and can be viewed as another special case. Compared to COI reduction, which produces an exact model for deciding the given property, variable hiding in general is more aggressive and may lead to spurious counterexamples.
[0008] In variable hiding, the abstraction computation is efficient. Given a set of visible variables, the abstract model can be built directly from a textual description of the original system, without the need for computing the concrete transition relation in the first place. This is advantageous because in practice the concrete transition relation may be too complex to compute. However, in variable hiding only existing state variables and transition functions can be used to construct the abstract model, which in general limits the chance of finding a concise abstraction. Despite this restriction, variable hiding has been relatively successful in abstracting large hardware designs, especially when combined with the use of SAT solvers. This is because the models tend to be well-partitioned and as a result, system properties often can be localized to a few submodules.
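The cone-of-influence computation mentioned above amounts to a reverse reachability pass over variable dependencies. A minimal sketch, assuming the design is given as a map from each state variable to the variables read by its transition function (names hypothetical):

```python
# Sketch of cone-of-influence (COI) reduction: keep only the variables
# that can reach the property variables through transition-function
# dependencies (hypothetical minimal version).

def cone_of_influence(deps, prop_vars):
    """deps maps each state variable to the set of variables read by
    its transition function; prop_vars are the variables mentioned by
    the property. Variables outside the result may be hidden."""
    visible = set(prop_vars)
    worklist = list(prop_vars)
    while worklist:
        v = worklist.pop()
        for u in deps.get(v, ()):  # variables feeding v's next-state logic
            if u not in visible:
                visible.add(u)
                worklist.append(u)
    return visible
```

Everything outside the returned set cannot influence the property and is a candidate for hiding.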
[0009] Predicate abstraction is more flexible than variable hiding since it allows a choice of predicates for abstraction, and has been used to verify both software and hardware. In predicate abstraction, a finite set of predicates is defined over the set X of concrete state variables and each predicate corresponds to a fresh Boolean variable pᵢ ∈ P. With these predicates, the model is mapped from the concrete state space (induced by X) into an abstract state space (induced by P). The main disadvantage of predicate abstraction is the expensive abstraction computation. Unlike in variable hiding, this computation is not compositional; the worst-case complexity is exponential in the number of predicates. When the number of predicates is large, the abstraction computation time often goes up significantly. Cartesian abstraction has been proposed to alleviate this problem; however, it leads to a further loss of accuracy in the abstraction.
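The mapping from concrete states to abstract states induced by a predicate set can be illustrated as below. This is a toy explicit-state enumeration, not the symbolic computation a real tool performs; it only makes concrete why the abstract state space grows with the number of predicates.

```python
# Toy illustration of predicate abstraction: a concrete state maps to
# the tuple of truth values of the chosen predicates. Explicit-state
# sampling only; a real tool computes this symbolically.

def abstract_state(state, predicates):
    """Map a concrete state to its abstract (Boolean) image."""
    return tuple(p(state) for p in predicates)

def abstract_transitions(states, step, predicates):
    """Sample the abstract transition relation from concrete steps.
    With n predicates the abstract state space has up to 2**n states,
    which is the source of the cost discussed above."""
    rel = set()
    for s in states:
        rel.add((abstract_state(s, predicates),
                 abstract_state(step(s), predicates)))
    return rel
```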
[0010] Traditional hardware models are well structured, in that existing state variables and transition functions are often sufficient for constructing a concise abstraction for most user-defined properties. In this case, exploiting the extra flexibility provided by predicate abstraction may not be very crucial. However, with the increasing use of higher level modeling and description languages in today’s hardware design practice, the functional and structural partitionings may no longer directly correspond with each other, and as a result, the correctness of a property may not be easily localized to a few variables or submodules. In such cases, predicate abstraction is generally more effective. Furthermore, for system-level designs the boundary between hardware and software is getting blurred, and there is a need for abstraction methods that work well on both.
SUMMARY
[0011] In one aspect, a process for verifying the correctness of a design includes transforming the design into a Control
and Data Flow Graph (CDFG); generating a hybrid abstract model; and checking the correctness of the hybrid abstract model.
[0012] In another aspect, a system to check a design includes a converter to transform the design into a Control and Data Flow Graph (CDFG); a module to perform a hybrid abstraction of the design and to generate a hybrid abstract model; and a verifier to check the hybrid abstract model.
[0013] In yet another aspect, a hybrid abstraction method combines variable hiding with predicate abstraction in the same counterexample guided abstraction refinement loop. Refinements based on weakest preconditions to add new predicates can be used, and under certain conditions trade in the predicates for visible variables in the abstract model. Heuristics for improving the overall performance can be based on static analysis to identify useful candidates for visible variables, and lazy constraints can be used to find more effective refinement.
[0014] Advantages of certain embodiments of the system may include one or more of the following. The hybrid abstraction with the CEGAR framework can be used in verifying word-level Verilog designs. Experimental results show that the new method matches the better of the two existing abstraction methods, and outperforms them both in many cases. This may be due to the hybrid abstract model being more concise than either extreme when allowed to have both visible variables as well as predicates. Although hardware verification is discussed, the main ideas (and hybrid CEGAR) are directly applicable to verifying software programs also. The flexibility in the hybrid approach provides a uniform way to handle models derived from both hardware and software, and results in effective and concise abstractions automatically.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] FIG. 1 shows an exemplary classic CEGAR process.
[0016] FIG. 2 shows a hybrid CEGAR process.
[0017] FIGS. 3-5 show exemplary abstractions of designs.
DESCRIPTION
[0018] FIG. 2 shows a hybrid CEGAR process. The process of FIG. 2 automatically transforms word-level Verilog designs into functionally equivalent Control and Data Flow Graphs (CDFGs); the CDFGs serve as input to the CEGAR procedure. The method applies the new hybrid CEGAR procedure to software programs (sequential models) as well as to word-level hardware (reactive models). First, the hardware design is transformed into a functionally equivalent control and data flow graph (CDFG) (100). Correspondingly, a software program can be directly modeled in a CDFG (102). Regardless of hardware or software design, the CDFG is received (104). Next, an initial abstraction of the design (hardware or software) is made (106). Next, a hybrid abstraction is generated (108) and includes variable hiding as well as predicate abstraction. Variable hiding and predicate abstraction are two popular abstraction methods to obtain simplified models for model checking. Variable hiding and predicate abstraction can be regarded as two extremes that have complementary strengths. The system's hybrid approach operates in the spectrum between the two extremes, to provide more robust and concise abstractions. Specifically, a hybrid abstraction is used that allows both visible state variables and predicates in the same abstract model. Algorithms are provided for optimizing the abstraction computation, and for deciding when to add more visible state variables and when to add more new predicates within a CEGAR framework. Heuristics also improve the overall performance, based on static analysis to identify useful candidates for visible variables, and on the use of lazy constraints to find more effective unsatisfiable cores for refinement.
[0019] The hybrid abstraction goes through a model checking process (110) and is tested for counterexamples (112). If no counterexample is found, the property is proven (114). If a counterexample is found, the process checks for a concrete counterexample corresponding to the abstract counterexample (116). If a concrete counterexample is found, the property is refuted (120); if not, the abstraction is refined using the counterexample as a guide (122). An abstraction computation then applies syntactic rules on the CDFG model (126). The hybrid abstraction method allows both visible state variables and new predicates in the same abstract model. An efficient abstraction computation method is applied that uses a set of syntactic level rules to add correlation constraints (between visible variables and predicates, and among predicates) upfront. A heuristic refinement algorithm uses word-level lazy constraints and unsatisfiable core (UNSAT core) computation to improve the quality of the refinement; the UNSAT core based refinement algorithm adds more "correlation constraints" lazily (on-demand) in order to remove spurious transitions. The abstraction computation also receives precomputed candidate visible variables (124). The process heuristically evaluates the cost of the hybrid abstract model to decide, during refinement, when to trade new predicates for visible variables. A static analysis based method pre-computes a list of candidate visible variables for hybrid abstraction, in order to avoid the overhead of discovering visible variables through multiple refinement steps. The abstraction computation is processed in the hybrid abstraction operation 108 and the process repeats.
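The loop of steps 104-122 can be sketched in plain Python. This is an illustrative skeleton only: the helper callables (`abstract`, `model_check`, `concretize`, `refine`) are hypothetical stand-ins for the components described above, supplied by the caller.

```python
# Hedged sketch of the CEGAR loop of FIG. 2 (steps 104-122).
# All helper callables are hypothetical stand-ins, passed in by the caller.

def cegar(cdfg, abstract, model_check, concretize, refine, max_iters=100):
    """Returns ('proved', None), ('refuted', cex), or ('unknown', None)."""
    model = abstract(cdfg)                      # steps 106/108: initial hybrid abstraction
    for _ in range(max_iters):
        cex = model_check(model)                # steps 110/112
        if cex is None:
            return ('proved', None)             # step 114: no abstract counterexample
        concrete = concretize(cdfg, cex)        # step 116: replay on the concrete model
        if concrete is not None:
            return ('refuted', concrete)        # step 120: a real bug
        model = refine(model, cex)              # step 122: refine, guided by the cex
    return ('unknown', None)
```

A run terminates either by proving the property on the abstract model, by confirming an abstract counterexample on the concrete model, or by giving up after a bounded number of refinement iterations.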
[0020] The hybrid abstraction with the CEGAR framework can be used in verifying word-level Verilog designs. Experimental results show that the new method matches the better of the two existing abstraction methods, and outperforms them both in many cases. This may be due to the hybrid abstract model being more concise than either extreme when allowed to have both visible variables as well as predicates. Although hardware verification is discussed, the main ideas (and hybrid CEGAR) are directly applicable to verifying software programs also. The flexibility in the hybrid approach provides a uniform way to handle models derived from both hardware and software, and results in effective and concise abstractions automatically.
[0021] Turning now to the abstraction methods, let \( X = \{ x_1, \ldots, x_n \} \) be a finite set of variables representing the current state of the model, and \( X' = \{ x'_1, \ldots, x'_n \} \) be the set of variables representing the next state; a valuation of the state variables represents a state. A model is denoted by the tuple \( \langle T, I \rangle \), where \( T(X, X') \) is the transition relation and \( I(X) \) is the initial state predicate. A state \( s \) is an initial state if \( I(s) \) holds; similarly, \( (s, s') \) is a state transition if \( T(s, s') \) is true. In symbolic model checking, the transition relation of a model and the state sets are represented symbolically by Boolean functions in terms of a set of state variables. For hardware models, all state variables are assumed to belong to finite domains. The concrete transition relation \( T(X, X') \) is defined as follows,
$T = \bigwedge_{i=1}^{n} T_i(X, X').$
[0022] where $T_i$ is an elementary transition relation. Each $x_i \in X$ has an elementary transition relation $T_i$, defined as $x_i' = \delta_i(X)$, where $\delta_i(X)$ is the transition function of $x_i$.
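As a concrete illustration of the decomposition \( T = \bigwedge_i T_i \), a two-bit counter can be written with one elementary transition function per state variable. This is a minimal sketch; the variable names are illustrative and not part of the patent.

```python
# A two-bit counter as a concrete model: one elementary transition function
# delta_i per state variable, so T(s, s') = AND_i (s'[x_i] == delta_i(s)).
delta = {
    'x0': lambda s: not s['x0'],            # low bit toggles every step
    'x1': lambda s: s['x1'] != s['x0'],     # high bit flips when x0 carries
}

def T(s, s2):
    """Concrete transition relation: conjunction of the elementary T_i."""
    return all(s2[x] == d(s) for x, d in delta.items())

def successor(s):
    """The unique successor under the deterministic transition functions."""
    return {x: d(s) for x, d in delta.items()}
```

Because every \( \delta_i \) is deterministic, each state has exactly one successor, and \( T(s, s') \) holds precisely for that successor.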
[0023] Variable hiding marks a subset $X_V = \{x_1, \ldots, x_v\} \subseteq X$ of state variables as visible. The set of remaining variables (called invisible variables) is denoted by $X_{inv} = (X \setminus X_V)$. For $x_i \in X_{inv}$, let $T_i = \text{true}$. The abstract model (via variable hiding) is defined as $\langle T_V, I_V \rangle$ such that,
$T_V = \bigwedge_{i=1}^{n} T_i(X, X')$
$I_V = \exists X_{inv} \cdot I(X)$
[0024] $T_V(X, X')$ may depend on some invisible current-state variables in $X_{inv}$, which are treated as free inputs. In model checking, free inputs are existentially quantified during image computation. One can also explicitly remove the $X_{inv}$ variables by existential quantification,
$\hat{T}_V = \exists X_{inv} \cdot T_V(X, X')$
[0025] However, this may cause a further loss of accuracy since $T_V \subseteq \hat{T}_V$. In practice, using $T_V$ as opposed to $\hat{T}_V$ in model checking often gives better results.
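The quantification $\hat{T}_V = \exists X_{inv} \cdot T_V$ can be mimicked by explicit enumeration on a toy Boolean model. This is only a sketch: real implementations quantify symbolically with BDDs or a SAT solver, and the model `T` below is illustrative.

```python
from itertools import product

def hide(T, variables, visible):
    """Abstract transition relation obtained by hiding invisible variables:
    an abstract pair (a, a2) is related iff some completion of the invisible
    current- and next-state variables satisfies the concrete T."""
    invisible = [v for v in variables if v not in visible]
    def T_hat(a, a2):
        n = len(invisible)
        for bits in product([False, True], repeat=2 * n):
            s = dict(a, **dict(zip(invisible, bits[:n])))
            s2 = dict(a2, **dict(zip(invisible, bits[n:])))
            if T(s, s2):
                return True
        return False
    return T_hat

# Toy model: x0 toggles each step, and x1 copies the old x0.
def T(s, s2):
    return s2['x0'] == (not s['x0']) and s2['x1'] == s['x0']
```

Hiding `x0` makes every abstract move on `x1` possible (an over-approximation), while hiding nothing recovers the concrete relation exactly.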
[0026] In predicate abstraction, consider a set $P = \{P_1, \ldots, P_m\}$ of predicates over variables in $X$. A new set $\bar{P} = \{p_1, \ldots, p_m\}$ of Boolean state variables is added for the predicates such that $p_i$ is true if and only if $P_i(X)$ evaluates to true. The abstract model (via predicate abstraction) is defined as $\langle T_P, I_P \rangle$ such that,
$T_P = \exists X, X' \cdot T(X, X') \land \bigwedge_{i=1}^{m} (p_i \leftrightarrow P_i(X)) \land (p_i' \leftrightarrow P_i(X'))$
$I_P = \exists X \cdot I(X) \land \bigwedge_{i=1}^{m} (p_i \leftrightarrow P_i(X))$
[0027] The mapping from $T$ to $T_P$, or predicate image computation, is expensive. Most existing tools developed for hardware verification use either BDDs or a SAT solver to compute the predicate image. For instance, one can build a Boolean formula for $T(X, X') \land \bigwedge_{i=1}^{m} (p_i \leftrightarrow P_i(X)) \land (p_i' \leftrightarrow P_i(X'))$ as the input to a SAT solver; $T_P(\bar{P}, \bar{P}')$ is obtained by enumerating all the satisfying solutions of the formula in terms of the variables in $\bar{P}$ and $\bar{P}'$.
[0028] In the worst case, the number of satisfying assignments in $T_P$ is exponential in the number of predicates. Abstraction computation may become intractable when the number of predicates is large. In such cases, one has to resort to a less precise abstract transition relation $\hat{T}_P$ (such that $T_P \subseteq \hat{T}_P$). In Cartesian abstraction, for instance, the set $\bar{P}$ is partitioned into smaller subsets, predicate images are computed separately for each individual subset, and the resulting relations are conjoined together to obtain $\hat{T}_P$.
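The SAT-based enumeration of paragraph [0027] can be mimicked on a toy model by brute force over a small finite domain. This is a sketch under stated assumptions: the model, domain, and predicate below are illustrative, and a real tool would enumerate SAT solutions rather than concrete states.

```python
from itertools import product

def predicate_image(T, states, preds):
    """T_P as a set of abstract transitions: project every concrete transition
    (s, s') onto the predicate values (p_i <-> P_i(X), p_i' <-> P_i(X'))."""
    T_P = set()
    for s, s2 in product(states, repeat=2):
        if T(s, s2):
            T_P.add((tuple(P(s) for P in preds),
                     tuple(P(s2) for P in preds)))
    return T_P

# Toy model: x counts 0, 1, 2, 3, 0, ... over dom(x) = {0, 1, 2, 3}.
states = [{'x': v} for v in range(4)]
T = lambda s, s2: s2['x'] == (s['x'] + 1) % 4
preds = [lambda s: s['x'] == 0]    # the single predicate P_1: (x = 0)
image = predicate_image(T, states, preds)
```

With the single predicate (x = 0), the abstract model records that a (x = 0) state always leaves, a (x != 0) state may stay or may return, and no transition keeps (x = 0) true.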
[0029] Next, the cost of Abstractions is discussed. The conciseness of abstraction in terms of the number of Boolean state variables in the abstract model is evaluated. In model checking, the state space is exponential in the number of state variables, making the number of state variables an effective indicator of the hardness of model checking.
[0030] Turning now to the cost of a predicate: in variable hiding abstraction, a visible variable $x_i \in X$, with domain $\text{dom}(x_i)$, has a cost equal to $\log_2 |\text{dom}(x_i)|$, where $|\text{dom}(x_i)|$ is the cardinality of the set. That is, if binary encoding is used for $x_i$ in the concrete model, $\log_2 |\text{dom}(x_i)|$ is the number of bits for encoding $x_i$. The cost of an invisible variable is 0. In predicate abstraction, since all variables in $\bar{P}$ are in the Boolean domain, the cost of each $p_i \in \bar{P}$, or of each corresponding predicate $P_i(X)$, is 1. To facilitate the comparison of predicate abstraction with variable hiding, the cost of $P_i$ (which is 1) is distributed evenly to the concrete state variables in $P_i(X)$ as follows: if there are $l$ supporting $X$ variables appearing in the expression $P_i(X)$, the predicate adds a cost of $1/l$ to each of these variables. When there are visible variables, the cost of a predicate is distributed evenly to its supporting invisible variables only. If all the variables appearing in $P_i(X)$ are already made visible, then the predicate is redundant, since adding it will not improve the accuracy of the abstraction.
**EXAMPLE 1**
[0031] The predicate $P_1 \equiv (u+v=10)$ adds ½ each to the costs of $u$ and $v$; the predicate $P_2 \equiv (u+2v=3w)$ adds ⅓ each to the costs of $u$, $v$, and $w$.
**EXAMPLE 2**
[0032] When $u$ is a visible variable, the predicate $P_1 \equiv (u+v=10)$ adds 1 to the cost of $v$; the predicate $P_2 \equiv (u+2v=3w)$ adds ½ each to the costs of $v$ and $w$; and $P_3 \equiv (u=0)$ is redundant.
[0033] The total cost distributed to a concrete state variable $x_i \in X$ by predicates, denoted $\text{cost}_P(x_i)$, is the sum of the costs incurred by all the predicates in which $x_i$ appears. Recall that in variable hiding, the cost of $x_i \in X$ is $\log_2 |\text{dom}(x_i)|$ when it is visible. Therefore, if $\text{cost}_P(x_i) > \log_2 |\text{dom}(x_i)|$, then predicate abstraction is considered to be less concise, since making $x_i$ visible requires fewer Boolean state variables than representing the predicates. On the other hand, if $\text{cost}_P(x_i) < \log_2 |\text{dom}(x_i)|$, then predicate abstraction is considered to be more concise.
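The bookkeeping of paragraphs [0030]-[0033] can be sketched directly. The helper names, domains, and predicate support sets below are toy values chosen for illustration.

```python
import math

def predicate_costs(pred_supports, dom, visible=frozenset()):
    """Distribute each predicate's unit cost evenly over its supporting
    invisible variables, as in paragraph [0030]."""
    cost = {x: 0.0 for x in dom}
    for support in pred_supports:
        inv = [x for x in support if x not in visible]
        for x in inv:               # a fully visible predicate adds nothing
            cost[x] += 1.0 / len(inv)
    return cost

def prefers_visibility(x, cost, dom):
    """True when cost_P(x) > log2 |dom(x)|, i.e., making x visible is cheaper."""
    return cost[x] > math.log2(len(dom[x]))

dom = {'u': range(2), 'v': range(16)}
supports = [{'u', 'v'}, {'u', 'v'}, {'u', 'v'}]   # three predicates over u, v
cost = predicate_costs(supports, dom)
```

Here three two-variable predicates give each of `u` and `v` a cost of 1.5; that exceeds the one bit needed for the Boolean-like `u` but not the four bits needed for the 16-valued `v`, so only `u` is a candidate for visibility.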
[0034] Turning now to the cost of a Visible Variable, variable hiding can be viewed as a special case of predicate abstraction, wherein all possible valuations of a visible variable are provided as predicates.
[0035] In predicate abstraction, $T_P(\bar{P}, \bar{P}')$ is defined in the abstract state space; however, it can be mapped back to the original state space as follows,
$T_P^c(Y, Y') = \exists (\bar{P}, \bar{P}') \cdot T_P(\bar{P}, \bar{P}') \land \bigwedge_{i=1}^{m} (p_i \leftrightarrow P_i(Y)) \land (p_i' \leftrightarrow P_i(Y'))$
[0036] Here $Y$ and $Y'$ are used to represent the same sets of state variables as $X$ and $X'$. According to the mapping from $T(X, X')$ to $T_P(\bar{P}, \bar{P}')$, $T_P^c(Y, Y')$ equals
$\exists (X, X') \cdot T(X, X') \land \bigwedge_{i=1}^{m} (P_i(X) \leftrightarrow P_i(Y)) \land (P_i(X') \leftrightarrow P_i(Y'))$
This equation is interpreted as follows: in order to allow all the visible variables in \( T(X, X') \) to be preserved while existentially quantifying invisible variables, one can define a set of new predicates for each visible \( x \in X \) as follows: let \( \text{dom}(x) = \{d_1, \ldots, d_k\} \); the set of predicates is \( \{x = d_1, x = d_2, \ldots, x = d_k\} \).
However, preserving a visible variable \( x \) using these predicates may be inefficient, since it requires \( |\text{dom}(x)| \) new Boolean state variables, one for each predicate \( (x = d_j) \). In contrast, making \( x \) visible only requires \( \log_2 |\text{dom}(x)| \) Boolean state variables. If all these predicates (representing the valuations of \( x \) and \( x' \)) are needed in order to decide the property at hand, then variable hiding provides an exponentially more concise abstraction.
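The count from the preceding paragraph, in code form (a trivial sketch with illustrative helper names): preserving x through one valuation predicate per domain value costs |dom(x)| Boolean variables, whereas making x visible costs only the bits of a binary encoding.

```python
import math

def boolean_vars_as_predicates(dom_size):
    """One predicate (x = d_j), hence one Boolean variable, per domain value."""
    return dom_size

def boolean_vars_as_visible(dom_size):
    """Binary encoding of a visible variable: ceil(log2 |dom(x)|) bits."""
    return math.ceil(math.log2(dom_size))
```

For a 16-valued variable the gap is 16 Boolean variables versus 4, and it widens exponentially with the domain size.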
Next, a hybrid abstraction method is presented that allows visible variables and predicates in the same abstract model. Given a set \( X_V = \{x_1, \ldots, x_v\} \) of visible variables and a set \( \{P_1, \ldots, P_m\} \) of predicates, together with a set \( \bar{P} = \{p_1, \ldots, p_m\} \) of fresh Boolean variables, the new hybrid abstract transition relation \( T_H(X, X', \bar{P}, \bar{P}') \) is defined as follows,
\[
T_H = T_V(X, X') \land T_P(\bar{P}, \bar{P}') \land \bigwedge_{i=1}^{m} p_i \leftrightarrow P_i(X)
\]
The model can be viewed as a parallel composition of two abstract models \( T_V \) and \( T_P \), defined in terms of the \( X \) and \( \bar{P} \) variables, respectively, and connected through the correlation constraints \( p_i \leftrightarrow P_i(X) \). Without loss of generality, for every predicate \( P_i(X) \), at least one of its supporting variables is invisible. (If all supporting \( X \) variables in \( P_i \) are visible, the redundant predicate \( P_i \) is removed.)
Since adding the correlation constraints (the third conjunct in the above formula) can make model checking significantly more expensive (due to a large BDD for \( T_H \)), a less precise abstraction can be used,
\[
\hat{T}_H = T_V(X, X') \land \hat{T}_P(\bar{P}, \bar{P}')
\]
Note that in addition to removing the correlation constraint between \( X \) and \( \bar{P} \), \( T_P \) is replaced by \( \hat{T}_P \) (Cartesian abstraction); this removes the potential correlation among \( \bar{P} \) variables. The advantage of using \( \hat{T}_P \) is that it is cheaper to compute. The syntactic cone partitioning method can be used to enumerate the elementary transition relation of each predicate separately. That is, each next-state predicate variable \( p_j' \) is clustered with all the current-state predicate variables \( p_i \) such that the supporting \( X \) variables of \( P_i(X) \) affect the next-state values of the supporting \( X \) variables of \( P_j(X) \). If the correlation among some \( \bar{P} \) variables is missing because of this Cartesian abstraction, it will be added back if needed during refinement.
The loss of both kinds of correlation constraints can cause spurious transitions to appear in the abstract model. An abstract transition \( (s, s') \), where \( s \) and \( s' \) are valuations of the variables in \( (X_V, \bar{P}) \) and \( (X_V', \bar{P}') \), respectively, is spurious if no concrete transition exists between \( s \) and \( s' \). There are two possible reasons for a spurious counterexample to appear: (1) there are spurious transitions because the abstraction computation in \( \hat{T}_H \) is not precise; and (2) there are spurious counterexample segments because the sets of predicates and visible state variables are not sufficient. Note that a counterexample segment may be spurious even if none of its transitions is spurious. During refinement, spurious transitions are removed by identifying some needed constraints over the variables in \( X, X', \bar{P}, \bar{P}' \) and conjoining them with \( \hat{T}_H \).
Refinement for spurious transitions will be discussed next. For a spurious transition \( (s, s') \), there is no concrete transition between \( s \) and \( s' \), but \( \hat{T}_H(s, s') \) holds. Let the abstract state \( s \) be a valuation of the variables in \( (X_V, \bar{P}) \) and \( s' \) be a valuation of the variables in \( (X_V', \bar{P}') \); then \( (s, s') \) is spurious if the formula \( R(X, \bar{P}, X', \bar{P}') \), defined below, is not satisfiable.
\[
\begin{aligned}
R = \; & T(X, X') \land \bigwedge_{x_i \in X_V} (x_i = s[x_i]) \land (x_i' = s'[x_i]) \\
& \land \bigwedge_{i=1}^{m} (p_i \leftrightarrow P_i(X)) \land (p_i' \leftrightarrow P_i(X')) \\
& \land \bigwedge_{i=1}^{m} (p_i = s[p_i]) \land (p_i' = s'[p_i])
\end{aligned}
\]
A Boolean formula \( R \) is built for each abstract transition in the given counterexample, and a SAT solver is used to check its satisfiability. If the formula is not satisfiable, then the transition is spurious.
Removing the spurious transition requires the addition of a constraint \( r(X, \bar{P}, X', \bar{P}') \), i.e., conjoining \( \hat{T}_H \) with \( r \). The additional constraint \( r \) is defined as follows,
\[
r = \neg \left( \bigwedge_{x_i \in X_V} (x_i = s[x_i]) \land (x_i' = s'[x_i]) \land \bigwedge_{i=1}^{m} (p_i = s[p_i]) \land (p_i' = s'[p_i]) \right)
\]
The constraint \( r \) can be strengthened by dropping the equality constraints on some irrelevant \( X \) and \( \bar{P} \) variables. The irrelevant variables can be determined by analyzing the UNSAT core reported by the SAT solver. An UNSAT core of a Boolean formula is a subset of the formula that is sufficient for proving the unsatisfiability. If certain subformulas in \( R \), such as \( (x_i = s[x_i]) \) or \( (p_i \leftrightarrow P_i(X)) \), do not appear in the UNSAT core, then the corresponding equality constraints can be dropped from \( r \). The strengthened version of \( r \) is guaranteed to remove the spurious transition at hand.
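The core-driven strengthening can be mimicked with a deletion-based core over a toy finite-domain formula. This is a sketch under stated assumptions: a real implementation would take the core directly from the SAT solver rather than recompute satisfiability by enumeration, and all names and constraints here are illustrative.

```python
from itertools import product

def satisfiable(constraints, doms):
    """Brute-force satisfiability over small finite domains."""
    names = list(doms)
    for vals in product(*(doms[n] for n in names)):
        s = dict(zip(names, vals))
        if all(c(s) for c in constraints):
            return True
    return False

def unsat_core(constraints, doms):
    """Deletion-based core: drop every constraint whose removal keeps the
    remaining set unsatisfiable (mirrors dropping equalities not in the core)."""
    kept = list(constraints)
    for cand in constraints:
        trial = [c for c in kept if c is not cand]
        if len(trial) < len(kept) and not satisfiable(trial, doms):
            kept = trial
    return kept

doms = {'x': range(4), 'y': range(4)}
cs = [lambda s: s['x'] > 2,        # forces x = 3
      lambda s: s['x'] < 1,        # forces x = 0: the conflict
      lambda s: s['y'] == 2]       # irrelevant to the conflict
core = unsat_core(cs, doms)
```

The constraint on `y` drops out of the core, just as equalities absent from the solver's UNSAT core are dropped from the strengthened constraint r.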
If there is no spurious transition in a spurious counterexample, more predicates or visible variables are needed to refine the abstract model. Let \( (X^j, \bar{P}^j) \) be the copy of \( (X, \bar{P}) \) at the \( j \)-th time frame. If the counterexample \( s_0, \ldots, s_l \) is spurious, the following formula is unsatisfiable,
\[
\bigwedge_{j=0}^{l-1} R(X^j, \bar{P}^j, X^{j+1}, \bar{P}^{j+1})
\]
Note that each \( R \) is satisfiable by itself. The spurious counterexample can be removed by using a weakest precondition (WP) based refinement method. Since the weakest precondition computation relies on the underlying representation of the concrete model, the refinement is discussed in more detail after the discussion of the hybrid CEGAR procedure.
The hybrid CEGAR procedure is presented based on models represented as Control and Data Flow Graphs (CDFGs). Intuitively, the CDFG representation allows a separation between control and data state, such that control states are represented explicitly in terms of basic blocks (with guarded transitions between blocks) and data states are represented implicitly in terms of symbolic data variables (with assignments that update data state). This provides a natural representation for software programs, where control states
correspond to control locations of the program and data states correspond to values of program variables. For hardware models, Verilog is used as a representative HDL, and it is described how to obtain CDFGs from word-level Verilog designs; this translation has certain features that impact the proposed abstraction and refinement techniques.
[0051] The CDFG is a concrete model, serving as input to the hybrid CEGAR procedure. The hybrid abstract model is computed directly from the CDFG model, with respect to a set X of visible variables and a set P={P_1, . . . , P_k} of predicates.
[0052] In transforming Verilog Designs into CDFGs, the Verilog design is transformed through rewriting to a functionally equivalent reactive program. This reactive program is formally represented as a control and data flow graph (CDFG).
**DEFINITION 3**
[0053] A control and data flow graph (CDFG) is a 5-tuple $\langle B, E, V, \delta, \theta \rangle$ such that
- $B = \{b_1, \ldots, b_n\}$ is a finite set of basic blocks, where $b_1$ is the entry basic block.
- $E \subseteq B \times B$ is a set of edges representing transitions between basic blocks.
- $V$ is a finite set of variables that consists of actual variables in the design and auxiliary variables added for modeling the synchronous semantics of the hardware description.
- $\delta: B \to 2^{A}$ is a labeling function that labels each basic block with a set of parallel assignments, where $A$ is the set of possible assignments.
- $\theta: E \to C$ is a labeling function that labels each edge with a conditional expression, where $C$ is the set of possible conditional expressions.
**EXAMPLE 4**
[0059] The Verilog example in FIG. 3 computes Fibonacci numbers; the equivalent CDFG is on the right. To maintain the synchronous semantics, the variable a_NS was added to hold the next-state value of the reg type variable a. The loop body corresponds to the execution of the always block exactly once (in a clock cycle). Since a<=b+a is a non-blocking assignment, i.e., a gets the current value of (b+a) at the next cycle (not immediately), when translating the assignment b<=a, a was not substituted by a_NS. Note that if they were the blocking assignments a=b+a and b=a in the Verilog description, they would be translated into a_NS=b+a and b=a_NS.
[0060] In FIG. 3, each rectangle in the CDFG is a basic block and the edges are labeled by conditional expressions. For example, the transition from block 3 to block 4 is guarded by (a<100). Edges not labeled by any condition are assumed to be labeled true. Block 1 is the entry block and block 7 is the error block. Reachability properties in the Verilog model are translated into assertion checks at the beginning of the loop. For example, the assertion (a+b≤200) is translated into if (a+b>200) goto ERROR. The verification problem consists of checking whether the ERROR block is reachable from the entry block. More complex properties (PSL or LTL) can be handled by first synthesizing them into monitors, followed by the Verilog-to-CDFG translation.
[0061] The transformation from Verilog designs to CDFG representations is made easy by introducing the _NS variables. The CDFG is a representation similar to a software program, except that it has a single infinite loop to emulate the reactivity of a hardware model. A clock cycle in the Verilog model corresponds to the execution of the loop body of the CDFG exactly once. Procedural statements from all the always blocks are sequentialized inside the infinite loop. Due to the addition of the extra _NS variables for the reg type variables, e.g., a_NS for a as in FIG. 3, the sequentialization of multiple synchronously running always blocks may take an arbitrary order. In one implementation, an order is selected that minimizes the number of added _NS variables, since such optimizations reduce the size of the concrete model and therefore speed up model checking.
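The Fibonacci CDFG of FIG. 3 can be sketched as a plain Python structure. The block numbering, the (a < 100) guard on the 3 to 4 edge, the contents of blocks 4 through 6, and the single a_NS variable are assumptions pieced together from paragraphs [0059]-[0061], not a verbatim copy of the figure.

```python
# delta: assignments per block (sequentialized, per paragraph [0061]);
# theta: guarded edges. Block b7 is the ERROR block (no outgoing edge).
cdfg = {
    'assigns': {
        'b1': [('a', lambda s: 1), ('b', lambda s: 0)],
        'b5': [('a_NS', lambda s: s['b'] + s['a']), ('b', lambda s: s['a'])],
        'b6': [('a', lambda s: s['a_NS'])],   # commit the next-state value
    },
    'edges': {
        ('b1', 'b2'): lambda s: True,
        ('b2', 'b3'): lambda s: s['a'] + s['b'] <= 200,
        ('b2', 'b7'): lambda s: s['a'] + s['b'] > 200,
        ('b3', 'b5'): lambda s: s['a'] >= 100,
        ('b3', 'b4'): lambda s: s['a'] < 100,
        ('b4', 'b5'): lambda s: True,
        ('b5', 'b6'): lambda s: True,
        ('b6', 'b2'): lambda s: True,
    },
}

def step(block, state):
    """Execute one basic block, then take the unique enabled outgoing edge."""
    for var, expr in cdfg['assigns'].get(block, []):
        state = dict(state, **{var: expr(state)})
    for (src, dst), guard in cdfg['edges'].items():
        if src == block and guard(state):
            return dst, state
    return None, state                         # ERROR block reached
```

Each pass around the loop body plays the role of one clock cycle: a_NS captures the non-blocking update of a, b reads the old value of a, and block b6 commits a_NS into a.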
[0062] The CDFG model is chosen in order to directly apply weakest-precondition based predicate abstraction refinement algorithms that have been developed for software programs. In the traditional synchronous model, these WP-based refinement algorithms are not directly applicable. Note that a synchronous model for Verilog designs is equivalent to summarizing all the statements in the loop body of the CDFG model, and creating a single basic block with a self loop, and with a set of parallel running assignments, one for each variable. Such a synchronous circuit model (as opposed to a reactive program) can be used, where significant modifications have to be made to the WP-based refinement algorithm. Even with these modifications, one has to simultaneously consider all possible branches inside each clock cycle, making the WP computation likely to blow up. In contrast, in the CDFG representation, the weakest precondition computation can be localized to a single execution path, therefore offering the possibility of creating abstraction at a finer granularity. Abstraction computation is also faster in the reactive model since SAT enumeration can be applied to assignments in each individual block of the CDFG, as opposed to all assignments of the single block of a synchronous model simultaneously.
[0063] Next, hybrid abstraction refinement for CDFGs will be discussed. A special state variable $x_1$, the program counter (PC), is added to represent the control locations of the CDFG; the domain of $x_1$ is the set $B$ of basic blocks. The set $X$ of state variables of the model is $\{x_1\} \cup V$. In the sequel, $x_1$ is always the first element of $X$, and therefore $x_1$ and PC are interchangeable. The initial states of the model are $I = (x_1 = b_1)$, i.e., all possible valuations of $V$ in the entry block $b_1$. If the error block is $b_E \in B$, the property to be verified is $\neg(x_1 = b_E)$. The set of parallel assignments in each basic block $b_j \in B$, denoted $\delta(b_j)$, is written as $x_i' = e_{i,j}$, where $e_{i,j}$ is the expression assigned to $x_i$ in block $b_j$. The guard $c_{j,k} = \theta(b_j, b_k)$ is the edge label from block $b_j$ to block $b_k$; if there is no such edge, $c_{j,k} = \text{false}$.
[0064] In trading predicates for visible variables, with the hybrid abstraction, $x_1$ (the PC) is always visible. Initially, $X_V = \{x_1\}$ and $\bar{P} = \emptyset$, and new predicates can be added using WP-based refinement. At the same time, the system checks to see if it is advantageous to trade some existing predicates for visible variables as follows:
[0065] Add new visible variables: For all $x_i \in X \setminus X_V$, if the total cost distributed to $x_i$ by predicates is larger than $\log_2 |\text{dom}(x_i)|$, $x_i$ is made visible, i.e., the system adds $x_i$ to $X_V$.
[0066] Remove redundant predicates: For a predicate $P_i(X)$ whose supporting $X$ variables are all visible, the system removes the predicate and removes the corresponding $p_i$ from $\bar{P}$.
Modify correlation constraints: For all existing correlation constraints \( r(X, \bar{P}, X', \bar{P}') \), if \( p_i \) and \( p_i' \) are in the support of \( r \) but \( P_i(X) \) has been declared redundant and removed, the system existentially quantifies \( p_i \) and \( p_i' \) from \( r \), i.e., the system uses \( \exists (p_i, p_i') \cdot r(X, \bar{P}, X', \bar{P}') \) instead.
The initial hybrid abstract transition relation is \( \hat{T}_H = T_V \land \hat{T}_P \). Given a set \( X_V \), the system computes \( T_V = \bigwedge_i T_i \) as follows: For \( x_1 \) (the PC variable), \( T_1 \) represents the control flow logic,
\[
T_1 = \bigvee_{j=1}^{n} \bigvee_{k=1}^{n} (x_1 = b_j) \land (x_1' = b_k) \land c_{j,k}
\]
Since invisible variables are treated as free inputs, if \( c_{j,k}(X) \) contains invisible variables, the guard is nondeterministically chosen to be true or false (corresponding to if(*)). For \( x_i \in X_V \) such that \( i \neq 1 \),
\[
T_i = \bigvee_{j=1}^{n} (x_i = b_j) \land (x_i' = e_{i,j})
\]
where \( e_{i,j}(X) \) is the RHS expression assigned to \( x_i \) in block \( b_j \). If there is no explicit assignment to \( x_i \) in \( b_j \), then \( e_{i,j} = x_i \).
Correlations between \( X \) and \( \bar{P} \) variables, as well as correlations among \( \bar{P} \) variables, are added lazily during refinement if spurious transitions occur. Next, the process for removing spurious counterexamples by adding new visible variables and predicates to \( X_V \) and \( \bar{P} \), respectively, will be discussed.
In computing new predicates in CDFGs, the system uses a weakest precondition based refinement algorithm for finding new predicates. Given a spurious counterexample with no spurious transition, the system first identifies a subset of conditional expressions (guards) that are needed to prove the infeasibility of a concrete path. The system focuses on one path in the CDFG, \( \text{blk}_1, \ldots, \text{blk}_m \), determined by the counterexample. In this path, a basic block may appear more than once. The sequence of statements \( \pi \) corresponding to this path consists of two kinds of statements: a basic block \( \text{blk}_k \) corresponds to a set of parallel assignments \( \delta(\text{blk}_k) \), and a transition \( (\text{blk}_k, \text{blk}_{k+1}) \) corresponds to a branching statement \( \text{assume}(c) \), where \( c = \theta(\text{blk}_k, \text{blk}_{k+1}) \).
EXAMPLE 5
A spurious counterexample segment in FIG. 3 corresponds to the sequence of blocks 1, 2, 3, 5, 6, 7. The sequence of program statements is shown below:
<table>
<thead>
<tr>
<th>blocks</th>
<th>transitions</th>
<th>statements</th>
<th>in UNSAT core</th>
</tr>
</thead>
<tbody>
<tr>
<td>b1</td>
<td></td>
<td>a_NS = 1;</td>
<td>yes</td>
</tr>
<tr>
<td></td>
<td></td>
<td>b = c;</td>
<td>yes</td>
</tr>
<tr>
<td>b2</td>
<td>b1 → b2</td>
<td></td>
<td></td>
</tr>
<tr>
<td>b3</td>
<td>b2 → b3</td>
<td>assume(a + b ≤ 200);</td>
<td></td>
</tr>
<tr>
<td>b5</td>
<td>b3 → b5</td>
<td>assume(a ≥ 100);</td>
<td></td>
</tr>
<tr>
<td>b6</td>
<td>b5 → b6</td>
<td>b = c;</td>
<td></td>
</tr>
</tbody>
</table>
A SAT solver checks the feasibility of a counterexample segment, where an unsatisfiable formula indicates that the counterexample is spurious. For each guard \( c = \theta(\text{blk}_k, \text{blk}_{k+1}) \), the system checks whether \( c \) appears in the UNSAT core. If it appears in the UNSAT core, then the guard \( c(X) \) is chosen and its weakest precondition \( WP(\pi, c) \) is computed with respect to the spurious prefix \( \pi \); \( WP(\pi, c) \) is the weakest condition whose truth before the execution of \( \pi \) guarantees the truth of \( c \) after the execution. Let \( \phi[e/v] \) denote the substitution of the expression \( e \) for the variable \( v \) in \( \phi \). \( WP \) is defined as follows: (1) for an assignment \( v = e \), \( WP(v = e, \phi) = \phi[e/v] \); (2) for a conditional statement \( \text{assume}(c) \), \( WP(\text{assume}(c), \phi) = c \land \phi \); (3) for a sequence of statements \( st_1; st_2 \), \( WP(st_1; st_2, \phi) = WP(st_1, WP(st_2, \phi)) \). Refinement corresponds to adding the new predicates appearing in \( WP(\pi, c) \) to the abstract model.
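The WP rules above can be prototyped with textual substitution over Python expression strings. This is a rough sketch: word-boundary regex substitution stands in for proper syntactic substitution, and the path below is an illustrative fragment in the spirit of FIG. 3, not an exact transcription of it.

```python
import re

def subst(phi, var, expr):
    """phi[expr/var] via word-boundary textual substitution (sketch only)."""
    return re.sub(rf'\b{var}\b', f'({expr})', phi)

def wp(stmts, phi):
    """Weakest precondition of phi over a sequence of assign/assume statements,
    applying rule (3): process the sequence from last statement to first."""
    for kind, *args in reversed(stmts):
        if kind == 'assign':                # rule (1): WP(v = e, phi) = phi[e/v]
            v, e = args
            phi = subst(phi, v, e)
        elif kind == 'assume':              # rule (2): WP(assume(c), phi) = c and phi
            (c,) = args
            phi = f'({c}) and ({phi})'
    return phi

# Illustrative path fragment: assume(a >= 100); a_NS = b + a; b = a; a = a_NS
pi = [('assume', 'a >= 100'),
      ('assign', 'a_NS', 'b + a'),
      ('assign', 'b', 'a'),
      ('assign', 'a', 'a_NS')]
pre = wp(pi, 'a + b > 200')
```

Evaluating the resulting precondition string on concrete start states confirms that it holds exactly when executing the fragment makes (a + b > 200) true at the end.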
In this example, suppose that the guard \( (a+b > 200) \) appears in the UNSAT core and \( \pi \) is the sequence of statements in blocks 1, 2, 3, 5, 6, 7. Then \( WP(\pi, a+b > 200) \) provides new predicates, including \( P_1 \equiv (a+b > 200) \) and its weakest preconditions along \( \pi \). Adding these predicates will remove the spurious counterexample, because in block 1 the initial valuation makes \( P_1 \) false, and the correlations among the new predicates make the transition from block 2 to block 7, guarded by \( P_1 \), evaluate to false.
In the method, new predicates are directly added by the refinement algorithm, while visible variables are derived indirectly from the existing set of predicates by trading in predicates. An alternative is to selectively make some of the variables in the UNSAT core visible directly.
The system can also add syntactic constraints cheaply. In \( \hat{T}_H \), the correlation constraints \( p_i \leftrightarrow P_i(X) \) are left out completely, to make the abstraction computation cheaper. Although some of the needed correlation constraints can be lazily added during the refinement of spurious transitions, this process can sometimes be inefficient due to the model checking expense and the number of refinement iterations. Therefore, certain cheaper constraints are added to \( \hat{T}_H \) upfront. The following syntactic rules are used to decide which constraints to add.
(Rule 1) for \( x_1 \) (the PC variable), each conditional expression \( c_{j,k}(X) \) is processed as follows:
1. If \( c_{i,j} \) is a constant (true or false), or all the supporting \( X \) variables of \( c_{i,j}(X) \) are visible, then do not change it;
2. else if \( c_{i,j}(X) \) is syntactically equivalent to a predicate \( P_k(X) \) or its negation, then replace it by \( p_k \) or \( \neg p_k \), respectively;
3. otherwise, replace it with \( (*) \), by adding a fresh primary input indicating a nondeterministic choice.
Note that in the third case, an over-approximation of \( \exists X_{\text{non-PC}} \cdot c_{i,j}(X) \) is used; however, there is no approximation in the first two cases.
(Rule 2) for visible non-PC state variables $x_i$,

$$T_i = \bigvee_{j=1}^{n} (PC = b_j) \land (x'_i = e_{ij})$$
$e_{ij}(X)$, the expression assigned to $x_i$ in block $j$ is not approximated. The system uses $e_{ij}$ as is, even if there are invisible variables in its support—these invisible variables become pseudo-primary inputs.
(Rule 3) for $p_i \in P$ (predicate variables), $T_{p_i}$ is the elementary transition relation of $p_i$. The computation of $T_{p_i}$ is localized to the computation of $T_{p_{ij}}$ in each basic block $j$ (similar to $x'_i = e_{ij}$ for computing $T_i$).
$$T_{p_i} = \bigvee_{j=1}^{n} (PC = b_j) \land T_{p_{ij}}$$
where $T_{p_{ij}}$ encodes $p'_i \leftrightarrow \text{WP}_j(P_i)$, with $\text{WP}_j(P_i)$ denoting the weakest precondition of $P_i(X)$ with respect to the assignments in block $j$. Since the existential quantification ($\exists X_{\text{non-PC}}$) needed to eliminate the remaining variables is expensive, the system computes $T_{p_{ij}}$ as follows:
if $\text{WP}_j(P_i)$ is a constant (true or false), or all the supporting $X$ variables in the expression of $\text{WP}_j(P_i)$ are already visible, then $T_{p_{ij}} = (p'_i \leftrightarrow \text{WP}_j(P_i))$;
else if $\text{WP}_j(P_i)$ is syntactically equivalent to another predicate $P_k(X)$ or its negation, then $T_{p_{ij}} = (p'_i \leftrightarrow p_k)$ or $T_{p_{ij}} = (p'_i \leftrightarrow \neg p_k)$, respectively;
else if enumerating the solutions of $p'_i$ and the $P$ variables for $p'_i \leftrightarrow \text{WP}_j(P_i)$ is feasible, the enumeration result is used instead; the result represents a relation over $p'_i$ and $P$;
otherwise, let $T_{p_{ij}} = (p'_i = (*))$, by adding a fresh primary input to indicate a nondeterministic choice.
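The four-way case analysis above (and, analogously, the processing of conditional expressions) can be sketched as follows. The tuple encoding, helper names, and `'STAR'` marker for the fresh nondeterministic input are all illustrative assumptions, not the system's implementation:

```python
# Sketch of the case analysis for encoding a predicate's elementary transition
# relation: keep exact encodings when cheap, enumerate when feasible, otherwise
# over-approximate with a fresh free input 'STAR'.

def support(expr):
    """Variable names occurring in a tuple-encoded expression."""
    if isinstance(expr, str):
        return {expr}
    if isinstance(expr, tuple):
        return set().union(*[support(a) for a in expr[1:]])
    return set()

def encode_tp(p_next, wp_expr, visible_vars, predicates, try_enumerate):
    # Case 1: constant, or entirely over visible variables -> exact.
    if isinstance(wp_expr, bool) or support(wp_expr) <= visible_vars:
        return ('<->', p_next, wp_expr)
    # Case 2: syntactically a known predicate (or its negation) -> exact.
    for name, p in predicates.items():
        if wp_expr == p:
            return ('<->', p_next, name)
        if wp_expr == ('not', p):
            return ('<->', p_next, ('not', name))
    # Case 3: small enough to enumerate -> use the enumerated relation.
    rel = try_enumerate(p_next, wp_expr)
    if rel is not None:
        return rel
    # Case 4: over-approximate with a fresh nondeterministic input.
    return ('=', p_next, 'STAR')
```

Only case 4 loses precision; the other three keep the abstract transition exact with respect to the concrete one.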
These heuristics are optional in that they do not affect the completeness of the overall CEGAR procedure. However, in practice they are very effective in reducing spurious transitions, and hence in avoiding the associated costs of model checking and a large number of refinement iterations.
Additional heuristics can be used to improve the hybrid CEGAR procedure. These are based on a static identification of candidate variables to make visible quickly, and a lazy constraint technique to improve the quality of the unsatisfiable cores used for the purpose of refinement.
In static identification of Visible Variables, before the CEGAR loop starts, a simple static analysis is done on the CDFG to heuristically compute a small set of promising candidate visible variables, i.e., variables that are likely to be made visible during the refinement process. In particular, the following heuristic is used: for a state variable $v$, if (1) the next-state value of $v$ is determined by some arithmetic expression over the current-state value of $v$, and (2) the variable $v$ appears in some conditional expression guarding an error block, then $v$ is a promising candidate visible variable.
However, these precomputed candidates are not added as visible variables upfront, since static analysis alone is not a good indicator that these variables are needed to verify the property at hand. Instead, during refinement, if a candidate variable $v$ appears in the support of a predicate $P_i(X)$ in the UNSAT core, then $v$ is added as a visible variable even if its accumulated cost $c_{\text{acc}}(v)$ is not yet large enough.
For these precomputed candidates, the system bypasses the step of first generating new predicates based on WP-based analysis. This is because, in the subsequent refinement iterations, a large number of new predicates (corresponding to the repeated WP of $P_i$) are likely to be needed, due to the nature of $v$'s transition function. In Fig. 4, for instance, if the predicate ($v < 1024$) is in the UNSAT core, the subsequent refinements will add ($v + x < 1024$), ($v + 2x < 1024$), . . . as predicates, which is precisely the situation to avoid. In the hybrid abstraction, it is avoided by adding $v$ as a visible variable immediately after the addition of the new predicate ($v < 1024$).
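The candidate heuristic and its use during refinement can be sketched as follows. The data-structure names (`next_state_deps`, `error_guard_vars`, and so on) are illustrative assumptions, not the tool's actual interfaces:

```python
# Sketch of the static candidate heuristic and its use during refinement.
# next_state_deps maps each state variable to the set of variables appearing in
# its next-state expression; error_guard_vars is the set of variables appearing
# in conditions guarding an error block.

def static_candidates(next_state_deps, error_guard_vars):
    """A variable is a candidate if its next-state value depends on its own
       current value (e.g., v' = v + x) and it guards an error block."""
    return {v for v, deps in next_state_deps.items()
            if v in deps and v in error_guard_vars}

def promote_candidates(unsat_core_support, candidates, visible):
    """During refinement, promote any candidate appearing in the support of an
       UNSAT-core predicate to visible, bypassing the accumulated-cost check."""
    visible |= (unsat_core_support & candidates)
    return visible

# v' = v + x and v guards an error block, so v is a candidate; w is not.
deps = {'v': {'v', 'x'}, 'w': {'x'}}
cands = static_candidates(deps, {'v'})
print(promote_candidates({'v', 'x'}, cands, set()))   # -> {'v'}
```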
Next, Lazy Constraints in UNSAT Core will be discussed. An UNSAT core derived by the SAT solver can be used for refinement, both for spurious transitions (by identifying correlation constraints in the UNSAT core) and for spurious segments (by identifying the conditional expressions in the UNSAT core). There are often multiple UNSAT cores for the same unsatisfiable problem, and the SAT solver by default may not generate an UNSAT core that is better for refinement.
Consider the example in Fig. 5, where a spurious counterexample is shown on the left. Imagine, for instance, that lines 4 and 8 have complex loop bodies guarded by the conditions in lines 3 and 7, respectively, and that the loop bodies contain $i = i + 1$ and $j = j + 1$. For this spurious counterexample, there are, among others, the following UNSAT cores:
- Lines 1, 2 and 3,
- Lines 5, 6 and 7,
- Lines 1, 2, 5, 6, 9 and 10, and
- Lines 3, 7 and 10.
Although any of these UNSAT cores can be used to remove the spurious counterexample, the last one is better since it immediately proves that ERROR is not reachable, as shown on the right of Fig. 5. The weakest precondition of $P(k < A + B)$ is a condition that is implied when both $R(i = A)$ and $S(j = B)$ are true.
Modern SAT solvers are likely to report one of the first three UNSAT cores, due to the eager unit clause propagation used during pre-processing to handle the assignments to constants (lines 1, 2, 5, and 6). In this example, WP computation has to consider the (potentially complex) loop bodies. For instance, if the loops contain $i = i + 1$ and $j = j + 1$, then using the first UNSAT core will result in 1024 predicates.
This situation is avoided by formulating the satisfiability problem in a slightly different way. For each assignment statement of the form $st_i: v \leftarrow \text{const}$ in the spurious counterexample, the constraint in the corresponding SAT problem is $(v^i = \text{const})$. The system changes this constraint to:
$$(v^i = \text{const} \lor q) \land (v^i = \text{const} \lor \neg q)$$
where $q$ is a fresh Boolean variable. Note that the new constraint implies $(v^i = \text{const})$. However, the presence of the extra variable $q$ prevents the SAT solver from eagerly propagating the unit clauses due to $(v^i = \text{const})$ during preprocessing. This reduces the chances of such constant assignments appearing in the UNSAT core reported by the SAT solver. Therefore, although this approach does not guarantee
that the UNSAT core generated by the SAT solver provides the best refinement solution, it can significantly increase the chance of getting one.
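The clause transformation can be sketched at the CNF level, assuming DIMACS-style integer literals; the function name and interface are illustrative, not a real SAT solver API:

```python
# Sketch of the lazy-constraint encoding at the CNF level. Each targeted unit
# clause (lit), asserting v^i = const, is replaced by (lit OR q) and
# (lit OR NOT q) with a fresh variable q. The pair still implies (lit), but
# blocks eager unit propagation during SAT preprocessing.

def lazify(clauses, lazy_lits, next_var):
    """Return the transformed clause list plus the next unused variable index."""
    out = []
    for clause in clauses:
        if len(clause) == 1 and clause[0] in lazy_lits:
            q = next_var
            next_var += 1
            out.append([clause[0], q])    # (v^i = const) OR q
            out.append([clause[0], -q])   # (v^i = const) OR NOT q
        else:
            out.append(clause)
    return out, next_var

# Example: literal 3 encodes an assignment-to-constant; variables 1..3 are used.
print(lazify([[3], [1, 2]], {3}, 4))   # -> ([[3, 4], [3, -4], [1, 2]], 5)
```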
**[0108]** This approach is similar to the lazy constraint method, where it was shown to be effective for finding good variable (latch) hiding abstractions. Here, it is applied in the context of predicate abstraction. Furthermore, the lazy constraints were previously applied at the bit-level, for modeling only the initial state values of latches; in contrast, here they are applied at the word-level, to assignment statements appearing anywhere in the high-level description of the design or program. Another difference to note is that lazy constraints have been used for proof-based abstraction. In that setting, the use of lazy constraints can sometimes be expensive, especially on large problems corresponding to large code bases. In the present setting, lazy constraints are used only during refinement, where the problem of checking the feasibility of an abstract counterexample is significantly smaller.
**[0109]** Experiments will be discussed next. The hybrid CEGAR procedure can be used for models represented as CDFGs. The proposed techniques are evaluated by comparing hybrid abstraction with the two existing abstraction methods—variable hiding and predicate abstraction—in the same CEGAR procedure. For the purpose of controlled experiments, the model checking algorithms are kept the same; both predicate abstraction and hybrid abstraction use the same weakest precondition based refinement algorithm to find new predicates, and variable hiding uses an UNSAT core based refinement algorithm to identify new visible variables. In the implementation, CUDD is used for BDD operations and a circuit SAT solver for SAT related operations. The experiments were conducted on a workstation with a 3 GHz Pentium 4 and 2 GB of RAM running Red Hat Linux.
**[0110]** A public Verilog front-end tool (called Icarus Verilog) is used to translate Verilog designs into functionally equivalent CDFGs. The benchmarks include the VIS Verilog benchmarks. All examples are available in the public domain. For these examples, invariant properties, which are expressed as reachability of an error block, are checked. Among the test cases, AR is an example computing the Fibonacci numbers (the parameterized bit-width is set to 32, although in the original versions, the bit-vectors have sizes of 500, 1000, and 2000 in all arithmetic operations);
**TABLE 1 (Test Cases)**

| name |
| --- |
| AR |
| pj_icram |
| pj_icu |
| sdlx |
| tloop |
| arbiter |
| itc99-a |
| itc99-b |
| itc99-c |
| itc99-d |
**[0111]** The first three columns of Table 1 provide statistics on the examples: the first column shows the names of the designs; the second column shows the numbers of binary state variables (or registers) in the cone of influence, and the third column indicates whether the property is true. The next three columns compare the CPU time of the CEGAR procedure with different abstraction methods—varhide denotes variable hiding, predabs denotes predicate abstraction, and hybrid denotes the hybrid abstraction. The next three columns compare the number of iterations of the CEGAR procedure needed to prove the properties. The next three columns compare the final abstract models in terms of (Vars/Preds), i.e., the number of visible variables and the number of predicates, respectively. (Here a final abstract model is a model on which the property can be decided.) The last three columns show the results for the VCEGAR tool—the CPU time, the number of iterations, and the size of the final abstract models. All the experiments used the latest binary of VCEGAR (version 1.1).
**[0112]** Overall, the hybrid abstraction makes the CEGAR procedure more robust. The performance of hybrid consistently matched the better of the two existing methods, varhide and predabs. For half of the examples, hybrid obtained the best runtime performance among the three. This may be due to the hybrid model being more concise than either of the two extremes. It is interesting to note that even though the currently implemented refinement approach is slightly biased toward predicates (converting predicates to visible variables, and not vice versa), the final abstract model in all examples included a non-trivial number of visible variables (other than the PC variable). Note also that the implementation of pure predicate abstraction has a runtime performance comparable to VCEGAR, although it computes abstractions at a significantly finer granularity.
**[0113]** More specifically, note that predicate abstraction timed out on the arbiter example, since a large number of predicates of the form $(i+\text{counter} < -127)$ for $i=1, 2, \ldots$ is required (exponential in the bit-width of the variable counter).
The itc99-d example is also hard for pure predicate abstraction, since it has a very long counterexample and requires a large number of predicates. Pure variable hiding abstraction worked well on these two examples, because it is able to localize the property to a small subset of variables (the final abstract model for arbiter, including the variable counter, has 30 Boolean state variables). The hybrid abstraction uses the same WP-based refinement algorithm as in predicate abstraction, but achieved a runtime performance and final sizes similar to variable hiding.
[0114] On the other hand, pure variable hiding was the slowest on the AR example, since it added all the variables of the model to prove the property (the final abstraction has 96 Boolean state variables). In contrast, both predicate abstraction and hybrid abstraction produced much smaller final abstract models (with only 6 Boolean state variables). Variable hiding also timed out on the tloop example, which has a CDFG structure similar to the one in FIG. 5; variable hiding is inefficient for this example because the abstract model contains several complex arithmetic operations (large adders). The implementations of both predicate abstraction and hybrid abstraction completed this example. VCEGAR did not complete the tloop example because its refinement is based on the standard UNSAT core reported by zChaff, which results in the addition of a large number of predicates. In contrast, the lazy constraint heuristic was used during refinement to obtain a more useful UNSAT core. This allowed a simpler abstract model to be built and therefore this example to be completed quickly.
[0115] In sum, the hybrid abstraction method combines variable hiding with predicate abstraction in the same counterexample guided abstraction refinement loop. Refinements based on weakest preconditions to add new predicates can be used, and under certain conditions trade in the predicates for visible variables in the abstract model. Heuristics for improving the overall performance can be based on static analysis to identify useful candidates for visible variables, and lazy constraints can be used to find more effective refinement. The experiments show that hybrid abstraction frequently outperforms the existing abstraction methods—it makes the CEGAR procedure more robust. Other static analysis techniques can be used to speed up the abstraction computation and to help computing better refinements.
[0116] The invention may be implemented in hardware, firmware or software, or a combination of the three. Preferably the invention is implemented in a computer program executed on a programmable computer having a processor, a data storage system, volatile and non-volatile memory and/or storage elements, at least one input device and at least one output device.
[0117] By way of example, a block diagram of a computer to support the system is discussed next. The computer preferably includes a processor, random access memory (RAM), a program memory (preferably a writable read-only memory (ROM) such as a flash ROM) and an input/output (I/O) controller coupled by a CPU bus. The computer may optionally include a hard drive controller which is coupled to a hard disk and CPU bus. Hard disk may be used for storing application programs, such as the present invention, and data. Alternatively, application programs may be stored in RAM or ROM. I/O controller is coupled by means of an I/O bus to an I/O interface. I/O interface receives and transmits data in analog or digital form over communication links such as a serial link, local area network, wireless link, and parallel link. Optionally, a display, a keyboard and a pointing device (mouse) may also be connected to I/O bus. Alternatively, separate connections (separate buses) may be used for I/O interface, display, keyboard and pointing device. Programmable processing system may be preprogrammed or it may be programmed (and reprogrammed) by downloading a program from another source (e.g., a floppy disk, CD-ROM, or another computer).
[0118] Each computer program is tangibly stored in a machine-readable storage media or device (e.g., program memory or magnetic disk) readable by a general or special purpose programmable computer, for configuring and controlling operation of a computer when the storage media or device is read by the computer to perform the procedures described herein. The inventive system may also be considered to be embodied in a computer-readable storage medium, configured with a computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner to perform the functions described herein.
[0119] The invention has been described herein in considerable detail in order to comply with the patent Statutes and to provide those skilled in the art with the information needed to apply the novel principles and to construct and use such specialized components as are required. However, it is to be understood that the invention can be carried out by specifically different equipment and devices, and that various modifications, both as to the equipment details and operating procedures, can be accomplished without departing from the scope of the invention itself.
What is claimed is:
1. A method for verifying the correctness of a design, comprising:
a. transforming the design into a Control and Data Flow Graph (CDFG);
b. generating a hybrid abstract model; and
c. checking the correctness of the hybrid abstract model.
2. The method of claim 1, wherein the hybrid abstract model is generated by an abstraction through variable hiding and predicate abstraction.
3. The method of claim 2, wherein the abstraction allows visible state variables and predicates in the hybrid abstract model.
4. The method of claim 1, wherein abstraction is comprised of precomputing one or more visible variables.
5. The method of claim 1, comprising applying one or more syntactic rules to efficiently build the hybrid abstract model.
6. The method of claim 1, wherein checking the correctness comprises applying a counterexample guided abstraction refinement.
7. The method of claim 6, comprising
a. performing an initial abstraction of the design;
b. determining one or more counterexamples for the hybrid abstract model;
c. performing concretization of the counterexample to check whether the counterexample is spurious;
d. refining the hybrid abstract model based on the spurious counterexample, and
e. repeating steps b-d, until there are no more counterexamples or a true counterexample is found.
8. The method of claim 6, comprising evaluating the cost of the hybrid abstract model.
9. The method of claim 6, comprising deciding, during refinement, when to trade new predicates for visible variables.
10. The method of claim 6, comprising applying an UNSAT core based refinement algorithm to add one or more correlation constraints on-demand to remove spurious transitions.
11. The method of claim 6, comprising applying a set of syntactic level rules to add correlation constraints.
12. The method of claim 11, wherein the correlation constraints comprise constraints between visible variables and predicates, and among predicates.
13. The method of claim 6, comprising
a. using one or more word-level lazy constraints in the hybrid abstract model; and
b. using UNSAT core computation to improve the quality of the refinement.
14. The method of claim 1, comprising automatically transforming word-level Verilog designs into functionally equivalent CDFGs.
15. The method of claim 1, comprising generating the CDFG for a word-level hardware (reactive model).
16. The method of claim 1, comprising generating the CDFG for a software program (sequential model).
17. A system to check a design comprising:
a. a converter to transform the design into a Control and Data Flow Graph (CDFG);
b. a module to perform a hybrid abstraction of the design and to generate a hybrid abstract model; and
c. a verifier to check the hybrid abstract model.
18. The system of claim 17, wherein the hybrid abstract model is generated by an abstraction through variable hiding and predicate abstraction.
19. The system of claim 18, wherein the abstraction allows visible state variables and predicates in the hybrid abstract model.
20. The system of claim 17, wherein abstraction is generated by precomputing one or more visible variables.
21. The system of claim 17, wherein the module applies one or more syntactic rules to efficiently build the hybrid abstract model.
22. The system of claim 17, wherein the verifier checks the correctness by applying a counterexample guided abstraction refinement.
* * * * *
Abstract
No matter how hard we try, or how much innovation we throw at a problem, it is extremely difficult to build a system that can respond to every imaginable issue in a manner that will satisfy all users.
This technical report explores the concept of self-healing storage and provides practical solutions for the NetApp® Data Fabric, whether on the premises or in the public cloud.
TABLE OF CONTENTS
1 Introduction to Self-Healing Systems
2 Self-Healing Systems Design
2.1 System-Monitoring Component of Self-Healing Storage
2.2 System Analysis and Alert Management Component of Self-Healing Storage
2.3 Remediation Orchestration Component of Self-Healing Storage
3 Self-Healing Systems Implementation
3.1 Use Case 1: Automatic Capacity Management
3.2 Use Case 2: Automatic Inode Management in High-File-Count (HFC) Environments
3.3 Use Case 3: Automatic Performance Management (Tiering)
Appendix A: Integrating OnCommand Manager Alerts with WFA When Monitoring Multiple Clusters
References
1 Introduction to Self-Healing Systems
No matter how hard we try or how much innovation we throw at a problem, it is extremely difficult (perhaps impossible) to build a system that can respond to every imaginable issue in a manner that will satisfy all users. After all, we are creatures of opinions, and opinions are hard to predict and nearly impossible to automate. Of course, we wouldn't need to deal with such issues if all systems were perfect and never failed, but that is not today's reality. Perhaps it will be tomorrow.
Nowadays, most systems support some level of internal self-healing capability (some more than others). Internal self-healing requires the ability to identify aspects of the system that are performing suboptimally and to remediate those issues. Those processes are usually universal in nature, meaning that they do not depend on the type of workload for which the system is used. They are also usually fairly easy to describe in a consistent, repeatable, and predictable manner. Such internal self-healing capabilities might include RAID reconstruction, scale-out workload load balancing across compute nodes, or recovery from component failure.
The more compelling aspect of a self-healing system revolves around the customer's workload and the need for service levels. If an application is about to run out of space or reach a performance limit, should it be granted additional capacity or IOPS? What if the user paid only for a specific level of service? What if the space or IOPS is available, but allocating either will negatively affect some internal best practice of utilization threshold? If a document repository is rapidly growing to a substantial size, should we consider moving it to a more cost-effective object repository instead? If a system is delivering an acceptable level of performance but shows a consistent pattern of degradation, should empirical data be collected proactively in case a support engagement is required with the vendor? These questions are some of the many considerations that we encounter when pursuing a self-healing infrastructure.
This technical report explores the concept of self-healing storage and provides examples of practical solutions for NetApp Data Fabric systems, whether on the premises or in the public cloud.
2 Self-Healing Systems Design
Every system that supports adaptive, customizable, and expandable lifecycle management of actionable events must incorporate the following:
- A component that monitors the state and usage patterns of the system
- A component that performs anomaly detection and analysis (in its simplest form—an event and alert management component)
- A component that executes a prescribed remediation activity (an orchestration component)
Figure 1 shows the relationship between the components of the self-healing system, which provides continuous, cyclical feedback.
Figure 1) Components of a self-healing system.
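As a rough illustration (not a NetApp API), the continuous feedback cycle of Figure 1 can be sketched as a loop whose three callables stand in for the monitoring, analysis, and remediation-orchestration components; all names below are placeholders:

```python
# Illustrative sketch of the monitor -> analyze -> remediate feedback loop.
# The monitor, analyze, and remediate callables are stand-ins for products such
# as OnCommand Unified Manager and Workflow Automation; none of these names are
# real NetApp interfaces.

import time

def self_healing_loop(monitor, analyze, remediate, interval_s=60, cycles=None):
    """Run the continuous feedback cycle: collect system state, detect
       actionable events, and hand each one to the remediation orchestrator."""
    n = 0
    while cycles is None or n < cycles:
        state = monitor()                 # system-monitoring component
        for event in analyze(state):      # analysis / alert-management component
            remediate(event)              # remediation-orchestration component
        n += 1
        if cycles is None or n < cycles:
            time.sleep(interval_s)

# Demo: one cycle with stub components (a hypothetical capacity alert).
events = []
self_healing_loop(lambda: {'volume_used_pct': 95},
                  lambda s: ['expand_volume'] if s['volume_used_pct'] > 90 else [],
                  events.append, interval_s=0, cycles=1)
print(events)   # -> ['expand_volume']
```

In a real deployment the three callables would be replaced by the products listed below, with the orchestration step driven by predefined workflows rather than inline code.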
Within the NetApp OnCommand® management suite, the NetApp Data Fabric provides all the components that are needed to deliver a self-healing system:
- System monitoring can be performed by OnCommand Unified Manager, OnCommand Performance Manager, and OnCommand Insight.
- Analysis and alert management can be performed by OnCommand Unified Manager and OnCommand Insight.
- Remediation orchestration can be performed by OnCommand Workflow Automation.
Figure 2 shows the NetApp Data Fabric solution for an end-to-end self-healing storage infrastructure.
Note About NetApp Service Level Manager
NetApp has recently released new management software called NetApp Service Level Manager. This software optimizes storage operations for predictable service levels, defines service-level objectives (SLOs), and enables the transition to a storage-as-a-service private cloud platform.
NetApp Service Level Manager follows a methodology called MAPE (Monitor, Analyze, Plan, Execute), which is similar to the process that is described in this document.
NetApp Service Level Manager is beyond the scope of this document, however. For additional details, see the References section.
2.1 System-Monitoring Component of Self-Healing Storage
A prerequisite to self-healing automation is the ability to auto-discover the environment, continuously monitor its behavior, and understand what is considered normal (baseline) and what is acceptable deviation from normal. Many products on the market can accomplish this task—with varying levels of functionality and integration. This document explores the NetApp OnCommand Unified Manager and OnCommand Performance Manager (focused on NetApp ONTAP powered solutions) products that are available to monitor the Data Fabric.
Introduction to OnCommand Unified Manager and OnCommand Performance Manager
OnCommand Unified Manager and OnCommand Performance Manager are included with ONTAP and provide a comprehensive data management solution for NetApp FAS and All Flash FAS systems:
- NetApp OnCommand Unified Manager is specifically designed to support the innovative capabilities of ONTAP software. It facilitates monitoring the health, availability, capacity, performance, and data
protection status of your NetApp clustered storage. It further provides alerts and vital information for proactive management.
- NetApp OnCommand Performance Manager is tightly integrated with Unified Manager. It delivers comprehensive storage performance monitoring and data retention, along with notifications and alerts for proactive management. It also provides timely system analysis and assists in root-cause identification for performance issues.
Together, these products provide a complete view of your NetApp storage health. A simplified GUI enhances the overall user experience with an integrated view of how the storage infrastructure is performing. Administrators can monitor health and performance attributes from a single portal by logging in to a single URL for access to Unified Manager and Performance Manager. The Favorites dashboard (Figure 4) is a personalized feature where users can store frequently accessed views, which helps streamline monitoring of essential storage data. With this feature, you can:
- Get proactive notifications about storage events and issues.
- Quickly drill down to vital information with color-coded topology views.
- Receive suggestions for corrective actions for fast remediation.
- Use groups and annotations to filter notifications and report information.
- Access standard reports or create custom operational reports.
Figure 3) OnCommand Unified Manager dashboard.
Monitoring and alerting are fully dynamic. Trending and historical information—with graphical representations of the storage topology—help facilitate analysis. This approach optimizes use, capacity, and performance across the storage infrastructure. Full monitoring of your NetApp SnapMirror®, SnapVault®, and MetroCluster™ environments is incorporated. Simple, peered, cascaded, and fan-out relationship topologies are all supported.
Simple, Powerful Management for NetApp ONTAP
The OnCommand Unified Manager management server provides the foundation for improved availability, scalability, supportability, performance, and security of the storage infrastructure.
Maximize availability. Use Unified Manager to view overall storage health by cluster, aggregate, or storage virtual machine (SVM). From the main dashboard, you can easily see availability, capacity, performance, and protection status or drill down to see details about physical or logical objects. This information enables quick understanding of where availability is at risk, the corrective actions that can be taken to eliminate potential issues, and the steps to avoid unplanned downtime.
Maintain control over capacity. Unified Manager helps project when available capacity will be at levels that require expansion. Notifications include details with recommended actions so that you can quickly determine where to provision volumes for new workloads. It also identifies where to move volumes to minimize the risk of running out of capacity.
Automate data protection. In a busy storage environment, confidence that your data is adequately protected is essential. With OnCommand Unified Manager, you can immediately determine whether data is protected or is at risk. Health monitoring for SnapMirror, SnapVault, and MetroCluster environments identifies when data protection relationships are in jeopardy because of capacity or configuration changes. Unified Manager helps automate data protection when provisioning new storage, promoting a consistent process that follows best practices.
Get critical operational reporting. To provide vital operational information, Unified Manager comes with several standard reports. Optionally, customized reports to meet the specific needs of your organization can be generated. Reports cover topics such as configuration, capacity use, operational status, storage efficiency, and inventory. For example, you can easily determine whether configurations comply with best practices, or you can review use trends to determine when there is a risk of running out of capacity.
You can view reports in PDF, HTML, and CSV formats; filter, sort, and group data within the standard reports; or schedule reports to run later. Unified Manager also provides an interface that extracts the operational data and enables import to another reporting solution.
View historical data. By viewing the volume move status and history and the junction path change history, you can track cluster events better and can make better use of ONTAP features.
Integrated Performance Management
Managing performance can be a time-consuming task when it is done manually or with limited tools. OnCommand Performance Manager is an integrated component of Unified Manager that monitors systems and alerts about potential performance issues so that optimal performance can be obtained. It also helps troubleshoot, isolates potential problems, and offers directed solutions to performance issues based on system analysis.
OnCommand Performance Manager is easily launched within the Unified Manager dashboard through single sign-on. By using automated analytics, it provides cluster status and cluster health performance metrics, including latency, IOPS, megabytes per second, disk use, and node use.
OnCommand Performance Manager continually monitors and analyzes performance to simplify performance management for NetApp FAS and NetApp All Flash FAS systems. The performance dashboard within Performance Manager sorts clusters by level of importance, allowing simple clicks to drill down into cluster details. The intuitive UI allows navigation, exploration of performance trends, and comparison of the performance for storage objects within clusters. You can quickly and easily view and compare multiple objects, identify areas of concern, and proactively manage and optimize storage performance.
**Optimize performance.** By using built-in system, dynamic, and user-defined policy thresholds, OnCommand Performance Manager detects and alerts on performance incidents. By using the performance explorer, you can easily compare performance workloads. With the suggested corrective actions, you can quickly troubleshoot performance issues and quickly resolve events. Other features include:
- **Thresholds and alerts.** Built-in system-defined and dynamic thresholds define custom policy thresholds for greater control over FAS and All Flash FAS alerts.
- **Network services.** Performance Manager automatically alerts administrators when off-storage network services—such as network virus scanning, Lightweight Directory Access Protocol (LDAP) authentication, and other network tasks—cause I/O response time to cross a threshold.
- **Policy group limit.** Performance Manager monitors and analyzes all the quality-of-service (QoS) policies that affect workloads. In addition, it indicates when an abnormal or “bully” workload is causing throttling or is affecting response time.
- **Data processing.** Performance Manager provides detailed information about data processing activity and identifies which workloads have changed and have caused a CPU bottleneck.
- **Aggregates.** Performance Manager continuously monitors storage aggregates and confirms that they provide the optimal space, IOPS, and throughput for peak performance. If an aggregate is the source of unacceptable performance, administrators are notified so that they can take corrective action quickly.
- **All Flash FAS speed.** Performance Manager optimizes NetApp All Flash FAS to deliver peak performance, combined with superior flexibility and best-in-class data management for workloads that demand exceptional service.
**Streamline management with a single namespace.** NetApp Unified Manager and Performance Manager support the FlexGroup capability that was introduced in NetApp ONTAP 9.1. With a single namespace, you can seamlessly scale performance or capacity. When working with multiple physical file systems, a single namespace presents one virtual container as if all the data were centrally located. This configuration allows you to significantly simplify management from one Unified Manager interface.
### 2.2 System Analysis and Alert Management Component of Self-Healing Storage
The analysis and alert management component is the glue that ties the other components of the self-healing system together. It is responsible for identifying an event through the monitoring system (discussed in Section 2.1) and for triggering an alarm, which in turn initiates the remediation workflow (discussed in Section 2.3).
This document explores the NetApp OnCommand Unified Manager (focused on ONTAP powered solutions) product as the event and alert management system for the NetApp Data Fabric.
**Introduction to OnCommand Unified Manager Event Management Capability**
Events are notifications that are generated automatically when a predefined condition occurs or when a particular threshold is breached. These events enable you to take action to prevent issues that can lead to poor performance or service unavailability. Each event has an impact area, a severity, and an impact level.
Events are categorized by the type of impact area, such as availability, capacity, configuration, or protection. Events are also assigned a severity type and impact level, which help you determine whether immediate action is required.
You can access events in OnCommand Unified Manager either through the Health dashboard (see Figure 3) or through the Events tab.
**Figure 5) OnCommand Unified Manager events view.**
You can configure threshold-driven events through the Setup Options menu, accessible through Administration > Setup Options (see Figure 5).
From the Setup Options menu, expand the Thresholds section and select the relevant storage object (Aggregates, Volumes, or Relationships); see Figure 6.
Figure 6) OnCommand Unified Manager events threshold configuration.
With OnCommand Unified Manager, you can create alerts to notify staff when a particular event is generated. You can create alerts for a single resource, for a group of resources, or for events of a particular severity type. You can specify the frequency with which you want to be notified, and you can associate a script with the alert. These scripts are executed automatically when the specific alert is generated, and they enable you to obtain information about storage objects for which the alert is generated.
OnCommand Unified Manager supports Perl, Shell, Windows PowerShell, and .bat script formats and provides the following information to the script during execution time:
- eventID
- eventSourceID
- eventSourceName
- `eventSourceType`
- `eventState`
- `eventArgs`
Table 1 shows sample argument values that are provided to a script at run time.
**Table 1) OnCommand Unified Manager alert script sample arguments.**
<table>
<thead>
<tr>
<th>Arguments</th>
<th>“Volume Space Nearly Full” Event</th>
<th>“Volume Space Full” Event</th>
</tr>
</thead>
<tbody>
<tr>
<td>args[0]</td>
<td>-eventID</td>
<td>-eventID</td>
</tr>
<tr>
<td>args[1]</td>
<td>50003</td>
<td>50003</td>
</tr>
<tr>
<td>args[2]</td>
<td>-eventName</td>
<td>-eventName</td>
</tr>
<tr>
<td>args[3]</td>
<td>Volume</td>
<td>Volume</td>
</tr>
<tr>
<td>args[4]</td>
<td>Space</td>
<td>Space</td>
</tr>
<tr>
<td>args[5]</td>
<td>Nearly</td>
<td>Full</td>
</tr>
<tr>
<td>args[6]</td>
<td>Full</td>
<td>-eventSeverity</td>
</tr>
<tr>
<td>args[7]</td>
<td>-eventSeverity</td>
<td>error</td>
</tr>
<tr>
<td>args[8]</td>
<td>warning</td>
<td>-eventSourceID</td>
</tr>
<tr>
<td>args[9]</td>
<td>-eventSourceID</td>
<td>5428</td>
</tr>
<tr>
<td>args[10]</td>
<td>5428</td>
<td>-eventSourceName</td>
</tr>
<tr>
<td>args[11]</td>
<td>-eventSourceName</td>
<td>svm1_cluster2:/vol_test</td>
</tr>
<tr>
<td>args[12]</td>
<td>svm1_cluster2:/vol_test</td>
<td>-eventSourceType</td>
</tr>
<tr>
<td>args[13]</td>
<td>-eventSourceType</td>
<td>VOLUME</td>
</tr>
<tr>
<td>args[14]</td>
<td>VOLUME</td>
<td>-eventState</td>
</tr>
<tr>
<td>args[15]</td>
<td>-eventState</td>
<td>NEW</td>
</tr>
<tr>
<td>args[16]</td>
<td>NEW</td>
<td>-eventArgs</td>
</tr>
<tr>
<td>args[17]</td>
<td>-eventArgs</td>
<td>volNearlyFull=80 volFull=90 dfMountedOn=vol_test dfKBytesTotal=70656 dfKBytesUsed=68500 dfKBytesPercent=96.9485960149275 dfInodesTotal=31122 dfInodesUsed=102 dfInodesPercent=0.3277424330055909 dfKBytesAutoMaxSize=1258288 aggrDfBytesAvail=93327323136 isAutosizeEnabled=false autoIncrementSizeBytes=53686272 isVolSpaceGuaranteeVolume=true</td>
</tr>
<tr>
<td>args[18]</td>
<td>volNearlyFull=80 volFull=90</td>
<td>dfMountedOn=vol_test</td>
</tr>
</tbody>
</table>
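The argument layout in Table 1 can be sketched in a short, language-neutral example (Python here for brevity; the document's own alert scripts later use PowerShell and Perl). The index shift between the two columns exists because the event name "Volume Space Nearly Full" occupies one more argument slot than "Volume Space Full":

```python
# Illustrative sketch (not NetApp code): parsing the positional arguments
# that OnCommand Unified Manager passes to an alert script, per Table 1.

def parse_ocum_args(args):
    """Return (event_id, source_name, event_state) from OCUM alert arguments."""
    if args[6] == "Full":
        # "Volume Space Nearly Full" event: the name spans args[3..6]
        source_name, event_state = args[12], args[16]
    else:
        # "Volume Space Full" event: the name spans args[3..5]
        source_name, event_state = args[11], args[15]
    return args[1], source_name, event_state

nearly_full = ["-eventID", "50003", "-eventName", "Volume", "Space", "Nearly",
               "Full", "-eventSeverity", "warning", "-eventSourceID", "5428",
               "-eventSourceName", "svm1_cluster2:/vol_test", "-eventSourceType",
               "VOLUME", "-eventState", "NEW"]
print(parse_ocum_args(nearly_full))
```

The same check on `args[6]` is what the sample scripts in Section 3.1 use to distinguish the two events.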
By using scripts, you can integrate the Unified Manager alert mechanism with the Workflow Automation workflow engine through the WFA RESTful API interface.
### 2.3 Remediation Orchestration Component of Self-Healing Storage
In a world filled with opinions, habits, and “it depends,” where operational best practices can change with every release of a new service or adoption of a new technology, organizations need simplicity and flexibility. An open orchestration layer that readily adapts to a multitude of technologies and consumption models is required. Key to automation are a decision engine and enforced boundaries that are stipulated by the infrastructure owners. These key tenets are critical whether infrastructure resides in private, public, or hybrid clouds.
Many orchestration tools are available on the market. However, one tool is not only fully integrated with all the components of the Data Fabric, but is also available to all NetApp customers with no additional software license requirements. That tool is OnCommand Workflow Automation.
### Introduction to OnCommand Workflow Automation
OnCommand Workflow Automation (WFA) is a software solution that automates storage management tasks, such as provisioning, migration, decommissioning, data protection configurations, and data cloning operations.
NetApp OnCommand Workflow Automation delivers on the Data Fabric vision, providing automation and integration to meet the demands of today's evolving IT organizations. Whether managing storage on the premises or in a cloud environment, Workflow Automation makes it easy to quickly create simple or complex workflows. Storage administrators can create storage workflows for the most frequent tasks and make them available to consumers for one-click automation. Storage architects can automate time-consuming, complex processes to meet cost-saving objectives, a common goal when moving data to the cloud. OnCommand WFA helps provide consistency, predictability, and standardization across the Data Fabric.
With Workflow Automation, you can integrate storage workflows with existing IT orchestration processes for fast, seamless delivery of services to your business consumers. And by implementing Workflow Automation in storage management operations, you can also standardize and integrate processes to align with NetApp best practices for data management.
A workflow is a repetitive and procedural task that consists of sequential steps. The goal is a consistent, repeatable, and predictable result. Just a few examples of what you can do with WFA include:
- Provision, migrate, or decommission storage for databases or file systems.
- Automate your data protection process to a disaster recovery location on the premises or in the cloud.
- Set up storage for an application as part of an end-to-end orchestration process.
- Clone and set up a new development and testing environment, then tear it down when it’s no longer needed.
- Set up the FlexPod® Datacenter system or virtual desktops, or conduct a centralized NetApp SnapCenter® software activation.
Storage architects can define workflows to follow best practices and to meet organizational requirements, such as the following:
- Adhering to required naming conventions
- Setting unique options for storage objects
- Selecting appropriate resources
- Integrating internal configuration management database (CMDB) and ticketing applications
Figure 7) OnCommand Workflow Automation architecture.
Workflow Automation integrates with OnCommand Unified Manager to collect data about your storage environment (see Figure 7). With this integration, you can leverage information in numerous OnCommand Unified Manager instances to automate storage and data protection processes globally. You can also automate processes across multivendor storage environments by leveraging OnCommand Insight. Simply download the OnCommand Insight Connector from the Storage Automation Store to integrate OnCommand Insight as a data source for Workflow Automation.
In addition, Workflow Automation can connect to internal systems to collect information to use in resource selection, to open trouble tickets, and more. These capabilities allow you to accelerate storage service delivery and reduce time to market.
For example, Workflow Automation integrates with VMware vRealize Operations, VMware vRealize Automation, and VMware vCloud Automation Center to deliver custom IT as a service with a single click. To invoke your workflows from an external portal or from the data center orchestration software, simply use the REST APIs that Workflow Automation provides. OnCommand Workflow Automation is a flexible framework. The same approach can be taken to integrate and automate with other orchestrators such as Cisco UCS Director or Microsoft System Center Orchestrator.
OnCommand Workflow Automation features include:
- **An execution portal** to invoke workflows, verify the status of workflow execution, and access logs.
- **A designer portal** to build workflows. The designer portal includes several building blocks, such as commands, templates, finders, filters, and functions, that are used to create workflows. The designer enables you to include advanced capabilities with workflows such as automated resource selection, row repetition (looping), and approval points. The designer portal also includes building blocks, such as dictionary entries, cache queries, and data source types, for caching data from external systems.
- **An administration portal** for tasks such as setting up WFA, connecting to data sources, and configuring user credentials.
- **Web services interfaces** to invoke workflows from external portals and data center orchestration software, with full support for RESTful APIs that leverage JSON and XML.
- **The Storage Automation Store** to download WFA packs (workflows and commands that are provided by either NetApp, NetApp partners, or the WFA community).
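The web services interface mentioned above follows a simple two-step pattern: look up a workflow by name, then POST the user-input values to that workflow's jobs endpoint. A minimal sketch of that pattern follows (Python used for brevity; the document's own scripts later use PowerShell and Perl, and the workflow UUID and input keys here are placeholders):

```python
# Illustrative sketch of the WFA REST invocation pattern. The endpoint path
# and payload shape mirror the sample scripts in Section 3.1; verify the
# exact paths against your WFA version. The HTTP transport is omitted.
import json

def build_execution_request(base_url, workflow_uuid, user_inputs, comment):
    """Return (url, json_body) for a WFA workflow execution POST."""
    url = f"{base_url}/rest/workflows/{workflow_uuid}/jobs"
    body = {
        "comments": comment,
        "userInputValues": [{"key": k, "value": v} for k, v in user_inputs.items()],
    }
    return url, json.dumps(body)

url, body = build_execution_request(
    "https://wfa.demo.netapp.com", "1234-abcd",
    {"cluster_name": "cluster1", "svm_name": "svm1", "volume_name": "vol_test"},
    "OCUM triggered workflow. Event ID: 50003")
print(url)
```

Any orchestrator or portal that can issue HTTP requests can drive WFA this way.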
When you first log in to the Workflow Automation software, you are presented with the production-ready workflows portal.
Figure 8) WFA Portal view.
As Figure 8 shows, you can divide workflows into categories that are based on context. Examples of workflow categories include AltaVault, E-Series, ONTAP, SolidFire, DevOps, setup, migration, virtualization, and cloud. The view of available workflows and categories depends on the user’s account and the role-based access control (RBAC) rules that are associated with those workflows.
For information about a workflow’s execution status (log), schedules, and configured data sources (acquisition units through which WFA collects information about the infrastructure), go to the Execution tab (Figure 9).
The Execution tab is also where you can store credentials information. One of the advantages of using Workflow Automation instead of scripts is the increased security that results from not exposing user names and passwords. WFA stores credentials information in a secure and encrypted format. This information cannot be accessed directly by any user.
The Designer tab (Figure 10) is where automation architects can design and implement new workflows.
Not every aspect of WFA can be covered in this document. Following are a few of the more important components of this tab:
- **Commands** are the building blocks of the workflow. They are the steps that the workflow executes in sequence. Workflow Automation provides many commands as part of the basic installation package.
You can add other commands by downloading them from the Storage Automation Store, by finding them through community sharing, by purchasing them from professional service organizations, or by writing them in-house. WFA commands are scripts that are written in PowerShell or Perl.
- **Filters** are used by the WFA decision-making engine to identify potential resources for a given task. For example, if a workflow needs to identify a solid-state drive (SSD) storage pool resource, a filter to find all relevant storage pools (aggregate) that meet the required criteria can be employed. These criteria can include RAID type, media type, nonroot, capacity requirements, and so on.
- **Finders** are used to identify the "best" resource among all potential resources for a task. Finders use filters to locate all relevant resources and use prioritization logic to determine the one optimal resource for the requirements. An example of an optimal resource is the aggregate with the most free space or the aggregate with the greatest amount of available IOPS.
- **Workflows** are the collection of commands with the additional logic required to define a task to be automated.
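The filter/finder relationship described above can be sketched as follows (a hypothetical data model in Python; WFA itself expresses filters as SQL-like queries against its cache):

```python
# Illustrative sketch of the filter/finder pattern: a filter narrows
# candidate aggregates by criteria, and a finder applies prioritization
# logic to pick the single best match.

def filter_aggregates(aggregates, media_type, min_free_gb):
    """Filter: keep non-root aggregates of the right media type with enough space."""
    return [a for a in aggregates
            if a["media"] == media_type
            and not a["is_root"]
            and a["free_gb"] >= min_free_gb]

def find_best_aggregate(aggregates, media_type, min_free_gb):
    """Finder: of all candidates, prefer the aggregate with the most free space."""
    candidates = filter_aggregates(aggregates, media_type, min_free_gb)
    return max(candidates, key=lambda a: a["free_gb"]) if candidates else None

aggrs = [
    {"name": "aggr0",      "media": "ssd", "is_root": True,  "free_gb": 500},
    {"name": "aggr_ssd_1", "media": "ssd", "is_root": False, "free_gb": 200},
    {"name": "aggr_ssd_2", "media": "ssd", "is_root": False, "free_gb": 800},
]
print(find_best_aggregate(aggrs, "ssd", 100)["name"])   # aggr_ssd_2
```

Swapping the prioritization key (for example, greatest available IOPS instead of most free space) changes the finder without touching the filter.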
Another important feature is the Storage Automation Store (Figure 11).
Figure 11) WFA Storage Automation Store view.
The Storage Automation Store is a portal from which you can download certified software packs to use with OnCommand Workflow Automation, OnCommand Unified Manager, and OnCommand Performance Manager to extend data management capabilities.
3 Self-Healing Systems Implementation
The use cases for self-healing storage systems are almost endless and range from resource (capacity and performance) management all the way to proactive issue avoidance activities. This technical report discusses the following sample use cases in detail:
- **Use case 1: automatic capacity management.** In this use case, the system reacts to a NetApp FlexVol® volume running out of capacity and checks the feasibility of a space allocation increase while maintaining a maximum storage pool (aggregate) utilization of X%. If the host storage pool does not have enough capacity, the system investigates similar tier storage pools; if a match is found, the volume is relocated and is then expanded.
- **Use case 2: automatic inode management in high-file-count (HFC) environments.** In this use case, the system reacts to a FlexVol volume running out of available inodes and increases the number of inodes to maintain X% inode utilization.
- **Use case 3: automatic performance management (tiering).** In this use case, the system leverages the I/O density (I/O per terabyte stored) metric to identify the proper performance tier for each FlexVol volume. If necessary, the system nondisruptively moves the volumes to a different storage pool (aggregate) or compute node type.
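The I/O density metric in use case 3 is simply IOPS per terabyte stored, mapped to a performance tier. A minimal sketch, with hypothetical tier boundaries (choose thresholds that match your environment):

```python
# Illustrative sketch of the I/O density calculation from use case 3.
# The tier names and thresholds below are assumptions for illustration.

def io_density(avg_iops, used_tb):
    """I/O density = average IOPS per TB of stored data."""
    return avg_iops / used_tb

def suggest_tier(density):
    if density >= 512:
        return "extreme"      # e.g., an All Flash FAS storage pool
    if density >= 64:
        return "performance"
    return "capacity"

print(suggest_tier(io_density(2048, 2)))   # 1024 IOPS/TB
```

A self-healing workflow can evaluate this metric periodically and nondisruptively move volumes whose density no longer matches their current tier.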
As stated previously, the use cases for self-healing storage systems are almost endless. Some additional use cases include:
- Proactive performance metric collection (such as the NetApp Perfstat collection tool) that is based on the performance profile of a system (for example, a performance latency trigger)
- Adaptive QoS management that is based on the I/O density of a workload
- Conformance validation of protection settings that is based on the protection tier of an asset
- Detection and remediation of configuration drift
- Detection and remediation of ransomware attacks
### 3.1 Use Case 1: Automatic Capacity Management
In this use case, the system reacts to a FlexVol volume running out of capacity and checks the feasibility of a space allocation increase while maintaining a maximum storage pool (aggregate) utilization of X%. If the host storage pool does not have enough capacity, the system investigates the availability of similar storage pools. If a match is found, the volume is relocated and is then expanded. You should compare and contrast this approach with the internal self-healing capability of the ONTAP native volume auto-grow feature.
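The decision logic described above can be sketched as follows (Python for illustration; the data model, grow sizing, and 85% ceiling are assumptions, and in the actual implementation this logic lives in the WFA workflow):

```python
# Illustrative sketch of the use case 1 decision: grow the volume in place
# if the hosting aggregate stays at or below the utilization ceiling,
# otherwise pick another same-tier aggregate that can absorb the move.

def plan_remediation(volume_gb, grow_gb, aggregates, host, max_util=0.85):
    """Return ('grow', aggr), ('move_then_grow', aggr), or ('manual', None)."""
    def util_after(a, added_gb):
        return (a["used_gb"] + added_gb) / a["size_gb"]

    if util_after(host, grow_gb) <= max_util:
        return ("grow", host["name"])
    for a in aggregates:
        if a["name"] != host["name"] and a["tier"] == host["tier"] \
                and util_after(a, volume_gb + grow_gb) <= max_util:
            return ("move_then_grow", a["name"])
    return ("manual", None)

host = {"name": "aggr1", "tier": "sas", "size_gb": 1000, "used_gb": 900}
others = [host, {"name": "aggr2", "tier": "sas", "size_gb": 1000, "used_gb": 100}]
print(plan_remediation(100, 50, others, host))   # host would exceed the ceiling
```

Unlike the native volume auto-grow feature, this logic can also relocate the volume when the hosting aggregate itself is the constraint.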
**Creating a Remediation Orchestration Workflow**
The first step in the implementation is to design and create a remediation workflow in OnCommand Workflow Automation.
**Best Practice**
A best practice for all automation workflow design is to clearly specify the requirements, develop an approach that the workflow will follow, and translate it to a logic-based flowchart model before starting the actual implementation.
Table 2 lists the requirements and implementation methodology for this workflow.
Table 2) Use case 1 requirements and methodology.
<table>
<thead>
<tr>
<th>Workflow Requirements</th>
<th>Workflow Implementation Methodology</th>
</tr>
</thead>
<tbody>
<tr>
<td>• Increase the size of a FlexVol volume.</td>
<td>• Find all relevant information about the FlexVol volume.</td>
</tr>
<tr>
<td>• Maintain aggregate space utilization at or below X%.</td>
<td>• Determine whether the hosting aggregate can sustain the growth of the volume without violating the utilization guideline.</td>
</tr>
<tr>
<td>• If necessary, relocate the volume to a different aggregate of a similar tier.</td>
<td>• If it can, increase the size of the FlexVol volume.</td>
</tr>
<tr>
<td>• If it can’t, find another aggregate of the same tier, move the volume, and then increase the volume size.</td>
<td></td>
</tr>
</tbody>
</table>
The flowchart in Figure 12 represents the logic of this workflow.
Figure 12) Use case 1 flowchart.
Now you are ready to create your workflow. To create a workflow, log in to the Workflow Automation portal, click the Designer tab, and click the New button.
Figure 13) WFA new workflow design view.
Remember that a workflow is a repetitive and procedural task that consists of sequential commands. As Figure 13 shows, all the available commands in WFA are listed on the left panel and are divided in contextual categories (AltaVault, Cloud, and so on). Building your workflow starts with the simple task of identifying the relevant commands and then dragging them in the correct order to the workflow design pane.
Figure 14 is a view of the completed workflow with the layered implementation flowchart above it. In WFA, the symbol ※ indicates a conditional run. In this case, the commands with that symbol run only if the
hosting aggregate space validation condition is not met. In addition, you can customize the workflow to any level of utilization that you want. Further, you can use tailored heuristics in deciding how much to increase the volume's size.
Figure 14) Use case 1 WFA workflow view.
Workflow Automation makes it very easy to test a workflow before actual execution: Simply click the Preview button (bottom right of Figure 14).
Figure 15) Use case 1: previewing a workflow run.
When you click the Preview button, a pop-up menu appears (Figure 15). You are asked to provide the relevant required workflow parameters (in this example case, the cluster name, storage virtual machine name, and volume name). The options in the drop-down menus are not hard-coded (although they could be), but, rather, are data that WFA has collected about the environment from its data sources.
Once again, select the correct parameters and click the Preview button. The workflow runs, and “would be” results are displayed (Figure 16). In this case, the volume would be moved to a new aggregate before its size would be increased.
Figure 16) Use case 1 results of a dry run.
Creating a Workflow Alert Launch Script for OnCommand Unified Manager
Before creating the relevant OnCommand alert, you should create a script (both PowerShell and Perl samples are provided here) that can be attached to the alert for automated execution.
Following is the PowerShell version of the script (OCUM_WFA_Capacity_Event.ps1):
```powershell
# 2017-02-09 yaron@netapp.com
# OCUM_WFA_Capacity_Event.ps1
# This script is meant to be used as part of an automated self-healing process
# that remediates high volume capacity utilization scenarios. It is meant to be used
# in conjunction with OCUM 7.x Windows-based systems and OnCommand WFA 4.1.x.
#
# IMPORTANT: This code assumes a single ONTAP cluster is present in the
# environment for code simplification. If the environment contains more than one
# cluster, see Appendix A.
#
# (c) 2017 NetApp Inc., All Rights Reserved
#
# Check the following link in case you run into SSL certificate issues
# https://d-fens.ch/2013/12/20/nobrainer-ssl-connection-error-when-using-powershell/
#============================================================================
# CHANGE IF DESIRED PRIOR TO UPLOADING TO OCUM SERVER
$cluster_name = "cluster1"
$wfa_rest_server = "https://wfa.demo.netapp.com/rest"
$wfa_username = "admin"
$wfa_password = "Netapp1!"
#============================================================================
# DO NOT CHANGE CODE BELOW THIS LINE
#============================================================================
if ($args[6] -eq "Full"){
    # This is a "Volume Space Nearly Full" event
    $source_name = $args[12]
    $event_state = $args[16]
} else {
    # This is a "Volume Space Full" event
    $source_name = $args[11]
    $event_state = $args[15]
}
$event_id = $args[1]
# Ignore all non-new events
if (($event_state.ToString()).ToLower() -ne "new"){
    exit
}
# Extract SVM and Volume names from OCUM-provided arguments
```
```powershell
$svm_name = $source_name.Split("{:}/")[0]
$volume_name = $source_name.Split("{:}/")[2]
# Prepare WFA credentials
$securePassword = ConvertTo-SecureString $wfa_password -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($wfa_username, $securePassword)
# Find RESTful execution uri for WFA workflow
$restCommand = $wfa_rest_server + "/workflows?name=Resize_Volume_with_Data_Mobility"
try {
    $output = Invoke-RestMethod -Method Get -Uri $restCommand -Credential $cred
} catch {
    # An error occurred. Either the workflow does not exist or the credentials are wrong
    exit
}
$workflow_execution_uri = ($output.collection.workflow.link | ?{$_.rel -eq 'execute'}).href
# Initiate workflow execution
$body = @"
{
  "comments": "OCUM triggered workflow. Event ID: $event_id",
  "userInputValues": [
    { "key": "cluster_name", "value": "$cluster_name" },
    { "key": "svm_name", "value": "$svm_name" },
    { "key": "volume_name", "value": "$volume_name" }
  ]
}
"@
$output = Invoke-RestMethod -Method Post -Uri $workflow_execution_uri -Credential $cred -Body $body -ContentType "application/json"
# Find RESTful job monitoring link for WFA workflow
$job_status_uri = ($output.job.link | ?{$_.rel -eq 'self'}).href
if ($job_status_uri.Count -gt 1) { $job_status_uri = $job_status_uri[0] }
# Wait until workflow job either completes successfully or fails
do {
    Start-Sleep -Seconds 2
    $output = Invoke-RestMethod -Method Get -Uri $job_status_uri -Credential $cred
    $jobStatus = $output.job.jobStatus
} while (($jobStatus -ne "COMPLETED") -and ($jobStatus -ne "FAILED"))
```
**Note:** To use PowerShell or .bat scripts, OnCommand Unified Manager must be installed on a Windows Server. If it is installed on a Linux server, use either Perl or Shell.
The following Perl script is provided as a reference in case PowerShell cannot be used.
```perl
# 2017-02-09 yaron@netapp.com
# OCUM_WFA_Capacity_Event.pl
#
# This script is meant to be used as part of an automated self-healing process
# that remediate high volume capacity utilization scenarios. It is meant to be used in
# conjunction with OCUM 7.x Windows-based systems and OnCommand WFA 4.1.x.
#
# IMPORTANT: This code assumes a single ONTAP cluster is present in the
# environment for code simplification. If environment contains more than one
# cluster see Appendix A.
```
```perl
use REST::Client;
use JSON;
use MIME::Base64;

# Uncomment the next line if you run into SSL certificate issues
#$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME}=0;

# CHANGE IF DESIRED PRIOR TO UPLOADING TO OCUM SERVER
my $cluster_name = 'cluster1';
my $wfa_rest_server = 'https://wfa.demo.netapp.com';
my $wfa_username = 'admin';
my $wfa_password = 'Netapp1!';

# DO NOT CHANGE CODE BELOW THIS LINE
my ($source_name, $event_state);
if ($ARGV[6] eq 'Full') {
    # This is a "Volume Space Nearly Full" event
    $source_name = $ARGV[12];
    $event_state = $ARGV[16];
} else {
    # This is a "Volume Space Full" event
    $source_name = $ARGV[11];
    $event_state = $ARGV[15];
}
my $event_id = $ARGV[1];

# Ignore all non-new events
if ($event_state ne 'NEW') {
    exit;
}

# Extract SVM and Volume names from OCUM-provided arguments
my ($svm_name, $volume_name) = split /:\//, $source_name;

# Find RESTful execution url for WFA workflow
my $headers = {Accept => 'application/json', Authorization => 'Basic ' .
    encode_base64($wfa_username . ':' . $wfa_password, ''), 'Content-Type' => 'application/json'};
my $client = REST::Client->new();
$client->setHost($wfa_rest_server);
$client->GET('/rest/workflows?name=Resize_Volume_with_Data_Mobility', $headers);
my @response_json = @{ decode_json($client->responseContent()) };
my $workflow_execution_uri = '/rest/workflows/' . $response_json[0]->{'uuid'} . '/jobs';

# Initiate workflow execution
my $json_body = '{
  "comments":"OCUM triggered workflow. Event ID: ' . $event_id . '",
  "userInputValues":[
    { "key":"ClusterName", "value":"' . $cluster_name . '" },
    { "key":"SvmName",     "value":"' . $svm_name . '" },
    { "key":"VolumeName",  "value":"' . $volume_name . '" }
  ]
}';
$client->POST($workflow_execution_uri, $json_body, $headers);

# Find RESTful job monitoring link for WFA workflow
```
Creating an OnCommand Unified Manager Alert
To create an OnCommand Unified Manager alert:
1. Upload the workflow launch script to the OnCommand Unified Manager server: select Administration > Manage Scripts, and click the Add button (Figure 17).

2. Browse to the file to upload and provide a brief description (optional). See Figure 18.
3. Create an alert by selecting Administration > Manage Alerts and clicking the Add button (Figure 19).
a. Provide an alert name and a description (Figure 20).
b. Select the resources to which the alert will be applied. In this example, all the volumes in the environment are selected (Figure 21).
Figure 20) OnCommand Unified Manager, first step in adding an alert.

Figure 21) OnCommand Unified Manager, second step in adding an alert.

c. Select the events that will trigger this alert. In this example, the Volume Space Nearly Full (Warning) event and the Volume Space Full (Error) event are selected (Figure 22).
Figure 22) OnCommand Unified Manager, third step in adding an alert.

d. Select the actions to be taken when the alert is triggered. This example has selected to execute the script that was uploaded earlier in the process (Figure 23).
That’s it! You have created a fully automated process that deals with volumes running out of capacity.
Testing the Self-Healing Scenario
To test use case 1:
1. Create a 1GB volume (Figure 24).
Figure 24) Testing use case 1: Create a volume.
2. Copy enough data to increase the volume’s capacity utilization to above 80% and observe that an event is created (Figure 25 and Figure 26, respectively).
Figure 25) Testing use case 1: volume capacity status.
<table>
<thead>
<tr>
<th>Name</th>
<th>Aggregate</th>
<th>Status</th>
<th>Thin Provisioned</th>
<th>% Used</th>
<th>Available Space</th>
<th>Total Space</th>
<th>Storage Efficiency</th>
<th>Is Volume Moved</th>
</tr>
</thead>
<tbody>
<tr>
<td>automatic_capacity_1</td>
<td>aggSATA</td>
<td>Online</td>
<td>No</td>
<td>95</td>
<td>49.42 MB</td>
<td>1 GB</td>
<td>Disabled</td>
<td>No</td>
</tr>
<tr>
<td>svml_cluster1_root</td>
<td>aggSATA</td>
<td>Online</td>
<td>No</td>
<td>5</td>
<td>18.84 MB</td>
<td>29 MB</td>
<td>Disabled</td>
<td>No</td>
</tr>
</tbody>
</table>
Figure 26) Testing use case 1: OnCommand Unified Manager Events view.
The event triggered an alert, in turn executing a Workflow Automation workflow to remediate the issue (Figure 27).
Figure 27) Testing use case 1: WFA Execution Status view.
The volume was increased in size. Utilization went down from 95% to 31%, and in this case, no movement to a different aggregate was necessary (Figure 28).
Figure 28) Testing use case 1: remediated volume capacity status.
<table>
<thead>
<tr>
<th>Name</th>
<th>Aggregate</th>
<th>Status</th>
<th>Thin Provisioned</th>
<th>% Used</th>
<th>Available Space</th>
<th>Total Space</th>
<th>Storage Efficiency</th>
<th>Is Volume Moved</th>
</tr>
</thead>
<tbody>
<tr>
<td>automatic_capacity_1</td>
<td>aggSATA</td>
<td>Online</td>
<td>No</td>
<td>31</td>
<td>2.04 GB</td>
<td>3 GB</td>
<td>Disabled</td>
<td>No</td>
</tr>
<tr>
<td>svml_cluster1_root</td>
<td>aggSATA</td>
<td>Online</td>
<td>No</td>
<td>5</td>
<td>18.84 MB</td>
<td>29 MB</td>
<td>Disabled</td>
<td>No</td>
</tr>
</tbody>
</table>
3.2 Use Case 2: Automatic Inode Management in High-File-Count (HFC) Environments
In this use case, the system reacts to a FlexVol volume running out of available inodes and increases the number of inodes to maintain X% inode utilization.
Creating a Remediation Orchestration Workflow
Table 3 lists the requirements and implementation methodology for this workflow.
Table 3) Use case 2 requirements and methodology.
<table>
<thead>
<tr>
<th>Workflow Requirements</th>
<th>Workflow Implementation Methodology</th>
</tr>
</thead>
<tbody>
<tr>
<td>• Increase the number of available inodes in a FlexVol volume.</td>
<td>• Find all the relevant information about the FlexVol volume.</td>
</tr>
<tr>
<td>• Inode utilization should be at or below X%.</td>
<td>• Calculate the new total count of inodes in the FlexVol volume by following this formula:</td>
</tr>
<tr>
<td></td>
<td>new_total_inode_count = current_used_inodes / X%</td>
</tr>
<tr>
<td></td>
<td>(where X is the desired maximum inode utilization and is a value between 1 and 99).</td>
</tr>
<tr>
<td></td>
<td>• Change the total number of inodes based on the calculated value.</td>
</tr>
</tbody>
</table>
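The calculation in the table can be sketched as follows (Python is used here purely for illustration; the function and variable names are not from the TR). Note that the workflow run in Figure 39 reports a final count of 1,070; ONTAP may adjust the exact inode count it applies.

```python
import math

def new_total_inode_count(current_used_inodes, x_percent):
    """Total inode count needed so that utilization drops to at most X%.

    Implements: new_total_inode_count = current_used_inodes / X%
    (X is the desired maximum inode utilization, between 1 and 99).
    """
    if not 1 <= x_percent <= 99:
        raise ValueError("X must be between 1 and 99")
    return math.ceil(current_used_inodes / (x_percent / 100))

# Example from use case 2: 750 used inodes, 70% target utilization
print(new_total_inode_count(750, 70))  # -> 1072 (ceil of 1071.4)
```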
The flowchart in Figure 29 represents the logic of this workflow.
Figure 29) Use case 2 flowchart.
Figure 30 is a view of the completed workflow with the layered implementation flowchart above it.
Figure 30) Use case 2 WFA workflow view.
Workflow Automation enables you to define legal values for the various parameters that are leveraged within the workflow. You can also define global values that are easy to modify if requirements change. Figure 31 shows how to define a global constant for the inode maximum threshold utilization (set to 70% in the example).
Execution of the workflow completes with the volume’s inode count increasing from 881 to 1133 (using a default value of 70% maximum utilization). See Figure 32.
Creating a Workflow Alert Launch Script for OnCommand Unified Manager
Before creating the relevant OnCommand alert, you should create a script that can be attached to the alert for automated execution.
Following is a PowerShell version of the script (OCUM_WFA_Inode_Event.ps1):
```powershell
# 2017-02-09 yaron@netapp.com
# OCUM_WFA_Inode_Event.ps1
```
```powershell
# This script is meant to be used as part of an automated self-healing process
# that remediates high inode utilization scenarios. It is meant to be used in
# conjunction with OCUM 7.x Windows-based systems and OnCommand WFA 4.1.x.
# IMPORTANT: This code assumes a single ONTAP cluster is present in the
# environment for code simplification. If the environment contains more than
# one cluster, see Appendix A.
# (c) 2017 NetApp Inc., All Rights Reserved
# Check the following link in case you run into SSL certificate issues
# https://d-fens.ch/2013/12/20/nobrainer-ssl-connection-error-when-using-powershell/
# CHANGE IF DESIRED PRIOR TO UPLOADING TO OCUM SERVER
$cluster_name = "cluster1"
$wfa_rest_server = "https://wfa.demo.netapp.com/rest"
$wfa_username = "admin"
$wfa_password = "Netapp1!"
# DO NOT CHANGE CODE BELOW THIS LINE
if ($args[5] -eq "Full") {
    # This is an "Inodes Nearly Full" event
    $source_name = $args[11]
    $event_state = $args[15]
} else {
    # This is an "Inodes Full" event
    $source_name = $args[10]
    $event_state = $args[14]
}
$event_id = $args[1]
# Ignore all non-new events
if (($event_state.ToString()).ToLower() -ne "new") {
    exit
}
# Extract SVM and Volume names from OCUM-provided arguments
$svm_name = $source_name.Split(":/")[0]
$volume_name = $source_name.Split(":/")[2]
# Prepare WFA credentials
$securePassword = ConvertTo-SecureString $wfa_password -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($wfa_username, $securePassword)
# Find RESTful execution url for WFA workflow
$restCommand = $wfa_rest_server + "/workflows?name=Modify_Volume_Inode_Count"
try {
    $output = Invoke-RestMethod -Method Get -Uri $restCommand -Credential $cred
} catch {
    # An error occurred. Either workflow does not exist or credentials are wrong
    exit
}
$workflow_execution_uri = ($output.collection.workflow.link | ?{$_.rel -eq 'execute'}).href
# Initiate workflow execution
$body = @"
{
    "comments":"OCUM triggered workflow. Event ID: $event_id",
    "userInputValues":[
        {"key":"cluster_name", "value":"$cluster_name"},
        {"key":"svm_name", "value":"$svm_name"},
        {"key":"volume_name", "value":"$volume_name"}
    ]
}
"@
```
Creating an OnCommand Unified Manager Alert
To create an OnCommand Unified Manager alert:
1. Upload the workflow launch script to the OnCommand Unified Manager server by following the same steps that are described in section 3.1 (in the subsection “Creating an OnCommand Unified Manager Alert”). See Figure 33.
Figure 33) OnCommand Unified Manager Manage Scripts view.
2. Follow the same steps that are described in section 3.1 and create an alert by using the parameters that are detailed in Table 4.
Table 4) Parameters for alert creation for use case 2.
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Name</td>
<td>High inode utilization alert</td>
</tr>
<tr>
<td>Description</td>
<td>Alert used in a self-healing automated workflow</td>
</tr>
<tr>
<td>Resources</td>
<td><< All Volumes >></td>
</tr>
<tr>
<td>Events</td>
<td>• Inodes Nearly Full (Warning)</td>
</tr>
<tr>
<td></td>
<td>• Inodes Full (Error)</td>
</tr>
<tr>
<td>Action</td>
<td>Execute script OCUM_WFA_Inode_Event</td>
</tr>
</tbody>
</table>
3. When the alert has been created, the Manage Alerts view should look like Figure 34.
Figure 34) OnCommand Unified Manager Manage Alerts view.
That’s it! You have created an automated process that deals with volumes running out of available inodes.
Testing the Self-Healing Scenario
To test use case 2:
1. Create a 30MB volume. Creating a small volume yields a fairly low number of inodes by default.
Figure 35) Testing use case 2: Create a volume.
As Figure 35 shows, there are currently 881 total inodes, with 97 of them being used.
2. Create 653 small files to increase the total number of used inodes to 750 (85% inode utilization). See Figure 36.
3. The high inode utilization threshold causes an event to be generated (Figure 37), in turn triggering an alert that notifies Workflow Automation to execute a remediating workflow (Figure 38).
As Figure 39 shows, the total number of inodes was increased to 1,070, thus lowering the inode utilization to 70%.
3.3 Use Case 3: Automatic Performance Management (Tiering)
In this use case, the system leverages the I/O density (I/O per terabyte stored) metric to identify the proper performance tier for each FlexVol volume. If necessary, the volumes are then nondisruptively moved to a different storage pool (aggregate) or compute node type.
This use case is somewhat different from the previous two in that it is not intended to be triggered dynamically through an event. Rather, it is designed to be scheduled and run once a day or once a week to revalidate and reassess the volume storage tier assignment.
An I/O density approach describes the performance of a storage service, regardless of the media or the protocol that is used to deliver that service. The key measurements for a storage service include:
- **Latency.** This is the time between when a storage controller receives a request for data and when it is able to answer that request.
- **IOPS/TB.** This term relates to the number of I/Os performed per second per terabyte (TB) of data stored.
**Note:** IOPS/TB is a key measure for delivering consistent storage service to the end consumer.
These service levels are typically described for the consumer in a service catalog. The service catalog defines the storage service in terms of several parameters: IOPS/TB; latency; frequency of backups; frequency of off-site replication; retention of backups; and, in most circumstances, the cost of the service for a given quantity and interval (for example, TB/month).
For more information about service-oriented delivery of storage, you can schedule a Service Design Workshop, a one-day engagement that NetApp offers to customers who are interested in optimizing the service delivery process.
Automation of performance tiers requires constant monitoring of the I/O density metric. One way of tracking this information is by leveraging either OnCommand Performance Manager or OnCommand Insight. (The two products differ in the frequency of data collection and in the granularity of historical data.) Normally we look at the 95th percentile of I/O density over a period of seven days per workload (volume). Both monitoring applications can synchronize that data with OnCommand Workflow Automation through relevant data sources.
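The 95th-percentile computation used on the collected samples can be sketched with a simple nearest-rank percentile (Python for illustration; the sample data below is hypothetical, and in practice OnCommand Performance Manager or OnCommand Insight supplies this metric):

```python
import math

def percentile_95(samples):
    """Nearest-rank 95th percentile of a list of IOPS/TB samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(0.95 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

# One week of hypothetical hourly IOPS/TB samples for a single volume
week = [100 + (hour % 24) * 20 for hour in range(7 * 24)]
print(percentile_95(week))  # -> 540
```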
Access to this data can help you determine the proper tiering design, as shown in Table 5.
Table 5) Services and IOPS/TB settings example.
<table>
<thead>
<tr>
<th></th>
<th>Silver—Tier 2</th>
<th>Gold—Tier 1</th>
<th>Platinum—Tier 0</th>
</tr>
</thead>
<tbody>
<tr>
<td>SLO (burst QoS throttle)</td>
<td>512 IOPS/TB</td>
<td>2,048 IOPS/TB</td>
<td>12,288 IOPS/TB</td>
</tr>
<tr>
<td>SLA (to end customer)</td>
<td>128 IOPS/TB</td>
<td>512 IOPS/TB</td>
<td>6,144 IOPS/TB</td>
</tr>
<tr>
<td>Media type</td>
<td>SATA</td>
<td>SAS</td>
<td>SSD</td>
</tr>
</tbody>
</table>
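Given a volume's measured 95th-percentile I/O density, a placement policy based on the Table 5 SLO throttles might look like the sketch below (Python for illustration only; the actual placement logic lives inside the WFA workflow, and the tier names and thresholds are taken from the example table):

```python
# SLO burst throttles from Table 5, cheapest tier first
TIERS = [("Silver", 512), ("Gold", 2048), ("Platinum", 12288)]

def assign_tier(iops_per_tb):
    """Return the cheapest tier whose SLO throttle covers the measured
    95th-percentile I/O density; anything above the top throttle stays
    on Platinum."""
    for name, slo in TIERS:
        if iops_per_tb <= slo:
            return name
    return "Platinum"

print(assign_tier(300), assign_tier(1500), assign_tier(9000))
# -> Silver Gold Platinum
```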
The flowchart in Figure 40 represents the logic of this workflow.
Figure 40) Use case 3 flowchart.

Figure 41) Use case 3 WFA workflow view.

The workflow view in Figure 41 is another indication of the power and the simplicity of OnCommand Workflow Automation. Although there is only a single row in the workflow, this row is executed against every nonroot volume in the environment through a feature called repeat rows. (The feature is represented by the curved arrow symbol next to the row indicator.) This feature uses a filter (discussed in section 2.3) to identify all the relevant resources that the workflow should run against.
The workflow can then be scheduled to run daily by using the built-in scheduler of WFA (Figure 42).
Figure 42) Use case 3 WFA scheduler.
Appendix A: Integrating OnCommand Manager Alerts with WFA When Monitoring Multiple Clusters
As Table 1 shows, when a volume-based alert in NetApp OnCommand Unified Manager (OCUM) is triggered, OCUM passes the identifying details of the relevant volume to the alert script. OCUM passes this information in the form of `svm_name:/volume_name` (the value of the `-eventSourceName` argument). What is not being passed to the alert script is the containing cluster. If a single cluster is being monitored, that information is not a concern. However, if more than one cluster is actively being monitored, you need a method to identify the affected cluster.
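Splitting the `svm_name:/volume_name` value into its two parts is trivial, as this Python illustration shows (the names are hypothetical; the appendix scripts do the same split in PowerShell and Perl):

```python
source_name = "svm1:/vol_data"   # example -eventSourceName value
svm_name, volume_name = source_name.split(":/")
print(svm_name, volume_name)  # -> svm1 vol_data
```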
This appendix explores one method to determine the affected cluster by using SQL queries against the OCUM database. These queries are based on the OCUM volume ID that is included in the arguments that are passed to the script (`-eventSourceID`). The goal is to find the OCUM object that represents the relevant volume, and from that object extract the volume, SVM, and cluster names that are needed to pass along to the WFA workflow.
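The shape of that lookup can be demonstrated with an in-memory SQLite stand-in (illustration only; the real queries run against the MySQL `ocum_report` schema on the OCUM server, and the table contents below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Minimal stand-ins for the OCUM report tables used by the query
conn.executescript("""
    CREATE TABLE cluster (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE svm     (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE volume  (id INTEGER PRIMARY KEY, name TEXT,
                          clusterId INTEGER, svmId INTEGER);
    INSERT INTO cluster VALUES (1, 'cluster1');
    INSERT INTO svm     VALUES (7, 'svm1');
    INSERT INTO volume  VALUES (42, 'vol_data', 1, 7);
""")
source_id = 42  # the -eventSourceID argument passed by OCUM
row = conn.execute("""
    SELECT volume.name, cluster.name, svm.name
    FROM volume, cluster, svm
    WHERE volume.id = ?
      AND cluster.id = volume.clusterId
      AND svm.id = volume.svmId
""", (source_id,)).fetchone()
print(row)  # -> ('vol_data', 'cluster1', 'svm1')
```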
This appendix discusses the alert script for use case 1 (automatic capacity management) and shows both PowerShell and Perl examples. If you use PowerShell, be sure to download and install Connector/Net (a fully managed ADO.NET driver for MySQL) on the WFA server from https://dev.mysql.com/downloads/connector/net/. If you use Perl, install the DBD::mysql driver (used by the Perl DBI package to access MySQL).
**Note:** The following blog post from TheTechArch is a great resource for querying the OCUM database by using PowerShell: [http://thetecharch.com/2016/03/querying-ocum-database-using-powershell-2/](http://thetecharch.com/2016/03/querying-ocum-database-using-powershell-2/).
Before you proceed with the alert script, create a database user in OCUM that can be used to query the `ocum_report` database:
1. Log in to OCUM and select Manage Users from the Administration drop-down menu (Figure 43).
Figure 43) OnCommand Unified Manager Manage Users drop-down menu.
2. In the Manage Users view, click the Add button and input the following information (Figure 44 and Table 6):
Figure 44) OnCommand Unified Manager: creating new database user form.
Table 6) Parameters for OCUM database user creation.
<table>
<thead>
<tr>
<th>Parameter</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Type</td>
<td>Database User</td>
</tr>
<tr>
<td>Name</td>
<td>Name of user (dbuser in our example)</td>
</tr>
<tr>
<td>Password</td>
<td>User password (Netapp1! in our example)</td>
</tr>
<tr>
<td>Role</td>
<td>Report Schema</td>
</tr>
</tbody>
</table>
3. After the user has been created, proceed with the updated alert scripts.
Following is a PowerShell script example (OCUM_WFA_Capacity_Event_with_SQL.ps1):
```powershell
# 2017-02-09 yaron@netapp.com
# OCUM_WFA_Capacity_Event_with_SQL.ps1
#
# This script is meant to be used as part of an automated self-healing process
# that remediates high volume capacity utilization scenarios. It is meant to be used
# in conjunction with OCUM 7.x Windows-based systems and OnCommand WFA 4.1.x.
#
# (c) 2017 NetApp Inc., All Rights Reserved
#
# Check the following link in case you run into SSL certificate issues
# https://d-fens.ch/2013/12/20/nobrainer-ssl-connection-error-when-using-powershell/
#
# CHANGE IF DESIRED PRIOR TO UPLOADING TO OCUM SERVER
$wfa_rest_server = "https://wfa.demo.netapp.com/rest"
$wfa_username = "admin"
$wfa_password = "Netapp1!"
$ocum_username = "dbuser"
$ocum_password = "Netapp1!"
$ocum_server = "192.168.0.74"
```
```powershell
# Function MySQL queries OCUM database
# usage: MySQL -Query <sql-query>
function MySQL {
    Param(
        [Parameter(
            Mandatory = $true,
            ParameterSetName = '',
            ValueFromPipeline = $true)
        ]
        [string]$Query
    )
    $MySQLAdminUserName = $ocum_username
    $MySQLAdminPassword = $ocum_password
    $MySQLDatabase = 'ocum_report'
    $MySQLHost = $ocum_server
    $ConnectionString = "server=" + $MySQLHost + ";port=3306;Integrated Security=False;uid=" + $MySQLAdminUserName + ";pwd=" + $MySQLAdminPassword + ";database=" + $MySQLDatabase
    Try {
        [void][System.Reflection.Assembly]::LoadFrom('C:\Program Files (x86)\MySQL\MySQL Connector Net 6.9.9\Assemblies\v4.5\MySql.Data.dll')
        $Connection = New-Object MySql.Data.MySqlClient.MySqlConnection
        $Connection.ConnectionString = $ConnectionString
        $Connection.Open()
        $Command = New-Object MySql.Data.MySqlClient.MySqlCommand($Query, $Connection)
        $DataAdapter = New-Object MySql.Data.MySqlClient.MySqlDataAdapter($Command)
        $DataSet = New-Object System.Data.DataSet
        $RecordCount = $DataAdapter.Fill($DataSet, "data")
        $DataSet.Tables[0]
    }
    Catch {
        Write-Host "ERROR : Unable to run query : $Query" $Error[0]
    }
    Finally {
        $Connection.Close()
    }
}
if ($args[6] -eq "Full") {
    # This is a "Volume Space Nearly Full" event
    $source_id = $args[10]
    $event_state = $args[16]
} else {
    # This is a "Volume Space Full" event
    $source_id = $args[9]
    $event_state = $args[15]
}
$event_id = $args[1]
# Ignore all non-new events
if (($event_state.ToString()).ToLower() -ne "new") {
    exit
}
# Extract Cluster, SVM and Volume names from OCUM database
$sql_query = "SELECT
    volume.name AS 'Volume',
    cluster.name AS 'Cluster',
    svm.name AS 'Svm'
FROM
    volume,
    cluster,
    svm
WHERE
    volume.id=$source_id
    AND cluster.id=volume.clusterId
    AND svm.id=volume.svmId"
$ocum_vol = MySQL -Query $sql_query
$cluster_name = $ocum_vol.Cluster
$svm_name = $ocum_vol.Svm
$volume_name = $ocum_vol.Volume
# Prepare WFA credentials
$securePassword = ConvertTo-SecureString $wfa_password -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($wfa_username, $securePassword)
# Find RESTful execution uri for WFA workflow
$restCommand = $wfa_rest_server + "/workflows?name=Resize_Volume_with_Data_Mobility"
try {
    $output = Invoke-RestMethod -Method Get -Uri $restCommand -Credential $cred
} catch {
    # An error occurred. Either workflow does not exist or credentials are wrong
    exit
}
$workflow_execution_uri = ($output.collection.workflow.link | ?{$_.rel -eq 'execute'}).href
# Initiate workflow execution
$body = @"
{
    "comments":"OCUM triggered workflow. Event ID: $event_id",
    "userInputValues":[
        {"key":"cluster_name", "value":"$cluster_name"},
        {"key":"svm_name", "value":"$svm_name"},
        {"key":"volume_name", "value":"$volume_name"}
    ]
}
"@
$output = Invoke-RestMethod -Method Post -Uri $workflow_execution_uri -Credential $cred -Body $body -ContentType "application/json"
# Find RESTful job monitoring link for WFA workflow
$job_status_uri = ($output.job.link | ?{$_.rel -eq 'self'}).href
if ($job_status_uri.Count -gt 1) { $job_status_uri = $job_status_uri[0] }
# Wait until workflow job either completes successfully or fails
do {
    Start-Sleep -Seconds 2
    $output = Invoke-RestMethod -Method Get -Uri $job_status_uri -Credential $cred
    $jobStatus = $output.job.jobStatus
} while (($jobStatus -ne "COMPLETED") -and ($jobStatus -ne "FAILED"))
```
And following is the comparable Perl script example (OCUM_WFA_Capacity_Event_with_SQL.pl):
```perl
# 2017-02-09 yaron@netapp.com
# OCUM_WFA_Capacity_Event_with_SQL.pl
# This script is meant to be used as part of an automated self-healing process
# that remediates high volume capacity utilization scenarios. It is meant to be used in
# conjunction with OCUM 7.x Windows-based systems and OnCommand WFA 4.1.x.
use REST::Client;
use JSON;
use MIME::Base64;
use DBI;
# Uncomment the next line if you run into SSL certificate issues
#$ENV{PERL_LWP_SSL_VERIFY_HOSTNAME}=0;
# CHANGE IF DESIRED PRIOR TO UPLOADING TO OCUM SERVER
my $wfa_rest_server = 'https://wfa.demo.netapp.com';
my $wfa_username = 'admin';
my $wfa_password = 'Netapp1!';
my $ocum_username = 'dbadmin';
my $ocum_password = 'Netapp1!';
my $ocum_server = 'ocum.demo.netapp.com';
# DO NOT CHANGE CODE BELOW THIS LINE
my ($source_id, $event_state);
if ($ARGV[6] eq 'Full') {
    # This is a "Volume Space Nearly Full" event
    $source_id = $ARGV[10];
    $event_state = $ARGV[16];
} else {
    # This is a "Volume Space Full" event
    $source_id = $ARGV[9];
    $event_state = $ARGV[15];
}
my $event_id = $ARGV[1];
# Ignore all non-new events
if ($event_state ne 'NEW') {
    exit;
}
# Extract Cluster, SVM and Volume names from OCUM database
my $sql_query = "SELECT
    volume.name AS 'Volume',
    cluster.name AS 'Cluster',
    svm.name AS 'Svm'
FROM
    volume,
    cluster,
    svm
WHERE
    volume.id=$source_id
    AND cluster.id=volume.clusterId
    AND svm.id=volume.svmId";
my $dsn = "DBI:mysql:ocum_report:" . $ocum_server;
my $attr = { PrintError=>0, # turn off error reporting via warn()
             RaiseError=>1  # report error via die()
};
my $dbh = DBI->connect($dsn, $ocum_username, $ocum_password, $attr);
my $sth = $dbh->prepare($sql_query);
$sth->execute();
my ($volume_name, $cluster_name, $svm_name);
while (my @row = $sth->fetchrow_array()) {
    $volume_name  = $row[0];
    $cluster_name = $row[1];
    $svm_name     = $row[2];
}
$sth->finish();
$dbh->disconnect();
# Find RESTful execution url for WFA workflow
my $headers = {Accept => 'application/json', Authorization => 'Basic ' . encode_base64($wfa_username . ':' . $wfa_password), 'Content-Type' => 'application/json'};
my $client = REST::Client->new();
$client->setHost($wfa_rest_server);
$client->GET('/rest/workflows?name=Resize_Volume_with_Data_Mobility', $headers);
my @response_json = @{ decode_json($client->responseContent()) };
my $workflow_execution_uri = '/rest/workflows/' . $response_json[0]->{'uuid'} . '/jobs';
# Initiate workflow execution
my $json_body = '{
    "comments":"OCUM triggered workflow. Event ID: ' . $event_id . '",
    "userInputValues": [
        {
            "key":"ClusterName",
            "value":"' . $cluster_name . '"
        },
        {
            "key":"SvmName",
            "value":"' . $svm_name . '"
        },
        {
            "key":"VolumeName",
            "value":"' . $volume_name . '"
        }
    ]
}';
# Find RESTful job monitoring link for WFA workflow
$client->POST($workflow_execution_uri, $json_body, $headers);
my $response = decode_json($client->responseContent());
# Wait until workflow job either completes successfully or fails
my $jobStatus = '';
do {
    sleep 2;
    $client->GET($response->{'jobId'} . '/jobs?status=COMPLETED');
    my $status = decode_json($client->responseContent());
    $jobStatus = $status->{'jobStatus'};
} while ($jobStatus ne 'COMPLETED' && $jobStatus ne 'FAILED');
```
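Both appendix scripts follow the same three-step REST pattern against WFA: look up the workflow by name, POST a JSON body of user inputs to start a job, then poll the job status. The helpers below sketch the first two steps in Python (illustration only; function names are hypothetical, and the host is the example server used throughout this TR):

```python
import json

WFA_REST = "https://wfa.demo.netapp.com/rest"  # example host from the scripts

def workflow_query_url(name):
    # Step 1: look up the workflow by name to learn its execution URI
    return WFA_REST + "/workflows?name=" + name

def execution_body(event_id, cluster, svm, volume):
    # Step 2: POST this JSON body to the execution URI to start a job
    return json.dumps({
        "comments": "OCUM triggered workflow. Event ID: %s" % event_id,
        "userInputValues": [
            {"key": "cluster_name", "value": cluster},
            {"key": "svm_name", "value": svm},
            {"key": "volume_name", "value": volume},
        ],
    })

# Step 3 (not shown): poll the job's status URI every few seconds until
# jobStatus is COMPLETED or FAILED.
print(workflow_query_url("Resize_Volume_with_Data_Mobility"))
```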
References
The following references are relevant to this TR:
- https://library.netapp.com/ecm/ecm_download_file/ECMLP2597424
- OnCommand Workflow Automation: REST API Primer
  https://library.netapp.com/ecm/ecm_download_file/ECMLP2597425
- OnCommand Storage Management Community
- TR-4438: IT as a Service—Simplifying Application and Storage Provisioning Using NetApp OnCommand Workflow Automation and System Center Orchestrator 2012 R2
  www.netapp.com/us/media/tr-4438.pdf
- www.netapp.com/us/media/tr-4103.pdf
- TR-4217: Automating and Orchestrating the Software-Defined Data Center: Using NetApp and VMware to Build Your Cloud
- TR-4217: ONTAP Storage Service Deployment Guide—Deploying Systems Based on a Service Design Workshop
- TR-4572: The NetApp Solution for Ransomware
  www.netapp.com/us/media/tr-4572.pdf
- OnCommand Management Software and Management Integration Tools
- Make Storage & Data Management Both Easier to Use & Scale with NetApp Service Level Manager
Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines the product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.
Copyright Information
Copyright © 1994–2017 NetApp, Inc. All rights reserved. Printed in the U.S. No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark Information
NETAPP, the NETAPP logo, and the marks listed at http://www.netapp.com/TM are trademarks of NetApp, Inc. Other company and product names may be trademarks of their respective owners.
|
{"Source-Url": "https://www.netapp.com/us/media/tr-4585.pdf", "len_cl100k_base": 15586, "olmocr-version": "0.1.53", "pdf-total-pages": 45, "total-fallback-pages": 0, "total-input-tokens": 115510, "total-output-tokens": 17665, "length": "2e13", "weborganizer": {"__label__adult": 0.00032448768615722656, "__label__art_design": 0.00063323974609375, "__label__crime_law": 0.0005888938903808594, "__label__education_jobs": 0.0017423629760742188, "__label__entertainment": 0.0002300739288330078, "__label__fashion_beauty": 0.00017344951629638672, "__label__finance_business": 0.00249481201171875, "__label__food_dining": 0.00024127960205078125, "__label__games": 0.0010900497436523438, "__label__hardware": 0.0061492919921875, "__label__health": 0.0002701282501220703, "__label__history": 0.0003905296325683594, "__label__home_hobbies": 0.00025773048400878906, "__label__industrial": 0.0012865066528320312, "__label__literature": 0.00030541419982910156, "__label__politics": 0.00035881996154785156, "__label__religion": 0.0004036426544189453, "__label__science_tech": 0.135498046875, "__label__social_life": 0.00014269351959228516, "__label__software": 0.3642578125, "__label__software_dev": 0.482421875, "__label__sports_fitness": 0.0001766681671142578, "__label__transportation": 0.0003690719604492187, "__label__travel": 0.00019633769989013672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 70053, 0.01554]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 70053, 0.13358]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 70053, 0.83925]], "google_gemma-3-12b-it_contains_pii": [[0, 398, false], [398, 1731, null], [1731, 4029, null], [4029, 5217, null], [5217, 6880, null], [6880, 8697, null], [8697, 11760, null], [11760, 14211, null], [14211, 16327, null], [16327, 17416, null], [17416, 19508, null], [19508, 22473, null], [22473, 24349, null], [24349, 26226, 
Facing the Truth: Benchmarking the Techniques for the Evolution of Variant-Rich Systems
Daniel Strüber¹, Mukelabai Mukelabai¹, Jacob Krüger², Stefan Fischer³, Lukas Linsbauer³, Jabier Martinez⁴, Thorsten Berger¹
¹Chalmers | University of Gothenburg, Sweden, ²University of Magdeburg, Germany, ³JKU Linz, Austria, ⁴Tecnalia, Spain
ABSTRACT
The evolution of variant-rich systems is a challenging task. To support developers, the research community has proposed a range of different techniques over the last decades. However, many techniques have not been adopted in practice so far. To advance such techniques and to support their adoption, it is crucial to evaluate them against realistic baselines, ideally in the form of generally accessible benchmarks. To this end, we need to improve our empirical understanding of typical evolution scenarios for variant-rich systems and their relevance for benchmarking. In this paper, we establish eleven evolution scenarios in which benchmarks would be beneficial. Our scenarios cover typical lifecycles of variant-rich systems, ranging from clone & own to adopting and evolving a configurable product-line platform. For each scenario, we formulate benchmarking requirements and assess its clarity and relevance via a survey with experts in variant-rich systems and software evolution. We also surveyed the existing benchmarking landscape, identifying synergies and gaps. We observed that most scenarios, despite being perceived as important by experts, are only partially or not at all supported by existing benchmarks—a call to arms for building community benchmarks upon our requirements. We hope that our work raises awareness for benchmarking as a means to advance techniques for evolving variant-rich systems, and that it will lead to a benchmarking initiative in our community.
CCS CONCEPTS
• Software and its engineering → Software product lines; Software evolution;
KEYWORDS
software evolution, software variability, product lines, benchmark
1 INTRODUCTION
Evolving a variant-rich software system is a challenging task. Based on feature additions, bugfixes, and customizations, a variant-rich system evolves in two dimensions: (1) in its variability when new variants are added over time, and (2) in each individual variant, as variants are continuously modified. From these dimensions, various evolution scenarios arise. For example, variability may be managed using clone & own [25], that is, by copying and modifying existing variants. In this case, changes performed on one variant are often propagated to other variants (variant synchronization). When the number of variants grows, a project initially managed using clone & own might be migrated to an integrated product-line platform [8, 13, 50], comprising a variability model [19, 38] and implementation assets with variability mechanisms (e.g., preprocessor annotations or composable modules). In this case, all assets in all variants that correspond to a given feature must be identified (feature location). Supporting developers during such scenarios requires adequate techniques, many of which have been proposed in recent years [2, 3, 7, 8, 10, 20, 27, 29, 37, 39, 48, 50, 60, 72, 77, 79, 87, 90, 94, 95].
The maturity of a research field depends on the availability of commonly accepted benchmarks for comparing new techniques to the state of the art. We define a benchmark as a framework or realistic dataset that can be used to evaluate the techniques of a given domain. Realistic means that the dataset should have been initially created by industrial practitioners; it may be augmented with meta-data that can come from researchers. In the case of evolving variant-rich systems, despite the progress on developing new techniques and tools, evaluation methodologies are usually determined ad hoc. To evaluate available techniques in a more systematic way, a common benchmark set has yet to emerge.
Inspired by a theory of benchmarks in software engineering [91], we believe that the community can substantially move forward by setting up a common set of benchmarks for evaluating techniques for evolving variant-rich systems. With this goal in mind, we follow typical recommendations for benchmark development [91]: to lead the effort with a small number of primary organizers, to build on established research results, and to incorporate community feedback to establish a consensus on the benchmark. As such, our long-term goal is to establish a publicly available benchmark set fulfilling the requirements of successful benchmarks [91]: clarity, relevance, accessibility, affordability, solvability, portability, and scalability.
In this paper, as a step towards this long-term goal, we lay the foundations for a benchmark set for evaluating techniques for evolving variant-rich systems. We conceive the scenarios that the benchmark set needs to support, show the relevance and clarity of our descriptions based on community feedback, and survey the state of the art of related datasets to identify potential benchmarks.
We make the following contributions:
- Eleven scenarios for benchmarking the techniques that support developers when evolving variant-rich systems (Sec. 2), including sub-scenarios, requirements, and evaluation metrics;
- A community survey with experts on software variability and evolution, focusing on the clarity and relevance of our scenarios (Sec. 3) and relying on an iterative, design-science approach;
- A survey of existing benchmarks for the scenarios (Sec. 4), selected upon our experience and the community survey;
- An online appendix with further information (e.g., links to benchmarks) and a replication package with the questionnaire and its data: https://bitbucket.org/easelab/evobench/
We observed that various scenarios are only partially or not at all supported by existing benchmarks. We also identified synergies between scenarios and available benchmarks, based on the overlap of required benchmarking assets. Based on the positive feedback regarding the clarity and relevance of our benchmark descriptions, we believe that our work paves the way for a consolidated benchmark set for techniques used to evolve variant-rich systems.
2 EVOLUTION SCENARIOS
We establish eleven scenarios for techniques that support developers during the evolution of variant-rich systems. For each scenario, we argue how the relevant techniques can be evaluated with a benchmark. We introduce each scenario with a description, a list of more detailed sub-scenarios, a list of requirements for effective benchmarks, and a list of metrics for comparing the relevant techniques.
2.1 Methodology
To select the scenarios and construct the descriptions, we followed an iterative process involving all authors. We took inspiration from our experience as experts in software product line research, our various studies of evolution in practice [12, 13, 15, 17, 34, 35, 37, 42, 54, 56, 59, 67, 73, 74], and the mapping studies by Assunção et al. [8] and Laguna and Crespo [48]. Based on these sources, an initial list of scenarios emerged in a collaborative brainstorming session. Each scenario was assigned to a responsible author who developed an initial description. Based on mutual feedback, the authors refined the scenario descriptions and added, split, and merged scenarios and their descriptions. Each scenario description was revised by at least three authors. Eventually, a consensus on all scenario descriptions was reached. Afterwards, we performed a community survey to assess the clarity and relevance of the descriptions. The final version of the descriptions, as shown below, incorporates feedback from the survey (see the methodology description in Sec. 3).
2.2 Running Example
As a running example for the evolution of variant-rich systems, consider the following typical situation from practice.
Initially, a developer engineers, evolves, and maintains a single system, for instance, using a typical version-control system (e.g., Git). At some point, a customer requests a small adaptation. The developer reacts by adding a configuration option and variation points (e.g., based on if statements) in the code. Later, another customer requests a more complex adaptation. The developer reacts by copying the initial variant (i.e., creating a clone) of the system and adapting it to the new requirements (a.k.a., clone & own). Over time, further customers request specific adaptations and the developer uses either of these two strategies.
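The two reuse strategies in this running example can be illustrated with a small, hypothetical code sketch (all names and the tax logic are invented for illustration):

```python
# Strategy 1: a configuration option guarding a variation point.
def render_invoice(amount: float, config: dict) -> str:
    """Single variant with a variation point controlled by an option."""
    lines = [f"Total: {amount:.2f}"]
    if config.get("TAX_BREAKDOWN"):  # variation point (if statement)
        lines.append(f"VAT (19%): {amount * 0.19:.2f}")
    return "\n".join(lines)

# Strategy 2 (clone & own): the second customer's variant starts as a
# copy of the first and is then modified independently.
def render_invoice_customer_b(amount: float) -> str:
    return f"TOTAL DUE: {amount:.2f} (net of tax)"
```

Both strategies solve the immediate request, but only the first keeps a single code base; the second creates a clone that must be synchronized from now on.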
When the number of variants grows, this ad hoc reuse becomes inefficient. Namely, it becomes challenging and error-prone to identify which existing variant to clone and which parts (i.e., features) of other variants to incorporate in the new variant. The same applies to maintenance, as it is not clear which variants are affected by a bug or update. Any bug or update then needs to be fixed for each existing variant individually. Furthermore, an increasing number of configuration options challenges developers through intricate dependencies that need to be managed; and variation points clutter the source code, challenging program comprehension.
2.3 Scenario Descriptions
We now introduce our scenarios based on the running example, providing descriptions, sub-scenarios, benchmarking requirements and evaluation metrics. We focus on evaluation metrics that are custom to the scenario at hand. Some additional characteristics of interest, such as performance and usability, are important in all scenarios and should be supported by adequate metrics as well. Assessing the correctness or accuracy of a technique may require a ground truth, a curated, manually produced or (at least) checked set of assets assumed to be correct. Some scenarios involve the design choice of picking a metric from a broader class of metrics (e.g., similarity metrics); in these cases we specify only the class.
We visualize each scenario by means of a figure. Each figure provides a high-level overview of the respective scenario, representing the involved assets with boxes, techniques with rounded boxes, relationships with dashed arrows, and actions with solid arrows. In cases where a scenario has multiple sub-scenarios with varying kinds of assets, we show the superset of all required assets from all sub-scenarios. Each figure includes a large arrow on its left-hand side, indicating the direction of system evolution.
**Variant Synchronization (VS).** When evolving a variant-rich system based on clone & own, the developer frequently needs to synchronize variants. Bugfixes or feature implementations that are performed in one variant need to be propagated to other variants—a daunting task when performed manually. An automated technique (illustrated in Fig. 1) could facilitate this process by propagating changes or features contained in a variant [77, 78].
**Sub-scenarios**
- **VS1:** Propagation of changes across variants
- **VS2:** Propagation of features across variants
**Benchmark requirements**
**Evaluation metrics**
- Accuracy: A metric for measuring the similarity between ground truth and computed variant implementation
Variant Integration (VI). Due to the drawbacks associated with clone & own [6, 25], a developer may deem it beneficial to manage the variant-rich system as a product-line platform. Such a platform comprises a variability model (e.g., feature [38] or decision model [19]) and implementation assets with a variability mechanism (e.g., preprocessor annotations or feature modules) that supports the on-demand generation of product variants. From the decision to move towards a product-line platform, two major variant integration tasks (a.k.a., extractive product-line adoption [43]) arise (illustrated in Fig. 2).
The first task is to enable the transition from the cloned variants to a platform [8]. Available techniques for this purpose take as input a set of products and produce as output a corresponding product-line platform [69]. Yet, further evolving the resulting platform can be challenging due to its variability—assets may be difficult to comprehend and modify. Therefore, the second task is to support extending and evolving a product line by means of individual, concrete product variants [51, 94]. This allows engineers to focus on concrete products during evolution to then feed the evolved product back into the platform to evolve it accordingly. Such techniques can be supported by variation control systems [51, 94] and approaches for incremental product-line adoption [6] from cloned variants.
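As a rough sketch of VI1 in the minimal two-variant case, the following hypothetical function merges two cloned variants into a single annotated platform using a textual diff, marking variant-specific lines with preprocessor-style annotations. Real integration techniques work n-way and on richer asset types; this only illustrates the idea:

```python
import difflib

def integrate(variant_a, variant_b, feature_a="A", feature_b="B"):
    """Merge two cloned variants (lists of code lines) into one
    annotated platform (VI1, two-variant case)."""
    platform = []
    matcher = difflib.SequenceMatcher(a=variant_a, b=variant_b)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":  # common code: no annotation needed
            platform.extend(variant_a[i1:i2])
        else:               # variant-specific code: annotate per feature
            if i2 > i1:
                platform.append(f"#if {feature_a}")
                platform.extend(variant_a[i1:i2])
                platform.append("#endif")
            if j2 > j1:
                platform.append(f"#if {feature_b}")
                platform.extend(variant_b[j1:j2])
                platform.append("#endif")
    return platform
```

A benchmark for this scenario would compare such an output against the ground-truth platform after correct integration.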
Sub-scenarios
- VI1: Integrate a set of variants into the product-line platform
- VI2: Integrate changes to variants into the product-line platform
Benchmark requirements
- VI1: Set of individual variants
- VI2: Set of revisions of a product-line platform
- VI1/2: Product-line platform after correct integration (ground truth)
Evaluation metrics
- Accuracy: A metric for measuring the similarity between the ground truth and the computed product-line platform
Feature Identification and Location (FIL). Both as an aid to better support clone & own development and as preparation for migrating to a product-line platform, developers may wish to determine which features exist in the system and which features are implemented in which assets (e.g., source code, models, requirements, or other types of artifacts). For this purpose, they may rely on feature identification and feature location techniques (illustrated in Fig. 3). Feature identification aims to determine which features exist, whereas feature location aims to define the relationship of features to assets.
Feature identification is useful when the knowledge about features is only given implicitly in the assets, rather than explicitly as in a feature model. The objective is to analyze assets to extract candidate feature names. This can involve techniques to study domain knowledge or vocabulary of the considered domain, workshops to elicit features from experts [42], or automated techniques [61, 70, 100].
When done manually, feature location is a time-consuming and error-prone activity [45]. It has a long tradition for maintenance tasks (e.g., narrowing the scope for debugging code related to a feature), but is also highly relevant for identifying the boundaries of a feature at the implementation level to extract it as a reusable asset during re-engineering [47]. In this sense, it is related to traceability recovery. Feature location is usually expert-driven in industrial settings, however, several techniques based on static analysis, dynamic analysis, and information retrieval, or hybrid techniques, exist [8].
Sub-scenarios
- FIL1: Feature identification in single variants
- FIL2: Feature identification in multiple variants
- FIL3: Feature location in single systems
- FIL4: Feature location in multiple variants
Benchmark requirements
- FIL1/2/3/4: Assets representing variants, such as: implementation code, requirements, documentation, issue tracker data, change logs, version-control history
- FIL1/2/3/4: List of features (ground truth for FIL1/2)
- FIL3/4: Feature locations in sufficient granularity, such as files, folders, code blocks (ground truth)
Evaluation metrics
- Accuracy: Precision and Recall. Some authors in the literature use metrics, such as Mean Reciprocal Rank, that assess the accuracy of a ranking of results [18, 99].
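The accuracy metrics named above can be computed as follows (a minimal sketch; the asset identifiers are hypothetical):

```python
def precision_recall(retrieved, relevant):
    """Accuracy of a feature-location result against a ground truth:
    retrieved = assets reported by the technique,
    relevant  = assets in the ground truth."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    return precision, recall

def mean_reciprocal_rank(rankings, relevant):
    """MRR over ranked result lists, one per queried feature: the
    reciprocal rank of the first relevant asset, averaged."""
    total = 0.0
    for ranking, rel in zip(rankings, relevant):
        rr = 0.0
        for rank, asset in enumerate(ranking, start=1):
            if asset in rel:
                rr = 1.0 / rank
                break
        total += rr
    return total / len(rankings) if rankings else 0.0
```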
Constraints Extraction (CE). In a variant-rich system, some features may be structurally or semantically related to other features. Initially, this information is not explicitly formalized, which makes it harder for the developer to understand these relationships. To this end, the developer may use an automated constraints extraction technique (illustrated in Fig. 4).
Constraints extraction is a core prerequisite for feature-model synthesis. However, even if the goal is not to obtain a model, explicitly knowing the constraints can help checking the validity of platform configurations, reducing the search space for combinatorial interaction testing (CIT, see below), and documenting features with their dependencies. The benchmark can be used to evaluate the extraction of constraints from various inputs, specifically, the product-line implementation (either code of individual variants or of a platform, [68, 69]), a set of example configurations [22], or natural-language artifacts, such as documentation. Over the development history, when a feature model exists, the constraints in the feature model would be annotated with their source (e.g., a def-use dependency between function definition and function call or domain dependency from hardware [69]). Considering cloned systems, constraints extraction can also be helpful to compare the variability that is implemented in different variants.
**Sub-scenarios**
- CE1: Constraints extraction from example configurations
- CE2: Constraints extraction from implementation code
- CE3: Constraints extraction from natural-language assets
**Benchmark requirements**
- CE1: Example configurations
- CE2: Implementation code of one or several variants
- CE3: Natural-language assets (e.g., documentation)
**Evaluation metrics**
- Accuracy: Similarity of configuration spaces (likely syntactic approximation; semantic comparison is a hard problem)
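For CE1, mining candidate "requires" constraints that hold in every example configuration can be sketched as follows. Since example sets underapproximate the configuration space, the result is a set of candidates only, not proven constraints:

```python
from itertools import permutations

def mine_implications(configs, features):
    """Mine candidate constraints 'f1 requires f2' that hold in all
    example configurations (CE1).
    configs: list of sets of selected features."""
    constraints = []
    for f1, f2 in permutations(features, 2):
        # f1 must occur at all, and every config selecting f1 selects f2
        if any(f1 in c for c in configs) and all(
                f2 in c for c in configs if f1 in c):
            constraints.append((f1, "requires", f2))
    return constraints
```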
**Feature Model Synthesis (FMS).** To maintain an overview of features and their relationships, developers may want to create a feature model. Feature model synthesis (illustrated in Fig. 5) is an automated technique that can provide an initial feature model candidate. As input, it can rely on a given set of configurations, a set of variants (together with a list of features that each variant implements), or a product matrix, producing a feature model from which these assets can be derived.
**Sub-scenarios**
- FMS1: Feature model synthesis from example configurations
- FMS2: Feature model synthesis from an implementation
- FMS3: Feature model synthesis from a product matrix
**Evaluation metrics**
- Accuracy: Similarity of configuration spaces
- Recall: Coverage of configuration spaces
- Precision: Correctness of configuration spaces
**Architecture Recovery (AR).** When migrating cloned variants to a product-line platform, the developer may want to define a reference architecture for the resulting platform, using architectural models. Architectural models provide a different abstraction of the system structure than feature models, focusing on details and dependencies of implemented classes. Architecture recovery techniques (illustrated in Fig. 6) can extract architectural models automatically.
Various works [26, 41, 84, 92] focus on reverse engineering and comparing architectures from cloned variants to propose architectural models as a starting point for product-line adoption. Such models can include class, component, and collaboration diagrams that may be refined later on. For instance, the initial models may be used as input for a model-level variant integration technique, producing a platform model with explicit commonality and variability. Additional use cases include analyzing and comparing models to identify commonality and variability, or performing an automated analysis based on models.
**Sub-scenarios**
- AR1: Architecture extraction from a configurable platform
- AR2: Architecture extraction from a set of variants
**Evaluation metrics**
- Accuracy: Similarity of extracted to ground truth models
Transformations (TR). To reduce manual effort during evolution tasks, such as refactoring or synchronization of multiple dependent assets in a variant-rich system, the developer may rely on transformation techniques. Transformation techniques are used to change system assets in an automated way. Tool support ranges from lightweight refactoring tools in IDEs to advanced model transformation languages with dedicated execution engines. Model transformations are used for manifold practical purposes, including translation, migration, and synchronization of assets [55].
When transforming a product-line platform (illustrated in Fig. 7), three sub-scenarios arise: First, to refactor the platform, improving its structure while behavior preservation is ensured for each variant [82]. Second, to partially refactor the platform [72] in such a way that only a controlled subset of all variants is changed. Third, to lift a given transformation from the single-product case to the platform, changing all variants consistently [80].
**Sub-scenarios**
- TR1: Refactoring of a product-line platform
- TR2: Partial refactoring of a product-line platform
- TR3: Lifting of a model transformation to a product-line platform
**Benchmark requirements**
- TR1/2: Product-line platform with feature model and implementation code
- TR3: Product-line platform with feature model and implementation model
- TR1/2/3: Transformation specification; for example, reference implementation
- TR1/2/3: Transformed implementation (ground truth)
**Evaluation metrics**
- Correctness: Number of errors
- Conciseness: Number of elements or lines of code of the given transformation
**Functional Testing (FT).** After evolving the variant-rich system, it is important to ensure it still behaves in the expected way. For instance, the variants that were available before the evolution should still work after evolving the system. Regression testing aims to identify faults that may arise after the system has been changed and functionality no longer works as before. Functional testing of variable software (illustrated in Fig. 8) adds challenges compared to conventional software testing, due to the variability that can influence the functionality of the variants.
For a product-line platform, we can divide testing into two phases: First, domain testing of common parts of the system. Second, application testing of variant-specific parts and interactions [24, 49]. In the case of clone & own, we can only do application testing for individual variants. To reduce testing effort, existing techniques aim to reuse test assets as much as possible. Assets from domain testing are reused in application testing, while trying to only test parts that are specific to selected variants to avoid redundancies. Similarly, it is useful to avoid redundancies after the evolution of the system, to test only parts relevant for the changes that have been applied. Moreover, for application testing it is unrealistic to test all possible variants. The most common technique used for the selection of variants is Combinatorial Interaction Testing (CIT), which identifies a subset of variants where interaction faults are most likely to occur, based on some coverage criteria [23]. Finally, evolution potentially makes some test cases outdated, because they no longer fit the evolved system. In such cases, system and tests must co-evolve [44].
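The CIT selection mentioned above can be sketched for boolean options as a greedy covering procedure. This brute-forces all configurations per round, so it is only feasible for a handful of features, and it ignores feature-model constraints that real CIT tools respect:

```python
from itertools import combinations, product

def pairwise_sample(features):
    """Greedily select configurations until every pair of boolean
    features is covered in all four on/off value combinations."""
    uncovered = {(f1, v1, f2, v2)
                 for f1, f2 in combinations(features, 2)
                 for v1, v2 in product([True, False], repeat=2)}
    selected = []
    while uncovered:
        best, best_gain = None, -1
        # pick the configuration covering the most uncovered pairs
        for values in product([True, False], repeat=len(features)):
            cfg = dict(zip(features, values))
            gain = sum(1 for (f1, v1, f2, v2) in uncovered
                       if cfg[f1] == v1 and cfg[f2] == v2)
            if gain > best_gain:
                best, best_gain = cfg, gain
        selected.append(best)
        uncovered = {t for t in uncovered
                     if not (best[t[0]] == t[1] and best[t[2]] == t[3])}
    return selected
```

For three boolean features this typically selects far fewer configurations than the eight possible ones while still covering all twelve pairwise value combinations.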
**Sub-scenarios**
- FT1: Test generation for domain testing
- FT2: Test generation for application testing
- FT3: Test co-evolution
**Evaluation metrics**
- Efficiency: Number of faults detected in relation to number of known faults
- Test effort: Number of tested variants, number of executed tests, execution time of tests only if all tests are executed on the same system, reuse of test assets
**Analysis of Non-Functional Properties (ANF).** Various non-functional or quality properties can be important for variant-rich systems, for example, performance in a safety-critical system [33], memory consumption in an embedded system with resource limitations [32], and usability aspects in human-computer interaction systems [58]. Therefore, the analysis of non-functional properties in variant-rich systems (illustrated in Fig. 9) is crucial [67], as constraints on non-functional properties can be violated when the system evolves.
Developers would like to know the effect of specific features and feature interactions on the investigated quality property, particularly to identify possible improvements or regressions when changes were introduced. Such effects can be captured using a property influence model for the quality property under study, for instance, a performance influence model in the case of Siegmund et al. [89]. Also, an important analysis scenario is to identify optimal configurations that maximize one or multiple quality criteria while satisfying certain quality constraints [90]. This analysis is relevant for evolution when trying to balance various conflicting quality properties and understanding their relationships and trade-offs [76]. To this end, an inter-relationship model can be derived by analyzing the pareto front obtained during multi-criteria optimization. The considered analyses can be expensive, not only because of the combinatorial explosion in large systems, but also because computing non-functional properties can be a resource-intensive task.
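As a deliberately simplified sketch of ANF1, a per-feature influence can be roughly estimated from measured configurations by a difference of means. This ignores feature interactions, which approaches such as the performance-influence models of Siegmund et al. [89] additionally capture:

```python
def feature_influence(measurements):
    """Rough per-feature influence on a quality property (ANF1):
    difference between the mean measured value of configurations
    with and without the feature.
    measurements: list of (selected_features: set, value: float)."""
    features = set().union(*(cfg for cfg, _ in measurements))
    influence = {}
    for f in features:
        with_f = [v for cfg, v in measurements if f in cfg]
        without_f = [v for cfg, v in measurements if f not in cfg]
        if with_f and without_f:  # feature must vary across samples
            influence[f] = (sum(with_f) / len(with_f)
                            - sum(without_f) / len(without_f))
    return influence
```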
Sub-scenarios
- ANF1: Analysis of impacts of features and feature interactions on quality properties
- ANF2: Optimization of configurations towards given quality criteria
- ANF3: Analysis of trade-offs and relationships among non-functional properties
Benchmark requirements
- ANF1/2/3: Feature model
- ANF1/2/3: Quality information, either given by annotations (e.g., extended feature models [11]), or by a method to calculate or estimate for a given product the quality metrics under study
- ANF1: Reference property influence model (ground truth)
- ANF2: Reference configuration (ground truth)
- ANF3: Reference inter-relationship model (ground truth)
Evaluation metrics
- Accuracy: Similarity between computed and reference model (ANF1/3), fitness of computed configuration in comparison to reference configuration (ANF2)
Visualization (VZ). To facilitate incremental migration [6] of clone & own-based variants to a product-line platform, the developer may want to visually inspect relations between features and implementation assets. Such visual inspection can be supported by visualization techniques (illustrated in Fig. 10).
During product-line engineering, visualizing variability in software assets can be useful for scenarios, such as product configuration [71, 76], testing (e.g., pairwise testing) [53], and constraint discovery [62]. Andam et al. [5] propose several feature-oriented views that exploit feature annotations [37] embedded by developers in the source code during development for tracing feature locations. A benchmark could be used to evaluate the effectiveness of several visualization techniques addressing the same sub-scenario. The main goal of benchmarking is to assess developer performance when using different techniques, which requires experimentation with human participants on selected development tasks.
Sub-scenarios
- VZ1: Visualizations for feature evolution and maintenance
- VZ2: Visualizations for constraint discovery
- VZ3: Visualizations for feature interaction assessment
Benchmark requirements
- VZ1/2/3: Implementation code with feature locations (preferably embedded feature traceability annotations, instead of only variability annotations for optional parts of source code)
- VZ1/2/3: Scenario-related tasks for developers, such as code comprehension and bug-finding tasks, based on generated visualizations
Evaluation metrics
- Developer performance: correctness, completion time in scenario-related tasks
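As a sketch of how embedded feature annotations can drive such feature-oriented views, the following snippet computes a simple scattering degree (the number of files in which a feature is annotated). The annotation syntax `// &feature: Name` and the example files are hypothetical; actual tools define their own marker format:

```python
import re
from collections import defaultdict

# Hypothetical annotation marker; real feature-dashboard tools define their own.
ANNOTATION = re.compile(r"//\s*&feature:\s*(\w+)")

def scattering_degrees(files):
    """Map each feature to the number of files it is annotated in."""
    locations = defaultdict(set)
    for name, text in files.items():
        for feature in ANNOTATION.findall(text):
            locations[feature].add(name)
    return {feature: len(paths) for feature, paths in locations.items()}

sources = {
    "weather.java": "// &feature: Weather\nclass Weather {}",
    "ui.java": "// &feature: Weather\n// &feature: Maps\nclass Ui {}",
}
print(scattering_degrees(sources))  # {'Weather': 2, 'Maps': 1}
```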
Co-Evolution of Problem Space and Solution Space (CPS). After migrating the variant-rich system to a product-line platform, the developer has to evolve both the problem space (feature model) and the solution space (assets, such as architecture models and code) to further develop the system. Evolving the solution space first can lead to outdated feature models that are inconsistent with the implementation. Evolving the problem space first limits the effects that changes to the implementation are allowed to have. To address these issues, an automated technique (illustrated in Fig. 11) may recommend co-evolution steps to keep both in sync.
For instance, when evolving the solution space first, the technique could extract updated feature dependencies (e.g., an additional dependency on another feature) based on their modified implementation (e.g., due to an additional method call) and suggest modifications to the problem space that reflect the changes made to the solution space. An important property is that problem space and solution space are consistent after every evolution step.
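The recommendation step described above (deriving an additional "requires" dependency from a new cross-feature method call) can be illustrated on a toy code model. The ownership map and call lists below are hypothetical:

```python
# Toy solution-space model: which feature owns each method, and which methods
# each feature's code calls. All names are illustrative only.
owner = {"encrypt": "Encryption", "compress": "Compression", "log": "Logging"}
calls = {"Encryption": ["compress"], "Logging": []}

def suggest_requires(owner, calls):
    """Suggest problem-space 'requires' edges implied by cross-feature calls."""
    suggestions = set()
    for feature, callees in calls.items():
        for callee in callees:
            target = owner.get(callee)
            if target and target != feature:
                suggestions.add((feature, target))
    return suggestions

print(suggest_requires(owner, calls))  # {('Encryption', 'Compression')}
```

A co-evolution technique would propose adding the suggested constraint to the feature model, keeping both spaces consistent after the evolution step.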
Sub-scenarios
- CPS1: Co-evolving the solution space based on problem space evolution
- CPS2: Co-evolving the problem space based on solution space evolution
Benchmark requirements
3 COMMUNITY SURVEY
To develop benchmarks, Sim et al. [91] suggest that incorporating community feedback is essential to establish consensus. We followed this recommendation by performing a questionnaire survey with members from the community on software variability and evolution. To gather feedback on the clarity and relevance of our scenario descriptions, two crucial quality criteria for a successful benchmark [91], our survey focused on two research questions:
- RQ1: How clear are our scenario descriptions?
- RQ2: How relevant are the described scenarios?
In the following, we report the details on our methodology, the results, and threats to validity.
3.1 Methodology
We performed our questionnaire survey in March 2019. The participants for our survey were recruited from two sources: First, we contacted all participants (excluding ourselves) of a Dagstuhl seminar on variability and evolution, the two most relevant research areas (https://dagstuhl.de/en/program/calendar/semhp/?semnr=19191). Second, we contacted authors of recent papers on the same topic. We invited 71 individuals, 41 of whom were Dagstuhl participants. A total of 20 individuals completed our survey in the given timeframe.
Our questionnaire comprised three parts. First, we presented the general context of our benchmark, including the running example description we introduced in Sec. 2.2. Second, we described the eleven scenarios that we presented in Sec. 2. For each, we included the textual description as well as the list of sub-scenarios. We asked the participants to rate the clarity (using a 5-point Likert scale) of each scenario description (RQ1) with the question: To which extent do you agree that the scenario is clearly described with respect to its usage context and purpose for benchmarking? Then, we asked the participants to assess the relevance of each overall scenario and its sub-scenarios (RQ2) with the question: To which extent do you agree that supporting the following sub-scenarios is important? To assess the completeness of our descriptions, we asked the participants to name relevant sub-scenarios not yet considered. Finally, as a prerequisite for our survey of benchmarks (cf. Sec. 4), we asked the participants to name relevant benchmarks they were aware of. A replication package with the questionnaire and all data can be found at: https://bitbucket.org/easelab/evobench/.
The initial responses to our survey pointed out a number of shortcomings in the scenario descriptions with respect to clarity. We used these responses to revise the questionnaire after the first 12 responses, presenting an improved version of the scenario descriptions to the remaining eight participants. This intervention is justified by the methodological framework of design science [75], which emphasizes the continuous improvement of research artifacts based on actionable feedback, thus presenting a best-effort approach. The most significant change was to remove two sub-scenarios (one from the variant synchronization and one from the transformation scenario). In other cases, we reworded the scenario descriptions to add more explanations, examples, and avoid potentially confusing wording. To make sure that our revision indeed led to an improvement, we checked the clarity scores after the revision. We found that the clarity scores improved in all cases.
3.2 Results
Figure 12 provides an overview of the results. For each scenario, we show the distribution of answers to our questions about clarity (RQ1) and relevance (RQ2). We further explain the results based on the textual feedback provided along with the answers.
RQ1: Clarity. For all scenarios, a majority of the participants gave a positive score for clarity. A ratio between 55 % and 90 % gave an agree or strongly agree. The scenario receiving the most negative scores (21 %) was variant synchronization. From the textual feedback provided for this scenario, we observed that several participants struggled to understand a sub-scenario related to the classification of changes into either evolutionary or functional. For example, one participant stated that “it is not entirely clear how an evolutionary change differs from a functional one.” After we removed this sub-scenario and its description in the revision, we found that 86 % of the remaining participants gave a positive score. For the transformation scenario, we observed the same increase of positive scores (to 86 %) after we removed a sub-scenario related to the replacement of the used variability mechanism. For the other scenarios with comparatively many neutral or negative answers, we did not find any repeated issues occurring in the textual explanations.
RQ2: Relevance. A majority of participants (between 55 % and 95 %) assessed the relevance of each scenario positively. Interestingly, despite the lower scores for clarity, variant synchronization is among the two scenarios deemed relevant by 95 % of all participants. To study this discrepancy further, we analyzed the scores per sub-scenario. We found that most participants considered the sub-scenario that we removed in the revision (classify changes, 33 % positive responses) less relevant than the remaining variant synchronization sub-scenarios. Likewise, transformations attracted 100 % positive scores for overall relevance after we removed the least relevant sub-scenario (exchange variability mechanism, 33 % positive responses). In other cases with comparatively fewer positive scores (architecture recovery and problem-solution space co-evolution; 60 % and 63 % positive scores, respectively), it was not obvious from the textual comments how these scores can be explained. An interesting case is visualization. Despite the overall mid-range responses, two participants deemed it particularly relevant, but hard to benchmark: “I believe visualization has much potential to improve many tasks in evaluation of variant-rich systems. […] Evaluation itself, in terms of measuring the impact, is harder.”
The participants’ feedback confirms the clarity and relevance of our benchmark descriptions. The scenarios variant synchronization, feature identification & location, and constraints extraction were considered most relevant.
3.3 Threats to Validity
The external validity of our survey is threatened by the number of participants. However, since we focus on a highly specialized population—the community of variability and evolution experts—valid conclusions about that population can be supported by a smaller sample than a large population would require. By inviting the attendees of a relevant Dagstuhl seminar, we benefit from a pre-selection of experts in this area. Regarding conclusion validity, the confidence in our clarity scores could be improved by asking the participants to solve comprehension tasks, rather than having them rate the description clarity. However, such an experiment would have taken much more time and, therefore, would have risked affecting the completion rate.
4 SURVEYING EXISTING BENCHMARKS
In this section, we survey a selection of benchmarks with regard to their applicability to the scenarios we introduced in Sec. 2.
4.1 Methodology
Selection. As a starting point for our selection of benchmarks, we collected a list of benchmarks that we were aware of, due to our familiarity with the field (convenience sampling). To get a more complete overview in a systematic way, we gathered additional benchmarks using a dedicated question in our community survey, in which we asked the participants to name benchmarks that they are aware of. Finally, since we found that a dedicated benchmark was not available for each scenario, we also considered benchmarks from related areas, such as traceability research, and identified whether they match our scenarios. From these steps, we derived an initial list of 17 benchmark candidates.
Based on our definition of benchmark, as given in Sec. 1, we defined the following inclusion criteria:
- **I1** The availability of a dataset based on one or more systems created by industrial practitioners, and
- **I2a** The availability of a ground truth for assessing the correctness of a given technique, or
- **I2b** The availability of a framework for assessing other properties of interest.
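The resulting selection predicate is simply I1 ∧ (I2a ∨ I2b). A minimal sketch over hypothetical candidate records (the field names are illustrative, not part of our survey instrument):

```python
def is_benchmark(candidate):
    """Inclusion decision: industrial dataset AND (ground truth OR framework)."""
    return candidate["industrial_dataset"] and (
        candidate["ground_truth"] or candidate["assessment_framework"]
    )

candidates = [
    {"name": "A", "industrial_dataset": True, "ground_truth": True,
     "assessment_framework": False},
    {"name": "B", "industrial_dataset": True, "ground_truth": False,
     "assessment_framework": False},
]
print([c["name"] for c in candidates if is_benchmark(c)])  # ['A']
```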
From the initial 17 benchmark candidates, nine satisfied the inclusion criteria, meaning that they provided a suitable dataset, and either a ground truth or a framework for assessing a relevant technique. We focused on these nine benchmarks in our survey and excluded eight additional ones that did not satisfy all criteria. The excluded candidates can be considered as notable datasets, as they may still offer some value for benchmarking. We discuss the selected benchmarks in Sec. 4.2, and the notable datasets in Sec. 5.
**Assessment.** To determine how well our eleven scenarios are supported by the identified benchmarks and to identify synergies between benchmarks and scenarios, we assessed the suitability of each benchmark for each scenario. To this end, for a given benchmark candidate, we considered the requirements given in the benchmark descriptions (Sec. 2) and checked whether it fulfills the requirements and provides the artifacts that we defined.
4.2 Results
In Table 1, we provide an overview of the considered benchmarks and scenarios. The area from which the benchmark originally stems is given as original context in the table. A full circle indicates full support for at least one sub-scenario, a half-filled circle indicates partial support (i.e., a subset of the required artifacts is available) for at least one sub-scenario, and an empty circle indicates no support of the given scenario by means of the given benchmark. In the following, we briefly introduce the benchmarks and explain the cases in which a scenario is fully or partially supported.
**ArgoUML-SPL FLBench** [57] has a long tradition as a benchmark for feature location in single systems and in families of systems [56]. The ground truth consists of feature locations for eight optional features of ArgoUML at the granularity of Java classes and methods. A feature model is available. The framework allows generating predefined scenarios (a set of configurations representing a family) and calculating metrics reports for a given feature location result.
**Drupal** [81] is a dataset of faults and feature interactions of the Drupal web framework, which may indicate faults that were introduced by the evolution of the system. This dataset is useful for the scenario of functional testing, to evaluate whether the selected variants for application testing cover relevant feature interactions that are known to contain faults. Moreover, the information on feature interactions could be used to partially benchmark visualization.
**Eclipse FLBench** [63] is a benchmarking framework for feature location techniques in single systems and in families of systems. Since the ground-truth traces map features to components (i.e., Eclipse plugins), the granularity is coarse and there are no cross-cutting features, thus justifying only partial support of feature location. This benchmark supports different Eclipse releases, each containing around 12 variants, 500 features, and 2,500 components. The Eclipse FLBench also contains information about feature dependencies and hierarchy, but only “requires” constraints, thus justifying partial support of constraints extraction and FM synthesis.
**Linux Kernel FL Bench** [98] is a database containing the ground-truth traces from a selection of features of the Linux Kernel to corresponding C code. It contains the locations of optional features within 12 product variants derived from three Linux kernel releases. For each variant, we have around 2,400 features and 160,000 ground-truth links between features and code units. The database contains information about “requires” and “excludes” feature constraints, as well as the feature model hierarchy, making it a suitable ground truth for constraints extraction and feature-model synthesis. However, as it was not its intended usage, more complex feature constraints are not captured.
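As an illustration of how such "requires"/"excludes" ground truths can be used, the following sketch checks candidate configurations against a hypothetical constraint set (the feature names are invented, not taken from the database):

```python
# Hypothetical "requires"/"excludes" constraints, as found in such ground truths.
requires = [("USB_STORAGE", "USB"), ("WIFI_DRIVER", "NET")]
excludes = [("TINY_KERNEL", "DEBUG")]

def is_valid(configuration):
    """Check a configuration against requires/excludes constraints."""
    for a, b in requires:
        if a in configuration and b not in configuration:
            return False
    for a, b in excludes:
        if a in configuration and b in configuration:
            return False
    return True

print(is_valid({"USB", "USB_STORAGE", "NET"}))  # True
print(is_valid({"USB_STORAGE"}))                # False (requires USB)
print(is_valid({"TINY_KERNEL", "DEBUG"}))       # False (mutually exclusive)
```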
**Marlin & BCWallet** [42] is a dataset of feature locations (represented as embedded code annotations), feature models, and feature fact sheets of two open-source systems, all of which can serve as ground truth for feature identification and location techniques. It comprises both mandatory and optional features. The annotations can also serve as input for feature dashboards that provide visualizations with several metrics [5], for instance, assets related to a feature, scattering degrees, and developers associated with each feature.
**ClaferWebTools** [37] is a dataset with feature locations of both mandatory and optional features, as well as feature models, together with an evolution history. ClaferWebTools is a clone & own-based system that evolved in four variants. Like Marlin & BCWallet, the locations are embedded into the source code. It can be used to evaluate feature-location techniques exploiting historical information, or visualization techniques showing the evolution of features.
**DoSC** (Detection of Semantic Changes [101]) is a dataset with revision histories of eight Java projects for benchmarking semantic change detection tools. Semantic changes are commits that correspond to entries from an issue tracking system; they can be considered as features in a broader sense. Traces from semantic changes to implementation elements are included, thus providing a ground truth for feature location (partially supported, since only optional features are considered) and a basis for visualization. The revision histories also provide a rich data source for benchmarking transformation and variant integration. However, full support is prohibited by the lack of a feature model and available ground truths.
**SystemsSwVarModels** [15] comprises a corpus of 128 extracted real-world variability models from open-source systems software, such as the Linux kernel, the eCos operating system, BusyBox, and 12 others. The models are represented in the variability modeling languages Kconfig [85] and CDL [14], with the benchmark providing tools to analyze and transform these models into their configuration space semantics (expressed as Boolean, arithmetic, and string constraints), abstracted as propositional logic formulas. As such, these formulas can be used to benchmark constraints extraction from codebases and feature-model synthesis. To some extent, the corpus can be used to benchmark feature-oriented visualizations (e.g., slicing feature models) and problem & solution space co-evolution.
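The notion of configuration-space semantics can be illustrated by enumerating truth assignments over a tiny hypothetical model; real tooling translates Kconfig/CDL models and applies SAT solvers rather than brute-force enumeration:

```python
from itertools import product

# Hypothetical three-feature model with semantics (WIFI -> NET) and (ETH -> NET).
FEATURES = ["NET", "WIFI", "ETH"]

def formula(config):
    """Propositional configuration-space semantics of the toy model."""
    net, wifi, eth = config["NET"], config["WIFI"], config["ETH"]
    return (not wifi or net) and (not eth or net)

valid = [
    dict(zip(FEATURES, values))
    for values in product([False, True], repeat=len(FEATURES))
    if formula(dict(zip(FEATURES, values)))
]
print(len(valid))  # 5 of the 8 assignments satisfy the formula
```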
**TraceLab CoEST** [40] is an initiative of the Center of Excellence for Software and Systems Traceability gathering a set of case studies on traceability recovery with their corresponding ground-truth traces. We can find benchmarks with traces from requirements to source code, from requirements to components, from high- to low-level requirements, from use cases to source code, and other types of traces that partially satisfy the needs of evaluating feature location techniques in single systems.
**Variability Bug Database** [1] is an online database of variability-related bugs in four open-source repositories: The Linux kernel, BusyBox, Marlin, and Apache. The meta-data provided for bug entries include a description, a type (e.g., “expected behavior violation”), a configuration, and pointers to a revision where the bug appears and where it is fixed. This database is especially useful for functional testing, as it provides a ground truth in the form of faults together with the configurations in which they appear. The projects contain #ifdef directives that can be considered as
<table>
<thead>
<tr>
<th>Benchmark</th>
<th>Original Context</th>
<th>VS</th>
<th>VI</th>
<th>FIL</th>
<th>CE</th>
<th>FMS</th>
<th>AR</th>
<th>TR</th>
<th>FT</th>
<th>ANF</th>
<th>VZ</th>
<th>CPS</th>
</tr>
</thead>
<tbody>
<tr>
<td>ArgoUML-SPL FLBench [57]</td>
<td>Feature location</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>Drupal [81]</td>
<td>Bug detection</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>Eclipse FLBench [63]</td>
<td>Feature location</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>LinuxKernel FLBench [98]</td>
<td>Feature location</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>Marlin & BCWallet [42]</td>
<td>Feature location</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>ClaferWebTools [37]</td>
<td>Traceability</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>DoSC [101]</td>
<td>Change discovery</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>SystemsSwVarModels [15]</td>
<td>FM synthesis</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>TraceLab CoEST [40]</td>
<td>Traceability</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
<tr>
<td>Variability bug database [1]</td>
<td>Bug detection</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
<td>○</td>
</tr>
</tbody>
</table>
variability annotations, rendering the database partially suitable for benchmarking feature location and visualization.
While we identified synergies between the scenarios and existing benchmarks, the overall coverage is still low: A complete benchmark is only available for three of the eleven considered techniques. Four scenarios lack any benchmark: variant synchronization, analysis of non-functional properties, architecture recovery, and co-evolution of problem & solution space. The former two were deemed particularly relevant in our community survey.
5 RELATED WORK
Besides the benchmarks we analyzed in the previous section, we are aware of several datasets and proposals that pursue similar goals, as well as benchmarks from related areas.
Repositories. Some repositories collect artifacts or full projects in the domain of software-product-line engineering. For example, spl2go (http://spl2go.cs.ovgu.de/) provides a set of various software product lines. However, most of these systems are based on student projects and they provide solely downloadable code. A more extensive overview especially on extractive software-product-line adoption is offered by the ESPLA catalog [56]. ESPLA collects information from existing papers, rather than providing data or an infrastructure by itself. Similarly, tools like FeatureIDE [65] and PEoPL [9] provide some complete example product lines, but are neither industrial nor do they have ground truths.
Case Studies and Datasets. Some case studies have been introduced that partly aimed to provide the basis for establishing benchmarks. The potentially best-known and first of such studies is the graph product line introduced by Lopez-Herrejon and Batory [52]. McGregor [64] reports experiences of using the fictional arcade product line in teaching, but focuses solely on reporting established practices. Recently, several case studies have been reported that particularly aim to provide suitable datasets for evaluating techniques for the evolution and extraction of software product lines. For example, Martinez et al. [59] extracted a software product line from educational robotic variants. The Apo-Games [46] are a set of real-world games, realized using clone & own, with which the authors aim to provide a benchmark for the extraction of software product lines based on community contributions. Two recent works in fact provide datasets detailing the migration of dedicated subsets of the cloned games into product-line platforms [4, 21]. BeTTy [83] is a feature model generator, focused on benchmarking and testing automated analysis techniques for feature models. Tzoref-Brill and Maoz [96] provide a dataset for assessing co-evolution techniques for combinatorial models and tests. A combinatorial model, similar to a configuration, is a set of bindings of parameters to concrete values. Finally, SPLOT [66] provides a set of 1,073 feature models and constraints, also including an editor and analysis framework. It mostly includes academic feature models and toy examples. None of these works represent a benchmark according to our criteria, namely that they are based on assets created by practitioners and provided together with a ground truth or assessment framework.
Benchmarks in Related Areas. Various benchmarks have been proposed in areas that are closely related to variability engineering. SAT solvers are often applied in the context of software variability, especially for family-based analyses. The annual SAT competitions [36] provide various benchmarks and are important enablers for the SAT community. In the area of software-language engineering, the language workbench challenge [28] is an annual contest with the goal of promoting knowledge exchange on language workbenches. Model transformations provide the capability to represent product composition as a transformation problem and have several established benchmarks, for instance, on graph transformation [97] and scalable model transformations [92]. While these benchmarks are complementary to the ones we consider in this paper, they report best practices that should be applied when implementing our envisioned benchmark set.
6 CONCLUSION AND ROADMAP
In this paper, we aimed to pave the way for a consolidated set of benchmarks for techniques that support developers during the evolution of variant-rich systems. We studied relevant scenarios, investigated the clarity and relevance of the scenarios in a survey with variability and evolution experts, and surveyed the state of the art in benchmarking of these techniques.
Results. In summary, our main results are:
- We identified 11 scenarios covering the evolution of variant-rich systems, together with requirements for benchmarking the relevant techniques.
- Community feedback shows that our scenarios are clearly defined and important to advance benchmarking in the area.
- Only three out of the 11 scenarios are completely supported by existing benchmarks, highlighting the need for a consolidated benchmark set with full support for all scenarios.
Roadmap. Our results suggest the following research roadmap to eventually achieve such an envisioned benchmark set.
As a key goal, we aim to set up a common infrastructure for all scenarios presented in this paper. This way, we can utilize synergies between benchmarks, specifically by means of shared datasets and assets. Where available, we may reuse publicly available implementations of benchmark frameworks and integrate them.
Most scenarios require a manually curated ground truth. Creating ground truths for the available datasets is therefore a substantial effort, which calls for investing more resources into benchmarking.
A further important goal is to broaden the scope of datasets. Most available datasets are based on open-source projects from traditional embedded systems. It is worthwhile to include datasets from upcoming domains that need variability handling, including data-analytics software [30], service robotics [31], and cyber-physical systems [16].
Raising awareness for the challenges and opportunities for benchmarking takes a concerted effort. Developers of new techniques shall be encouraged to use our benchmark infrastructure, articulate gaps in the benchmark literature, and fill them by contributing their own benchmarks. We plan to advertise our initiative in the appropriate mailing lists and social media.
Acknowledgments. Supported by ITEA project REVaMP² funded by Vinnova Sweden (2016-02804). We thank the participants of Dagstuhl seminar 19191, all survey participants, and Tewfik Ziadi for input and comments on earlier versions of this paper.
A domain-specific high-level programming model
Farouk Mansouri, Sylvain Huet, Dominique Houzet
To cite this version:
Farouk Mansouri, Sylvain Huet, Dominique Houzet. A domain-specific high-level programming model. Concurrency and Computation: Practice and Experience, Wiley, 2015. DOI: 10.1002/cpe.3622.
HAL Id: hal-01204811
https://hal.archives-ouvertes.fr/hal-01204811
Submitted on 26 Oct 2015
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
A domain-specific high-level programming model
Farouk Mansouri, Sylvain Huet and Dominique Houzet*
Gipsa-Lab, 11 rue des Mathmatiques, Grenoble Campus BP46, F-38402 SAINT MARTIN D’HERES CEDEX, France
Email: firstname.lastname@gipsa-lab.grenoble-inp.fr
SUMMARY
Nowadays, computing hardware continues to move toward more parallelism and more heterogeneity to obtain more computing power. From personal computers to supercomputers, we can find several levels of parallelism expressed by the interconnections of multi-core and many-core accelerators. On the other hand, computing software needs to adapt to this trend, and programmers can use parallel programming models (PPMs) to fulfill this difficult task. The available PPMs are based on tasks, directives, or low-level languages or libraries, and offer higher or lower abstraction levels from the architecture, each with its own syntax. However, one way to offer an efficient PPM with a higher abstraction level while preserving performance is to restrict it to a specific domain and to adapt it to a family of applications. In the present study, we propose a high-level PPM specific to digital signal processing applications. It is based on a data-flow graph model of computation and a dynamic run-time model of execution (StarPU). We show how the user can easily express digital signal processing applications and can take advantage of task, data, and graph parallelism in the implementation, to enhance performance on targeted heterogeneous clusters composed of CPUs and different accelerators (e.g., GPU, Xeon Phi).
1. INTRODUCTION
Since about 2005, the hardware trend has moved toward the era of multi-core and many-core architectures to offer improved performance. However, these architectures are still difficult to handle. Programmers need to express task and data parallelism, and deal with constraints such as communication, load balancing, memory management, and synchronization. In addition, programmers have to face heterogeneity problems, with different software stacks for each accelerator. For all of these reasons, parallel programming models (PPMs) have emerged. A PPM is a programming concept used to abstract the cited hardware specificities, in order to decrease the implementation effort on parallel and heterogeneous clusters while increasing performance, which is not a simple problem.
In the present study, we exploit the idea that a domain-specific PPM can offer high abstraction and, at the same time, allow efficient implementation of a family of applications on a heterogeneous cluster. Following this idea, we focus on the digital signal processing (DSP) domain, where the major challenge is to express these applications more easily, in a high-level mode, while efficiently exploiting the performance of the clusters. Thus, we propose a design flow based on a dynamic run-time (StarPU) to efficiently implement DSP applications by simply specifying them graphically as a data-flow graph (DFG) model of computation (MoC).
The organization of this paper is as follows. First, in section 2, we present PPMs and their classification according to their abstraction level. In section 3, we present related studies on the implementation of DSP applications on heterogeneous clusters, we extract the characteristics that are necessary to optimize their implementation on a cluster, and we position our approach by comparing it to these related studies. In section 4, we describe our programming model and detail its functionalities. Finally, in section 5, we present a comparison of the implementation of two applications, to validate our approach.
2. PARALLEL-PROGRAMMING MODELS
A PPM is a bridge between a system developer's natural model of an application and its implementation on parallel architectures. It is a programming concept that provides connections between user applications and the functionality of clusters. A PPM has to achieve the best trade-off between designer productivity and implementation efficiency. Indeed, as presented in [1], a good PPM should have some specific properties, such as: (1) simplification of the programming effort; (2) portability and architecture independence; (3) efficiency of implementation.
Achieving all of these requirements in a PPM can be a difficult task, knowing that several of them are in opposition to each other. The abstraction level of a PPM is an important basis on which to distinguish them, because it affects the evaluation metrics cited above. In [1], the authors proposed to classify PPMs according to whether or not they abstract some inherent functionality: (1) decomposition of a program into parallel threads or processes; (2) mapping of threads to processing elements (PEs); (3) communication among the threads; and (4) synchronization among the threads. Figure 1 shows a classification of several PPMs according to their level of hardware abstraction.
3. RELATED STUDIES ON IMPLEMENTATION OF DIGITAL SIGNAL PROCESSING APPLICATIONS ON PARALLEL AND HETEROGENEOUS ARCHITECTURES
Throughout the history of computing, DSP applications have pushed the limits of compute power. Calculations have been performed on different hardware, such as FPGA, DSP, and SMP platforms. Recently, DSP applications have seen rapid advances in multimedia computing and high-speed communications; they process large data volumes using complex algorithms. In response to these advances, research is focusing on the implementation of several of these applications on parallel architectures, including heterogeneous accelerators such as the GPU and the Xeon Phi. Next, we present the state of the art of the implementation of DSP applications on these architectures.
**Low-level parallel-programming models (mostly explicit implementation)** Traditional implementations are based on low-level tools or PPMs. In [2], a ray-tracing application was implemented on a Cray XD-1 many-core architecture with distributed memory. The authors used the message-passing interface (MPI) to decompose the algorithm over hundreds of MPI processes that iteratively computed a number of rays while communicating over the network. In [4], a GPU implementation of synthetic aperture sequential beamforming for ultrasound imaging was presented; the algorithm processes two-dimensional data samples. The authors used both OpenCL [5] and OpenGL [6] to decompose the algorithm into thread blocks, each block of threads processing a two-dimensional subset of the input image. In [7], the CUDA [8] PPM was used to implement a medical ultrasound algorithm that generates B-mode images. The algorithm was decomposed into four kernels that were processed sequentially on the GPU to execute one frame, and was iteratively re-executed for each frame of the input dataset. In [9], a least-mean-square algorithm was implemented on different GPUs, also using the CUDA model, and optimized for a family of least-mean-square algorithms using shared memory. These implementations based on low-level models are efficient, and the authors achieved good performance in comparison to sequential implementations. However, the programming effort is significant. Indeed, for all of the cited PPMs, the programmer has to carry out the following manually: (1) decompose the algorithm into threads, by unrolling loops over data samples to exploit data parallelism, or loops over algorithm steps to exploit task parallelism; (2) manage the memory allocation for each thread; (3) synchronize the threads to preserve data coherence, using barriers, mutexes, or semaphores; (4) manage communications among threads over shared-memory or distributed-memory architectures; and (5) map the parts that result from the decomposition onto threads or processes. In addition, each of the low-level PPMs is specific to one kind of PE: CUDA, OpenCL, and OpenGL for GPUs (accelerators), and Pthreads and MPI for multi-core and many-core processors. Finally, each is also specific to one memory architecture: MPI for distributed memory, and the others for shared memory.
**Directive-based parallel-programming models (partly implicit implementation)** Common tools and models used to implement DSP applications on parallel and heterogeneous architectures are directive-based high-level PPMs. The user explicitly annotates the sequential code to decompose the algorithm into parallel parts at compilation time, which can increase the productivity and portability of the code. The user does not have to care about communication, synchronization, and mapping in default mode, but has to specify them explicitly in performance mode. In [10], OpenMP [11] was used to implement geodesic applications on a multi-core architecture. The authors inserted directives to data-parallelize the processing of two-dimensional images, where each OpenMP thread processed one row at a time; the algorithm was repeated iteratively until convergence. OpenMP is a commonly used standard that targets both multi-core and many-core architectures, and accelerators in its latest version (OpenMP 4.0). It allows the user to unroll loops over CPU or accelerator threads, or to construct sets of dependent tasks. However, it uses a structured (fork-join) execution model: parts of parallel regions, called chunks, can be dynamically scheduled, but their sizes are fixed manually by the user, and there is no task scheduling across heterogeneous PEs. The OpenMP implementation and further details are discussed in section 5.1.1. Another PPM very close to OpenMP is OpenACC [12]. It is also directive based, but is more oriented toward GPU accelerators. In [13], an OpenACC implementation of a three-dimensional elastic wave simulator on a multi-core plus GPU architecture was presented. The authors manually decomposed the algorithm over two MPI processes, with parallel regions off-loaded onto the GPUs, and obtained good performance relative to the programming effort.
However, the user has to manually tune the data allocation and transfers, and the scheduling of parallel regions on the GPU architecture. In addition, the user has to use MPI to target the distributed memories of the two GPUs. Finally, the OpenACC implementation requires a commercial compiler, such as PGI or CAPS. Other directive-based tools and models, like HMPP [14] or OmpSs [15], follow the same model as OpenMP, with some extensions; however, they carry the same inconveniences for the implementation of DSP applications. They do not manage data-flow dependencies (except for OmpSs and OpenMP 4.0), they are restricted to shared-memory architectures, and, in performance mode, the user has to deal with the architecture to map parallel work over the PEs.
**Task-based parallel-programming models (mostly implicit implementation)** Other models and run-times used to implement DSP applications are task-based PPMs. Using these, the user explicitly decomposes the algorithm into a graph of tasks, in general a directed acyclic graph (DAG) of tasks managed according to data-flow dependencies. The user is saved from explicitly managing the communication, synchronization, and scheduling of tasks over the architecture. In [16], StarPU [17] was used to implement coin and contour detection applications over a large database of images on a heterogeneous cluster. The authors pre-loaded all of the images into memory, and then executed the algorithm decomposed into DAGs of tasks, where each task partly processed one image on the CPU or GPU. They obtained good performance in comparison to a sequential implementation, and exploited all of the PEs of the cluster using dynamic scheduling policies such as work-stealing or heterogeneous earliest finish time (HEFT). However, they needed to construct the DAGs of tasks manually: they had to manipulate the API to create codelets, tasks, and buffers; link the tasks together using dedicated functions and data structures; and finally submit the DAG to the run-time. Cilk [18], X-Kaapi [19], TBB [20], and PTask [21] are run-times and models based on DAGs of tasks, but they all share the same disadvantages.
**Data flow graph model-of-computation-based parallel-programming models (DSP-specific implementation)** There are models and tools based on DFG MoCs that can be used to simulate and implement DSP applications on different parallel platforms. As discussed in [22, 23], a DFG MoC models an algorithm as nodes, known as operators, that represent sub-functions of the system. These are connected by directed edges that represent FIFO buffers and transport sub-result data, known as tokens, to synchronize operations. An operator runs as soon as all of its inputs become valid. Thus, this model is inherently parallel and allows the user to easily express the task parallelism of the application. In [24], StreamIt was presented, a language with a specific syntax to textually model the algorithm as a DFG. It includes an execution model that exploits different levels of parallelism and targets multi-core architectures, but not accelerators. Another major tool to model, simulate, and execute DSP applications is Ptolemy [22]. It offers the expression of several DFG MoCs using a graphical interface; however, its execution model is restricted to a mono-core DSP. PREESM [25] is another tool to implement DSP applications on multi-cores using DFG MoCs. However, the user has to manually express the application and the architecture in the form of graphs, and to statically map them according to specific scenarios. All of these tools are specific to the simulation and execution of DSP applications: they take into account characteristics such as data-flow synchronization, granularity, iterative forms, and scaling, and they are implicit models for communication and synchronization. However, they are restricted to mono-core or multi-core architectures. In [26], an extension of StreamIt was presented that can automatically map streaming applications represented as DFGs onto GPUs.
However, the mapping is static and is performed at compilation time. In [27], a compiler-based PPM was proposed to implement streaming DSP applications on multi-GPU architectures. The proposed tool takes the form of a source-to-source compiler that generates GPU and CPU code from the SystemC [28] description of the application as a DFG. The authors also used the SynDEx tool [29] to map application graphs onto architecture graphs; thus, here too, the user has to manually represent the architecture as a graph, and the scheduling generated by SynDEx is static. The same inconveniences characterize the PACCO [30, 31] tool: scheduling is manual and static, and the user has to describe the architecture graph. In addition, it is a BSP tool [32], which implies a synchronization barrier at each iteration. In [33], a framework based on OpenCL was introduced to execute DSP applications specified as DFGs on a heterogeneous cluster. However, the user has to manually map the operators onto the OpenCL workers in charge of the PEs.
**Summary and contribution** All of the above-cited implementations of DSP applications can be characterized as follows: they apply repetitive (iterative) processing to an input set of digital signal samples to produce an output set, e.g., a set of images. DSP algorithms are usually and preferably modeled with inherently parallel DFG MoCs [22, 23], where the application is designed as an oriented graph: the nodes represent the operators (functions of the algorithm) and the edges represent the data exchanged between them, as sub-results or variables. In the parallel implementation of DSP applications, programmers must exploit these specificities to take advantage of the targeted heterogeneous and parallel architectures. First, to highlight the operators that can be executed in parallel (task parallelism), they have to express their algorithm as a set of tasks, using threads or processes. They have to manage thread communication and synchronization according to the data-flow dependencies of the application (the edges of the graph), on both shared- and distributed-memory architectures. In addition, to benefit from the accelerators' capacity to speed up SIMD processing (data parallelism), the user has to off-load part of the tasks onto PEs such as the GPU or Xeon Phi: the user has to deal with memory allocation on the accelerators, copy in the input data, launch the execution, copy out the results, and finally free the allocated memory space. DSP applications are also mostly iterative, so in some cases the designer has to unroll the main loop of the application to increase the task parallelism and the occupancy of the computing units; the designer must then duplicate the process (or thread) in charge of executing the main loop, while taking care of data coherency. The programmer must also cope with other difficulties: overlapping communication with computation; load balancing between PEs, which requires a scheduling algorithm that takes the communication cost into account; efficient memory management, to reduce the bottleneck due to allocating and freeing buffers; unnecessary synchronizations that can delay the execution; and more.
Carrying out all of these implementation constraints by hand is particularly error-prone and leads to application-specific implementations. Using low-level tools and models like CUDA and MPI, presented in section 3, the programmer has to combine several APIs, languages, or language extensions to explicitly decompose the algorithm and to manage memory, communications, synchronizations, and scheduling. These models are very close to the hardware and restricted to a specific architecture, which can decrease productivity. Alternatively, users can employ directive-based tools like OpenMP and OpenACC, presented in section 3, to implement DSP applications on heterogeneous architectures. However, it can be hard to exploit the above-cited characteristics of this application family, such as DFG decomposition and synchronization; in addition, to obtain good performance, users have to focus on the allocation and communication of data and on the scheduling of parallel regions onto threads. Another solution is the task-based tools like StarPU, discussed in section 3. These tools are better adapted to the decomposition and synchronization of DSP applications; however, programmers have to manipulate their APIs to create, delete, and submit tasks, create buffers and link them to synchronize tasks, and manually unroll loops over the input data to exploit data parallelism and build a DAG of tasks. In addition, users have to take care of the overhead due to run-time management. Finally, users can turn to the DFG MoC-based models cited in section 3, which are the tools best adapted to implementing DSP applications. However, according to our discussion in section 3, some of them do not target heterogeneous architectures, while others are static tools, i.e., tasks are constructed and mapped at compilation time.
Finally, the remaining ones require manual scheduling of applications and are not architecture aware. So, in our opinion, there is no tool that can implement DSP applications specified as DFG MoCs on heterogeneous clusters while using an architecture-aware run-time and offering dynamic scheduling. In [34, 35], the advantage of dynamic versus static scheduling for DFG applications is discussed. Indeed, even if several DFG MoCs are decidable and predictable at compilation time, some dynamic DFG MoCs need to be scheduled at run-time. In addition, some applications are data dependent: their processing time is not predictable and changes according to the input data. Finally, dynamic scheduling is better adapted to clusters, to take optimal advantage of all of the heterogeneous PEs by load balancing the work. Thus, for all of these reasons, in the next section we propose to enrich the StarPU programming model with a novel design flow, adapting it to the implementation of DSP applications represented with DFG models of computation, which simplifies their expression and, at the same time, increases their performance. On this basis, our major contributions are:
**Conceptual contribution:** A novel design flow to implement DSP applications on heterogeneous clusters based on DFG models of computation in high-level abstraction, and the architecture-aware and dynamic run-time (StarPU).
**Functional contribution:** Based on DSP domain characteristics, we propose the additional functionalities of:
1. Dynamic and implicit construction of the DAG of tasks.
2. Automatic unfolding of the DFG of the application.
3. Implicit allocation and reuse of buffers.
4. Automatic execution and saving of the initialization part of repetitive tasks.
5. Dynamic auto-tuning of GPU tasks.
4. THE PROPOSED PROGRAMMING MODEL
In this section, we propose SignalPU, a novel parallel-programming model based on DFG models of computation and the dynamic run-time StarPU [17], to implement DSP applications on heterogeneous clusters. With SignalPU, programmers do not have to manipulate the StarPU API to deal with the algorithm expression and the architecture specificities, such as memory management, task creation and synchronization, and execution placement. Using the DFG model of computation, they can implicitly express task, data, and graph parallelism in their implementations, to take optimal advantage of the hardware. In addition, because it is based on StarPU, our PPM makes use of both shared- and distributed-memory architectures: it handles multi-node clusters using MPI, and it can target several accelerators (GPU, Xeon Phi, Cell, etc.).
Figure 2. SignalPU design: Three levels of processing
We present our proposed PPM in the form of three levels of processing, as shown in Figure 2. First, the user can easily express the application as a DFG using an XML interface; the user is thus freed from manipulating the StarPU API for creating tasks and codelets, managing buffers between pairs of tasks, and submitting jobs onto the corresponding PEs. Second, at run-time, the DFG-XML is analyzed and transformed using some DSP-adapted functionalities (e.g., graph-unfolding techniques [36], pipelining of tasks, buffer reuse, MPI multi-node distribution) to produce a DAG of independent tasks distributed over the MPI nodes. The goal is to express all levels of parallelism while limiting the overhead due to memory and task management. At this level of processing, the user does not have to deal with any API to unroll the main loop, manage the necessary memory buffers, submit and synchronize tasks, or distribute the processing over the MPI nodes. Finally, at the third level, the StarPU run-time is in charge of task management, scheduling and load balancing, data-flow dependencies, and PE management. We have additionally implemented two optimizations, initialization saving and auto-tuning, to enhance the performance of each task. Next, we illustrate these levels through the synthetic example application presented in Figure 3.
Figure 3. DFG model of an example DSP application. $Z^{-1}$ denotes a delayed auto-dependence.
4.1. Level 1: SignalPU DFG-XML interface
In this step, we propose an interface based on the DFG model of computation and an XML description, with which the programmer can easily express the application. First, the programmer describes each operator (function) of the algorithm in the form of a node (vertex) using an XML structure, giving the name of the function that will be called in the code, the number of its input and output arguments, and the architecture kind that corresponds to it (e.g., CPU, GPU, ...). Second, the programmer describes, in the same manner, all of the data flows in the form of graph edges, with a structure that includes information about the type and size of the data exchanged between operators. After this, a DFG-XML of the application is produced. Finally, the programmer has to include the functions in the code, each written as a combination of two sub-functions: an initialization one and an execution one (Init(..), Exec(..)). The XML code in Table I gives an example of how a node and an edge of the DFG can be described.
Table I. A portion of the XML code that represents a description of one node and one edge.
4.2. Level 2: SignalPU implementation
As illustrated in Figure 4, at run-time, the XML description of the DFG is parsed with the Boost GraphML reader [37], and the StarPU API is invoked to generate an implementation that exposes all of the levels of parallelism (i.e., task, data, and graph parallelism) for efficient execution on the cluster. Several instances of the graph are submitted to the StarPU run-time using the unfolding technique presented below. The following paragraphs describe, first, the buffer-reuse strategy that avoids unnecessary memory allocation and release; second, the pipelining functionality that limits the number of submitted tasks; and finally, the decomposition of the DAG of tasks over the MPI nodes.
**Graph unfolding** Unfolding is a transformation technique that duplicates functional blocks to reveal hidden parallelism in a DSP program while preserving its functional behavior at its outputs. It is usually used for low-level implementation on FPGA, DSP, and ASIC hardware [36]. We propose to use it to unroll the main loop of the application and thereby increase task parallelism; we exemplify this technique through the z-transformation of a digital signal: $Z_{j+k} = a X_{j+k} + b Y_{j+k}$.
As illustrated in Figure 4, at run-time, we first express each operator of the DFG-XML with a StarPU structure called a "codelet", which denotes a task and contains all of the information about the corresponding operator (e.g., number of input arguments, number of output arguments, function identifiers, architecture kind). We then iteratively J-unfold the DFG to create a DAG of tasks. The unfolding degree can be adjusted dynamically according to the measured occupancy, the load balancing, and the available resources.
**Buffer reuse** Many DSP applications have static communication patterns. So, to avoid unnecessary overhead due to buffer allocation and freeing, a fixed number of buffers is allocated according to the available resources (i.e., the available memory on the node), the DFG-XML description of the application (i.e., data type, data size, number of dependencies), and the unfolding degree. Each time an iteration (graph level) of the submitted DAG finishes, its buffers are reused by the next submitted iteration (graph level). To implement this mechanism and to control the unfolding degree, we use a semaphore initialized with $J$: each time an iteration of the original graph finishes, the semaphore is posted to the main loop of the control thread, which waits on the semaphore before launching a new iteration.
**Tasks pipelining** In addition, DSP applications process a high number of iterations, so to reduce the overhead due to task management (e.g., dependency management, scheduling, task-status updating), we limit the number of submitted tasks using a pipelining functionality. At run-time, by using semaphores, only a fixed number of pipeline levels are submitted, where this number is $J$, the unfolding degree, and each pipeline level corresponds to one graph level in the DAG. As illustrated in Figure 4, the pipeline depth (length) is four, and a semaphore locks all of the buffers of the four graph levels. So, to add (submit) a new graph level to the pipeline, it is necessary to wait for the termination of one iteration (graph level) of the submitted graph.
**MPI multi-node distribution** Finally, our design flow allows the user to deploy the application over multiple nodes using the StarPU-MPI [38] library. Indeed, the user can distribute the processing cost over the MPI nodes in a SIMD way by assigning to each node the units of data it has to process. Using a predefined function called `MPI_data_schedule()`, the user simply has to specify the iterations that each node has to process; the DAG of tasks is then dynamically divided across the nodes. Below we give a simplistic example of how the user can define this function:
```c
int MPI_data_schedule(int loop, int rank) { return loop % rank; }
```
At run-time, each MPI node processes its part of the DAG, composed of the graph levels corresponding to the iterations specified by the user. All of the MPI nodes follow the same design described above: each node dynamically unfolds its part of the DAG of tasks, and automatically reuses buffers and pipelines tasks. Each MPI node also uses the functionalities we describe next.
### 4.3. Level 3: SignalPU run-time (StarPU)
In this step, as illustrated in Figure 5, we focus on the efficient execution of the tasks. On each MPI node, the StarPU run-time manages the sub-DAG of tasks submitted at the previous level. StarPU dynamically schedules the tasks by taking into account the data-flow dependencies between them, to guarantee data coherence and to limit superfluous synchronizations. At run-time, on each MPI node, the submitted unfolded part of the DAG of tasks relative to this node is stored in a task pool and scheduled: StarPU dynamically ensures that each task is executed on the "best" computation unit, using heuristic load-balancing algorithms such as work-stealing or HEFT [17], while taking into account the location of the data. Also, to mask communication costs, we enable asynchronous data copies between the accelerators and the host, and we activate streaming processing to increase the occupancy of the computation units. Finally, to reduce the execution time of each task, we propose two DSP-specific functionalities, described below:
**Initialization saving** Many DSP operators include an initialization part that initializes the data structures used for the computation. This initialization phase can be long relative to the computation time. For example, the Gabor filter used in the saliency application presented in section 5 has an initialization phase of about 380 milliseconds, compared to a computation phase of about 2.5 milliseconds on a Quadro 4000 GPU. Thus, it is obvious that this phase should be executed only once for each processing element on which the operator is mapped. However, no such native mechanism exists in StarPU. Therefore, we propose that the designer specify each operator with two sub-functions: an initialization one, named "Init", which is executed only once on each processing element, and an execution one, named "Exec", which contains the computations. With this mechanism, the Init sub-function is executed only once on each processing element, and the initialized data are preserved.
**Kernels auto-tuning** As stated above, clusters today are heterogeneous in terms of processing-element types (e.g., CPU, GPU, Xeon Phi), but they can also be heterogeneous within one processing-element type. For example, a cluster can contain GPUs of different generations, with different computation capabilities: the GPUs can differ in, for example, the number of cores and registers, the sizes of the memory levels, the throughput, and the latency. GPU performance is particularly affected by the threads-per-block parameter, and the optimal value of this parameter depends on the GPU characteristics. Thus, we propose an iterative functionality to determine the best threads-per-block parameter for GPU tasks: a function interface iteratively explores a three-dimensional space of parameters (threads per block in (x, y, z)), bounded by the designer, and saves the best values for each kernel on each device. Thanks to the automatically determined most efficient parameters, the processing time of the tasks is enhanced, while the user is freed from adapting the code of each kernel to each processing element.
5. EVALUATION
In this section, we present the comparisons, experiments, and results used to validate our approach. We use two DSP applications that we implement using SignalPU, MPI+OpenMP+CUDA, and the PACCO tool, to provide expressiveness and performance comparisons. First, we present these applications and compare their implementations according to the time effort and abstraction level. Second, we show and discuss the results of each experiment by comparing the performances.
5.1. Applications and implementations
Here, we present the two DSP applications and describe their implementation. First, a synthetic application, a compute-intensive case with inherent task parallelism, is presented and implemented using SignalPU and MPI+OpenMP+CUDA. Second, we show a relevant real-world application (image processing), a relatively communication-bound case without inherent task parallelism, implemented with the SignalPU and PACCO tools.
Algorithm 1: Synthetic DSP application.
Require: An image $\mathbf{r}_{\mathbf{im}}$ of size $w \times l$.
Ensure: Processed image $\mathbf{p}_{\mathbf{im}}$ of size $w \times l$.
1: for each image $\mathbf{r}_{\mathbf{im}}$ in Nbr do
2: $(\mathbf{Var}_{11}, \mathbf{Var}_{12}) \leftarrow$ Producer()
3: $\mathbf{Var}_{21} \leftarrow$ PixelProcessSimu($\mathbf{Var}_{11}, k_1$)
4: $\mathbf{Var}_{22} \leftarrow$ PixelProcessSimu($\mathbf{Var}_{12}, k_2$)
5: $(\mathbf{Var}_{31}, \mathbf{Var}_{32}, \mathbf{Var}_{33}) \leftarrow$ JoinFork($\mathbf{Var}_{21}, \mathbf{Var}_{22}$)
6: $\mathbf{Var}_{41} \leftarrow$ PixelProcessSimu($\mathbf{Var}_{31}, k_3$)
7: $\mathbf{Var}_{42} \leftarrow$ PixelProcessSimu($\mathbf{Var}_{32}, k_4$)
8: $\mathbf{Var}_{43} \leftarrow$ PixelProcessSimu($\mathbf{Var}_{33}, k_5$)
9: $\mathbf{p}_{\mathbf{im}} \leftarrow$ Consumer($\mathbf{Var}_{41}, \mathbf{Var}_{42}, \mathbf{Var}_{43}$)
10: end for
Algorithm 2: Static pathway of the visual model.
Require: An image $\mathbf{r}_{\mathbf{im}}$ of size $w \times l$
Ensure: The saliency map $\mathbf{s}_{\mathbf{im}}$ of size $w \times l$
1: $\mathbf{r}_{\mathbf{im}} \leftarrow$ Hanningfilter($\mathbf{r}_{\mathbf{im}}$)
2: $\mathbf{r}_{\mathbf{fim}} \leftarrow$ FFT($\mathbf{r}_{\mathbf{im}}$)
3: for $i = 1$ to orientations do
4: for $j = 1$ to frequencies do
5: $\mathbf{cf}_{\mathbf{maps}[i,j]} \leftarrow$ GaborFilter($\mathbf{r}_{\mathbf{fim}}$)
6: $\mathbf{c}_{\mathbf{maps}[i,j]} \leftarrow$ FFT($\mathbf{cf}_{\mathbf{maps}[i,j]}$)
7: $\mathbf{r}_{\mathbf{maps}[i,j]} \leftarrow$ Interactions($\mathbf{c}_{\mathbf{maps}[i,j]}$)
8: $\mathbf{r}_{\mathbf{normaps}[i,j]} \leftarrow$ Normalizations($\mathbf{r}_{\mathbf{maps}[i,j]}$)
9: end for
10: end for
11: $\mathbf{s}_{\mathbf{im}} \leftarrow$ Summation($\mathbf{r}_{\mathbf{normaps}}$)
5.1.1. Synthetic digital signal-processing application

To allow users to easily test our PPMs, we designed a library of operators for several PE types with a parameterized workload. We use this library to build test cases with different graph structures, communication, and computation loads. In this paper, we present experimentation based on the synthetic application described in Algorithm 1. This application simulates a real-world computation-intensive application that includes operators of different granularities, executable on both CPUs and GPUs, and that contains multiple dependencies (inherent task parallelism). The $k$ parameters are used to adjust the workload of each operator.

In Figure 6, we show the DFG-XML description of the synthetic application. This description is the major part of the expression work the user has to do: the user only has to model the algorithm in the form of a DFG of operators, giving information about edges and nodes, and focuses solely on the expression of the algorithm without manipulating any particular API or pragma syntax. On the contrary, in the MPI+OpenMP+CUDA implementation shown in Table II, the user first has to express the functions in the form of tasks using the **omp task** directive (Table II-b, lines 4, 7, 10, ...), and to link them using **omp depend** clauses (Table II-b, lines 4, 7, 10, ...) to data-synchronize the functions. The user has to unfold the main loop over the parallel region of threads using the **omp parallel** directive (Table II-a, lines 21, 26). In addition, to target the GPUs, the user has to call the CUDA functions (Table II-b, lines 8, 11, 21, ...), and has to move allocations out of the loop to avoid the overhead of repeated allocation and deallocation (Table II-a, line 24). The user has to asynchronously copy data in and out and call the CUDA functions so as to overlap communication and computation (Table II-a, lines 3, 5, 7), has to manually and statically load-balance the operators over the GPUs (Table II-b, line 2), and has to tune the CUDA parameters of each operator for each GPU (Table II-a, line 4). Finally, to distribute the computation over the nodes, the user has to handle three PPMs: taking care of memory allocation and data communication over distributed memories, manipulating the pragma directives that create and link the tasks, and managing the load balancing and other functionalities close to the architecture characteristics. Thus, it is much easier for the user to implement DSP applications on heterogeneous clusters with SignalPU.

Table II. MPI+OpenMP+CUDA pseudo-code of the synthetic application: (a) left, (b) right.
5.1.2. Saliency application
We also experiment with our approach on a real-world application based on the primate retina: the visual saliency model used to locate regions of interest, i.e., the capability of human vision to focus on particular places in a visual scene. For the implementation, we use Algorithm 2, which was described in the preliminary work of our team [39, 40].
To implement the application with our programming models, the first step is to model Algorithm 2 with a DFG-XML description using the SignalPU interface. For this, we represent each function (\texttt{Hanningfilter}(), . . . ) with a node in the graph, which includes its own characteristics (e.g., architecture kind, input arguments, output arguments, function identifier). Then, we represent the data flow between operators with edges in the graph, each of which includes its characteristics (e.g., data type, data size, input/output arguments). In Figure 7, we show the DFG-XML result of this step. The second step is to include the function code in the program, where each function takes the form of initialization and execution sub-functions (Init(), Exec()). Thus, we do not have to use the StarPU API to describe the application tasks. In comparison, the PACCO implementation is equivalent in terms of what the user has to do; however, the execution model is different. Indeed, the PACCO execution model is an MPI+Pthread+CUDA implementation based on an iterative BSP model [32], where the execution of the operators and the transfer of the data are overlapped inside each synchronized iteration. In contrast, SignalPU is better adapted to DSP implementations because it relies on data-flow synchronization.
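As an illustration, one node/edge entry of such a DFG-XML description might look as follows. The tag and attribute names here are assumptions for the sake of the example, not the exact SignalPU schema (the real descriptions appear in Figures 6 and 7); only the kinds of information carried (function identifier, architecture kind, arguments, data type and size) come from the text above.

```xml
<!-- Illustrative DFG-XML fragment (names are assumptions, not the exact schema) -->
<graph name="saliency">
  <node id="0" function="Hanningfilter" arch="CPU_GPU"
        inputs="r_im" outputs="r_im"/>
  <node id="1" function="FFT" arch="GPU"
        inputs="r_im" outputs="r_fim"/>
  <edge from="0" to="1" type="float" size="512x512"/>
</graph>
```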

5.2. Experiments and results
In this subsection, we describe the experimentation we carried out to evaluate the performance of our approach, and we present and discuss the results obtained. The architecture used for the experimentation is a heterogeneous CPU-GPU cluster of four nodes, as described in Table III, connected via an InfiniBand network. The dataset used for the experimentation is a set of images of 512x512 pixels.
5.2.1. Global performance
The first experiment we present is the processing of a number of images on several hardware configurations. The aim is to measure the scalability of our implementation and how much it can take advantage of the different levels of parallelism (task, data, and graph parallelism) and of the capability of the architecture.

**SignalPU versus MPI+OpenMP+CUDA**
First, we process 1,000 images with the computation-intensive synthetic application on several CPU-GPU configurations using two implementations: SignalPU with an unfolding degree of 10, and the MPI+OpenMP+CUDA low-level implementation presented above in section 5.1.1. The results of the comparison are shown in Figure 8. For the CPU-only configurations, we note that the speed-up is proportional to the number of cores in both implementations, due to task, data, and graph parallelism: task parallelism is achieved through the expression of task dependencies, while data and graph parallelism are exploited by unfolding the graph. On the CPU-only cluster, the OpenMP implementation gets slightly better results than SignalPU, owing to SignalPU's runtime overhead and dynamic management of tasks. For the CPU-GPU configurations over the multi-node architecture, performance is enhanced by exploiting data parallelism on the GPUs. For the SignalPU implementation, we obtain about a 61x speed-up with 1 CPU core + 1 Quadro4000 GPU compared to a 1-core configuration, whereas MPI+OpenMP+CUDA only reaches about 55x. The difference is due to the model of execution: with MPI+OpenMP+CUDA, the execution model is the fork-join model, where the threads of the unfolded parallel regions concurrently invoke the CPU and GPU system drivers to execute tasks, while in the SignalPU implementation the execution model is the scheduling of a DAG of tasks over CPU and GPU workers, so the invocation is done only once for each device. For 1 CPU core + 2 Quadro4000, we get a 93x speed-up with SignalPU and only about 69x with MPI+OpenMP+CUDA. Using 1 CPU core + 2 Quadro4000 + Quadro2000, we get a speed-up of about 125x with SignalPU and only about 89x with MPI+OpenMP+CUDA. These differences arise for the same execution-model reasons; in addition, the dynamic scheduler of SignalPU is more efficient because it uses optimizations such as out-of-order execution and data prefetching, which increase device occupancy. Finally, performance still increases with the number of nodes: we obtain a speed-up of 508x with SignalPU on 4 nodes exploiting a total of 4 CPUs + 7 GPUs, but only about 274x with the MPI+OpenMP+CUDA implementation on the same hardware configuration. We explain the difference in scalability by the weakness of the static scheduling of the MPI+OpenMP+CUDA solution compared to the dynamic scheduling used in the SignalPU implementation. Thus, compared to an equivalently tuned MPI+OpenMP+CUDA implementation, SignalPU produces the best performance in the heterogeneous cluster experiments, due to its dynamic runtime and its DAG-of-tasks scheduling model of execution.
Figure 9. Comparison of global performance of the visual saliency application implementations.
**SignalPU versus PACCO (MPI+Pthread+CUDA)**

In the same manner, we process 1,000 images with the communication-bound saliency application, and we compare our SignalPU implementation with an unfolding degree of 10 (J=10) to the PACCO optimized BSP implementation based on MPI+Pthread+CUDA cited in subsection 5.1.2. The comparisons of both implementations are shown in Figure 9. For the first comparison, illustrated by the first and second groups of bars in Figure 9, we use 1 Quadro 4000 + 1 CPU core, and we obtain better performance for the SignalPU implementation (32 s) than for the PACCO implementation (38 s). This is due to task synchronization: in the PACCO implementation, the synchronizations between iterations (graphs) are carried out using a global synchronization barrier, which leads to an imbalanced load, whereas in our SignalPU implementation we use only data dependencies to synchronize the processing, so new tasks are launched as soon as their data are available. The second hardware configuration (2 Quadro 4000 + 1 CPU core) enhances the performance of both implementations. It reduces the processing time for the SignalPU implementation, which reaches a speed-up of 1.7x, due to the graph and data parallelism obtained by unfolding the DFG. For the PACCO implementation, however, we only reach a 1.2x speed-up compared to the previous configuration, by exploiting only data parallelism through the pipelining of the tasks. This difference is explained by the scheduling of the tasks: the PACCO implementation does not scale well because the scheduling is static, while the DFG of the application contains a few coarse-grained operators with imbalanced execution times. Thus, the most loaded device delays the iteration.
In contrast, in the SignalPU implementation, the dynamic scheduling of several unfolded graphs (iterations) allows an optimal balance of the load between the GPUs (as discussed for the next experiment in section 5.2.2). For the multi-node configurations, the performance of both implementations is enhanced by balancing the data processing over the nodes. In the SignalPU implementation, this is achieved through the MPI distribution functionality; for the PACCO implementation, we manually unfold the application graph and map each actor onto the architecture. Thus, we reduce the global processing time of each implementation on the scaled cluster. Finally, we get about a 4.2x speed-up on 4 CPUs + 7 GPUs compared to the first configuration executed on 1 CPU core + 1 Quadro 4000, whereas for the PACCO implementation we get only about a 2.1x speed-up with the same experimental setup. The differences can be explained by the previously mentioned reasons: synchronization and load balancing over the processing elements and nodes. Indeed, with SignalPU, we balance the load in a SIMD way over the MPI nodes using the MPI_data_schedule function described in section 4.2, whereas in the PACCO implementation the iterations are distributed manually and statically over the nodes, leaving the load unbalanced, which delays the processing time. In addition, the global synchronization at the end of each superstep of the BSP execution model blocks the scheduling of future iterations.
5.2.2. Unfolding versus scheduling

In this second experiment, we are interested in quantifying the performance gain obtained by combining the unfolding technique and dynamic scheduling. We experiment with the processing of 1,000 images on a CPU-GPU hardware configuration, with both applications, using the following SignalPU implementations:
1. **Static scheduling**: This is an iterative implementation without unfolding ($J=1$), where we statically map each operator (node of DFG-XML) to a PE. An exhaustive exploration of all of the possible static placements was carried out to select the best placement.
2. **Dynamic scheduling**: In this implementation without unfolding ($J=1$), we additionally use the dynamic scheduler of StarPU to balance the load inside the loop.
3. **Static scheduling with unfolding**: In this implementation, we unfold the main loop 10 times ($J=10$), but the scheduling is the same as in the first implementation.
4. **Dynamic scheduling with unfolding**: In this last implementation, the test includes both functionalities, the dynamic scheduling (work stealing), and the graph unfolding ($J=10$).
In Figure 10, which is related to the synthetic application, and Figure 11, which is related to the saliency application, we show the performance of the four implementations of each application on a CPU+GPU hardware configuration. The results are presented in the form of four groups of bars, where each group represents the global execution time of an implementation. Each bar composing a group represents the global processing time on one processing element, decomposed into: execution time, labeled "Execution", which represents the effective processing time on the computing unit; sleeping time, labeled "Sleep", which represents the time when no computation is done on the device; and overhead time, labeled "Overhead", which represents the time necessary to manage the work and the device.

The first group of bars in Figure 10 (left) illustrates the global processing time of 1,000 images with the dynamic scheduling implementation. This represents poor performance for two reasons. First, load balancing between the GPUs is performed, but poorly, as shown in the "Execution" section: the scheduler cannot optimally share the jobs between the GPUs because of the coarse granularity of the tasks. Second, the sleeping time of each GPU is high, as shown in the "Sleep" section, because the iterative form of the implementation forces available devices to wait for the slowest one.
In the second group of bars in Figure 10, labeled "Static scheduling", the performance is enhanced in the saliency implementation (Figure 11), because the optimal manual scheduling is more efficient than the previous dynamic scheduling: the iteration time is reduced, and thus the execution times ("Execution" section) and sleeping times ("Sleep" section) on the GPUs decrease. Note that, in the saliency application case, only one GPU (Quadro4000) is used to process all of the operators; task parallelism is not exploited because the application design does not contain inherent parallelism. However, for the synthetic application, performance decreases in comparison to the first implementation, due to the load imbalance inside the iteration caused by statically mapping the operators onto the GPUs.
The third group of bars in Figure 10, labeled "Static scheduling with unfolding", shows a further performance improvement compared to the second one. Indeed, even though the execution times ("Execution" section) do not change, since the operators are processed with the same placement, the sleeping times are reduced thanks to the unfolding technique, which allows the processing of several iterations to overlap. Thus, the devices do not wait for the last task of an iteration before processing their next tasks.
The last group of bars in Figure 10, labeled "Dynamic scheduling with unfolding", represents the best performance of all the implementations. The execution time is enhanced by the load balancing performed by the dynamic scheduler, which, dealing with the larger number of tasks made available by the unfolding technique, can distribute jobs between processing elements more efficiently. Thus, as proposed in our PPM, it is worthwhile to combine the unfolding technique with dynamic scheduling to efficiently take advantage of device availability.
5.2.3. Overhead
In this experiment, we focus on estimating the overhead due to run-time management (e.g., management of devices, tasks, memory, and dependencies) and the impact of using the proposed functionalities (e.g., task pipelining, buffer reuse) described in sub-section 4.2. To that end, we use the SignalPU implementations of the synthetic and saliency applications to process an increasing dataset of images (from 1,000 to 10,000), and we measure the percentage of overhead in the global execution time. In Figure 12, we show the evolution of this percentage for three implementations: the PACCO implementation of the saliency application, shown as the blue line marked with diamonds, converges to a 2% overhead rate because it uses static scheduling and does not manage data synchronization (static run-time); the SignalPU implementations of the synthetic and saliency applications, shown as the red and green lines marked with squares and triangles, respectively, stabilize at about a 7% overhead rate. It should be noted that even if the overhead of the SignalPU implementations is greater than that of the PACCO implementation, SignalPU exploits the hardware better and leads to better overall performance.
5.2.4. Task performance
The last experiment we present is the evolution of the processing time of the tasks over the iterations. The goal is to show the impact of the proposed functionalities (e.g., initialization saving, operator auto-tuning). For each application, we measure the processing time of one operator: \texttt{PixelProcessingSimu()} for the synthetic application and \texttt{Interaction()} for the saliency application, during the first 15 iterations on two GPUs. The results are shown in Figure 13 for the synthetic and saliency applications. First, we note that the initialization part of each operator, represented with a dashed line at the first iteration of each curve, is done only once for each device, due to the proposed initialization-saving optimization. Second, we show that the computation times can vary by up to a factor of two according to the threads-per-block parameter, and that the exploration converges to an optimum that depends on the GPU architecture. The tuning is carried out iteratively, and the best parameters are saved: we finally obtain (128,1,1) and (256,1,1) for the \texttt{PixelProcessingSimu} operator on the GTX780 and Quadro4000, and (32,8,1) and (32,16,1) for the \texttt{Interaction} operator on the GTX780 and Quadro4000, respectively. Thus, due to the proposed execution-model functionalities of initialization saving and task auto-tuning, we take advantage of the hardware capability to enhance the execution time of the tasks, while the designer does not have to worry about this work.
Figure 13. Evolution of the execution time of the tasks according to the threads-per-block parameter: \texttt{PixelProcessingSimu} of the synthetic application (top), and \texttt{Interaction} of the saliency application (bottom).
6. CONCLUSION
In this paper, we have presented SignalPU, a DSP-domain-specific PPM based on a DFG model of computation and a dynamic run-time (StarPU) that allows programmers to easily and efficiently implement their DSP applications on parallel and heterogeneous architectures. We have thus shown that it is possible to construct a domain-specific PPM that achieves both productivity and performance using a dynamic task-based run-time. In this study, we first presented PPMs and classified them according to their abstraction level. After that, we presented, classified, and discussed related studies on the implementation of DSP applications on parallel and heterogeneous clusters. Then, according to the extracted characteristics of DSP implementations, we proposed a novel high-level abstraction PPM, named SignalPU, which is structured at the following three levels: (1) DFG-XML interface: the application is easily modeled in the form of a DFG using an XML interface; the user does not need to deal with the StarPU library to express an application. (2) Implementation: using several functionalities (e.g., graph unfolding, buffer reuse, task pipelining, MPI multi-node distribution), the implementation of the application is optimized to exploit all levels of parallelism while overcoming run-time overhead; the user does not need to worry about architecture specificities to implement an application. (3) Run-time: the execution is effectively managed to take advantage of the availability and capability of the devices; the user gets an efficient execution sequence due to the StarPU scheduler, and each task is enhanced on each device due to the proposed functionalities of task auto-tuning and initialization saving. Finally, the implementations of two applications were presented. We have shown how easy it is to implement them in comparison to an MPI+OpenMP+CUDA expression, while obtaining enhanced performance.
We have also shown that this approach offers better performance than PACCO, a BSP tool based on MPI+Pthread+CUDA that we previously developed. In future work, we would like to generalize our approach to more complex DSP applications, such as dynamic applications represented with Boolean data flow and data-dependent applications. We would also like to study how to offer the user the choice of the unfolding degree, taking into account the targeted clusters, and to adapt the proposed auto-tuning technique to other parameters and other kinds of operators. Finally, we will work on automatic multi-node load balancing based on the unfolding.
REFERENCES
Package `ggOceanMaps`
January 8, 2022
**Type** Package
**Title** Plot Data on Oceanographic Maps using 'ggplot2'
**Version** 1.2.6
**Date** 2022-01-05
**URL** https://mikkovihtakari.github.io/ggOceanMaps/
**BugReports** https://github.com/MikkoVihtakari/ggOceanMaps/issues
**Description** Allows plotting data on bathymetric maps using 'ggplot2'. Plotting oceanographic spatial data is made as simple as feasible, but also flexible for custom modifications. Data that contain geographic information from anywhere around the globe can be plotted on maps generated by the basemap() or qmap() functions using 'ggplot2' layers separated by the '+' operator. The package uses spatial shapefiles stored in the 'ggOceanMapsData' package, geospatial packages for R to manipulate, and the 'ggspatial' package to help to plot these shapefiles. High-resolution shapefiles for detailed maps are stored on GitHub and downloaded automatically when needed.
**Depends** R (>= 3.5.0), ggplot2, ggspatial
**Imports** sp, raster, sf, rgeos, methods, utils, stars, smoothr, units, dplyr, parallel
**Suggests** ggOceanMapsData, cowplot, knitr, rmarkdown, scales, rgdal, ggnewscale
**Additional_repositories** https://mikkovihtakari.github.io/drat
**License** GPL-3
**Encoding** UTF-8
**RoxygenNote** 7.1.2
**NeedsCompilation** no
**Author** Mikko Vihtakari [aut, cre] (Institute of Marine Research, <https://orcid.org/0000-0003-0371-4319>), Yves Reecht [ctb] (Institute of Marine Research, <https://orcid.org/0000-0003-3583-1843>), Hadley Wickham [ctb], Simon O’Hanlon [ctb], Roger Bivand [ctb]
basemap
Create a ggplot2 basemap for plotting variables
Description
Creates a ggplot2 basemap for further plotting of variables.
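For instance, a minimal call might look as follows. The point data here are made up for illustration; `basemap()` and the automatic limit guessing are described below, and `geom_spatial_point()` comes from the ggspatial package on which ggOceanMaps depends.

```r
library(ggOceanMaps)

# Hypothetical point data in decimal degrees
dt <- data.frame(lon = c(-30, -5, 20), lat = c(66, 72, 78))

# Map limits are guessed from the data; points are added as a ggplot2 layer
basemap(data = dt, bathymetry = TRUE) +
  ggspatial::geom_spatial_point(data = dt, aes(x = lon, y = lat), color = "red")
```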
Usage
basemap(
x = NULL,
limits = NULL,
data = NULL,
shapefiles = NULL,
bathymetry = FALSE,
glaciers = FALSE,
rotate = FALSE,
legends = TRUE,
legend.position = "right",
lon.interval = NULL,
lat.interval = NULL,
bathy.style = "poly_blues",
bathy.border.col = NA,
bathy.size = 0.1,
land.col = "grey60",
land.border.col = "black",
land.size = 0.1,
)
Arguments
x The limit type (limits or data) is automatically recognized from the class of this argument.
limits Map limits. One of the following:
• **numeric vector** of length 4: The first element defines the start longitude, the second element the end longitude (counter-clockwise), the third element the minimum latitude and the fourth element the maximum latitude of the bounding box. The coordinates can be given as decimal degrees or coordinate units for shapefiles used by a projected map. Produces a rectangular map. Latitude limits not given in min-max order are automatically ordered to respect this requirement.
• **single integer** between 30 and 88 or -88 and -30 produces a polar map for the Arctic or Antarctic, respectively.
Can be omitted if data or shapefiles are defined.
data A data frame, SpatialPolygons, or sf shape containing longitude and latitude coordinates. If a data frame, the coordinates have to be given in decimal degrees. The limits are extracted from these coordinates and produces a rectangular map. Suited for situations where a certain dataset is plotted on a map. The function attempts to **guess the correct columns** and it is advised to use intuitive column names for longitude (such as "lon", "long", or "longitude") and latitude ("lat", "latitude") columns. Can be omitted if limits or shapefiles are defined.
shapefiles Either a list containing shapefile information or a character argument referring to a name of pre-made shapefiles in shapefile_list. This name is partially matched. Can be omitted if limits or data are defined as decimal degrees.
bathymetry Logical indicating whether bathymetry should be added to the map.
glaciers Logical indicating whether glaciers and ice-sheets should be added to the map.
rotate Logical indicating whether the projected maps should be rotated to point towards the pole relative to mid-longitude limit. Experimental.
legends Logical indicating whether the legend for bathymetry should be shown.
legend.position The position for ggplot2 legend. See the argument with the same name in theme.
lon.interval, lat.interval Numeric value specifying the interval of longitude and latitude grids. NULL finds reasonable defaults depending on limits.
bathy.style Character defining the style for bathymetry contours. Alternatives:
- "poly_blues" plots polygons filled with different shades of blue.
- "poly_greys" plots polygons filled with different shades of gray.
- "contour_blues" contour lines with different shades of blue.
- "contour_grey" plots gray contour lines.
land.col, gla.col, grid.col Character code specifying the color of land, glaciers and grid lines, respectively. Use NA to remove the grid lines.
land.border.col, gla.border.col, bathy.border.col Character code specifying the color of the border line for land, glacier, and bathymetry shapes.
land.size, gla.size, bathy.size, grid.size Numeric value specifying the width of the border line for land, glacier and bathymetry shapes as well as the width of the grid lines, respectively. Use the LS function for a specific width in pt. See Details.
base_size Base size parameter for ggplot. See ggtheme.
projection.grid Logical indicating whether the coordinate grid should show projected coordinates instead of decimal degree values. Useful to define limits for large maps in polar regions.
expand.factor Expansion factor for map limits with the data argument. Can be used to zoom in and out automatically limited maps. Defaults to 1.1. Set to NULL to ignore.
verbose Logical indicating whether information about the projection and guessed column names should be returned as message. Set to FALSE to make the function silent.
Details
The function uses ggplot2, ggspatial, GIS packages of R, and shapefiles to plot maps of the world’s oceans.
Projections
If the shapefiles are not specified, the function uses either the limits or data arguments to decide which projection to use. Up-to-date conditions are defined in define_shapefiles and shapefile_list functions. At the time of writing, the function uses three different projections (given as EPSG codes)
- 3995 WGS 84 / Arctic Polar Stereographic. Called "ArcticStereographic". For max latitude (limits[4]) >= 60 (if min latitude (limits[3]) >= 30), and single integer latitudes >= 30 and <= 89.
- 3031 WGS 84 / Antarctic Polar Stereographic. Called "AntarcticStereographic". For max latitude (limits[4]) <= -60 (if min latitude (limits[3]) <= -30), and single integer latitudes <= -30 and >= -89.
- 4326 WGS 84 / World Geodetic System 1984, used in GPS. Called "DecimalDegree". For min latitude (limits[3]) < 30 or > -30, max latitude (limits[4]) < 60 or > -60, and single integer latitudes < 30 and > -30.
Limits
If the limits are in decimal degrees, the longitude limits ([1:2]) specify the start and end segments of corresponding angular lines that should reside inside the map area. The longitude limits are defined counter-clockwise. The latitude limits [3:4] define the parallels that should reside inside the limited region given the longitude segments. Note that the actual limited region becomes wider than the polygon defined by the coordinates (shown in Examples). Using data to limit the map expands the map all around the data points to make them fit into the map. If the limits are given as projected coordinates or as decimal degrees for maps with -60 < latitude < 60, limits elements represent lines encompassing the map area in cartesian space.
Pre-made shapefiles
If the limits are not defined as decimal degrees (any longitude outside range [-180, 180] or latitude [-90, 90]), the function will ask to specify shapefiles. The shapefiles can be defined by partially matching the names of the pre-made shapefiles in shapefile_list (e.g. "Ar" would be enough for "ArcticStereographic") or by specifying custom shapefiles.
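The partial matching behaves like base R's pmatch-based matching; an illustrative sketch (the shapefile names below are the ones listed in the Projections section above):

```r
# Pre-made shapefile names at the time of writing (see shapefile_list("all")):
nm <- c("ArcticStereographic", "AntarcticStereographic", "DecimalDegree")

# "Ar" is an unambiguous prefix of "ArcticStereographic" only;
# "An" would resolve to "AntarcticStereographic" in the same way:
match.arg("Ar", nm)  # "ArcticStereographic"
```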
Custom shapefiles
Custom shapefiles have to be a named list containing at least the following elements:
- **land** Object name of the SpatialPolygonsDataFrame containing land. Required.
- **glacier** Object name of the SpatialPolygonsDataFrame containing glaciers. Use NULL if glaciers are not needed.
- **bathy** Object name of the SpatialPolygonsDataFrame containing bathymetry contours. Use NULL if bathymetry is not needed.
See Examples.
Line width and font size
The line size aesthetics in ggplot2 generates lines that are approximately 2.13 times wider, measured in pt, than the given values. If you want a specific line width in pt, use the internal function LS to convert the desired line width to its ggplot2 equivalent. A similar function is also available for font sizes (FS).
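The conversion is a simple division by the factor quoted above; a minimal sketch (the package ships its own LS and FS helpers, the function below is only an illustration):

```r
# Illustrative stand-in for LS: convert a width in pt to the value passed to
# ggplot2's size aesthetic, assuming the ~2.13 factor mentioned above.
pt_to_ggplot_size <- function(width_pt) width_pt / 2.13

pt_to_ggplot_size(1)  # pass ~0.47 as size to draw a 1 pt line
```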
CRS warnings
Open-source GIS systems are rolling over to a new projection definition system. The changes to underlying systems appear to sometimes trigger warnings the user can ignore as long as the resulting map looks OK. Bug reports regarding these warnings are appreciated.
Value
Returns a ggplot map, which can be assigned to an object and modified as any ggplot object.
Author(s)
Mikko Vihtakari
References
Note that if you use this function to generate maps for a publication, it is advised to cite the underlying data. The spatial data used by this function have been acquired from following sources:
• **Land polygons.** Natural Earth Data 1:10m Physical Vectors with the Land and Minor Island datasets combined. Distributed under the CC Public Domain license (terms of use).
• **Glacier polygons.** Natural Earth Data 1:10m Physical Vectors with the Glaciated Areas and Antarctic Ice Shelves datasets combined. Distributed under the CC Public Domain license (terms of use)
See Also
ggplot
Other basemap functions: qmap(), shapefile_list(), transform_coord()
Examples
```r
# The easiest way to produce a map is to use the limits
# argument and decimal degrees:
if(requireNamespace("ggOceanMapsData")) {
basemap(limits = 60)
}
# Bathymetry and glaciers can be added using the respective arguments:
basemap(limits = -60, bathymetry = TRUE, glaciers = TRUE)
# The easiest way to add data on the maps is to use the ggspatial functions:
dt <- data.frame(lon = c(-150, 150), lat = c(60, 90))
basemap(data = dt, bathymetry = TRUE) +
geom_spatial_point(data = dt, aes(x = lon, y = lat), color = "red")
## Not run:
# Note that writing out data = dt is required because there are multiple
# underlying ggplot layers plotted already:
basemap(data = dt) +
geom_spatial_point(dt, aes(x = lon, y = lat), color = "red")
#> Error: `mapping` must be created by `aes()`
## End(Not run)
# If you want to use native ggplot commands, you need to transform your data
# to the projection used by the map:
if(requireNamespace("ggOceanMapsData")) {
dt <- transform_coord(dt, bind = TRUE)
basemap(data = dt) + geom_point(data = dt, aes(x = lon.proj, y = lat.proj))
}
# The limits argument of length 4 plots a map anywhere in the world:
basemap(limits = c(100, 160, -20, 30), bathymetry = TRUE)
# The limits argument leads to expanded maps towards the poles:
dt <- data.frame(lon = c(-160, 160, 160, -160), lat = c(80, 80, 60, 60))
basemap(limits = c(160, -160, 60, 80)) +
geom_spatial_polygon(data = dt, aes(x = lon, y = lat),
fill = NA, color = "red")
# The limits are further expanded when using the data argument:
basemap(data = dt) +
geom_spatial_polygon(data = dt, aes(x = lon, y = lat),
fill = NA, color = "red")
# Rotate:
basemap(data = dt, rotate = TRUE) +
geom_spatial_polygon(data = dt, aes(x = lon, y = lat),
fill = NA, color = "red")
## To find UTM coordinates to limit a polar map:
basemap(limits = 60, projection.grid = TRUE)
basemap(limits = c(2.5e4, -2.5e6, 2e6, -2.5e5), shapefiles = "Arctic")
# Using custom shapefiles
data(bs_shapes, package = "ggOceanMapsData")
basemap(shapefiles = list(land = bs_land, glacier = NULL, bathy = bs_bathy),
bathymetry = TRUE)
# grid.col = NA removes grid lines, rotate = TRUE rotates northwards
basemap(limits = c(-180, -140, 50, 70), grid.col = NA, rotate = TRUE)
# Rename axis labels
basemap(limits = c(-140, -105, 20, 40), bathymetry = TRUE) + xlab("Lat")
# Remove axis labels
basemap(limits = c(0, 60, 68, 80)) + labs(x = NULL, y = NULL)
basemap(limits = c(0, 60, 68, 80), rotate = TRUE) +
theme(axis.title = element_blank(),
axis.text = element_blank(),
axis.ticks.x = element_blank(),
axis.ticks.y = element_blank())
```
dist2land
*Calculate distance to the closest land for coordinates in a data frame*
**Description**
Calculates the closest distance to land for coordinates in a data frame
**Usage**
```r
dist2land(
data,
lon = NULL,
lat = NULL,
shapefile = NULL,
proj.in = convert_crs(4326),
bind = TRUE,
dist.col = "ldist",
binary = FALSE,
cores = getCores(),
verbose = TRUE
)
```
**Arguments**
- **data**: Data.frame containing geographic coordinates
- **lon, lat**: Either the names of the longitude and latitude columns in `data` or `NULL` to **guess the longitude and/or latitude columns** in `data`.
- **shapefile**: Land shape to which distances should be calculated. Either a character argument referring to a name of pre-made shapefiles in `shapefile_list`, a single `SpatialPolygons` object or `NULL` to enable automatic definition of the land shapes based on `data`.
- **proj.in**: `proj4string` projection argument for the coordinates in `data`.
- **bind**: Logical indicating whether x should be returned with the distances (TRUE, default) or should the distances be returned as vector (FALSE).
- **dist.col**: The name of the distance column, if `bind = TRUE`. Defaults to "ldist".
- **binary**: Logical indicating whether binary (TRUE = the position is in the ocean, FALSE = the position is on land) should be returned instead of distances. Speeds up the function considerably.
- **cores**: Integer value defining how many cores should be used in the distance calculations. Parallelization speeds up the function (see `parallel::mclapply`), but naturally eats up computer resources during the calculation. Set to 1 to remove parallelization.
- **verbose**: Logical indicating whether information about the process should be returned as messages. Set to FALSE to make the function silent.
Details
The function calculates distances using projected coordinates and the rgeos::gDistance function. These distances do not consider the curvature of the Earth unless the projection of the used land shape does so (check out geosphere::dist2Line if you want exact distances). The function is fairly slow for large datasets. If you only want to use the function to remove (wrong) observations reported on land, set the binary argument to TRUE. This speeds up the calculations considerably.
The dist2land function offers parallel processing, which speeds up the calculations for large datasets. Parallel processing has not been tested under Windows yet and may not work.
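The chunk-and-combine pattern behind such parallelization can be sketched with base R's parallel package (a simplified illustration, not the package's actual implementation; `par_distances` and `toy_fun` are made up for this sketch):

```r
library(parallel)

# Split rows into one chunk per core, compute distances per chunk, recombine.
# dist_fun stands in for the per-point distance calculation.
par_distances <- function(data, dist_fun, cores = 1) {
  idx <- seq_len(nrow(data))
  chunks <- split(idx, cut(idx, cores, labels = FALSE))
  res <- mclapply(chunks, function(i) dist_fun(data[i, , drop = FALSE]),
                  mc.cores = cores)  # mc.cores > 1 does not work on Windows
  unlist(res, use.names = FALSE)
}

# Toy "distance": Euclidean distance from the origin in degrees
toy_fun <- function(d) sqrt(d$lon^2 + d$lat^2)
dt <- data.frame(lon = c(3, 0), lat = c(4, 5))
par_distances(dt, toy_fun, cores = 1)  # 5 5
```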
Value
Returns a vector if bind = FALSE, otherwise a data frame. The distances are given in a new column defined by the dist.col argument. The distances are kilometers if binary = FALSE, otherwise logical (TRUE = the position is in the ocean, FALSE = the position is on land).
Author(s)
Mikko Vihtakari
Examples
```r
# Simple example:
dt <- data.frame(lon = seq(-20, 80, length.out = 41), lat = 50:90)
dt <- dist2land(dt, cores = 1)
qmap(dt, color = ldist) + scale_color_viridis_c()
# No premade shapefiles for datasets covering the entire globe
dt <- data.frame(lon = -20:20, lat = seq(-90, 90, length.out = 41))
dist2land(dt, cores = 1) # wrong!
## Not run:
dt <- data.frame(lon = seq(-179, 179, length.out = 1000), lat = rep(60, 1000))
# The distance calculation is slow for large datasets
system.time(dist2land(dt))
#> user system elapsed
#> 0.073 0.041 5.627
# Disabling parallel processing (cores = 1) slows it down
system.time(dist2land(dt, cores = 1))
#> user system elapsed
#> 19.719 1.237 20.894
# binary = TRUE further speeds the function up
system.time(dist2land(dt, binary = TRUE))
#> user system elapsed
#> 1.624 0.041 1.680
## End(Not run)
```
geonorge_bathymetry
*Open Geonorge bathymetry shapefiles*
**Description**
Opens and formats Geonorge bathymetry shapefiles ready for plotting in ggOceanMaps
**Usage**
```r
geonorge_bathymetry(filepath, layer = NULL, verbose = FALSE, output.sf = FALSE)
```
**Arguments**
<table>
<thead>
<tr>
<th>Argument</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>filepath</td>
<td>Character string defining the path to the .gml file. Must contain the file extension.</td>
</tr>
<tr>
<td>layer</td>
<td>Character string defining the layer containing depth information. If NULL assumed to be "dybdeareal".</td>
</tr>
<tr>
<td>verbose</td>
<td>Logical indicating whether information about the reading process should be returned.</td>
</tr>
<tr>
<td>output.sf</td>
<td>Logical indicating whether an sf (TRUE) or sp polygon should be returned.</td>
</tr>
</tbody>
</table>
**Details**
You can download the bathymetry polygon shapefiles from Geonorge. Download the file in GML format.
**Value**
An sf or sp object containing the depth polygons. Uses the same projection as bathy (see CRS).
**Author(s)**
Mikko Vihtakari
**See Also**
Other create shapefiles: `clip_shapefile()`, `raster_bathymetry()`, `vector_bathymetry()`
qmap
*Quick map*
Description
qmap is a shortcut similar to ggplot2’s `qplot` designed to quickly plot data with a limited range of options.
Usage
```r
qmap(
data,
x = NULL,
y = NULL,
geom = "point",
limits = NULL,
shapefiles = NULL,
bathymetry = FALSE,
glaciers = FALSE,
rotate = FALSE,
legends = TRUE,
legend.position = "right",
lon.interval = NULL,
lat.interval = NULL,
bathy.style = "poly_blues",
bathy.border.col = NA,
bathy.size = 0.1,
land.col = "grey60",
land.border.col = "black",
land.size = 0.1,
gla.col = "grey95",
gla.border.col = "black",
gla.size = 0.1,
grid.col = "grey70",
grid.size = 0.1,
base_size = 11,
projection.grid = FALSE,
expand.factor = 1.1,
verbose = FALSE,
...
)
```
Arguments
- `data`: Data frame to use.
- `x, y, ...`: Aesthetics passed into each layer. Longitude and latitude columns are automatically recognized using the `guess_coordinate_columns` function.
geom Character argument specifying geom(s) to draw. Defaults to "point". Other alternatives are "text" and "label". The "text" option can also be triggered by simply mapping a variable to label (see Examples).
limits Map limits. One of the following:
- **numeric vector** of length 4: The first element defines the start longitude, the second element the end longitude (counter-clockwise), the third element the minimum latitude and the fourth element the maximum latitude of the bounding box. The coordinates can be given as decimal degrees or coordinate units for shapefiles used by a projected map. Produces a rectangular map. Latitude limits not given in min-max order are automatically ordered to respect this requirement.
- **single integer** between 30 and 88 or -88 and -30 produces a polar map for the Arctic or Antarctic, respectively.
Can be omitted if data or shapefiles are defined.
shapefiles Either a list containing shapefile information or a character argument referring to a name of pre-made shapefiles in `shapefile_list`. This name is partially matched. Can be omitted if limits or data are defined as decimal degrees.
bathymetry Logical indicating whether bathymetry should be added to the map.
glaciers Logical indicating whether glaciers and ice-sheets should be added to the map.
rotate Logical indicating whether the projected maps should be rotated to point towards the pole relative to mid-longitude limit. Experimental.
legends Logical indicating whether the legend for bathymetry should be shown.
legend.position The position for ggplot2 legend. See the argument with the same name in `theme`.
lon.interval, lat.interval Numeric value specifying the interval of longitude and latitude grids. NULL finds reasonable defaults depending on limits.
bathy.style Character defining the style for bathymetry contours. Alternatives:
- "poly_blues" plots polygons filled with different shades of blue.
- "poly_greys" plots polygons filled with different shades of gray.
- "contour_blues" contour lines with different shades of blue.
- "contour_grey" plots gray contour lines.
bathy.border.col Character code specifying the color of the border line for bathymetry shapes.
bathy.size Numeric value specifying the width of the border line for bathymetry shapes. Use the LS function for a specific width in pt. See Details.
land.col Character code specifying the color of land.
land.border.col Character code specifying the color of the border line for land shapes.
land.size Numeric value specifying the width of the border line for land shapes. Use the LS function for a specific width in pt. See Details.
gla.col Character code specifying the color of glaciers.
gla.border.col Character code specifying the color of the border line for glacier shapes.
gla.size Numeric value specifying the width of the border line for glacier shapes. Use the LS function for a specific width in pt. See Details.
grid.col Character code specifying the color of grid lines. Use NA to remove the grid lines.
grid.size Numeric value specifying the width of the grid lines. Use the LS function for a specific width in pt. See Details.
base_size Base size parameter for ggplot. See ggtheme.
projection.grid Logical indicating whether the coordinate grid should show projected coordinates instead of decimal degree values. Useful to define limits for large maps in polar regions.
expand.factor Expansion factor for map limits with the data argument. Can be used to zoom in and out automatically limited maps. Defaults to 1.1. Set to NULL to ignore.
verbose Logical indicating whether information about the projection and guessed column names should be returned as message. Set to FALSE to make the function silent.
Value
Returns a ggplot map, which can be assigned to an object and modified as any ggplot object.
Author(s)
Mikko Vihtakari
See Also
Other basemap functions: basemap(), shapefile_list(), transform_coord()
Examples
dt <- data.frame(lon = c(-100, -80, -60), lat = c(10, 25, 40), var = c("a", "a", "b"))
# Set color
if(requireNamespace("ggOceanMapsData")) {
  qmap(dt, color = I("red"))
}
raster_bathymetry
Simplify a bathymetry raster ready for vectorization
Description
Simplifies bathymetry raster ready for the vector_bathymetry function. Warning: processing may take a long time if the bathymetry raster is large.
Usage
raster_bathymetry(
  bathy,
  depths,
  proj.out = NULL,
  proj.bathy,
  boundary = NULL,
  file.name = NULL,
  aggregation.factor = NA,
  verbose = TRUE
)
Arguments
bathy A raster object or a string giving the path to a bathymetry NetCDF or grd file
depths Numeric vector giving the cut points for depth contours (see cut).
proj.out A character string specifying the PROJ6 projection argument for the output. See st_crs and proj.org. If NULL, the projection is retrieved from bathy. If proj.out == proj.bathy, the output will not be reprojected.
proj.bathy
A character string specifying the CRS projection arguments for the input (bathy). Only required if bathy lacks CRS information. If missing, "EPSG:4326" is assumed.
boundary
A st_polygon object, text string defining the file path to a spatial polygon, or a numeric vector of length 4 giving the boundaries for which bathy should be cut to. Should be given as decimal degrees. If numeric vector, the first element defines the minimum longitude, the second element the maximum longitude, the third element the minimum latitude and the fourth element the maximum latitude of the bounding box. Use NULL not to cut bathy.
file.name
A character string specifying the file path without extension where the output should be saved. If NULL a temporary file will be used. See writeRaster.
aggregation.factor
An integer defining the fact argument from the aggregate function. Set to NA to ignore aggregation.
verbose
Logical indicating whether information about guessed projection should be returned as message. Set to FALSE to make the function silent.
Details
You can use GEBCO, IBCAO, ETOPO1 bathymetry grids downloaded from respective sources as the bathy argument. The bathymetry grids read from files must be in NetCDF/grd format. Alternatively use the marmap::getNOAA.bathy function to download ETOPO1 bathymetry and convert it to a raster object using the marmap::as.raster function.
Note that the size of the output is heavily influenced by the number of depth contours (depths) as well as the resolution of bathy and choice of aggregation.factor. To make the vector_bathymetry function and consequent plotting faster, limiting the details of the bathymetry raster may be desirable.
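The depths argument behaves like the breaks of base R's cut(); a minimal illustration of the depth binning (toy numbers, not the actual raster code):

```r
# Cut points (here in meters) classify each bathymetry value into an interval:
depths <- c(0, 50, 200, 1000, 6000)
cell_depths <- c(30, 180, 750, 4200)
cut(cell_depths, breaks = depths)
# each value falls into one of the four intervals (0,50], ..., (1e+03,6e+03]
```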
Value
A list with a raster object containing the projected bathymetry defined by the proj.out argument and a data frame of depth intervals.
Author(s)
Mikko Vihtakari
See Also
Other create shapefiles: clip_shapefile(), geonorge_bathymetry(), vector_bathymetry()
reorder_layers
Move basemap land, glacier and grid layers on top of other ggplot layers
Description
Moves existing land, glacier and grid layers on top of other layers. Useful for hiding region polygons under land.
Usage
reorder_layers(p)
Arguments
p ggplot object from the basemap function.
Details
This function has not been tested properly yet and is likely to contain bugs.
Value
Returns a ggplot object with land, glacier and grid layers on top.
Author(s)
Mikko Vihtakari
See Also
Other customize shapefiles: auto_limits(), theme_map()
shapefile_list
*A list of pre-made shapefiles for basemap*
Description
Lists the pre-made shapefiles available for the basemap function and returns the information required to plot them.
Arguments
name: A character argument giving the name of a pre-made shapefile. Will be partially matched. Use "all" to list all available ones.
get.data: Logical indicating whether spatial data should be returned instead of names of spatial data objects.
Details
Custom shapefiles for basemap should be defined as lists with (at least) the following names (everything should be provided as characters):
- land: Object name of the SpatialPolygonsDataFrame containing land. Required.
- glacier: Object name of the SpatialPolygonsDataFrame containing glaciers. Use NULL if glaciers are not needed.
- bathy: Object name of the SpatialPolygonsDataFrame containing bathymetry contours. Use NULL if bathymetry is not needed.
All linked spatial data objects must be in the same projection. Pre-made shapefiles contain additional elements that are used in the basemap function, but are not required for custom shapefile datasets.
Value
Returns a data frame of provided pre-made shapefiles, if name = "all". Returns a shapefile list containing the information for a particular map otherwise.
Author(s)
Mikko Vihtakari
See Also
Other basemap functions: basemap(), qmap(), transform_coord()
Examples
shapefile_list("all")
shapefile_list("Arctic") # partial matching
theme_map A ggplot2 theme for maps
Description
A ggplot2 theme for maps.
Usage
theme_map(...)
Arguments
... additional arguments passed to ggtheme.
grid.col Character code specifying the color of grid lines. Use NA to remove the grid lines.
grid.size Numeric value specifying the width of grid lines.
Value
A ggplot2 theme layer.
See Also
Other customize shapefiles: auto_limits(), reorder_layers()
transform_coord Transform spatial coordinates to another projection
Description
Transforms spatial coordinates from original projection (decimal degrees assumed) to another projection.
Usage
transform_coord(
x = NULL,
lon = NULL,
lat = NULL,
new.names = "auto",
proj.in = 4326,
proj.out = NULL,
verbose = FALSE,
bind = FALSE,
na = "ignore"
)
Arguments
x Data frame to be transformed. Can be omitted if numeric vectors are assigned to lon and lat.
lon, lat Either a name of the longitude and latitude columns in x or a numeric vector containing longitude and latitude coordinates. Use NULL to guess the longitude and/or latitude columns in x.
new.names Character vector of length 2 specifying the names of transformed longitude and latitude columns, respectively. Alternatively NULL, which returns column names from x or "auto", which uses NULL if bind = FALSE and c("lon.proj", "lat.proj") if bind = TRUE.
proj.in The original CRS. If NULL, the projection is taken from x. x must be a spatial object in that case.
proj.out Character. Either NULL, the CRS the coordinates should be transformed to, or a name of shapefiles in shapefile_list. If NULL, the output projection is determined automatically from the data. This option requires decimal degrees as input.
verbose Logical indicating whether information about the projection should be returned as message. Set to FALSE to make the function silent.
bind logical. Should only transformed coordinates be returned (FALSE, default) or should x be returned with transformed coordinates (TRUE)?
na character specifying the NA action for missing coordinates. The "ignore" option ignores the coordinates and returns NAs to transformed coordinates. The "remove" option removes missing values from x returning a message while doing it. Any other character argument will trigger na.fail stopping the function in case of missing coordinates.
Details
If x is specified, the function guesses longitude and latitude columns from x by default.
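A simplified sketch of such guessing (the package uses its internal guess_coordinate_columns function; the helper below is made up for illustration):

```r
# Guess longitude/latitude columns from common names, case-insensitively.
guess_lon_lat <- function(x) {
  lon <- grep("^(lon|long|longitude)$", names(x), ignore.case = TRUE, value = TRUE)
  lat <- grep("^(lat|latitude)$", names(x), ignore.case = TRUE, value = TRUE)
  c(lon = lon[1], lat = lat[1])
}

guess_lon_lat(data.frame(Longitude = 25, Latitude = 78))
# lon = "Longitude", lat = "Latitude"
```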
Value
Returns a data frame with transformed spatial coordinates.
Author(s)
Mikko Vihtakari
See Also
Other basemap functions: basemap(), qmap(), shapefile_list()
Examples
# Coordinates are automatically transformed to the pre-made shapefile projections:
x <- data.frame(lon = c(-150, 150), lat = c(60, 90))
transform_coord(x)
transform_coord(x, bind = TRUE)
x <- data.frame(lon = c(-150, 150), lat = c(20, 50))
transform_coord(x, bind = TRUE) # no transformation required.
vector_bathymetry
Create a polygon bathymetry from a raster bathymetry file
Description
Vectorizes bathymetry rasters. Designed to be used for the output of `raster_bathymetry` function. Warning: processing may take a long time if the bathymetry raster is large.
Usage
```r
vector_bathymetry(
bathy,
drop.crumbs = NULL,
remove.holes = NULL,
smooth = FALSE,
output.sf = FALSE
)
```
Arguments
- **bathy**: Output of the `raster_bathymetry` function.
- **drop.crumbs**: Single numeric value specifying a threshold (area in km2) for disconnected polygons which should be removed. Set to NULL to bypass the removal. Uses the `drop_crumbs` function.
- **remove.holes**: Single numeric value specifying a threshold (area in km2) for holes which should be removed. Set to NULL to bypass the removal. Uses the `fill_holes` function.
- **smooth**: Logical indicating whether the pixelated contours should be smoothed. Uses the `smooth_ksmooth` function.
- **output.sf**: Logical indicating whether an `sf` (TRUE) or `sp` (FALSE) polygon should be returned.
Details
The `drop.crumbs` and `remove.holes` arguments can be used to make the resulting object smaller in file size. The `smooth` argument can be used to remove the pixelated contours, but often increases file size. Note also that using this option will bias the contours with respect to the real world.
Value
An `sf` or `sp` object containing the depth polygons. Uses the same projection as bathy (see CRS).
Author(s)
Mikko Vihtakari
See Also
Other create shapefiles: `clip_shapefile()`, `geonorge_bathymetry()`, `raster_bathymetry()`
Index
* basemap functions: basemap, qmap, shapefile_list, transform_coord
* create shapefiles: geonorge_bathymetry, raster_bathymetry, vector_bathymetry
* customize shapefiles: reorder_layers, theme_map
* shapefiles: shapefile_list

aggregate, auto_limits, basemap, clip_shapefile, CRS, cut, define_shapefiles, dist2land, drop_crumbs, fill_holes, FS, geonorge_bathymetry, ggplot, ggplot2, ggspatial, ggtheme, guess the correct columns, guess the longitude and/or latitude columns, guess_coordinate_columns, list containing shapefile information, LS, proj4string, qmap, qplot, raster, raster_bathymetry, reorder_layers, sf, shapefile_list, smooth_ksmooth, spatial, SpatialPolygons, SpatialPolygonsDataFrame, st_crs, st_polygon, theme, theme_map, transform_coord, vector_bathymetry, writeRaster
34469, null], [34469, 37327, null], [37327, 40322, null], [40322, 42631, null], [42631, 45176, null], [45176, 47633, null], [47633, 50192, null], [50192, 52047, null], [52047, 55425, null], [55425, 57715, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2590, true], [2590, 5808, null], [5808, 8806, null], [8806, 11599, null], [11599, 14612, null], [14612, 17590, null], [17590, 20220, null], [20220, 22993, null], [22993, 25282, null], [25282, 27610, null], [27610, 30312, null], [30312, 30907, null], [30907, 34469, null], [34469, 37327, null], [37327, 40322, null], [40322, 42631, null], [42631, 45176, null], [45176, 47633, null], [47633, 50192, null], [50192, 52047, null], [52047, 55425, null], [55425, 57715, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 57715, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 57715, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 57715, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 57715, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 57715, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 57715, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 57715, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 57715, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 57715, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 57715, null]], "pdf_page_numbers": [[0, 2590, 1], [2590, 5808, 2], [5808, 8806, 3], [8806, 11599, 4], [11599, 14612, 5], [14612, 17590, 6], [17590, 20220, 7], [20220, 22993, 8], [22993, 25282, 9], [25282, 27610, 10], [27610, 30312, 11], [30312, 30907, 12], [30907, 34469, 13], [34469, 37327, 14], [37327, 40322, 15], [40322, 42631, 16], [42631, 45176, 17], [45176, 47633, 18], [47633, 
50192, 19], [50192, 52047, 20], [52047, 55425, 21], [55425, 57715, 22]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 57715, 0.0]]}
|
olmocr_science_pdfs
|
2024-11-26
|
2024-11-26
|
155c8e8dd0a9d878f2e80dcd2758774908767955
|
Package ‘tripack’
September 1, 2015
Version 1.3-7
Title Triangulation of Irregularly Spaced Data
Author Fortran code by R. J. Renka.
R functions by Albrecht Gebhardt <albrecht.gebhardt@aau.at>.
With contributions from Stephen Eglen <stephen@anc.ed.ac.uk>,
Sergei Zuyev <sergei@stams.strath.ac.uk> and
Denis White <white.denis@epamail.epa.gov>
Maintainer Albrecht Gebhardt <albrecht.gebhardt@aau.at>
Description A constrained two-dimensional Delaunay triangulation package
providing both triangulation and generation of Voronoi mosaics of
irregularly spaced data.
License ACM | file LICENSE
Date 2015-09-01
NeedsCompilation yes
License_restricts_use yes
Repository CRAN
Date/Publication 2015-09-01 20:46:27
R topics documented:
add.constraint
cells
circles
circtest
circum
circumcircle
convex.hull
identify.tri
in.convex.hull
left
neighbours
on.convex.hull
add.constraint
Add a constraint to a triangulation object
Description
This subroutine provides for creation of a constrained Delaunay triangulation which, in some sense, covers an arbitrary connected region R rather than the convex hull of the nodes. This is achieved simply by forcing the presence of certain adjacencies (triangulation arcs) corresponding to constraint curves. The union of triangles coincides with the convex hull of the nodes, but triangles in R can be distinguished from those outside of R. The only modification required to generalize the definition of the Delaunay triangulation is replacement of property 5 (refer to tri.mesh) by the following:
5') If a node is contained in the interior of the circumcircle of a triangle, then every interior point of the triangle is separated from the node by a constraint arc.
In order to be explicit, we make the following definitions. A constraint region is the open interior of a simple closed positively oriented polygonal curve defined by an ordered sequence of three or more distinct nodes (constraint nodes) P(1),P(2),...,P(K), such that P(I) is adjacent to P(I+1) for I = 1,...,K with P(K+1) = P(1). Thus, the constraint region is on the left (and may have nonfinite area) as the sequence of constraint nodes is traversed in the specified order. The constraint regions must not contain nodes and must not overlap. The region R is the convex hull of the nodes with constraint regions excluded.
Note that the terms boundary node and boundary arc are reserved for nodes and arcs on the boundary of the convex hull of the nodes.
The algorithm is as follows: given a triangulation which includes one or more sets of constraint nodes, the corresponding adjacencies (constraint arcs) are forced to be present (Fortran subroutine EDGE). Any additional new arcs required are chosen to be locally optimal (satisfy the modified circumcircle property).
Usage
```
add.constraint(tri.obj,cstx,csty,reverse=FALSE)
```
Arguments
- **tri.obj**: object of class "tri"
- **cstx**: vector containing x coordinates of the constraint curve.
- **csty**: vector containing y coordinates of the constraint curve.
- **reverse**: if TRUE the orientation of the constraint curve is reversed.
Value
A new object of class "tri".
References
See Also
`tri`, `print.tri`, `plot.tri`, `summary.tri`, `triangles`, `convex.hull`.
Examples
```r
# we will use the simple test data from TRIPACK:
data(tritest)
tritest.tr<-tri.mesh(tritest)
opar<-par(mfrow=c(2,2))
plot(tritest.tr)
# include all points in a big triangle:
tritest.tr<-add.constraint(tritest.tr,c(-0.1,2,-0.1),
c(-3,0.5,3),reverse=TRUE)
# insert a small cube:
tritest.tr <- add.constraint(tritest.tr, c(0.4, 0.4,0.6, 0.6),
c(0.6, 0.4,0.4, 0.6),
reverse = FALSE)
par(opar)
```
cells
extract info about voronoi cells
Description
This function returns some info about the cells of a voronoi mosaic, including the coordinates of
the vertices and the cell area.
Usage
cells(voronoi.obj)
Arguments
voronoi.obj object of class voronoi
Details
The function calculates the neighbourhood relations of the underlying triangulation and translates them into the neighbourhood relations between the voronoi cells.
Value
returns a list of lists, one entry for each voronoi cell which contains
cell cell index
center cell 'center'
neighbours neighbour cell indices
nodes 2 x nnb matrix with vertex coordinates
area cell area
Note
outer cells have area=NA, and currently also nodes=NA, which is not really useful – to be improved later
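For reference, the area entry is simply the polygon area of the cell's vertex ring, which the standard shoelace formula computes. A minimal sketch of that formula (an illustration only, not part of the package, shown in Python rather than R):

```python
def polygon_area(xs, ys):
    """Shoelace formula: absolute area of a simple polygon whose
    vertices (xs[i], ys[i]) are listed in boundary order."""
    n = len(xs)
    s = 0.0
    for i in range(n):
        j = (i + 1) % n  # next vertex, wrapping around
        s += xs[i] * ys[j] - xs[j] * ys[i]
    return abs(s) / 2.0

# unit square
print(polygon_area([0, 1, 1, 0], [0, 0, 1, 1]))  # 1.0
```

Outer (unbounded) cells have no finite vertex ring, which is why their area is reported as NA above.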
Author(s)
A. Gebhardt
See Also
voronoi.mosaic, voronoi.area
Examples
```r
data(tritest)
tritest.vm <- voronoi.mosaic(tritest$x, tritest$y)
tritest.cells <- cells(tritest.vm)
# highlight cell 12:
plot(tritest.vm)
polygon(t(tritest.cells[[12]]$nodes), col="green")
# put cell area into cell center:
text(tritest.cells[[12]]$center[1],
tritest.cells[[12]]$center[2],
tritest.cells[[12]]$area)
```
circles
plot circles
Description
This function plots circles at given locations with given radii.
Usage
```
circles(x, y, r, ...)
```
Arguments
```
x vector of x coordinates
y vector of y coordinates
r vector of radii
... additional graphic parameters will be passed through
```
Note
This function requires an existing plot, to which it adds the circles.
Author(s)
A. Gebhardt
See Also
`lines`, `points`
Examples
```r
x<-rnorm(10)
y<-rnorm(10)
r<-runif(10,0,0.5)
plot(x,y, xlim=c(-3,3), ylim=c(-3,3), pch="+")
circles(x,y,r)
```
circtest
circtest / sample data
Description
Sample data for the `circumcircle` function.
circtest2 are points sampled from a circle with some jitter added, i.e. they represent the most complicated case for the `circumcircle` function.
circum
Determine the circumcircle of a triangle
Description
This function returns the circumcircle of a triangle.
Usage
circum(x, y)
Arguments
x Vector of three elements, giving the x coordinates of the triangle nodes.
y Vector of three elements, giving the y coordinates of the triangle nodes.
Details
This is an interface to the Fortran function CIRCUM found in TRIPACK.
Value
x 'x' coordinate of center
y 'y' coordinate of center
radius circumcircle radius
signed.area signed area of triangle (positive iff nodes are numbered counterclockwise)
aspect.ratio ratio "radius of inscribed circle"/"radius of circumcircle", varies between 0 and 0.5; 0 means collinear points, 0.5 an equilateral triangle.
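The returned quantities follow from the standard circumcenter formula. A sketch of the same computation (an illustration in Python, not the Fortran CIRCUM routine the package calls):

```python
import math

def circum(x, y):
    """Circumcenter, circumradius and signed area for the triangle with
    vertices (x[0],y[0]), (x[1],y[1]), (x[2],y[2]).
    Collinear input makes d == 0, so the circumcircle is undefined."""
    ax, ay, bx, by, cx, cy = x[0], y[0], x[1], y[1], x[2], y[2]
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    radius = math.hypot(ax - ux, ay - uy)
    signed_area = d / 4.0  # positive iff the vertices are counterclockwise
    return ux, uy, radius, signed_area

# right triangle (0,0), (1,0), (0,1): center (0.5, 0.5), radius sqrt(0.5)
print(circum([0, 1, 0], [0, 0, 1]))
```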
Note
This function is mainly intended to be used by circumcircle.
Examples
```
circum(c(0,1,0), c(0,0,1))
```
circumcircle
Determine the smallest enclosing circle of a set of points
Description
The `circumcircle` function returns the (smallest) circumcircle of a set of n points, i.e. the smallest circle that contains all of the points.
Usage
```
circumcircle(x, y = NULL, num.touch = 2, plot = FALSE, debug = FALSE)
```
Arguments
- `x`: vector containing x coordinates of the data. If `y` is missing, `x` should contain two elements $x$ and $y$.
- `y`: vector containing y coordinates of the data.
- `num.touch`: How often should the resulting circle touch the convex hull of the given points? Default: 2. Possible values: 2 or 3.
- `plot`: Logical, produce a simple plot of the result. Default: FALSE.
- `debug`: Logical, more plots, only needed for debugging. Default: FALSE.
Details
This is a (naively implemented) algorithm which determines the smallest circumcircle of n points:
First step: take the convex hull.
Second step: determine two points on the convex hull with maximum distance, giving the diameter of the set.
Third step: check if the circumcircle of these two points already contains all other points (of the convex hull, and hence all other points).
If not, or if 3 or more touching points are desired (num.touch=3), search for a point with minimum enclosing circumcircle among the remaining points of the convex hull.
If such a point cannot be found (e.g. for data(circtest2)), search the remaining triangle combinations of points from the convex hull until an enclosing circle with minimum radius is found.
The last search uses an upper and a lower bound for the desired minimum radius: any enclosing rectangle and its circumcircle give an upper bound (the axis-parallel rectangle is used), and half the diameter of the set from step 1 is a lower bound.
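The two cases in the search (a diameter circle touching 2 hull points, a circumcircle touching 3) can be brute-forced for small inputs. A hypothetical O(n^4) sketch in Python, for illustration only, not the package's optimized search:

```python
from itertools import combinations

def _circum3(a, b, c):
    """Circumcenter and radius of three points, or None if collinear."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return ux, uy, ((ax - ux)**2 + (ay - uy)**2) ** 0.5

def _encloses(c, pts, eps=1e-9):
    cx, cy, r = c
    return all((x - cx)**2 + (y - cy)**2 <= (r + eps)**2 for x, y in pts)

def min_enclosing_circle(pts):
    """Try every diameter pair (the num.touch=2 case) and every
    circumcircle of a triple (num.touch=3); keep the smallest
    circle that encloses all points."""
    best = None
    for (ax, ay), (bx, by) in combinations(pts, 2):
        c = ((ax + bx) / 2, (ay + by) / 2,
             (((ax - bx)**2 + (ay - by)**2) ** 0.5) / 2)
        if _encloses(c, pts) and (best is None or c[2] < best[2]):
            best = c
    for a, b, c3 in combinations(pts, 3):
        c = _circum3(a, b, c3)
        if c and _encloses(c, pts) and (best is None or c[2] < best[2]):
            best = c
    return best  # (x, y, radius)
```

On the four corners of the unit square this returns the circle through opposite corners (radius sqrt(0.5)), the num.touch=2 case.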
Value
x ‘x’ coordinate of circumcircle center
y ‘y’ coordinate of circumcircle center
radius radius of circumcircle
Author(s)
Albrecht Gebhardt
See Also
convex.hull
Examples
data(circtest)
# smallest circle:
circumcircle(circtest,num.touch=2,plot=TRUE)
# smallest circle with maximum touching points (3):
circumcircle(circtest,num.touch=3,plot=TRUE)
# some stress test for this function,
data(circtest2)
# circtest2 was generated by:
# 100 random points, almost on a circle:
# alpha <- runif(100,0,2*pi)
# x <- cos(alpha)
# y <- sin(alpha)
# circtest2<-list(x=cos(alpha)+runif(100,0,0.1),
# y=sin(alpha)+runif(100,0,0.1))
# circumcircle(circtest2,plot=TRUE)
convex.hull
Return the convex hull of a triangulation object
Description
Given a triangulation tri.obj of n points in the plane, this subroutine returns two vectors containing the coordinates of the nodes on the boundary of the convex hull.
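The package reads the hull boundary off the triangulation itself; for illustration, the same boundary can be computed directly with Andrew's monotone chain. A language-neutral sketch in Python (not the Fortran code path the package uses):

```python
def convex_hull(pts):
    """Andrew's monotone-chain convex hull; returns the hull
    vertices in counterclockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # 2-D cross product of OA and OB; > 0 means a left turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # concatenate, dropping the duplicated endpoints
    return lower[:-1] + upper[:-1]

print(convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]))
```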
Usage
convex.hull(tri.obj, plot.it=FALSE, add=FALSE,...)
Arguments
tri.obj object of class "tri"
plot.it logical, if TRUE the convex hull of tri.obj will be plotted.
add logical. if TRUE (and plot.it=TRUE), add to a current plot.
... additional plot arguments
Value
x x coordinates of boundary nodes.
y y coordinates of boundary nodes.
Author(s)
A. Gebhardt
References
See Also
tri, print.tri, plot.tri, summary.tri, triangles, add.constraint.
Examples
# rather simple example from TRIPACK:
data(tritest)
tr<-tri.mesh(tritest$x, tritest$y)
convex.hull(tr, plot.it=TRUE)
# random points:
rand.tr<-tri.mesh(runif(10), runif(10))
plot(rand.tr)
rand.ch<-convex.hull(rand.tr, plot.it=TRUE, add=TRUE, col="red")
# use a part of the quakes data set:
data(quakes)
quakes.part<-quakes[(quakes[,1]<=-17 & quakes[,1]>=-19.0 &
quakes[,2]<=182.0 & quakes[,2]>=180.0),]
quakes.tri<-tri.mesh(quakes.part$lon, quakes.part$lat, duplicate="remove")
plot(quakes.tri)
convex.hull(quakes.tri, plot.it=TRUE, add=TRUE, col="red")
identify.tri
Identify points in a triangulation plot
Description
Identify points in a plot of "x" with its coordinates. The plot of "x" must be generated with plot.tri.
Usage
## S3 method for class 'tri'
identify(x,...)
Arguments
x object of class "tri"
... additional parameters for identify
Value
an integer vector containing the indexes of the identified points.
Author(s)
A. Gebhardt
See Also
tri, print.tri, plot.tri, summary.tri
### Examples
```r
data(tritest)
tritest.tr<-tri.mesh(tritest$x, tritest$y)
plot(tritest.tr)
identify.tri(tritest.tr)
```
---
### in.convex.hull
*Determines if points are in the convex hull of a triangulation object*
### Description
Given a triangulation `tri.obj` of `n` points in the plane, this subroutine returns a logical vector indicating if the points \((x_i, y_i)\) are contained within the convex hull of `tri.obj`.
### Usage
```r
in.convex.hull(tri.obj, x, y)
```
### Arguments
- `tri.obj`: object of class "tri"
- `x`: vector of x-coordinates of points to locate
- `y`: vector of y-coordinates of points to locate
### Value
Logical vector.
### Author(s)
A. Gebhardt
### References
### See Also
`tri`, `print.tri`, `plot.tri`, `summary.tri`, `triangles`, `add.constraint`, `convex.hull`
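The underlying idea can be sketched without the triangulation: a point is inside a convex polygon, taken counterclockwise, iff it is not strictly right of any directed boundary edge. An illustration in Python (not the package's Fortran path, which walks the triangulation):

```python
def in_convex_hull(hx, hy, px, py, eps=1e-12):
    """True if (px, py) is inside or on the convex polygon whose
    vertices (hx, hy) are listed counterclockwise."""
    n = len(hx)
    for i in range(n):
        j = (i + 1) % n
        # cross product of edge i->j with the vector to the point
        cross = (hx[j] - hx[i]) * (py - hy[i]) - (hy[j] - hy[i]) * (px - hx[i])
        if cross < -eps:  # strictly right of one edge: outside
            return False
    return True

hx, hy = [0, 1, 1, 0], [0, 0, 1, 1]  # unit square hull, CCW
print(in_convex_hull(hx, hy, 0.5, 0.5))  # True
print(in_convex_hull(hx, hy, 2.0, 0.5))  # False
```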
Examples
# example from TRIPACK:
data(tritest)
tr<-tri.mesh(tritest$x, tritest$y)
in.convex.hull(tr, 0.5, 0.5)
in.convex.hull(tr, c(0.5, -1, 1), c(0.5, 1, 1))
# use a part of the quakes data set:
data(quakes)
quakes.part<-quakes[(quakes[,1]<=-10.78 & quakes[,1]>=-19.4 &
quakes[,2]<=182.29 & quakes[,2]>=165.77),]
q.tri<-tri.mesh(quakes.part$lon, quakes.part$lat, duplicate="remove")
in.convex.hull(q.tri, quakes$lon[990:1000], quakes$lat[990:1000])
left
Determines whether given points are left of a directed edge.
Description
This function returns a logical vector indicating which elements of the given points P0 are left of
the directed edge P1->P2.
Usage
left(x0, y0, x1, y1, x2, y2)
Arguments
x0 Numeric vector, 'x' coordinates of points P0 to check
y0 Numeric vector, 'y' coordinates of points P0 to check, same length as 'x'.
x1 'x' coordinate of point P1
y1 'y' coordinate of point P1
x2 'x' coordinate of point P2
y2 'y' coordinate of point P2
Value
Logical vector.
Note
This is an interface to the Fortran function VLEFT, which is modeled after TRIPACK's LEFT function but accepts more than one point P0.
Author(s)
A. Gebhardt
See Also
in.convex.hull
Examples
left(c(0,0,1,1),c(0,1,0,1),0,0,1,1)
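The test reduces to the sign of a 2-D cross product. A minimal sketch in Python, assuming strict "left" (TRIPACK's LEFT may treat points exactly on the line differently):

```python
def left(x0, y0, x1, y1, x2, y2):
    """True where point (x0[i], y0[i]) lies strictly left of the
    directed edge P1 -> P2, i.e. the 2-D cross product is positive."""
    return [(x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) > 0
            for x, y in zip(x0, y0)]

# edge from (0,0) to (1,1): (0,1) is left of it, (1,0) is right
print(left([0, 1], [1, 0], 0, 0, 1, 1))  # [True, False]
```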
neighbours List of neighbours from a triangulation object
Description
Extract a list of neighbours from a triangulation object
Usage
neighbours(tri.obj)
Arguments
tri.obj object of class "tri"
Value
nested list of neighbours per point
Author(s)
A. Gebhardt
References
See Also
tri, print.tri, plot.tri, summary.tri, triangles
Examples
data(tritest)
tritest.tr<-tri.mesh(tritest$x, tritest$y)
tritest.nb<-neighbours(tritest.tr)
on.convex.hull
Determines if points are on the convex hull of a triangulation object
Description
Given a triangulation tri.obj of n points in the plane, this subroutine returns a logical vector indicating if the points \((x_i, y_i)\) lay on the convex hull of tri.obj.
Usage
on.convex.hull(tri.obj, x, y)
Arguments
tri.obj object of class "tri"
x vector of x-coordinates of points to locate
y vector of y-coordinates of points to locate
Value
Logical vector.
Author(s)
A. Gebhardt
References
See Also
tri, print.tri, plot.tri, summary.tri, triangles, add.constraint, convex.hull, in.convex.hull.
Examples
# example from TRIPACK:
data(tritest)
tr<-tri.mesh(tritest$x, tritest$y)
on.convex.hull(tr, 0.5, 0.5)
on.convex.hull(tr, c(0.5, -1, 1), c(0.5, 1, 1))
# use a part of the quakes data set:
data(quakes)
quakes.part<-quakes[(quakes[,1]<=-10.78 & quakes[,1]>=-19.4 &
quakes[,2]<=182.29 & quakes[,2]>=165.77),]
q.tri<-tri.mesh(quakes.part$lon, quakes.part$lat, duplicate="remove")
on.convex.hull(q.tri, quakes.part$lon[1:20], quakes.part$lat[1:20])
outer.convhull
Version of outer which operates only within a convex hull
Description
This version of `outer` evaluates `FUN` only on that part of the grid `(cx, cy)` that is enclosed within the convex hull of the points `(px, py)`. This can be useful for spatial estimation if no extrapolation is wanted.
Usage
```r
outer.convhull(cx, cy, px, py, FUN, duplicate = "remove", ...)
```
Arguments
- `cx`: x coordinates of grid
- `cy`: y coordinates of grid
- `px`: vector of x coordinates of points
- `py`: vector of y coordinates of points
- `FUN`: function to be evaluated over the grid
- `duplicate`: indicates what to do with duplicate `(px,py)` points, default "remove".
- `...`: additional arguments for `FUN`
Value
Matrix with values of `FUN` (NAs if outside the convex hull).
Author(s)
A. Gebhardt
See Also
`in.convex.hull`
Examples
```r
x <- runif(20)
y <- runif(20)
z <- runif(20)
z.lm <- lm(z ~ x + y)
f.pred <- function(x, y)
(predict(z.lm, data.frame(x = as.vector(x), y = as.vector(y))))
xg <- seq(0, 1, 0.05)
yg <- seq(0, 1, 0.05)
image(xg, yg, outer.convhull(xg, yg, x, y, f.pred))
points(x, y)
```
plot.tri
Plot a triangulation object
Description
plots the triangulation "x"
Usage
```r
## S3 method for class 'tri'
plot(x, add=FALSE, xlim=range(x$x), ylim=range(x$y),
do.points=TRUE, do.labels = FALSE, isometric=FALSE,...)
```
Arguments
- `x` object of class "tri"
- `add` logical, if TRUE, add to a current plot.
- `do.points` logical, indicates if points should be plotted.
- `do.labels` logical, indicates if points should be labelled
- `xlim,ylim` x/y ranges for plot
- `isometric` generate an isometric plot (default FALSE)
- `...` additional plot parameters
Value
None
Author(s)
A. Gebhardt
References
See Also
- `tri`, `print.tri`, `summary.tri`
Examples
```r
# random points
plot(tri.mesh(rpois(100, lambda=20), rpois(100, lambda=20), duplicate="remove"))
# use a part of the quakes data set:
data(quakes)
quakes.part<-quakes[(quakes[,1]<=-10.78 & quakes[,1]>=-19.4 &
quakes[,2]<=182.29 & quakes[,2]>=165.77),]
quakes.tri<-tri.mesh(quakes.part$lon, quakes.part$lat, duplicate="remove")
plot(quakes.tri)
# use the whole quakes data set
# (will not work with standard memory settings, hence commented out)
# plot(tri.mesh(quakes$lon, quakes$lat, duplicate="remove"), do.points=F)
```
Description
Plots the mosaic "x".
Dashed lines are used for outer tiles of the mosaic.
Usage
```r
## S3 method for class 'voronoi'
plot(x, add=FALSE,
xlim=c(min(x$tri$x) -
0.1*diff(range(x$tri$x)),
max(x$tri$x) +
0.1*diff(range(x$tri$x))),
ylim=c(min(x$tri$y) -
0.1*diff(range(x$tri$y)),
max(x$tri$y) +
0.1*diff(range(x$tri$y))),
all=FALSE,
do.points=TRUE,
main="Voronoi mosaic",
sub=deparse(substitute(x)),
isometric=FALSE,
...)
```
Arguments
- **x**: object of class "voronoi"
- **add**: logical, if TRUE, add to a current plot.
- **xlim**: x plot ranges, by default modified to hide dummy points outside of the plot
- **ylim**: y plot ranges, by default modified to hide dummy points outside of the plot
- **all**: show all points (including dummy points) in the plot
- **do.points**: logical, indicates if points should be plotted.
- **main**: plot title
- **sub**: plot subtitle
- **isometric**: generate an isometric plot (default FALSE)
- **...**: additional plot parameters
Value
None
Author(s)
A. Gebhardt
References
See Also
voronoi, print.voronoi, summary.voronoi
Examples
# plot a random mosaic
plot(voronoi.mosaic(runif(100), runif(100), duplicate="remove"))
# use isometric=TRUE and all=TRUE to see the complete mosaic
# including extreme outlier points:
plot(voronoi.mosaic(runif(100), runif(100), duplicate="remove"),
all=TRUE, isometric=TRUE)
# use a part of the quakes data set:
data(quakes)
quakes.part<-quakes[(quakes[,1]<=-17 & quakes[,1]>=-19.0 &
quakes[,2]<=182.0 & quakes[,2]>=180.0),]
quakes.vm<-voronoi.mosaic(quakes.part$lon, quakes.part$lat,
duplicate="remove")
plot(quakes.vm, isometric=TRUE)
# use the whole quakes data set
# (will not work with standard memory settings, hence commented out here)
#plot(voronoi.mosaic(quakes$lon, quakes$lat, duplicate="remove"), isometric=TRUE)
plot.voronoi.polygons
Plots a voronoi.polygons object
Description
Plots a voronoi.polygons object
Usage
## S3 method for class 'voronoi.polygons'
plot(x, which, color=TRUE, ...)
Arguments
x object of class voronoi.polygons
which index vector selecting which polygons to plot
color logical, determines if plot should be colored, default: TRUE
... additional plot arguments
Author(s)
A. Gebhardt
See Also
voronoi.polygons
Examples
data(tritest)
tritest.vm <- voronoi.mosaic(tritest$x, tritest$y)
tritestvp <- voronoi.polygons(tritest.vm)
plot(tritestvp)
plot(tritestvp, which=c(1,3,5))
print.summary.tri
*Print a summary of a triangulation object*
**Description**
Prints some information about tri.obj
**Usage**
```r
## S3 method for class 'summary.tri'
print(x, ...)
```
**Arguments**
- `x`: object of class "summary.tri", generated by `summary.tri`.
- `...`: additional parameters for `print`
**Value**
None
**Author(s)**
A. Gebhardt
**References**
**See Also**
`tri`, `tri.mesh`, `plot.tri`, `summary.tri`.
---
print.summary.voronoi
*Print a summary of a voronoi object*
**Description**
Prints some information about x
**Usage**
```r
## S3 method for class 'summary.voronoi'
print(x, ...)
```
Arguments
x object of class "summary.voronoi", generated by summary.voronoi.
... additional parameters for print
Value
None
Author(s)
A. Gebhardt
References
See Also
voronoi, voronoi.mosaic, print.voronoi, plot.voronoi, summary.voronoi.
print.voronoi Print a voronoi object
Description
prints a summary of "x"
Usage
## S3 method for class 'voronoi'
print(x, ...)
Arguments
x object of class "voronoi"
... additional parameters for print
Value
None
Author(s)
A. Gebhardt
References
See Also
voronoi, plot.voronoi, summary.voronoi
**summary.tri**
*Return a summary of a triangulation object*
---
**Description**
Returns some information (number of nodes, triangles, arcs, boundary nodes and constraints) about object.
**Usage**
```r
## S3 method for class 'tri'
summary(object,...)
```
**Arguments**
- `object` object of class "tri"
- `...` additional parameters for summary
**Value**
An object of class "summary.tri", to be printed by `print.summary.tri`. It contains the number of nodes (n), of arcs (na), of boundary nodes (nb), of triangles (nt) and constraints (nc).
**Author(s)**
A. Gebhardt
**References**
**See Also**
`tri, print.tri, plot.tri, print.summary.tri`.
summary.voronoi
Return a summary of a voronoi object
Description
Returns some information about object
Usage
```r
## S3 method for class 'voronoi'
summary(object, ...)
```
Arguments
- `object` object of class "voronoi"
- `...` additional parameters for `summary`
Value
Object of class "summary.voronoi".
It contains the number of nodes (nn) and dummy nodes (nd).
Author(s)
A. Gebhardt
References
See Also
`voronoi`, `voronoi.mosaic`, `print.voronoi`, `plot.voronoi`, `print.summary.voronoi`
tri
A triangulation object
Description
R object that represents the triangulation of a set of 2D points, generated by \texttt{tri.mesh} or \texttt{add.constraint}.
Arguments
- \texttt{n}: Number of nodes
- \texttt{x}: x coordinates of the triangulation nodes
- \texttt{y}: y coordinates of the triangulation nodes
- \texttt{tlist}: Set of nodal indexes which, along with \texttt{tlptr}, \texttt{tlend}, and \texttt{tlnew}, define the triangulation as a set of \texttt{n} adjacency lists – counterclockwise-ordered sequences of neighboring nodes such that the first and last neighbors of a boundary node are boundary nodes (the first neighbor of an interior node is arbitrary). In order to distinguish between interior and boundary nodes, the last neighbor of each boundary node is represented by the negative of its index.
- \texttt{tlptr}: Set of pointers in one-to-one correspondence with the elements of \texttt{tlist}. \texttt{tlist[tlptr[i]]} indexes the node which follows \texttt{tlist[i]} in cyclical counterclockwise order (the first neighbor follows the last neighbor).
- \texttt{tlend}: Set of pointers to adjacency lists. \texttt{tlend[k]} points to the last neighbor of node \texttt{k} for \texttt{k = 1,...,n}. Thus, \texttt{tlist[tlend[k]]}<0 if and only if \texttt{k} is a boundary node.
- \texttt{tlnew}: Pointer to the first empty location in \texttt{tlist} and \texttt{tlptr} (list length plus one).
- \texttt{nc}: number of constraints
- \texttt{lc}: starting indices of constraints in \texttt{x} and \texttt{y}
- \texttt{call}: call, which generated this object
Note
The elements \texttt{tlist}, \texttt{tlptr}, \texttt{tlend} and \texttt{tlnew} are mainly intended for internal use in the appropriate Fortran routines.
Author(s)
A. Gebhardt
References
See Also
\texttt{tri.mesh,print.tri,plot.tri,summary.tri}
tri.dellens
Compute the Delaunay segment lengths
Description
Return a vector of Delaunay segment lengths for the voronoi object. The Delaunay triangles connected to sites contained in exceptions vector are ignored (unless inverse is TRUE, when only those Delaunay triangles are accepted).
The exceptions vector is provided so that sites at the border of a region can be removed, as these tend to bias the distribution of Delaunay segment lengths. Exceptions can be created by voronoi.findrejectsites.
Usage
tri.dellens(voronoi.obj, exceptions = NULL, inverse = FALSE)
Arguments
- voronoi.obj: object of class "voronoi"
- exceptions: a numerical vector
- inverse: Logical
Value
A vector of Delaunay segment lengths.
Author(s)
S. J. Eglen
See Also
voronoi.findrejectsites, voronoi.mosaic
Examples
data(tritest)
tritest.vm <- voronoi.mosaic(tritest$x, tritest$y)
tritest.vm.rejects <- voronoi.findrejectsites(tritest.vm, 0, 0, 1)
trilens.all <- tri.dellens(tritest.vm)
trilens.acc <- tri.dellens(tritest.vm, tritest.vm.rejects)
trilens.rej <- tri.dellens(tritest.vm, tritest.vm.rejects, inverse=TRUE)
par(mfrow=c(3,1))
dotchart(trilens.all, main="all Delaunay segment lengths")
dotchart(trilens.acc, main="excluding border sites")
dotchart(trilens.rej, main="only border sites")
tri.find
Locate a point in a triangulation
Description
This subroutine locates a point \( P=(x,y) \) relative to a triangulation created by `tri.mesh`. If \( P \) is contained in a triangle, the three vertex indexes are returned. Otherwise, the indexes of the rightmost and leftmost visible boundary nodes are returned.
Usage
`tri.find(tri.obj,x,y)`
Arguments
- `tri.obj`: a triangulation object
- `x`: x-coordinate of the point
- `y`: y-coordinate of the point
Value
A list with elements \( i1, i2, i3 \) containing nodal indexes, in counterclockwise order, of the vertices of a triangle containing \( P=(x,y) \), or, if \( P \) is not contained in the convex hull of the nodes, \( i1 \) indexes the rightmost visible boundary node, \( i2 \) indexes the leftmost visible boundary node, and \( i3 = 0 \). Rightmost and leftmost are defined from the perspective of \( P \), and a pair of points are visible from each other if and only if the line segment joining them intersects no triangulation arc. If \( P \) and all of the nodes lie on a common line, then \( i1 = i2 = i3 = 0 \) on output.
Author(s)
A. Gebhardt
References
See Also
`tri`, `print.tri`, `plot.tri`, `summary.tri`, `triangles`, `convex.hull`
Examples
```r
data(tritest)
tritest.tr<-tri.mesh(tritest$x,tritest$y)
plot(tritest.tr)
pnt<-list(x=0.3,y=0.4)
triangle.with.pnt<-tri.find(tritest.tr,pnt$x,pnt$y)
attach(triangle.with.pnt)
lines(tritest$x[c(i1,i2,i3,i1)],tritest$y[c(i1,i2,i3,i1)],col="red")
points(pnt$x,pnt$y)
```
tri.mesh
*Create a Delaunay triangulation*
Description
This subroutine creates a Delaunay triangulation of a set of N arbitrarily distributed points in the plane referred to as nodes. The Delaunay triangulation is defined as a set of triangles with the following five properties:
1) The triangle vertices are nodes.
2) No triangle contains a node other than its vertices.
3) The interiors of the triangles are pairwise disjoint.
4) The union of triangles is the convex hull of the set of nodes (the smallest convex set which contains the nodes).
5) The interior of the circumcircle of each triangle contains no node.
The first four properties define a triangulation, and the last property results in a triangulation which is as close as possible to equiangular in a certain sense and which is uniquely defined unless four or more nodes lie on a common circle. This property makes the triangulation well-suited for solving closest point problems and for triangle-based interpolation.
The triangulation can be generalized to a constrained Delaunay triangulation by a call to `add.constraint`. This allows for user-specified boundaries defining a nonconvex and/or multiply connected region.
The operation count for constructing the triangulation is close to \(O(N)\) if the nodes are presorted on X or Y components. Also, since the algorithm proceeds by adding nodes incrementally, the triangulation may be updated with the addition (or deletion) of a node very efficiently. The adjacency information representing the triangulation is stored as a linked list requiring approximately \(13N\) storage locations.
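Property 5 above (the empty-circumcircle condition) is usually tested with the standard in-circle determinant. A sketch, assuming the triangle ABC is counterclockwise (an illustration in Python, not the package's Fortran code):

```python
def in_circle(ax, ay, bx, by, cx, cy, dx, dy):
    """Positive iff D lies strictly inside the circumcircle of the
    counterclockwise triangle ABC; zero iff D is on the circle
    (the determinant form of Delaunay property 5)."""
    m = [[ax - dx, ay - dy, (ax - dx)**2 + (ay - dy)**2],
         [bx - dx, by - dy, (bx - dx)**2 + (by - dy)**2],
         [cx - dx, cy - dy, (cx - dx)**2 + (cy - dy)**2]]
    # 3x3 determinant, expanded along the first row
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

# (0.5, 0.5) is inside the circumcircle of (0,0),(1,0),(0,1); (2,2) is not
print(in_circle(0, 0, 1, 0, 0, 1, 0.5, 0.5) > 0)  # True
print(in_circle(0, 0, 1, 0, 0, 1, 2, 2) < 0)      # True
```

A triangulation is Delaunay exactly when this determinant is non-positive for every triangle and every other node.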
Usage
```r
tri.mesh(x, y = NULL, duplicate = "error")
```
triangles
Arguments
x vector containing x coordinates of the data. If y is missing x should contain two elements $x$ and $y$.
y vector containing y coordinates of the data.
duplicate flag indicating how to handle duplicate elements. Possible values are: “error” – default, “strip” – remove all duplicate points, “remove” – leave one point of duplicate points.
Value
An object of class "tri"
References
See Also
tri, print.tri, plot.tri, summary.tri, triangles, convex.hull, neighbours, add.constraint.
Examples
data(tritest)
tritest.tr<-tri.mesh(tritest$x, tritest$y)
tritest.tr
triangles Extract a list of triangles from a triangulation object
Description
This function extracts a triangulation data structure from a triangulation object created by tri.mesh. The vertices in the returned matrix (let's denote it with retval) are ordered counterclockwise with the first vertex taken to be the one with smallest index. Thus, retval[i,"node2"] and retval[i,"node3"] are larger than retval[i,"node1"] and index adjacent neighbors of node retval[i,"node1"]. The columns trx and arcx, x=1,2,3, index the triangle and arc, respectively, which are opposite (not shared by) node nodex, with trx=0 if arcx indexes a boundary arc. Vertex indexes range from 1 to N, triangle indexes from 0 to NT, and, if included, arc indexes from 1 to NA = NT+N-1. The triangles are ordered on first (smallest) vertex indexes, except that the sets of constraint triangles (triangles contained in the closure of a constraint region) follow the non-constraint triangles.
Usage
triangles(tri.obj)
Arguments
tri.obj object of class "tri"
Value
A matrix with columns node1,node2,node3, representing the vertex nodal indexes, tr1,tr2,tr3, representing neighboring triangle indexes and arc1,arc2,arc3 representing arc indexes.
Each row represents one triangle.
Author(s)
A. Gebhardt
References
See Also
tri, print.tri, plot.tri, summary.tri, triangles
Examples
# use a slightly modified version of data(tritest)
data(tritest2)
tritest2.tr<-tri.mesh(tritest2$x, tritest2$y)
triangles(tritest2.tr)
tripack-internal Internal functions
Description
Internal tripack functions
Details
These functions are not intended to be called by the user.
tritest
tritest / sample data
Description
A very simply set set of points to test the tripack functions, taken from the FORTRAN original. tritest2 is a slight modification by adding runif(-0.1,0.1) random numbers to the coordinates.
References
voronoi
Voronoi object
Description
A voronoi object is created with voronoi.mosaic
Arguments
x, y x and y coordinates of nodes of the voronoi mosaic. Each node is a circumcircle center of some triangle from the Delaunay triangulation.
node logical vector, indicating real nodes of the voronoi mosaic. These nodes are the centers of circumcircles of triangles with positive area of the delaunay triangulation.
If node[i]=FALSE, (x[i],y[i]) belongs to a triangle with area 0.
n1, n2, n3 indices of neighbour nodes. Negative indices indicate dummy points as neighbours.
tri triangulation object, see tri.
area area of triangle i. area[i]=1 indicates a removed triangle with area 0 at the border of the triangulation.
ratio aspect ratio (inscribed radius/circumradius) of triangle i.
radius circumradius of triangle i.
dummy.x, dummy.y x and y coordinates of dummy points. They are used for plotting of unbounded tiles.
Author(s)
A. Gebhardt
voronoi.area
Calculate area of Voronoi polygons
Description
Computes the area of each Voronoi polygon. For some sites at the edge of the region, the Voronoi polygon is not bounded, and so the area of those sites cannot be calculated, and hence will be NA.
Usage
`voronoi.area(voronoi.obj)`
Arguments
`voronoi.obj` object of class "voronoi"
Value
A vector of polygon areas.
Author(s)
S. J. Eglen
See Also
`voronoi`
Examples
```r
data(tritest)
tritest.vm <- voronoi.mosaic(tritest$x, tritest$y)
tritest.vm.areas <- voronoi.area(tritest.vm)
plot(tritest.vm)
text(tritest$x, tritest$y, tritest.vm.areas)
```
**Voronoi Sites at the Border of the Region (to be Rejected)**
**Description**
Find the sites in the Voronoi tessellation that lie at the edge of the region. A site is at the edge if any of the vertices of its Voronoi polygon lie outside the rectangle with corners \((x_{\text{min}}, y_{\text{min}})\) and \((x_{\text{max}}, y_{\text{max}})\).
**Usage**
```r
voronoi.findrejectsites(voronoi.obj, xmin, xmax, ymin, ymax)
```
**Arguments**
- `voronoi.obj`: object of class "voronoi"
- `xmin`: minimum x-coordinate of sites in the region
- `xmax`: maximum x-coordinate of sites in the region
- `ymin`: minimum y-coordinate of sites in the region
- `ymax`: maximum y-coordinate of sites in the region
**Value**
A logical vector of the same length as the number of sites. If the site is a reject, the corresponding element of the vector is set to TRUE.
**Author(s)**
S. J. Eglen
**See Also**
- `tri.dellens`
voronoi.mosaic
*Create a Voronoi mosaic*
Description
This function creates a Voronoi mosaic.
It first creates a Delaunay triangulation, determines the circumcircle centers of its triangles, and connects these points according to the neighbourhood relations between the triangles.
Usage
voronoi.mosaic(x,y=NULL,duplicate="error")
Arguments
x vector containing x coordinates of the data. If y is missing x should contain two elements $x$ and $y$.
y vector containing y coordinates of the data.
duplicate flag indicating how to handle duplicate elements. Possible values are: "error" – default, "strip" – remove all duplicate points, "remove" – leave one point of duplicate points.
Value
An object of class voronoi.
Author(s)
A. Gebhardt
See Also
voronoi,voronoi.mosaic,print.voronoi,plot.voronoi
Examples
# example from TRIPACK:
data(tritest)
tritest.vm<-voronoi.mosaic(tritest$x, tritest$y)
tritest.vm
# use a part of the quakes data set:
data(quakes)
quakes.part<-quakes[(quakes[,1]<=-17 & quakes[,1]>=-19.0 &
quakes[,2]<=182.0 & quakes[,2]>=180.0),]
quakes.vm<-voronoi.mosaic(quakes.part$lon, quakes.part$lat, duplicate="remove")
quakes.vm
voronoi.polygons
*Extract polygons from a Voronoi mosaic*
Description
This function extracts polygons from a `voronoi.mosaic` object.
Usage
```r
voronoi.polygons(voronoi.obj)
```
Arguments
- `voronoi.obj` object of class `voronoi.mosaic`
Value
Returns an object of class `voronoi.polygons` with unnamed list elements for each polygon. These list elements are matrices with columns `x` and `y`.
Author(s)
Denis White
See Also
`plot.voronoi.polygons`, `voronoi.mosaic`
Examples
```r
data(tritest)
tritest.vm <- voronoi.mosaic(tritest$x, tritest$y)
tritest.vp <- voronoi.polygons(tritest.vm)
tritest.vp
```
INDEX
tri.vordist (tripack-internal)
triangles
tripack-internal
tritest
tritest2 (tritest)
voronoi
voronoi.area
voronoi.findrejectsites
voronoi.findvertices (tripack-internal)
voronoi.mosaic
voronoi.polyarea (tripack-internal)
voronoi.polygons
VERIFICATION OF THE MOBILE AGENT NETWORK SIMULATOR – A TOOL FOR SIMULATING MULTI-AGENT SYSTEMS
MARIO KUSEK, KRESIMIR JURASOVIC AND GORDAN JEZIC
University of Zagreb
Faculty of Electrical Engineering and Computing
Department of Telecommunications, Unska 3, Zagreb, HR-10000, Croatia
mario.kusek@fer.hr
kresimir.jurasovic@fer.hr
gordan.jezic@fer.hr
Received (Day Month Year)
Revised (Day Month Year)
Accepted (Day Month Year)
This paper deals with the verification of a multi-agent system simulator. Agents in the simulator are based on the Mobile Agent Network (MAN) formal model. It describes a shared plan representing a process which allows team formation according to task complexity and the characteristics of the distributed environment where these tasks should be performed. In order to verify the simulation results, we compared them with performance characteristics of a real multi-agent system, called the Multi-Agent Remote Maintenance Shell (MA-RMS). MA-RMS is organized as a team-oriented knowledge based system responsible for distributed software management. The results are compared and analyzed for various testing scenarios which differ with respect to network bandwidth as well as task and network complexity.
Keywords: multi-agent system; mobile agent network; verification; simulator.
1. Introduction
In recent years multi-agent systems, based on autonomous software which migrates from host to host while communicating and cooperating with other agents in order to perform operations in place of their owner, have been applied in telecommunications, business software modeling, computer games, and many other fields. A multi-agent system containing mobile and intelligent agents is a promising paradigm for network and distributed systems management. This is particularly true for software operation and configuration in large environments, such as mobile telecommunication networks or Grid networks. Software operation and maintenance of communication systems distributed over the network is a complex procedure. It is known from experience that the same software run on a target system can give results different from those obtained on the test system. Isolating software under maintenance to prevent side effects which can influence normal operation, and supporting the remote performance of operations, are serious problems.
The Remote Maintenance Shell (RMS) is a solution for remote software operations and maintenance [12]. It represents the protected environment for software operations performed by mobile agents. The Multi-Agent Remote Maintenance Shell (MA-RMS) is based on a Mobile Agent Network (MAN) and is organized as a team-oriented multi-agent system. It consists of a master agent and a team of agents, where knowledge of the system is shared between the master and team agents. In systems with a high level of complexity, such as the MA-RMS, it is difficult to verify properties formally or in a real system. In order to check various behaviors of a multi-agent system, such as different agent coordination strategies, creating a simulation is the only viable approach. Simulations are capable of reproducing the functionality of the system and can therefore be used to perform system analysis faster and cheaper.
In this article we present the Mobile Agent Network (MAN) simulator, a tool for simulating multi–agent systems. Agents in the simulator are based on the MAN formal model. It describes a shared plan representing a process which allows team formation according to task complexity and the characteristics of a distributed environment where these tasks should be performed. Verification of the simulator is done by the comparison with real multi–agent system MA–RMS responsible for distributed software management. Various testing scenarios are taken into consideration which differ with respect to network bandwidth, as well as task and network complexity.
The paper is organised as follows: after a presentation of related work in this section, Sec. 2 deals with the Mobile Agent Network formal model. Sec. 3 elaborates the MAN simulator and explains the simulation of agent systems and network nodes. The Multi-Agent Remote Maintenance Shell system, its architecture, and remote software operations are presented in Sec. 4. Sec. 5 describes the laboratory architecture and measurements and compares the performance of the MAN simulator with the results from the MA-RMS system, and Sec. 6 concludes the paper.
1.1. Related Work
Our first step towards simulating a Mobile Agent Network was to study the features of existing simulators. Since MA–RMS was programmed using the JADE agent platform, it was necessary that the simulator also be capable of simulating these agents. The first simulator we analyzed was the Multi–Agent System Simulator (MASS). This simulator focuses on validating different coordinations and adaptive qualities of a multi–agent system in an unpredictable environment [6]. It does not consider an environment where agents migrate from one place to another using computer networks. In order to achieve this, a custom made component which conforms to the Java Agent Framework must be developed [19]. This component would not be compatible with the JADE agent platform. Thus, this simulator was not a viable option. The authors from [11] concentrate on how to simulate agents in a distributed
system and use the network only for simulation. In this simulator, an agent can be moved from one place to another in a 2D environment; implementing a computer network in such an environment is more complicated than implementing the whole simulator from scratch. On the other hand, the authors from [3] have built an event–based simulation framework with a completely connected network. However, different network topologies cannot be modeled. This framework does not conform to our MAN model because the duration time of one operation in the MAN cannot be simulated by modeling behavior using a Distilled StateCharts based approach. Another simulation toolkit is MASON [14, 15]. It is a single–process discrete–event simulation core and visualization toolkit. It is conceived as a core library for building a domain–specific custom simulation library. Its special emphasis is on swarm simulations. In order to simulate the MAN model, the custom simulation library must be developed. For this reason, we divided the process into two phases: independent simulation and integration with MASON. The first phase is described in this paper.
2. The Mobile Agent Network
The Mobile Agent Network (MAN) is used for modeling agent organization and coordination in an agent team. The idea is that the user sends a request to the system. The request is then decomposed into a software operation task graph and executed by mobile software agents. The MAN is represented by a triple \( \{A, S, N\} \), where \( A \) represents a multi–agent system consisting of cooperating and communicating mobile agents that can migrate autonomously from node to node; \( S \) is a set of \( m \) nodes in which the agents perform operations; and \( N \) is a network that connects nodes and assures agent mobility.
Each processing node \( S_i \) has a unique \( address_i \) from the set of addresses, \( address = \{address_1, address_2, \ldots, address_i, \ldots, address_m\} \). An agent is defined by a triple, \( agent_k = \{name_k, address_k, task_k\} \), where \( name_k \) defines the agent’s unique identification, \( address_k \in address \) represents the list of nodes to be visited by the agent and \( task_k \) denotes the functionality the agent provides in the form of \( task_k = \{s_1, s_2, \ldots, s_i, \ldots, s_p\} \) representing a set of assigned elementary operations \( s_i \). When hosted by node \( S_i \in address_k \), \( agent_k \) performs elementary operation \( s_i \in task_k \). If an operation requires specific data, the agent carries this data during migration [9].
A network \( N \) is represented by an undirected graph, \( N = (S, E) \) which denotes network connections and assures agent mobility. The set of processing nodes is denoted as \( S = \{S_1, S_2, \ldots, S_i, \ldots, S_m\} \). \( E \) represents the set of edges \( e_{ij} \) between \( S_i \) and \( S_j \) implying that nodes \( S_i \) and \( S_j \) are connected. The communication time \( c_{ij} \) between tasks \( t_i \) and \( t_j \) (explained later) is associated with edge (link) \( e_{ij} \) which connects these nodes. This way, a delay is incorporated into the communication channel. The following three types of network elements, with corresponding capacities, are defined: processing nodes, switches, and links.
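The triple \( \{A, S, N\} \) and the agent definition above can be captured in a small data model. The following is an illustrative Python sketch (the simulator itself is written in Java, and the class and method names here are assumptions, not the simulator's API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    a: str            # address of node S_i
    b: str            # address of node S_j
    comm_time: float  # communication time c_ij associated with edge e_ij

@dataclass
class Agent:
    name: str         # unique identification name_k
    addresses: list   # list of nodes to be visited, address_k
    task: list        # assigned elementary operations s_1 .. s_p

class MobileAgentNetwork:
    """Sketch of the MAN triple {A, S, N}."""
    def __init__(self, nodes, links, agents):
        self.nodes = set(nodes)                                  # S: processing nodes
        self.links = {frozenset((l.a, l.b)): l for l in links}   # N: undirected edges
        self.agents = {a.name: a for a in agents}                # A: multi-agent system

    def comm_time(self, a, b):
        """Delay on the link between nodes a and b (None if not connected)."""
        link = self.links.get(frozenset((a, b)))
        return link.comm_time if link else None

man = MobileAgentNetwork(
    nodes=["S1", "S2", "S3"],
    links=[Link("S1", "S2", 0.5), Link("S2", "S3", 1.0)],
    agents=[Agent("A1", ["S1", "S2"], ["s1", "s2"])],
)
print(man.comm_time("S1", "S2"))  # 0.5
print(man.comm_time("S1", "S3"))  # None: S1 and S3 share no direct link
```

Storing links under a `frozenset` key makes the lookup order-independent, matching the undirected graph \( N = (S, E) \).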
2.1. Multi–Agent Systems
In systems comprised of people or organizations with different goals, a multi–agent system (MAS) is needed to handle their interactions. An MAS is categorized by the most important aspects of its agents: the degree of their heterogeneity and their communication. Examples of homogeneous communicating agents are agents that are inspired by the behavior of real ants [2]. These systems are typically used for agent–based routing and load balancing in telecommunication networks. Another MAS concept is a mobile agent team. This concept is used for solving complex tasks when individual agents’ expertise, information and/or resources are insufficient for the effective completion or performance of a task. Work regarding team–oriented multi–agent systems has mostly been inspired by two theories: the joint intentions theory [10] and the shared plans theory [5]. Both theories are based on observations of human teamwork and several solutions are based upon these theories. TEAM-CORE [18] introduces sophisticated monitoring techniques, which are separated from actual team–oriented communications. In the RETSINA system [4], agents only interoperate with other agents when they require another agent’s information or services. STEAM [17] integrates the following novel key concepts: team synchronization (to establish joint intentions); constructions for monitoring joint intentions and repair; and decision–theoretic communication selectivity. We propose a multi–agent system organized as a team of agents for remote software operations.
2.2. Team Oriented Multi–Agent Systems
Two multi–agent team oriented concepts are analyzed. The first is a master/slave concept, while the second consists of mobile agents with the same level of knowledge. Agents’ knowledge is composed of: agent capability, team capability, and situation–specific knowledge. Agent capability is knowledge regarding how and what kind of operation the agent can perform. Team capability is knowledge regarding the remaining agents in the team. Each agent knows where all other agents are and what operations they are executing. Situation specific knowledge is the capability of an agent to resolve unexpected situations.
The master/slave team model represents a centralized knowledge concept. An initial request composed of the set of software operations to be performed is submitted to the master agent. The master agent then creates a team of slave agents and sends them to different nodes in a network. All knowledge in this model is concentrated at the master agent who has three types of knowledge: its own capabilities, the team’s capabilities and situation–specific knowledge.
2.2.1. Centralized Knowledge Concept (CKC)
Based on user requests and operations, the master agent must be able to create a team plan, form a team of slave agents, and send them to perform the needed operations. Situation–specific knowledge represents the capability of the master
agent to make decisions in unforeseen situations while executing the team plan. It can modify slave agents' tasks during execution. Each slave agent from the team knows only how to perform one particular operation (i.e., the agent's capability). If a problem arises during execution, the slave agent contacts the master agent. The master agent then takes appropriate action to try to solve the problem. If this is not possible, the master agent informs the user of the problem accordingly. When all operations have been successfully performed, the slave agents report their outcome to the master.
However, the centralized nature of situation–specific knowledge in this concept creates a considerable drawback, illustrated in the following example. Suppose the master agent has created a team plan and has formed a team of slave agents. If the state of certain remote nodes changes in the time interval between these two operations, a slave agent can be faced with an unexpected situation which it cannot resolve. The slave agent must contact the master agent in this situation creating the need for extra coordination.
2.2.2. Distributed Knowledge Concept (DKC)
In a distributed knowledge concept, all agents from the multi–agent system possess the same level of knowledge. Agents execute operations as a team while each individual agent possesses two types of knowledge: its individual capabilities and situation–specific knowledge. Each agent should know what kind of operation it can perform (its capability) and who can help if a problem arises during execution (situation–specific knowledge). A single agent can execute only one task. Furthermore, the agents correspond to the operations defined in Sec. 4.1.
Since a single agent executes only one task, the agent creates another agent which can solve its problem if an unexpected situation arises. Detection of such situations is part of its situation–specific knowledge. Thus, this concept does not require centralized knowledge nor centralized master–agent coordination. However, this requires creating a large number of agents in the system, each with only one task to execute. Agent creation and destruction in this environment generates more time and processor consumption than the centralized concept.
The organization of multi–agent systems suitable for network and distributed systems management considered in [20, 5, 16] is based on shared plans used by agent teams. An intelligent stationary agent is responsible for decomposing a complex management task into $n_t$ elementary operations and ordering these operations. The same agent also collects and interprets data regarding the characteristics of the nodes and the network in order to define a suitable agent team.
2.3. Shared Plan
The following assignments of elementary operations are considered the basic building blocks for identification of the agents' shared plan:
**R1:** a single agent executes all operations on all nodes;
**R2:** an agent executes a single operation on one node only;
**R3:** an agent executes all operations on one node only;
**R4:** an agent executes a specific operation on all nodes;
**R5:** an agent executes a specific operation only once on all nodes;
**R6:** operations are assigned to the agents in order to exploit maximal parallelism of operation. Mutually independent operations are assigned to different agents, in order to execute them simultaneously on nodes with parallel execution supported;
**R7:** a hybrid solution combining R4 and R3. An agent is responsible for a specific operation on all nodes; all other agents execute all other operations, each on a different node;
**R8:** a hybrid solution combining R5 and R3 (specialization of R7 in the way R5 is specialization of R4).
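To make the first few rules concrete, the following Python sketch (illustrative only, not the simulator's code) shows how R1-R4 distribute (operation, node) pairs among agents:

```python
from itertools import product

def assign(rule, operations, nodes):
    """Return {agent_id: [(operation, node), ...]} under the given rule."""
    pairs = list(product(operations, nodes))
    if rule == "R1":   # a single agent executes all operations on all nodes
        return {"A1": pairs}
    if rule == "R2":   # one agent per (operation, node) pair
        return {f"A{i+1}": [p] for i, p in enumerate(pairs)}
    if rule == "R3":   # one agent per node, executing all operations there
        return {f"A{i+1}": [(op, n) for op in operations]
                for i, n in enumerate(nodes)}
    if rule == "R4":   # one agent per operation, executed on all nodes
        return {f"A{i+1}": [(op, n) for n in nodes]
                for i, op in enumerate(operations)}
    raise ValueError(rule)

ops, nodes = ["s1", "s2", "s3"], ["S1", "S2"]
print({r: len(assign(r, ops, nodes)) for r in ("R1", "R2", "R3", "R4")})
# {'R1': 1, 'R2': 6, 'R3': 2, 'R4': 3}
```

The agent counts (1, n_t * m, m, and n_t) illustrate the trade-off between coordination overhead and parallelism that the shared-plan choice controls.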
Operations executed by agents are interdependent so when one operation finishes it must send its results to the operation which depends on it (explained in detail in Sec. 4.1). This is why we defined three types of agent communications:
1. **internal (I),** when the operations are performed by the same agent;
2. **local (L),** when the operations are performed by different agents at the same node;
3. **global (G),** when the operations are performed by different agents at different nodes.
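The three communication types follow directly from agent and node placement. A minimal Python sketch of the classification (the `(agent, node)` tuple layout is an assumption for illustration):

```python
def dialog_type(op_a, op_b):
    """op_a, op_b: (agent, node) placement of the two operations in a dialog."""
    agent_a, node_a = op_a
    agent_b, node_b = op_b
    if agent_a == agent_b:
        return "internal"   # (I) both operations performed by the same agent
    if node_a == node_b:
        return "local"      # (L) different agents at the same node
    return "global"         # (G) different agents at different nodes

print(dialog_type(("A1", "S1"), ("A1", "S1")))  # internal
print(dialog_type(("A1", "S1"), ("A2", "S1")))  # local
print(dialog_type(("A1", "S1"), ("A2", "S2")))  # global
```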
Fig. 1 shows the lifecycle of an agent. Agent creation is characterized by its birth. After birth, the agent migrates to the first node where it has to execute an operation. If the agent carries data, its migration time is longer. Having arrived at the first node, it executes its operation and informs the other agents of its result via dialog. Dialog is the process of sending messages between two operations. The type of dialog depends on coordination (local or global communication). The agent then takes over the next operation on the task list. If this operation is to be executed at the same node, migration is skipped. If, however, the next operation is to be executed on another node, the agent migrates to the other node, executes the operation and then performs a dialog. This process is repeated for all operations from the task list \((task_k)\). The last operation is always the agent's death, by which the agent is disposed of.
The efficiency of the shared plan depends on the specific task submitted (its number, its ordering, and the complexity of its elementary operations) and environmental characteristics. The basic parameters which describe the environment are as follows: operation execution times, agent size, loaded agent size, message size, network topology, link bandwidth, shared plan type and network elements serving times. When an agent sends a message, the corresponding dialog fails if the receiving agent is not at the expected destination. In direct communication, the sender
periodically retries until the dialog is successful. In indirect communication, the sender creates a transport agent which migrates to the destination and delivers the message to the receiving agent upon arrival [1].
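The lifecycle of Fig. 1 can be traced with a small sketch: birth, then for each entry of the task list a migration (skipped when the node does not change), execution, and dialog, ending with death. This is a Python illustration of the described behavior, not the simulator's implementation; the event names are assumptions.

```python
def lifecycle(task_list, start_node=None):
    """task_list: ordered (operation, node) pairs; returns the event trace."""
    events, current = ["birth"], start_node
    for op, node in task_list:
        if node != current:              # migration is skipped on the same node
            events.append(f"migrate:{node}")
            current = node
        events.append(f"execute:{op}")
        events.append(f"dialog:{op}")    # inform other agents of the result
    events.append("death")               # the agent is disposed of
    return events

trace = lifecycle([("t1", "S1"), ("t2", "S1"), ("t3", "S2")])
print(trace)
# ['birth', 'migrate:S1', 'execute:t1', 'dialog:t1',
#  'execute:t2', 'dialog:t2', 'migrate:S2', 'execute:t3',
#  'dialog:t3', 'death']
```

Note that t2 triggers no migration because it runs on the same node as t1, exactly as described above.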
3. The MAN Simulator
3.1. The Simulator Core
The agent structure is defined in the MAN model. The simulator was programmed in Java as part of the PhD thesis [8]. The input data required to run a simulation are the environment characteristics listed above. The simulation result is an operation graph execution matrix. Analysis of this matrix can reveal soft spots in the selected shared plan, which can then be used to improve it. After correcting the coordination model, a simulation with the same parameters is repeated and the results are compared. Generation of the operation graph execution matrix can be omitted from the simulation, in which case the only simulation result is the total execution time. This improves simulation performance and reduces resource consumption.
The class diagram shown in Fig. 2 represents the core of the simulator. The main class is AgentSystem which represents the whole multi-agent system. It contains a list of nodes (class Node). Each node has a queue of agents (class Agent) at that node. An agent contains a queue of elementary operations (class Operation) that must be executed. Each operation has attributes such as: name, input variables, and list of destinations where the execution results need to be sent. Operation input data is stored in a map with the input variable name as a key. The value can be null, which means that the value is not set. Input data is used for preconditions in the operation graph.
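The containment hierarchy of Fig. 2 (AgentSystem holds Nodes, each Node a queue of Agents, each Agent a queue of Operations) can be sketched as follows. This is a Python rendering of the structure described above; the actual simulator is Java, and the method names mirror the text but are assumptions:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Operation:
    name: str
    inputs: dict = field(default_factory=dict)        # input variable -> value (None = unset)
    destinations: list = field(default_factory=list)  # where execution results are sent

@dataclass
class Agent:
    name: str
    operations: deque = field(default_factory=deque)  # queue of elementary operations

@dataclass
class Node:
    name: str
    agents: deque = field(default_factory=deque)      # queue of agents at this node

class AgentSystem:
    """Sketch of the simulator core: the whole multi-agent system."""
    def __init__(self):
        self.nodes = {}

    def create_node(self, name):
        self.nodes[name] = Node(name)
        return self.nodes[name]

    def create_agent(self, name, node_name):
        agent = Agent(name)
        self.nodes[node_name].agents.append(agent)
        return agent

system = AgentSystem()
system.create_node("S1")
a1 = system.create_agent("A1", "S1")
a1.operations.append(Operation("t1", inputs={"start": None}))
print(system.nodes["S1"].agents[0].operations[0].name)  # t1
```

Keeping operation inputs in a map keyed by variable name, with `None` for unset values, matches the precondition mechanism used by the operation graph.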
3.2. **Graph Definition**
In order to run the simulation the operation graph needs to be defined. Let us use the simple graph from Fig. 3 as an example. In this example there are three operations \( t_1, t_2 \) and \( t_3 \). Operations \( t_1 \) and \( t_2 \) are to be executed at the node \( S_1 \) by agent \( A_1 \) while operation \( t_3 \) is to be executed at node \( S_2 \) by agent \( A_2 \).
The program needs to create an agent system and individual nodes (the following program lines 1–3). In line 1, an agent system is created. A node with name \( S_1 \) is created and added to the agent system in line 2.
```java
1 AgentSystem agentSystem = new AgentSystem();
2 agentSystem.createNode( "S1" );
3 agentSystem.createNode( "S2" );
```
The next step is to create agents. The method `createAgent` in the agent system creates an agent with a specified name at a specified node (line 4). The agent name must be unique within the agent system and the specified node must have been created beforehand. For example, an agent named \( A_1 \) is created at node \( S_1 \) in line 4.
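Lines 4–5 are omitted from the original listing; based on the description above, and assuming `createAgent` returns the created `Agent` object (which the variables `a1` and `a2` used in lines 9–11 rely on), they would look roughly like:

```java
4 Agent a1 = agentSystem.createAgent( "A1", "S1" );
5 Agent a2 = agentSystem.createAgent( "A2", "S2" );
```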
Furthermore, the operations need to be created (lines 6–8), distributed to the agents (lines 9–11) and connected (lines 12–15). All operations are implemented by the NormalOperation class. The constructor of the operation has two parameters: the operation name and the name of the node at which the operation will be executed (line 6). In order to assign an operation to an agent, the addOperation method must be called. In line 9, operation t1 is assigned to agent A1 for execution.
```java
6 NormalOperation t1 = new NormalOperation("t1", "S1");
7 NormalOperation t2 = new NormalOperation("t2", "S1");
8 NormalOperation t3 = new NormalOperation("t3", "S2");
9 a1.addOperation(t1);
10 a1.addOperation(t2);
11 a2.addOperation(t3);
```
Now the operations need to be connected. Operation t1 is connected to operations t2 and t3 in lines 14 and 15, respectively. These connections represent the sending of start signals between the connected operations. They are specified as the destination addresses to which the operation sends the result of its execution. The operation which receives the start signal must know that it is supposed to receive such a signal before it is executed. For example, in line 12 it is specified that operation t2 should receive a start signal.
```java
12 t2.createInputVariable("start");
13 t3.createInputVariable("start");
14 t1.addResultDestination(new DestinationAddressAtNodeOperation("start",t2));
15 t1.addResultDestination(new DestinationAddressAtNodeOperation("start",t3));
```
The last step is to start the simulation (line 18). If we want to be able to see how the simulation progressed during execution, an execution logger must be created and registered in the agent system. This is done in lines 16 and 17, respectively. During simulation execution all relevant data is logged in the execution logger. In order to print this data, the method serializeToString converts it to a string (line 19). The structure of logged data is shown in the class diagram in Fig. 4, while part of the collected data from this example is shown in the object diagram in Fig. 5. The method getExecutionTime in the agent system returns the total execution time.
```java
16 SimulationExecutionLogger logger = new SimulationExecutionLogger();
17 agentSystem.setLogger(logger);
18 agentSystem.simulate();
19 String log = logger.serializeToString();
20 System.out.println(log);
21 System.out.println("execution time = " + agentSystem.getExecutionTime());
```
3.3. **Agent System Simulation**
3.3.1. **Simulation Execution**
Results of the simulation are stored in a structure described by the class diagram in Fig. 4. The main class is `SimulationExecutionLogger`, which represents the whole simulation. It contains a list of elements executed at a specific time (class `InTimeExecution`). Each element has a time attribute. Each `InTimeExecution` object contains node executions (class `NodeExecution`). At each node in the system, there can only be one node execution at a time. Each node execution has a list of agent executions (class `AgentExecution`). An agent execution represents the execution of one agent at a specified time. In that time period, the agent can execute any number of actions but only one operation. Actions do not consume node processor time while operations do. This is why only one operation can be executed at a node by one agent at a specified time.
The printout of simulation execution is the following:
```
0 | S1 | A1 | B
0 | S2 | A2 | B
1 | S1 | A1 | t1
2 | S1 | A1 | t2 | CR->start@t2;CSI->start@t2;CSR->start@t3@A2@S2
3 | S1 | A1 | D
3 | S2 | A2 | t3
4 | S2 | A2 | D
```
Columns are separated by the symbol `|`. Individual columns represent: time period, node, agent, operation and actions. The first row indicates that in time period 0Δt agent A1 was executing operation B at node S1. Since there is nothing else in that row, agent A1 did not execute any actions in that time period. There are two special operations: B (agent birth) and D (agent death). Each agent consumes node processor time when it is created or destroyed. Actions are separated by the symbol `;` (see the row at time 2Δt). In this example we can see three actions: `CR->start@t2`, `CSI->start@t2` and `CSR->start@t3@A2@S2`. Possible actions and their notation are described below:
- `CSI->iv@tn` — sending an internal message (between operations executed by the same agent) to operation tn and setting input variable iv;
- `CSL->iv@tn@an` — sending a local message (between agents executing on the same node) to operation tn executed by agent an and setting input variable iv;
- `CSR->iv@tn@an@sn` — sending a remote message (between agents executing on different nodes) to operation tn executed by agent an on node sn and setting input variable iv;
- `CR->iv@tn` — receiving a message in this agent on this node for operation tn and setting input variable iv;
- `TL->sn` — the agent starts migrating towards node sn;
- `TA` — the agent has arrived at the node.
In our example, the first action is denoted as `CR->start@t2` and represents receiving a message for operation t2 and setting input variable start. The second action, `CSI->start@t2`, represents sending an internal message to operation t2 and setting input variable start. The third action, `CSR->start@t3@A2@S2`, represents sending a remote message to operation t3 executed by agent A2 on node S2. As we can see, the order of the actions within a time period is not important.
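As an illustration of how such logged action strings could be decoded, the following standalone sketch parses the `TYPE->iv@tn[@an[@sn]]` notation listed above (the class and method names here are our own, not part of the simulator):

```java
// Standalone sketch (not part of the simulator): decoding one logged
// action string that follows the notation described above.
public class ActionDecoder {
    static String describe(String action) {
        String[] parts = action.split("->", 2);
        if (parts.length < 2) return "agent action: " + action;   // e.g. TA
        String[] a = parts[1].split("@");
        switch (parts[0]) {
            case "CSI": return "internal message: set " + a[0] + " of " + a[1];
            case "CSL": return "local message: set " + a[0] + " of " + a[1]
                               + " at agent " + a[2];
            case "CSR": return "remote message: set " + a[0] + " of " + a[1]
                               + " at agent " + a[2] + " on node " + a[3];
            case "CR":  return "received message: set " + a[0] + " of " + a[1];
            case "TL":  return "migrating towards node " + a[0];
            default:    return "unknown action: " + action;
        }
    }

    public static void main(String[] args) {
        // prints: remote message: set start of t3 at agent A2 on node S2
        System.out.println(describe("CSR->start@t3@A2@S2"));
    }
}
```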
The part of the simulation execution progress which is stored by the logger is shown in the object diagram in Fig. 5. As shown, the SimulationExecutionLogger object has references to InTimeExecution objects, which represent the simulated time periods. Each InTimeExecution object has two references to NodeExecution objects. Each of these objects represents execution at one node. The NodeExecution for node S1 in time period 2Δt has a reference to one AgentExecution, which represents the execution of agent A1. This agent is executing one operation (reference to the OperationExecution object). Each OperationExecution object must have a reference to the operation. In this example, it is the NormalOperation object with its name attribute set to t2. This agent executes three actions in the same time period, indicated with references to ActionExecution objects. These actions represent sending and receiving messages.
The simulation execution starts by handling events in the event queue. The simulator handles the first event from the queue and then checks whether the queue is empty. If it is, the simulation ends; if not, the handling of events in the queue is repeated. Each event has a reference to a certain object in the simulator, e.g. a node, an agent, an operation, etc. Handling such events is represented as an incoming asynchronous message in the sequence diagrams. If an object is planning to send an asynchronous message, it puts a message event in the queue. Creating a node in the agent system puts an event with a Start Node message into the queue. After receiving a Start Node message, the node begins with the execution (Fig. 6).
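The event-dispatch cycle just described can be sketched as a minimal, self-contained loop. The `Event` interface and the `run` method below are illustrative names, not the simulator's actual API; handling an event may enqueue further events, and an empty queue ends the simulation:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of the simulator's event loop described above.
public class EventLoop {
    interface Event { void handle(Queue<Event> queue); }

    static int run(Queue<Event> queue) {
        int handled = 0;
        while (!queue.isEmpty()) {    // an empty queue ends the simulation
            Event e = queue.poll();   // take the first event from the queue
            e.handle(queue);          // handling may enqueue new events
            handled++;
        }
        return handled;
    }

    public static void main(String[] args) {
        Queue<Event> q = new ArrayDeque<>();
        // a "Start Node"-style event that schedules one follow-up event
        q.add(queue -> queue.add(next -> {}));
        System.out.println(run(q)); // prints 2
    }
}
```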
A *Start Node* message dequeues the first agent from the agent queue at the node in question and schedules agent execution by sending a *Schedule Agent* message to it. The agent then fetches the first operation from its operation queue and starts executing it by sending a *Start Operation* message. The operation execution does the work and then sends an *Operation Finished* message to the agent upon completion. The agent then removes the operation from the operation queue and sends an *Agent Finished Operation* message to the node. The node then moves the agent to the end of the agent queue and schedules the next agent at the node. Since there is only one agent in our example, the same one is scheduled again. After the agent receives a *Schedule Agent* message, it fetches the first operation from the operation queue; in our example, this is operation t₁. The agent then starts the operation and, after it is executed, the operation sends a *Send Message* to the agent. After receiving this message, the agent sends the results to the operations that depend on the executed one. In our example, these operations are t₂ and t₃. Operation t₂ is executed by the same agent as operation t₁, so the agent needs to send an internal message to this operation. More precisely, the agent sends a *Receive Message* to operation t₂ and a *Send Internal Message* to operation t₁ as a confirmation that the message was sent. When an operation receives a *Receive Message*, it sets its input data with the value specified in the received message. After all the input data is set to the desired values, the operation can be executed. If the first operation in the agent's operation queue cannot be executed, the agent sends an *Agent Can Not Execute Operation* message to the node, after which the node schedules the next agent in the queue. In our example, after sending internal messages (Fig. 6)
the agent must also send a message to operation \( t_3 \). Since this operation is executed by an agent at another node, a remote message must be sent. To do this, agent \( A_1 \) sends a *Send Remote Message* to the node, which in turn calls the *sendMessage* method in the network object. The network object is responsible for delivering messages through the network. It calls the *sendData* method responsible for sending network messages (details are described in Sec. 3.4). After calling this method, operation \( t_1 \) receives confirmation that the message was sent in the form of a *Send Remote Message*. After a certain time (depending on the size of the message, the network topology, network elements and link capacities) the message arrives at the destination and a *Receive Message* is delivered to operation \( t_3 \). After all messages are delivered, operation \( t_1 \) is finished.
Fig. 7 shows a scenario where an agent migrates from node \( S_1 \) to node \( S_2 \). After the agent ends the previous operation, the *operationFinished* method is called. This method removes the previous operation from the operation queue and calls the
prepareForExecutionNextOperation method, which checks if the agent has to migrate to another node in order to execute the subsequent operation. If so, the agent sends a Request Agent Migration message to the node where it is currently located (node S1). Node S1 calls the migrateAgentTo method in the network object, after which the network confirms the start of migration by responding with an Agent Leave Node message. The node removes the agent from the agent queue and schedules the next agent at that node. In order to migrate an agent to another node, the size of the agent needs to be calculated before the agent is sent through the network. The same mechanism used for sending messages is also used for sending agents. After the agent arrives at the destination node (node S2), the network sends an Arriving Agent message to this node. Node S2 enqueues the arrived agent in the agent queue and starts the node by sending a Start Node message to itself if it is not already running.
3.4. Network Simulation
The N in MAN stands for the physical network which agents use while migrating and communicating. The core of the simulated network is a component which is common to all network elements. A component can be regarded as a black box with a set of connectors. Each connector (marked with the symbol \( C_i \), where \( i \) is the connector number) represents an input/output of the component. Connectors connect different components with logical links (\( LL_i \)). Logical links only logically connect entities in the network and do not introduce any link delay. There are three implementations of a component: the link, the switch and the processing node entities.
Fig. 8 shows an example of a network with one link, one switch and one processing node. The processing node is connected via its \( C_i \) connector to the link’s \( C_j \) connector with a logical link. Furthermore, the link’s \( C_j \) connector is connected with a logical link to the \( C_k \) connector of the switch. The switch entity can have
more than one connector allowing connections with multiple processing nodes or switches.
Processing node \((S_i)\) represents a network node from the MAN model. It contains two elements: a network host \((V_i)\) and an agent node \((AG_i)\). The network host offers communication functions to the agent node. The agent node represents the agent platform running on the processing node.
Link entities represent full-duplex physical links which connect nodes and switches in the network. Each link is limited by its network capacity in accordance with classical queuing theory. A link can be divided into two components: a queue \((TQ_i)\) and a service station \((P_i)\) [7]. The queue is used to store processing requests which cannot be processed at that particular time since the service station is already processing some other request. In the network model, a processing request contains data regarding the agent sent during the process of agent migration or the content of a message. The service station represents an Ethernet card used to send data through the network. The process of sending data over a link is performed in the following manner: first the link receives a processing request from a component connected to it through a connector. After receiving the request, it is stored in the queue. The service station then takes the request from the queue and sends the data to the destination component through the corresponding connector. The time needed to send the data is defined as follows:
\[
t_{si} = \frac{b_i}{C},
\]
where \(t_{si}\) is the service time for request \(i\), \(b_i\) is the size of the data being sent for request \(i\) and \(C\) is the link capacity. In our network model we assume that the queue is infinite and employs the first-come-first-served queuing discipline. Furthermore, we assume there is only one service station at each link.
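As a quick numerical illustration of \( t_{si} = b_i / C \) (the class and units below are our own choice; the formula assumes \(b_i\) and \(C\) use consistent units, here bits and bit/s):

```java
// Numerical illustration of the link service time t_si = b_i / C.
public class LinkServiceTime {
    /** Service time in seconds for `bits` bits over a link of capacity `capacityBitPerSec`. */
    static double serviceTime(double bits, double capacityBitPerSec) {
        return bits / capacityBitPerSec;
    }

    public static void main(String[] args) {
        double bits = 64 * 1024 * 8;                    // a 64 KiB message
        System.out.println(serviceTime(bits, 512_000)); // prints 1.024 (seconds)
    }
}
```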
The switch entity represents a network switch used to transfer data between hosts. The switch is composed of three components: a queue, a service station and delivery logic. The queue and the service station are modeled using the same principles as for the link entity. The only difference is that the switch entity's service station has a deterministic service time. The delivery logic component was introduced since a request needs to be sent to the corresponding outgoing connector (depending on the destination) after processing. It contains a routing table with a list of hosts and the connectors leading to them. The routing table is updated every time data is received from a host not present in the table. The delivery logic is placed after the service station element.
The `sendData` method in the network object is called in order to send one packet (an agent or a message) through the network (Fig. 9). The network object finds the host object (i.e., the representation of the network host) of the source node and calls the `send` method of host $S_1$. The host sends the data to the logical link by calling the `sendToLogicalLink` method in connector $c_1$. The connector forwards the data to logical link $ll_1$ by calling the `sendToConnector` method. Since the logical link has two connectors, it needs to know to which connector the data should be sent; this is why the source connector is a parameter of the called method. Logical link $ll_1$ forwards the data to connector $c_2$ by calling the `receiveDataFromLogicalLink` method. The connector knows from which logical link the data is coming and calls the `receive` method in the link, sending a reference to itself as a parameter. The link calls its `send` method, which generates a `Link Event` message. This event represents the delay on the link. When such an event is handled by the simulator, the time is advanced to the moment the data arrives at the other end of the link. After this, the `finishSending` method is called in order to send the data to the next logical link. This procedure is repeated between the different components in the network until the data arrives at the destination host.
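The hand-off chain described above (host, connectors, logical links, link) can be condensed into a small sketch in which each component simply forwards the data to the next one. All names here are illustrative, not the simulator's classes:

```java
// Sketch (illustrative names only): a forwarding chain in which each
// component passes the data onward until it reaches the destination host.
public class ForwardingChain {
    interface Component { void receive(String data); }

    static String route(String data) {
        StringBuilder trace = new StringBuilder();
        Component host2 = d -> trace.append("S2 got ").append(d);
        Component link  = d -> { trace.append("link->"); host2.receive(d); };
        Component host1 = d -> { trace.append("S1->");   link.receive(d); };
        host1.receive(data);          // starts the hand-off chain
        return trace.toString();
    }

    public static void main(String[] args) {
        System.out.println(route("msg")); // prints S1->link->S2 got msg
    }
}
```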
4. The Multi–Agent Remote Maintenance Shell
Software maintenance is, by IEEE definition, the modification of a software product after delivery. The main purposes of software maintenance are to correct faults, improve performance and/or other attributes, adapt the product to a changing environment and/or improve product maintainability [16]. Software management is also
a very demanding task in distributed systems since such systems are composed of a large number of computers located at geographically different areas. Service management operations, such as software migration, installation, starting or stopping become difficult when they have to be performed on tens or hundreds of computers. In the New Generation Network (NGN), which is characterized by the integration of traditional telecommunication systems and the Internet, this problem becomes even more important since it integrates different types of networks, programming technologies and provides an environment for new innovative personalized services. The NGN also supports mobility of terminals, users and services. In such environments, it is necessary to provide an advanced service management framework which can support such a dynamic and flexible service provisioning process. This process requires service deployment, configuration, control, upgrading, provisioning, monitoring, accounting, billing and self-management. Agents differ from other paradigms because they enable the development of software that is intelligent and can, thus, adapt to changing conditions.
4.1. Remote Software Operations
Remote software operations are started on the client side (management station) while most of the work is done on the server side (remote system). Software under maintenance can be in different states, and maintenance actions must be defined in accordance with these states (Fig. 10). The software migration operation includes all the actions necessary to transfer software from the management station to a remote system. Before migration, the software is in the initiated state. Installation data, specific to that particular piece of software, must be transferred together with the software itself. After the transfer is completed, a delivery report must be sent. After migration, the software is switched from the initiated to the inactive state and is ready for installation. The software installation operation includes all the
actions needed for software installation on the remote system. After installation, the software state is changed to the active/ready state. In this state, the software is active and ready for execution.
Four other operations are also defined:
- The **software starting** operation is composed of actions which change the software state from ready to running. In the running state, the software is being executed. In this state, software can be stopped, but must be restarted to reach the running state again. Possible errors and faults generated during the starting operation can be dangerous. In case of malfunctions, errors should be detected, the software execution stopped, and finally, a report should be sent to the user who initiated the starting operation;
- The **software stopping** operation can be activated only from the running state, in which case the software returns to the ready state. The software can be stopped in case of problems with the operating system or hardware (memory, processor). Furthermore, the user can initiate software stopping. After restarting, the software returns to the running state;
- The **software uninstallation** operation can be started only from the ready state, and after uninstallation, the software returns to the inactive state. Before uninstallation, it is important to determine the relationships between the software which is to be uninstalled and other applications installed on the same system. Some software units could be used by other applications and, thus, their removal could affect regular operation. If such relationships exist, the common part must be retained;
- The **software tracing** operation (passive logging) includes actions for collecting trace data during execution. Tracing can be initiated from the ready state and performed in the running state, or it can dynamically be turned on during software execution. The main idea of tracing is to collect input and output data from maintained software in order to analyze its correctness. Comparing log files from remote and home systems can be useful for debugging. It is important to underline that the first execution of new software should always be started with tracing turned on.
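The life-cycle rules above can be condensed into a small state-machine sketch. The state and operation names are taken from the text; the class itself and its transition table are our own illustration:

```java
// Sketch of the software life cycle described above:
// initiated -(migrate)-> inactive -(install)-> ready <-(start/stop)-> running.
public class SoftwareLifecycle {
    enum State { INITIATED, INACTIVE, READY, RUNNING }

    static State apply(State s, String op) {
        switch (op) {
            case "migrate":   if (s == State.INITIATED) return State.INACTIVE; break;
            case "install":   if (s == State.INACTIVE)  return State.READY;    break;
            case "start":     if (s == State.READY)     return State.RUNNING;  break;
            case "stop":      if (s == State.RUNNING)   return State.READY;    break;
            case "uninstall": if (s == State.READY)     return State.INACTIVE; break;
        }
        throw new IllegalStateException(op + " not allowed in state " + s);
    }

    public static void main(String[] args) {
        State s = State.INITIATED;
        for (String op : new String[] { "migrate", "install", "start", "stop" })
            s = apply(s, op);
        System.out.println(s); // prints READY
    }
}
```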
The following software operations will be considered: software migration, installation, setting execution parameters, starting, stopping, uninstallation and tracing. Each remote software operation is represented as an elementary task with its input and output parameters. A user request for remote software operations is represented by a directed acyclic graph $G = (T, L)$, where $T$ denotes a list of elementary tasks and $L$ represents a set of directed edges which define precedence relations between tasks [13]. Each elementary task from the list $T$ corresponds to one software operation and is defined by $t_i = \{i, s_i\}$, where $i$ is the elementary task number and $s_i$ its service type. Each $s_i$ is defined by $s_i = \{I_i, O_i\}$, where $I_i$ represents a set of input data $I_i = \{i_1, i_2, \ldots, i_{ni}\}$ and $O_i$ a set of output data $O_i = \{o_1, o_2, \ldots, o_{no}\}$. The set of directed edges $L$ is defined by $L = \{l_1, l_2, \ldots, l_i, \ldots, l_{nl}\}$. Each $l_i$ is defined
by \( l_i = \{ t_{io}, o_i, t_{ii}, i_i \} \), where \( t_{io} \) is the task number of the output parameter \( o_i \) and \( t_{ii} \) is the task number of the input parameter \( i_i \). Input parameters which a task may receive in the process of creation are not presented in the task graph. The actual number of parameters (input/output) can only be determined at execution time.
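An encoding of the graph $G = (T, L)$ along these lines might look as follows; the record names are assumptions for this sketch (requires Java 16+ for records), not the actual MA–RMS types:

```java
import java.util.List;

// Illustrative encoding of the request graph G = (T, L) defined above:
// tasks t_i = {i, s_i} and edges l_i = {t_io, o_i, t_ii, i_i}.
public class TaskGraph {
    record Task(int number, String serviceType) {}
    record Edge(int fromTask, String output, int toTask, String input) {}

    public static void main(String[] args) {
        // migrate (t1) reports to install (t2): the report feeds t2's start signal
        List<Task> tasks = List.of(new Task(1, "migrate"), new Task(2, "install"));
        List<Edge> edges = List.of(new Edge(1, "report", 2, "start signal"));
        System.out.println(tasks.size() + " tasks, " + edges.size() + " edge");
    }
}
```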
For example, the install operation \( (s_2) \) has four inputs (Table 1). The first is the host name \((i_1)\), which identifies the host where the task is to be executed. The second is the software name \((i_2)\), i.e. the name of the software that should be installed. Next is the start signal \((i_3)\), which is a trigger for starting the operation. Finally, the report address \((i_4)\) is the address of the agent and task to which the report \((o_1)\) is sent; the report triggers another task to start its execution. As a precondition, the software must already be migrated. If this software is a version, then the testbed (version and
<table>
<thead>
<tr>
<th>Operation</th>
<th>Inputs</th>
<th>Outputs</th>
<th>Preconditions</th>
</tr>
</thead>
<tbody>
<tr>
<td>migrate \((s_1)\)</td>
<td>host name \((i_1)\), software name \((i_2)\), start signal \((i_3)\), report address \((i_4)\)</td>
<td>report \((o_1)\)</td>
<td>if it is a version, then the testbed must be migrated</td>
</tr>
<tr>
<td>install \((s_2)\)</td>
<td>host name \((i_1)\), software name \((i_2)\), start signal \((i_3)\), report address \((i_4)\)</td>
<td>report \((o_1)\)</td>
<td>software migrated; if it is a version, then the testbed must be installed</td>
</tr>
<tr>
<td>set execution parameters \((s_3)\)</td>
<td>host name \((i_1)\), software name \((i_2)\), execution mode \((i_3)\), mode parameters \((i_4)\), start signal \((i_5)\), report address \((i_6)\)</td>
<td>report \((o_1)\)</td>
<td>testbed installed; one version installed for normal or testing mode; 2 or more versions installed for parallel or selective mode</td>
</tr>
<tr>
<td>start \((s_4)\)</td>
<td>host name \((i_1)\), software name \((i_2)\), start signal \((i_3)\), report address \((i_4)\)</td>
<td>report \((o_1)\)</td>
<td>execution parameters set</td>
</tr>
<tr>
<td>stop \((s_5)\)</td>
<td>host name \((i_1)\), software name \((i_2)\), start signal \((i_3)\), report address \((i_4)\), trace delivery address \((i_5)\)</td>
<td>report \((o_1)\), trace \((o_2)\)</td>
<td>software started</td>
</tr>
<tr>
<td>uninstall \((s_6)\)</td>
<td>host name \((i_1)\), testbed or version name \((i_2)\), start signal \((i_3)\), report address \((i_4)\)</td>
<td>report \((o_1)\)</td>
<td>software installed; software stopped; if it is the testbed, then it must not have versions installed</td>
</tr>
<tr>
<td>trace start \((s_7)\)</td>
<td>host name \((i_1)\), software name \((i_2)\), interactive trace \((i_3)\), start signal \((i_4)\), report address \((i_5)\)</td>
<td>report \((o_1)\)</td>
<td>software installed; software started</td>
</tr>
<tr>
<td>trace stop \((s_8)\)</td>
<td>host name \((i_1)\), software name \((i_2)\), start signal \((i_3)\), report address \((i_4)\)</td>
<td>report \((o_1)\), trace data \((o_2)\)</td>
<td>trace started</td>
</tr>
<tr>
<td>planning \((s_9)\)</td>
<td>user data \((i_1)\)</td>
<td>report \((o_1)\)</td>
<td></td>
</tr>
</tbody>
</table>
Table 1. Software Operation Tasks
testbed will be explained in detail in Sec. 4.2) should also be installed.
Table 1 includes two tasks which are not software operations: the planning task ($s_9$) and the report task ($s_{10}$). The planning task is responsible for obtaining getti
4.2. **MA–RMS architecture**
Agents are capable of autonomous actions which allow them to perform operations on network nodes by themselves. For this purpose, we have designed the MA–RMS.
The MA–RMS is an agent–based framework used for remote control of software at remote locations. There is no need for the administrator to be physically present at the actual location of the system being maintained. With centralized management, it is possible to work on several remote systems at the same time. This saves both time and money since operations can be performed simultaneously from a single office.
The basic MA–RMS concept is shown in Fig. 11. It consists of two main components: the MA–RMS Console and the MA–RMS Maintenance Environment (also called the MA–RMS Core). The MA–RMS Console has two components: a Management Console Agent and an MA–RMS GUI. An MA–RMS user uses the MA–RMS GUI to define operations which have to be performed at remote locations. It can also be used to track changes on remote systems. After the user starts execution of the defined tasks, they are dispatched to the Management Console Agent. This agent is a centralized agent used to supervise the process of software maintenance. Based on the differences between the current state of the remote systems and their desired state, it generates a set of operations which need to be executed in order to get the systems into the desired state. The generated operations are then distributed to a set of multi–operation agents according to a sub–team–coordination plan. This plan defines how to distribute operations among agents and
is used to reduce network traffic and node load. Deciding which plan to use depends on network topology and node types.
There is also an automated version of the MA–RMS Console, referred to as AutoRMS. This version provides interfaces which allow external applications to schedule maintenance operations at the remote location. The application has to define the operations to be performed, software components, and a list of remote locations.
The Maintenance Environment must be preinstalled on remote network nodes in order for them to be managed. It also handles a local database which stores data regarding the installed software. It consists of four agents, where each agent is responsible for a certain set of functionalities. This increases parallelism and reduces the complexity of agents. The agents and their functionalities are described below:
- The Management Database Agent stores information regarding the installed software, its status and execution parameters. It registers and de-registers MA–RMS Consoles and notifies them when the software status changes;
- The Cooperation Layer Agent is used for software migration. It is responsible for taking software from multi-operation agents and storing it in a local file repository;
- The Installation Dock Agent supports software installation and un-installation. During the software installation process all the necessary components are fetched from a local repository and installed within the Maintenance Environment according to the installation scripts. Furthermore, information regarding the software is sent to the Management Database Agent;
- The Application Handler Agent is responsible for software starting and stopping as well as trace activation and deactivation.
Fig. 11 shows only one MA–RMS Console and one Maintenance Environment but in reality there can be any number of them.
Each software component must be adapted to the MA–RMS execution model before it can be deployed on a remote network node. MA–RMS software consists of two components: the Application Testbed and the Application Version. The Application Testbed is an interface between the Application Version and the Maintenance Environment. It conveys all data from the outer environment to the Application Version and vice versa. It implements three APIs:
- The ResourceManager API which represents the input/output layer and provides a connection to system resources. Communication between software and the environment is handled by the Resource Manager;
- The Version Handler, which is responsible for starting and stopping software versions;
- The Trace Support API, which is used for collecting, modeling and delivering trace data to the Maintenance Environment.
The Application Version provides the actual functionality of the application.
The MA–RMS is organized according to a team-oriented hybrid knowledge-based concept. It includes a master agent and team agents with knowledge shared between them. The Management Console Agent is the master agent, which has the same knowledge as master agents in the CKC: its own capabilities and the team's capabilities. Team agents have knowledge such as that possessed by agents in the DKC: their own capabilities and situation-specific knowledge. After starting the team, the master agent no longer cares about unexpected events during execution; namely, team agents should know how to handle such events using their situation-specific knowledge. Slave agents in the MA–RMS are implemented as multi-operation agents.
4.3. Multi-Operation Agents
Software operations and all management actions in the MA–RMS are performed by mobile agents. They are responsible for software migration, installation, starting, stopping, un-installation, running and tracing. Instead of sending messages between the management station and the remote system, tasks are decomposed into operations which are distributed among multi-operation agents. Agents are equipped with the required knowledge (e.g. the communication protocol), data (e.g. software packages which have to be installed) and access rights to the remote system. Agents then migrate to the remote location and perform the operations locally.
There are two ways of performing maintenance operations. The first is by using one universal agent capable of executing all possible operations the system supports. In complex systems, these agents can become too heavy and complex due to a large number of maintenance actions. Alternatively, a set of specialized cooperating and communicating agents, responsible for only a subset of possible operations, could be used. In the MA–RMS, we have adopted a hybrid principle combining the two.
Multi-operation agents (MOA) are generic carriers that can handle multiple operations. In general, these agents are only carriers, while the logic behind performing the actual operations is loaded during the process of distributing operations among multi-operation agents. This is performed by the Management Console. This approach is used since it is more flexible, allows easier system modeling and provides better control over the sequence in which operations will be executed. If needed, it is also possible to generate operations in such a way that we create specialized agents, universal agents or any combinations of the two.
5. Laboratory Measurements
The described MAN simulator was used in experiments which simulate the execution of operations in the MA–RMS system. Conducting this experiment was necessary to be able to compare with the results obtained by the actual multi-agent system and, thus, validate the results obtained by the simulator. If the results from this experiment were comparable with those obtained by the simulator, this would prove that the simulator could successfully be used to model the behavior of multi-agent systems based on MAN.
5.1. Laboratory configuration
Fig. 12 shows the configuration of the laboratory where the experiments were performed. Nine PCs were used: eight hosted the MA–RMS servers and one hosted the ScenarioExecutionAgent and the AutoRMS station. The configuration of the PCs is shown in Table 2.
The various testing scenarios differed with respect to three parameters: the number of operations to be performed (ranging from one to eight software installation operations), the number of PCs on which these operations were to be performed (ranging from one to eight PCs) and the network bandwidth (512 Kbit/s, 1 Mbit/s and 10 Mbit/s networks), making a total of 192 measurements. Such a large number of measurements required some automation, both to reduce the time needed to perform them and to reduce the possibility of errors caused by human interaction. Automation of the first two parameters was performed by the ScenarioExecutionAgent. It was responsible for scheduling the installation operations and keeping track of the results of the experiments. The network bandwidth parameter was
<table>
<thead>
<tr>
<th>Configuration type</th>
<th>Configuration value</th>
</tr>
</thead>
<tbody>
<tr>
<td>PC Model</td>
<td>Dell OptiPlex 170L</td>
</tr>
<tr>
<td>Processor</td>
<td>Intel Celeron 2.66 GHz</td>
</tr>
<tr>
<td>Physical Memory</td>
<td>512MB</td>
</tr>
<tr>
<td>Operating System</td>
<td>Windows XP Professional</td>
</tr>
<tr>
<td>Java Version</td>
<td>Sun JDK 1.5.0_09_b01</td>
</tr>
<tr>
<td>Jade Version</td>
<td>3.3</td>
</tr>
</tbody>
</table>
changed manually since we didn’t find any way to control it using an agent. To simulate different networks we used network bandwidth limiters on all the PCs included in the experiment.
A single scenario measurement was performed in the following way:
- The ScenarioExecutionAgent would read the scenario parameters from the XML configuration file (the number and the location of the software, and the number and the IP address of the PCs where the MA-RMS servers were installed);
- After gathering the parameters of the experiment, the agent would generate an installation request and send it to the AutoRMS station;
- The AutoRMS station would receive and process the request. After processing the request, the station would generate the operations needed to perform the installation request (every installation request consists of several sub-operations);
- The corresponding operations would be scheduled for execution by the Multi-OperationAgents (MOA) according to the scheduling algorithm used;
- The MOA would migrate to the MA-RMS server/s and perform operations;
- Upon completion of the scenario, the AutoRMS would send a notification message to the ScenarioExecutionAgent with the time needed to perform it;
- If there were additional scenarios left, this process would be repeated (There was a total of 192 scenarios).
The AutoRMS station was responsible for measuring the time needed to perform a scenario. Time measurement was started before scheduling the operations for execution and stopped upon operation completion.
5.2. Simulator parameters measurement
Before simulating the multi-agent system using the simulator, we measured the necessary simulation parameters on the real system. The parameters measured were as follows:
- The time needed to perform a single migrate operation;
- The time needed to perform a single installation and configuration operation;
- Traffic generated while migrating the MOA;
- Traffic generated while migrating the MOA with one migrate operation;
- Traffic generated while migrating the MOA with one installation and configuration operation.
To measure the time needed to perform operations, a modification of the MOA was required. Agents were modified with a timer that calculated the elapsed time. The timer was initiated before the operation was scheduled for execution by the agent and stopped after the agent received notification from the MA-RMS server that its request was completed.
Traffic generated by the migration process was measured using the Ethereal Network Protocol Analyzer. During a scenario, this tool would capture the network traffic on the PCs' network interface. The traffic generated was in the form of a Java RMI (Java Remote Method Invocation) stream, used by the Jade agent platform to migrate agents between two PCs. The packet analyzer was then used to calculate the stream size.
The traffic parameters required to run the simulator were calculated as follows:
\[ S_{mo} = S_{amo} - S_a \]
\[ S_m = S_a + N \cdot S_{mo} \]
\[ S_{io} = S_{aio} - S_a \]
\[ S_i = S_a + N \cdot S_{io} \]
where \( S_{amo} \) is the traffic generated by the MOA with one migrate operation, \( S_a \) is the traffic generated by the MOA without any operations, \( S_{mo} \) is the size of the migrate operation, \( S_m \) is the size of the MOA with multiple migration operations, \( N \) is the number of software components, \( S_{aio} \) is the traffic generated by the MOA with one install operation, \( S_{io} \) is the size of the installation operation and \( S_i \) represents the size of the MOA with multiple installation operations. The time needed to migrate a MOA depends on the size of the agent and is calculated as follows:
\[ t = \frac{S}{B} \]
where \( t \) represents the time needed to migrate the MOA (in seconds), \( S \) is the agent size (in bytes) and \( B \) represents the network bandwidth (in bytes per second). The values obtained for the size of the agent and the time needed to perform operations are shown in Table 3, where \( t_m \) represents the time needed to perform the migrate testbed operation, \( t_{mv} \) is the migrate version operation, \( t_{it} \) is the install testbed operation, \( t_{iv} \) is the install version operation and \( t_e \) is the time needed to configure the application.
5.3. Comparison of results
In this section, the simulation results obtained by the MAN simulator (with and without network components) are compared with the results obtained by the MA–RMS. In Figs. 13, 14 and 15 the x-axis represents the number of PCs on which the
<table>
<thead>
<tr>
<th>Table 3. Measured parameters</th>
</tr>
<tr>
<th>Parameter name</th>
</tr>
</thead>
<tbody>
<tr>
<td>\( S_a \)</td>
</tr>
<tr>
<td>\( S_{amo} \)</td>
</tr>
<tr>
<td>\( S_{aio} \)</td>
</tr>
<tr>
<td>\( t_m \)</td>
</tr>
</tbody>
</table>
software is to be installed in the MA–RMS Maintenance Environment, the y-axis represents the number of software installation requests and the z-axis shows the time needed to complete a single scenario. The accuracy of the simulator is shown in Figs. 16 and 17 and is calculated by the following formula:
\[ RE = \left( \frac{t_{rms} - t_{man}}{t_{rms}} \right) \times 100 \]
where \( RE \) is relative error in % between the results. \( t_{rms} \) is the total execution time of the experiment in the RMS system and \( t_{man} \) is the total execution time of the experiment in the MAN simulator. Two versions of the simulator were analysed:
Fig. 13. Total execution time for 512 kbit/s
Fig. 14. Total execution time for 1 Mbit/s
without the network components and with the network components.
The results show that the total execution times of both the MAN simulator and the MA–RMS increase linearly with the number of software installation requests and Maintenance Environments. They also show that the MAN simulator with network components matches the results obtained by the MA–RMS more closely. The REs are 7.9% (for 512 kbit/s network bandwidth), 7.6% (for 1 Mbit/s network bandwidth) and 10.5% (for 10 Mbit/s network bandwidth).
Fig. 15. Total execution time for 10 Mbit/s
Fig. 16. RE for MAN without network components
The MAN simulator without network components has an RE of 16.6% (for 512 kbit/s network bandwidth), 18.8% (for 1 Mbit/s network bandwidth) and 16.9% (for 10 Mbit/s network bandwidth).
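The RE formula above can be sketched as a one-line helper; the timing values in the example are hypothetical, not measurements from the experiments.

```python
def relative_error(t_rms, t_man):
    """RE = (t_rms - t_man) / t_rms * 100, in percent."""
    return (t_rms - t_man) / t_rms * 100.0

# e.g. a scenario taking 200 s on the real MA-RMS and 184 s in the simulator:
print(relative_error(200.0, 184.0))  # 8.0
```
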
The only anomaly in the results occurs for scenarios with few software components and a small number of Maintenance Environments. In these scenarios, the difference increases up to a maximum value of 81% in the worst case (i.e. the scenario with one software component, one Maintenance Environment and a network bandwidth of 10 Mbit/s). The cause of this anomaly is still unknown. It could be caused by a component (or components) of the MA–RMS, by the agent platform, or by some initialization process. Since the total execution time in these scenarios is quite small, its effect can have a significant influence on the results. The accuracy of the simulator increases with the number of software components and Maintenance Environments.
The accuracy of the simulator is even better if we negate the influence of assumptions made by the simulator. The simulator does not take into consideration the influence of other PCs in the network and the software installed on them, which can generate traffic on the network interface. Measurements performed in the laboratory indicated that, on average, these components generate 80 KB of traffic per minute (not counting traffic generated by the agent platform while communicating with other components in the agent system), which accounts for 3% of the total traffic generated by the MA–RMS.
6. Conclusion and Future Work
In this paper, we presented the Mobile Agent Network simulator, a tool for simulating multi-agent systems. The agents in the simulator are based on the MAN formal model and organised as a team-oriented multi-agent system. The team consists of a master agent and a team of agents, where knowledge of the system is shared between the master and the team. Shared-plans described in the paper enable team formation according to task complexity and the characteristics of the distributed environment where they are to be performed. A detailed description of the simulator is given, including details regarding the simulator core, graph definition and simulation execution. The simulator is verified by comparison with the Multi-Agent Remote Maintenance Shell (MA-RMS) system, a team-oriented knowledge-based system responsible for distributed software management. The laboratory configuration used is described, along with various testing scenarios which differ with respect to the following three parameters: the number of operations to be performed, the number of PCs (nodes) on which these operations will be performed, and the network bandwidth. Two versions of the MAN simulator were compared: one without network components and one with them. The analysis shows that the RE of the MAN simulator without network components is 17.43% on average, while the RE of the MAN simulator with network components is 8.66% on average. As the number of software components and remote locations increases, the RE of the MAN simulator decreases (it is below 3% for scenarios with eight software components and remote locations).
In the future we plan to integrate the simulator into the MASON framework in order to improve RE.
Acknowledgements
This work was carried out within the research project 036-0362027-1639 "Content Delivery and Mobility of Users and Services in New Generation Networks", supported by the Ministry of Science, Education and Sports of the Republic of Croatia.
References
Mario Kusek et al.
Mechanism and Algorithm for Indirect Schema Mapping Composition
Bo Wang
College of Information System and Management, National University of Defense Technology, Changsha, China
wbsteven@163.com
Bo Guo
College of Information System and Management, National University of Defense Technology, Changsha, China
Abstract—There are a large number of indirect schema mappings between peers in a network. To improve the efficiency of data exchange and queries, indirect mappings need to be composed. Direct mappings can be derived directly from the constraints defined between schemas, but this does not hold for the composition of indirect mappings. We define the combination operations of schema elements in indirect mappings and give an expression for indirect mappings. We analyze the composition of indirect mappings, propose a strategy, named schema element back, to solve the problem of indirect mapping composition, and give an indirect mapping composition generation algorithm based on this strategy. Experiments show that indirect mapping composition can improve the efficiency of data exchange and that, compared with other non-full mapping composition generation algorithms, the indirect mapping composition generated by our algorithm based on the schema element back strategy can completely eliminate the influence of the middle schema without reducing composition efficiency.
Index Terms—indirect mapping composition; combination operation; schema element back
I. INTRODUCTION
Many data management tasks, such as data translation, information integration, and database design, require manipulation of database schemas and mappings between them[1]. Schema mappings define the relationship between instances of two given schemas; mappings can be divided into direct and indirect ones according to the relationships between elements. Indirect mappings exist widely, especially when schemas evolve: most mappings that used to be direct become indirect. In indirect mappings, elements between schemas are not directly associated, but are related by some algebra operations.
Mapping composition refers to combining two mappings into a single one, which is useful for a variety of data management problems. Figure 1 shows a topology of a data sharing network, where each node can be a data source, an integrated data center or a logical mediator. When data are exchanged from node E to the integrated schema G, instances of E should be translated to G along the mapping sequence E —— C, C —— A and A —— G, and chaining mappings at run-time may be expensive because we may need to follow long and possibly redundant paths in the network. Note that different paths between a pair of nodes may yield different sets of query answers. To ensure the reliability of data exchange, we usually need to find all of the possible paths and execute each possible data transplantation process according to those paths. The cost of performing such work will be quite large when there are a large number of data sources. If certain nodes leave the network, we may also lose mapping paths. Addressing these issues raises several static analysis questions regarding the network of mappings, and mapping composition lies at the core of them all. By pre-composing a select set of mapping chains in the network, we can directly execute the data exchange between the source schema and the target schema, which leads to significant run-time savings. Moreover, mapping composition arises in many practical settings such as data integration, schema evolution and database design:
In data integration, a query needs to be composed with a view definition. If the view definition is expressed using global-as-view (GAV), then this is an example of composing two functional mappings: a view definition that maps a database to a view, and a query that maps a view to a query result.
In schema evolution, a schema evolves to a new one, and the relationships between the two schemas may be described by a mapping. The original mappings on the old schema can be updated by mapping composition.
In a database design process, schema may evolve frequently via a sequence of incremental modifications. This produces a sequence of mappings between successive versions of the schema until the desired schema is reached, so a mapping from the original schema to the final one is needed, which can be obtained by composing the mappings between the successive versions of the schema. With this mapping, the designer can migrate data from the old schema to the new schema.
Another motivation for mapping composition comes from the framework of model management[2]. One of the basic operators in model-management algebra is composition, and mappings are treated mostly as syntactic objects. Mapping management can be achieved by defining the composition and inverse operations.
Query composition is widely supported by most commercial data management tools, while mapping composition still rests on research and experiments. If the mappings are functional, we can perform the composition similarly to query composition. However, there are a large number of non-functional mappings, most of which are indirect mappings, and, unlike queries, indirect mappings cannot be composed directly.
In this paper, we mainly discuss the composition problems of indirect mappings. The rest of the paper is organized as follows. In section 2, we review related work on mapping composition. In section 3 we introduce some concepts of mapping composition. In sections 4 and 5, we first define indirect mapping composition and then propose an indirect mapping composition algorithm. Section 6 presents the experimental results, and the last section concludes the paper.
II. RELATED WORKS
Mapping composition is a challenging problem[1] whose difficulty is quite sensitive to the expressiveness of the allowed mappings, and there have been several studies of this problem[4,5,6,7,8], including methods based on tuple-generating constraints and schema transformation[9,10,6], etc. Mapping composition methods based on schema transformation define complicated schema transformation operations, which only support the computation of direct mappings, not indirect ones with complex algebra operations between schema elements. Model management is a generic approach to solving problems of data programmability where precisely engineered mappings are required[2]. A model management system supports the creation, compilation, reuse, evolution, and execution of mappings between schemas represented in a wide range of meta-models, where the composition operation is one of the ways to realize mapping reuse.
Research on indirect mappings mostly considers the definitions and basic operations of non-direct element-to-element mappings between the source and target schemas[11], without any further discussion of composition. Madhavan and Halevy[3] showed that the composition of two given mappings expressed as GLAV formulas may not be expressible in a finite set of first-order constraints. Fagin et al.[4] proved that the composition of certain kinds of first-order mappings may not be expressible by any first-order mappings, even by an infinite set of constraints, because the mapping language is not closed under composition. Nash et al.[8] showed that for certain classes of first-order languages, it is undecidable to determine whether there is a finite set of constraints in the same language that represents the composition of two given mappings. In [4], Fagin et al. demonstrated that the second-order mapping language of second-order source-to-target tuple-generating dependencies (denoted by SOtgd) is closed under composition, and they also presented a composition algorithm for this language. The second-order language uses existentially quantified function symbols, which essentially can be thought of as Skolem functions. A tuple-generating dependency specifies an inclusion of two conjunctive queries, $Q_1 \subseteq Q_2$. It is called source-to-target when $Q_1$ refers only to symbols from the source schema and $Q_2$ refers only to symbols from the target schema. However, for implementation, this kind of language is not supported by standard SQL-based database tools.
Yu and Popa extended Fagin's algorithm to handle nesting and applied it to some schema evolution scenarios; they also discussed optimizations of the composition result. Nash et al.[8] studied the composition of first-order constraints that are not necessarily source-to-target. They considered dependencies that can express key constraints and inclusions of conjunctive queries $Q_1 \subseteq Q_2$, where $Q_1$ and $Q_2$ may reference symbols from both the source and target schema, but the composition of constraints in this language is not closed, and whether a composition result exists is undecidable. They also gave an algorithm that produces a composition.
Bernstein et al.[3] explored the mapping composition problem for constraints that are not restricted to being source-to-target, which extends the work of Nash and Fagin. They applied a "left-compose" step, which allows the algorithm to handle mappings on which the algorithm in [8] fails. They used an algebra-based instead of a logic-based language to express mappings, which can be directly supported by database tools. We call their method the "best effort composition approach" (denoted by BECA). BECA tries to eliminate as many relation symbols from the middle schema as possible. Given schemas $\sigma_1$, $\sigma_2$, and $\sigma_3$, mappings computed by BECA have the form $\sigma_1 \rightarrow \sigma_2' \rightarrow \sigma_3$, where $\sigma_2' \subseteq \sigma_2$. That means, in some cases it may be better to successfully eliminate some symbols from $\sigma_2$ rather than insist on either eliminating all of them or failing. Since some elements of the middle schema may remain at the end of BECA, the result is not a perfect composition.
A main reason that $\sigma_2'$ is not empty after BECA is that there may be indirect mappings. In this paper, we use a strategy named schema element back to solve this problem. Although this method expands the mappings over the source schema $\sigma_1$, it can completely eliminate $\sigma_2$, and a composed mapping that depends only on the source and target schemas is generated.
III. MAPPING COMPOSITION
Definition 1 Let $R_i$ and $R_j$ be two binary relations. The composition $R_i \circ R_j$ of $R_i$ and $R_j$ is the binary relation $R_i \circ R_j = \{(x, z) : \exists y\,((x, y) \in R_i \wedge (y, z) \in R_j)\}$.
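Definition 1 can be illustrated directly on relations represented as sets of pairs (a minimal sketch of our own, not code from the paper):

```python
def compose(r1, r2):
    """R1 ∘ R2 = {(x, z) : exists y with (x, y) in R1 and (y, z) in R2}."""
    return {(x, z) for (x, y) in r1 for (y2, z) in r2 if y == y2}

r1 = {("a", 1), ("b", 2)}
r2 = {(1, "X"), (2, "Y"), (3, "Z")}
print(sorted(compose(r1, r2)))  # [('a', 'X'), ('b', 'Y')]
```

Note that pairs of `r2` whose first component is never produced by `r1` (here `(3, "Z")`) simply do not contribute to the composition.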
Let $M = (S, T, \Sigma)$ be a schema mapping, where $S$ and $T$ are schemas with no relation symbols in common and $\Sigma$ is a set of logical formulas over $\langle S, T \rangle$. Then let $Inst(M)$ be the binary relation between instances over $S$ and $T$ defined by $\Sigma$. We define the composition of two schema mappings $M_i$ and $M_j$ using the composition of the binary relations $Inst(M_i)$ and $Inst(M_j)$.
Definition 2[4] Let $M_{12} = (S_1, S_2, \Sigma_{12})$ and $M_{32} = (S_2, S_3, \Sigma_{32})$ be two schema mappings such that the schemas $S_1, S_2, S_3$ pairwise have no relation symbols in common. A schema mapping $M_{13} = (S_1, S_3, \Sigma_{13})$ is a composition of $M_{12}$ and $M_{32}$ if $Inst(M_{13}) = Inst(M_{12}) \circ Inst(M_{32})$, which means that $Inst(M_{13}) = \{\langle I_1, I_3 \rangle : \exists I_2\,(\langle I_1, I_2 \rangle \in Inst(M_{12}) \wedge \langle I_2, I_3 \rangle \in Inst(M_{32}))\}$, where $I_i$ is an instance of $S_i$ with $1 \leq i \leq 3$.
Figure 2 shows an example of direct mapping composition, where $S_2$ is the middle schema. Data exchange between $S_1$ and $S_3$ is achieved by composing $M_{12}$ and $M_{32}$. Fagin et al. have proved the existence of mapping composition, and Alan Nash[1] gives the prerequisite for the existence of mapping composition, namely that the two mappings share a common schema, which is consistent with Fagin's result. Both of them proved the existence of mapping composition via the relationship between queries and mappings, and instance isomorphism.
Example 1: Consider the following schemas $S_1, S_2$, and $S_3$. $S_1$ consists of a single binary relation symbol $Maint$, which associates the name of a maintenance person with the equipment he maintains. Schema $S_2$ consists of a similar binary relation symbol $Maint'$, which is a copy of $Maint$, and of an additional binary relation symbol $MP$, which associates each person with an id. Schema $S_3$ consists of one binary relation symbol $Reg$, which associates person ids with the equipment the persons take. Consider the following schema mappings $M_{12} = (S_1, S_2, \Sigma_{12})$ and $M_{32} = (S_2, S_3, \Sigma_{32})$, where $\Sigma_{12} = \{\forall n \forall c\,(Maint(n, c) \rightarrow Maint'(n, c)),\ \forall n \forall c\,(Maint(n, c) \rightarrow \exists s\, MP(n, s))\}$ and $\Sigma_{32} = \{\forall n \forall s \forall c\,(MP(n, s) \wedge Maint'(n, c) \rightarrow Reg(s, c))\}$.
Given the instances of $S_1, S_2, S_3$ as follows:
$I_1 : Maint = \{(A, Eq1), (A, Eq2)\}$
$I_2 : MP = \{(A, 0001), (B, 0002)\}$
$I_3 : Reg = \{(0001, Eq1), (0001, Eq2)\}$
According to $\Sigma_{12}$ and $\Sigma_{32}$, we have $\langle I_1, I_2 \rangle \in Inst(M_{12})$ and $\langle I_2, I_3 \rangle \in Inst(M_{32})$. By Definition 2, if there is a mapping $M_{13}$ that satisfies $\langle I_1, I_3 \rangle \in Inst(M_{13})$, then $M_{13}$ is the composition of $M_{12}$ and $M_{32}$, and $\Sigma_{13}$ can be expressed as $\forall n \forall c\,(Maint(n, c) \rightarrow \exists s\, Reg(s, c))$.
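The instance-level relationship in Example 1 can be checked with a small script. We assume here, purely for illustration, that $Maint'$ copies $Maint$ and that $Reg$ is obtained by joining $MP$ with $Maint'$ on the person name; the join rule is our reading of the example, not a formula quoted from the paper.

```python
# Example 1 instances; the join below is an illustrative assumption.
maint = {("A", "Eq1"), ("A", "Eq2")}     # I1: Maint
mp = {("A", "0001"), ("B", "0002")}      # I2: MP
maint_copy = set(maint)                   # I2: Maint' (copy of Maint)

# Associate ids with equipment by joining MP and Maint' on the name.
reg = {(i, e) for (n, i) in mp for (m, e) in maint_copy if n == m}
print(sorted(reg))  # [('0001', 'Eq1'), ('0001', 'Eq2')] -- matches I3
```

Person B contributes nothing to $Reg$ because B maintains no equipment in $I_1$, which is consistent with the instance $I_3$ given above.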
Theorem 1 The mapping composition operation $Inst(M_{12}) \circ Inst(M_{32}) = Inst(M_{13})$ is closed under homomorphism of instances.
Proof: Given $\langle I_1, I_3 \rangle \in Inst(M_{13})$ and instances $I_1'$ and $I_3'$ satisfying $I_1' \cong I_1$ and $I_3' \cong I_3$ ("$\cong$" denotes homomorphism), we just need to show that $\langle I_1', I_3' \rangle \in Inst(M_{13})$. Since $Inst(M_{12}) \circ Inst(M_{32}) = Inst(M_{13})$, there exists an instance $I_2$ such that $\langle I_1, I_2 \rangle \in Inst(M_{12})$ and $\langle I_2, I_3 \rangle \in Inst(M_{32})$. Because $Inst(M_{12})$ and $Inst(M_{32})$ are closed under homomorphism, we have $\langle I_1', I_2 \rangle \in Inst(M_{12})$ and $\langle I_2, I_3' \rangle \in Inst(M_{32})$, hence $\langle I_1', I_3' \rangle \in Inst(M_{12}) \circ Inst(M_{32}) = Inst(M_{13})$. This was to be shown.
IV. INDIRECT MAPPING
A direct mapping can be given by a source-to-target tuple-generating dependency (denoted by S2Tgd): $\forall x(\phi_S(x) \rightarrow \exists y\, \psi_T(x, y))$, where $\phi_S$ and $\psi_T$ are formulas over $S$ and $T$ respectively, and $x, y$ are sets of variables. A full S2Tgd mapping can be expressed as $\forall x(\phi_S(x) \rightarrow \psi_T(x))$, and a mapping expressed as $\forall x(\phi_S(x) \rightarrow \phi_{R_1}(x) \wedge \cdots \wedge \phi_{R_k}(x))$ is equivalent to the set of full S2Tgds $\forall x(\phi_S(x) \rightarrow \phi_{R_i}(x))$, where $i = 1, \ldots, k$ and each $\phi_{R_i}(x)$ is an atomic formula.
A. Describe indirect mappings
In indirect mappings, the algebra operation results of the elements in both the source and target schema are related, as shown in Figure 3.

Figure 3. Indirect mappings
In element correspondence "[1]" (see Fig. 3), nodes in the pane of the source schema are mapped to an element in the target schema after some algebra operations. The algebra operations on schema elements include some basic ones, like Union, Cartesian, Projection and Selection, and some compositions, like Intersection, Join, Natural Join, Division, etc.
**Definition 3** Indirect element correspondence (denoted by ICOE) is the set of element correspondences in which elements are mapped after some algebra operations.
Let $\phi_S$ be a formula on $S$, and $S_i$ be a scheme of $S$. The algebra operations have six forms, as shown below, where 1–4 are the basic ones and 5–6 can be derived from 1–4:
- **Union** Let $\phi_S(x) = S_i(x) \lor \cdots \lor S_j(x)$, $S_i(x) \in S$, and similarly for $\psi_T$.
ICOE: $\forall e(S_i(e) \lor S_j(e) \lor \cdots \lor S_k(e) \rightarrow T(e))$ (or $\forall e(S_i(e) \lor S_j(e) \lor \cdots \lor S_k(e) \rightarrow \exists w\, T(e, w))$ if there is an existential quantifier).
- **Cartesian** $\phi_S(x) = S_i(x) \times \cdots \times S_j(x)$, $S_i(x) \in S$
ICOE: $\forall e(S_i(e) \times S_j(e) \times \cdots \times S_k(e) \rightarrow T(e))$ (or $\forall e(S_i(e) \times S_j(e) \rightarrow \exists w\, T(e, w))$).
- **Projection** $\phi_S(x) = \pi_{k, l, m}(S_i(x))$, $S_i(x) \in S$
ICOE: $\forall x(\pi_{a, b}(S_i(a, b, c, d)) \wedge \pi_{c, d}(S_j(b, c, d, e)) \rightarrow T(a, b, c, d, e))$ (or $\forall x(\pi_{a, b}(S_i(a, b, c, d)) \wedge \pi_{c, d}(S_j(b, c, d, e)) \rightarrow \exists y\, T(a, b, c, d, e))$), where $x$ and $y$ are sets of variables.
- **Selection** This operation covers conditional mappings, which represent the user's integrity constraints. It has the form
$\phi_S(x) = \sigma_F(S_i(x))$, $S_i(x) \in S$, where $F$ is a condition on the tuples $t \in Inst(S_i)$.
ICOE: $\forall e((S_i(e) \land F(e)) \rightarrow T(e))$.
- **Intersection** involves several schemas and is essentially a kind of selection operation.
- **Join** is the composition of Cartesian and Selection.
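The six operations can be mirrored by a toy set-of-tuples implementation (the encoding and all names are ours, for illustration only, not part of the paper's formalism):

```python
# Relations are modeled as Python sets of tuples.
def union(r, s):        return r | s                                  # operation 1
def cartesian(r, s):    return {a + b for a in r for b in s}          # operation 2
def project(r, cols):   return {tuple(t[i] for i in cols) for t in r} # operation 3
def select(r, pred):    return {t for t in r if pred(t)}              # operation 4
def intersection(r, s): return r & s                                  # operation 5

def join(r, s, i, j):
    # operation 6: Join = Selection applied to a Cartesian product
    width = len(next(iter(r)))
    return select(cartesian(r, s), lambda t: t[i] == t[width + j])

r = {("A", 1), ("B", 2)}
s = {(1, "Eq1")}
joined = join(r, s, 1, 0)    # keep pairs where r's 2nd column = s's 1st column
# joined == {("A", 1, 1, "Eq1")}
```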
Let $\Theta = \{\cup, \times, \pi, \sigma\}$ be the basic algebra operation symbol set (see 1–4 in Definition 3); indirect mappings can then be expressed as $\phi'_S(x) = \bigotimes_{i} (\pi_{\theta_i}(S_i(x)))$, $S_i(x) \in S$, where $\otimes \in \Theta$. Take the indirect mapping shown in Figure 4 as an example: $M_{12}$ can be expressed as
$\forall x(\pi_{a, b}(R_{1}^{S_1}(x)) \land \pi_{c, d}(R_{2}^{S_1}(x)) \land \pi_{a, b}(R_{3}^{S_1}(x)) \rightarrow \exists y(R_{1}^{S_2}(y) \lor R_{2}^{S_2}(y) \lor R_{3}^{S_2}(y)))$, and $M_{23}$:
$\forall y(R_{1}^{S_2}(y) \land \pi_{c, d}(R_{2}^{S_2}(y)) \rightarrow \exists z(R_{1}^{S_3}(z) \land R_{2}^{S_3}(z)))$.
If $\psi'_T(x, y)$ has the same form as $\phi'_S(x)$, then the non-full indirect mapping can be expressed as:
$\forall x(\phi'_S(x) \rightarrow \exists y\, \psi'_T(x, y))$.

Figure 4. Indirect mappings between schemas.
B. Exchange data under indirect mappings
Now we give the concept of homomorphic instances under indirect mappings (the concept for direct mappings is given in [4] and is not discussed here). Consider the following example:
**Example 2:** Given schemas $S_1$, $S_2$, and $S_3$. $S_1$ consists of a single binary relation symbol $Consistof$, which associates the names of equipment with the parts they contain. $S_2$ consists of a similar binary relation symbol $Consistof$, which is a copy of the one in $S_1$, and of an additional binary relation symbol $MCode$, which associates each equipment name with a code. $S_3$ consists of a single binary relation symbol $MRegister$, which associates the codes of equipment with the related parts. Consider the following mappings: $M_{12} = (S_1, S_2, \Sigma_{12})$ and $M_{23} = (S_2, S_3, \Sigma_{23})$, where
$\Sigma_{12} = \{\forall e \forall p(Consistof(e, p) \rightarrow Consistof(e, p)), \forall e \forall p(Consistof(e, p) \rightarrow \exists c MCode(e, c))\}$,
$\Sigma_{23} = \{\forall e \forall c \forall p(MCode(e, c) \land Consistof(e, p) \rightarrow MRegister(c, p))\}$.
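Data exchange under these dependencies can be sketched as a tiny chase (the function name `exchange` and the null labels are ours, not from the paper; the existential variable $c$ is instantiated with labeled nulls, as in standard data-exchange practice):

```python
# Σ12: copy Consistof into S2 and invent a labeled null code per equipment name;
# Σ23: join MCode with Consistof on the name and emit MRegister(code, part).
def exchange(consistof_s1):
    consistof_s2 = set(consistof_s1)                      # copy rule of Σ12
    names = sorted({e for e, _ in consistof_s1})
    codes = {e: "N%d" % i for i, e in enumerate(names)}   # labeled nulls for ∃c
    mcode = set(codes.items())
    mregister = {(codes[e], p) for e, p in consistof_s2}
    return mcode, mregister

mcode, mregister = exchange({("A", "Eq1"), ("A", "Eq2")})
# mcode == {("A", "N0")}; mregister == {("N0", "Eq1"), ("N0", "Eq2")}
```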
The first formula in $\Sigma_{12}$ is a direct mapping that associates equipment name $e$ in the source schema with equipment name $e$ in the target schema. According to Theorem 1, let $I, J$ be instances of a mapping $M$, meaning $\langle I, J \rangle \in Inst(M)$; if there is an instance pair $\langle I', J' \rangle$ with $\langle I', J' \rangle \cong \langle I, J \rangle$, then $\langle I', J' \rangle \in Inst(M)$.
Conversely, if $\langle I', J' \rangle$ and $\langle I, J \rangle$ are both instances of $M$, then we have $\langle I', J' \rangle \cong \langle I, J \rangle$. The above conclusion holds when $M$ is a direct mapping. For indirect mappings, the homomorphism between instances is different. For example, let $S_1'$ be a schema which is a copy of $S_1$, with the small difference that a certain element of $S_1$ splits into several parts in $S_1'$: for example, the equipment name in $S_1'$ contains two parts (supername+name), and the corresponding formula set is $\Sigma_1'$. Therefore, under the condition of homomorphism, there may be some non-one-to-one correspondences.
Let $S_i$, $T$ be the source and target schema respectively, and let $R_i$ be a scheme of $S_i$. If there is an indirect mapping $M$ from $S_i$ to $T$, let $R_i'$ be the scheme constructed as follows: substitute each element combination that participates in the indirect mappings with a single element. We call $R_i'$ a generating scheme. If we define mappings from $R_i'$ to $T$ with the form $\forall x(R_i'(x) \rightarrow \psi_T(x))$, and mappings from $R_i$ to $T$ with the form $\forall x(R_i(x) \rightarrow \psi_T(x))$, then the mappings from $R_i'(x)$ to $\psi_T(x)$ are direct, and those from $R_i(x)$ to $\psi_T(x)$ are indirect, as shown in Figure 5:

Figure 5. Instance homomorphism under indirect mappings.
We define the homomorphism function $h$ under indirect mappings as:
$$h(a_i) = \begin{cases} e, & e \in \text{Inst}(\pi_k(R_i)) \\ \text{compound}(e_1, \ldots, e_n), & e_j \in \text{Inst}(\pi_j(R_i)) \end{cases}$$
Then $h$ is called a general homomorphism function, where compound is the algebra operation combining elements.
**Definition 4** Let $R$ and $R'$ be two schemas, and let $h$ be the general homomorphism function shown above. Let $J$ and $J'$ be instances of $R$ and $R'$ respectively. If for each scheme symbol $R_i$ in $R$ and each tuple $(a_1, \ldots, a_n) \in J$ there is a tuple $(h(a_1), \ldots, h(a_n)) \in J'$, then $J$ and $J'$ are homomorphic instances under $h$. The data exchange process under indirect mappings can then be implemented by the homomorphism given in Definition 4, analogously to the process under direct mappings (see [12]).
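Definition 4 can be illustrated with a toy encoding (entirely ours, not from the paper): $h$ may send one source value to a compound of several target values, and the homomorphism check is a per-tuple membership test:

```python
def compound(*parts):
    # stand-in for the algebra operation combining elements
    return "+".join(parts)

# h maps "EqA" to the compound of two target values (supername+name);
# "P1" maps to itself
h = {"EqA": compound("Super", "A"), "P1": "P1"}

J_src = {("EqA", "P1")}                      # instance of R
J_tgt = {(compound("Super", "A"), "P1")}     # instance of R'

# Definition 4: every tuple of J_src maps, componentwise under h, into J_tgt
homomorphic = all(tuple(h[a] for a in t) in J_tgt for t in J_src)
# homomorphic == True
```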
V. MECHANISM OF INDIRECT MAPPING COMPOSITION
A. Schema-elements-back
As shown in Figure 4, although there is a common schema $S_2$, we cannot directly construct the mapping from $S_1$ to $S_3$ according to $M_{12}$ and $M_{23}$; we cannot even give the mapping composition directly. Using direct mapping composition, elements in $S_1$ can only map to part of the elements of $S_3$. During the data exchange procedure, a complete instance of $S_3$ may require part of the instance from $S_2$, so data exchange from $S_1$ to $S_3$ by direct mapping composition $M_{12} \circ M_{23}$ may lose some information. That is because $M_{12}$ and $M_{23}$ contain some elements that are not in common, so there is no instance $I_2$ of $S_2$ satisfying the conditions in Definition 2.
Since the expressions of indirect mapping composition cannot be derived from Definition 2 directly, we deal with this problem based on general instance homomorphism. The main idea is to construct a schema that can be directly used for mapping composition, and to ensure that the instances of the constructed schema are homomorphic with the instances of the original schema under the general instance homomorphism function. We propose a strategy named schema-elements-back, which constructs some virtual elements in the source schema to deal with the composition of indirect mappings. Compared with BECA, schema-elements-back can eliminate the influence of the middle schema.
**Definition 5** Let $M_{ST} = (S, T, \Sigma_{ST})$ be a model of indirect schema mapping, where $\Sigma_{ST}$ is the formula set with the form $\forall x(\phi_S(x) \rightarrow \exists y\, \psi_T(x, y))$, and the mappings from $S$ to $T$ are sound [13]. Let Dom$(S)$ be the domain of $S$ and Dom$(T)$ the domain of $T$; then we have Dom$(S) \subseteq$ Dom$(T)$. Let $y_T$ and $x_S$ be the sets of variables that participate in the indirect mappings in $T$ and $S$ respectively, and construct the virtual elements in $S$ from the set $y_T - x_S$. As shown in Figure 6, the newly constructed source schema is denoted by $S'$. We can construct direct mappings from $T$ to $S'$, and the process of virtual element construction is called a schema-elements-back process.

Figure 6. Inverse of indirect mappings based on virtual elements.
B. Indirect mapping composition algorithm
When data are exchanged under the *schema-elements-back* strategy, instances of some elements in the middle schema may first be transformed back to the source, so there is a reverse data flow during the data exchange from the source to the target. Such a data flow can be implemented by the reverse mapping from the elements in the middle schema to the newly added elements in the source schema. Since the objective of composing two schema mappings is to construct the mapping from the source to the target directly, without considering the middle schema, *schema-elements-back* ensures that, under the condition of indirect mappings, direct data exchange from the source to the target can be realized, which improves the efficiency of data transformation. Now we show the process of indirect mapping composition.
*Schema-elements-back* is the process of adding new elements to the source schema according to the middle schema, in order to achieve a direct mapping from the source schema to the target schema under the condition that new elements are introduced. As shown in Figure 7, the dashed lines represent the correspondences between the newly added schema elements and elements in the middle schema, from which we can see that when $S_1$ is translated to $S'_1$, by **Theorem 2**, the mapping composition $M_{13}$ can be constructed directly from $S'_1$ to $S_3$.

Figure 7. Process of schema-elements-back
Now we give an indirect mapping composition algorithm (denoted by IMCA). Without loss of generality, we consider the case shown in Figure 7; the other cases need only small modifications.
**Algorithm IMCA**
**Input:** $M_{12} = (S_1, S_2, \Sigma_{12})$ and $M_{23} = (S_2, S_3, \Sigma_{23})$, where $\Sigma_{12}$ and $\Sigma_{23}$ are extended S2Tgds that describe indirect schema element correspondences.
**Output:** A schema mapping $M_{13} = (S'_1, S_3, \Sigma_{13})$, which is the composition of $M_{12}$ and $M_{23}$.
**Step 1. Normalize $\Sigma_{12}$ and $\Sigma_{23}$:**
Rename the function and element symbols so that the symbols that appear in $\Sigma_{12}$ are all distinct from those in $\Sigma_{23}$.
**Step 2. Construct virtual elements in $S_1$:**
If $M_{12}: \pi_{\theta_1}(R^{S_1}_i(x_i)) \rightarrow \pi_{\theta_2}(R^{S_2}_i(x_i))$ and $M_{23}: \pi_{\theta'_2}(R^{S_2}_i(x_i)) \rightarrow \pi_{\theta_3}(R^{S_3}_i(x_i))$ satisfy $\theta'_2 \supset \theta_2$,
then for each $i$ do the projection operation $\pi_{\theta'_2 - \theta_2}(R^{S_2}_i(x_i))$ and get an attribute set $E = \{E_1, \ldots, E_i\}$, where $E_i$ is the set of attributes generated by $R^{S_2}_i(x_i)$.
Add each set $E_i$ into $S_1$ and combine it with the attributes computed by $\pi_{\theta_1}(R^{S_1}_i(x_i))$; this is the schema-elements-back process. As shown in the left dashed box of Figure 7, we get the newly constructed schema $S'_1$ by combining the elements in $E$ with $S_1$ (the virtual elements come from $S_2$). Then we get the mapping $M_{13}$ from $S'_1$ to $S_3$ with the formula set $\Sigma_{13}$. Since the mapping from $S_1$ to $S'_1$ is one-to-one, we get $M_{13} = M_{12} \circ M_{23}$, where $S_2$ gives the data-back rules from $S_1$ to $S'_1$ during the data exchange process from $S'_1$ to $S_3$.
**Step 3. Construct $S_{12}$ and $S_{23}$:**
(3.1) Initialize $S_{12}$ and $S_{23}$ to each be the empty set.
Assume the formulas in $\Sigma_{12}$ are:
$(\forall x_1(\phi_1 \rightarrow \psi_1)) \wedge \cdots \wedge (\forall x_n(\phi_n \rightarrow \psi_n))$
(3.2) Put each of the $n$ implications $\phi_i \rightarrow \psi_i$, for $1 \leq i \leq n$, into $S_{12}$. We do likewise for $\Sigma_{23}$ and $S_{23}$.
Each implication $\chi$ in $S_{12}$ has the form $\phi(x) \rightarrow \pi_{\theta_1}(R^{S_2}_1(x_1)) \otimes \cdots \otimes \pi_{\theta_k}(R^{S_2}_k(x_k))$, where every member of $x$ is a universally quantified variable, and each $x_j$, for $1 \leq j \leq k$, is a sequence of terms on $x$.
(3.3) Replace each such implication $\chi$ in $S_{12}$ with $k$ implications:
$\phi(x) \rightarrow \pi_{\theta_1}(R^{S_2}_1(x_1)), \ldots, \phi(x) \rightarrow \pi_{\theta_k}(R^{S_2}_k(x_k))$.
**Step 4. Compose $S_{12}$ and $S_{23}$:**
Repeat the following until every schema symbol on the left-hand side of every formula in $S_{23}$ is from $S'_1$.
For each implication $\chi$ in $S_{23}$ of the form $\psi \rightarrow \gamma$ where $\psi$ contains an atom $R(y)$ ($R$ is a scheme symbol in $S_2$), perform the following steps to replace $R(y)$ with atoms over $S'_1$.
(4.1) Let $\phi_1 \rightarrow R(t_1), \ldots, \phi_p \rightarrow R(t_p)$ be all the formulas in $S_{12}$ whose right-hand side contains $R$. If no such implications exist in $S_{12}$, we remove $\chi$ from $S_{23}$.
Otherwise, for each such formula $\phi_i \rightarrow R(t_i)$, rename the variables in this formula so that they do not overlap with the variables in $\chi$.
(4.2) Remove $\chi$ from $S_{23}$ and add $p$ formulas to $S_{23}$ as follows: replace $R(y)$ in $\chi$ with $\phi_i$ and add the resulting formula to $S_{23}$, for $1 \leq i \leq p$.
**Step 5. Construct $M_{13}$:**
Copyright © 2009 MECS. I.J. Image, Graphics and Signal Processing, 2009, 1, 50-58
Let $S_{23} = (\chi_1, \ldots, \chi_r)$, where $\chi_1, \ldots, \chi_r$ are all the formulas obtained in step 4. Let $\Sigma_{13} = \forall z_1 \chi_1 \wedge \cdots \wedge \forall z_r \chi_r$, where $z_i$ is the set of variables found in $\chi_i$, for $1 \leq i \leq r$.
**Return** $M_{13} = (S'_1, S_3, \Sigma_{13})$.
A new source schema must be maintained by the composition algorithm, and such a schema may not be reusable in other mapping compositions. However, if the scale of the data to be exchanged is large, such an extended schema can improve the efficiency of data exchange.
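The core replacement loop of step 4 can be sketched as follows (the rule representation and all names are ours; the variable renaming of step 4.1 is approximated by a positional substitution of the head variables, which suffices for this toy case):

```python
# A rule is (body, head); an atom is (symbol, vars); bodies are frozensets.
def compose(s12, s23, middle):
    out = []
    for body, head in s23:
        work = [(body, head)]
        while work:
            b, h = work.pop()
            mid = next((a for a in b if a[0] in middle), None)
            if mid is None:                  # no middle-schema atom left: done
                out.append((b, h))
                continue
            for b12, h12 in s12:             # step 4.1: all rules deriving mid
                if h12[0] != mid[0]:
                    continue
                sub = dict(zip(h12[1], mid[1]))    # head vars -> mid's terms
                renamed = frozenset(
                    (sym, tuple(sub.get(v, v) for v in vs)) for sym, vs in b12)
                work.append((frozenset(b - {mid}) | renamed, h))   # step 4.2
    return out

s12 = [(frozenset({("S1", ("x",))}), ("R", ("x",)))]           # S1(x) -> R(x)
s23 = [(frozenset({("R", ("y",)), ("Q", ("y",))}), ("T", ("y",)))]
result = compose(s12, s23, {"R"})
# result == [(frozenset({("S1", ("y",)), ("Q", ("y",))}), ("T", ("y",)))]
```

A rule with no derivation for its middle atom is simply dropped, mirroring the removal of $\chi$ in step 4.1.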
**VI. EXPERIMENTS**
**A. Complexity of IMCA**
Let the numbers of elements that schemas $S_1, S_2$ and $S_3$ contain be $N_1, N_2$ and $N_3$ respectively. In step 1 of IMCA, the rename operation needs to match elements among $S_1, S_2$ and $S_3$, whose complexity is $O(N_1 \times N_2 \times N_3)$.
Let the number of correspondences in $M_{12}$ be $N_{12}$ and in $M_{23}$ be $N_{23}$, where $N_{12}$ and $N_{23}$ count atom elements. The number of virtual elements constructed in the schema-elements-back process is $N_{23} - N_{12}$, so the complexity of constructing virtual elements is $O(N_{23} - N_{12})$.
The complexity of composing is determined by the atom implications contained in $\Sigma_{12}$ and $\Sigma_{23}$. Let $N_{12}$ be the number of implications in $\Sigma_{12}$ of the form $\phi_i \rightarrow \psi_i$ and $n_{12}$ the number of atom formulas in $\psi_i$; then the number of formulas in $S_{12}$ is $N_{12} \times n_{12}$. Let the number of implications of the form $\phi_i \rightarrow \psi_i$ in $\Sigma_{23}$ be $N_{23}$, where $\psi_i$ is a set of atom formulas. The number of pairs of atom formulas that must be matched during the replacing process is $N_{12}^2 \times N_{23}^2$, so the composing complexity is $O(N_{12}^2 \times N_{23}^2)$. The whole complexity of indirect mapping composition is $O(N_1 \times N_2 \times N_3) + O(N_{23} - N_{12}) + O(N_{12}^2 \times N_{23}^2)$, and that of direct mapping composition is $O(N_1 \times N_2 \times N_3) + O(N_{12}^2 \times N_{23}^2)$; both are polynomial.
**B. Results and analysis**
(1) Query efficiency under indirect mapping composition
Given the numbers of elements in schemas $S_1, S_2$ and $S_3$, the numbers of implications in $\Sigma_{12}$ and $\Sigma_{23}$, and different scales of indirect mappings, we compute the numbers of virtual elements constructed during the schema-elements-back process. We use the example of DBLP: we chose 20 literature entries in alphabetical order and picked up the meta-information of each one. We constructed the mappings manually, for example:
**Author $\otimes$ Author $\otimes \ldots \otimes$ Author $\rightarrow$ AuthorSet**
**Country $\otimes$ University $\otimes \ldots \otimes$ City $\rightarrow$ AuthorAddress**
Since virtual elements are introduced during the process of indirect mapping composition, Figure 8 shows a comparison of the numbers of mappings involved in a query for non-mapping-composition and mapping-composition respectively.
As we can see, although some virtual elements participate in the querying and data exchange, compared with the non-composition way, indirect mapping composition can reduce the number of mappings involved in a query and improve query efficiency.
Consider the indirect mappings shown in Figure 4: we simulate the data exchange process from $S_1$ to $S_3$, and compare the execution times of indirect mapping composition and non-composition under different instance scales. We use Oracle 9i as the database; the time for reading records in the Oracle database is an increasing function of the record scale and the number of attributes.
In the case of non-composition, instances $\{I_1\}$ of $S_1$ are first transformed to instances of $S_2$ according to $M_{12}$, denoted $\{I_2\}$; then $\{I_2\}$ are transformed to $\{I_3\}$ of $S_3$, so reads and writes of data happen among three schemas.
In the case of composition, since there are indirect mappings from $S_1$ to $S_3$, using the schema-elements-back process given in IMCA, instances for part of the attributes of $S_2$ are first transformed back to the source, giving the new instances $\{I'_1\}$ of $S'_1$; then the instances $\{I_3\}$ of $S_3$ are generated from $\{I'_1\}$ according to $M_{13}$.
Now we give the execution time (milliseconds) of the data exchange process from $S_1$ to $S_3$ under different instance scales of $S_1$; Figure 9 shows the comparison results.
Mechanism and Algorithm for Indirect Schema Mapping Composition
Figure 9. Data exchange process executing time comparisons.
Figure 10. Comparison for affection of middle schema between BECA and IMCA
(2) Comparison of BECA and IMCA
In BECA, symbols in the middle schema are eliminated step by step by algebra operations using view unfolding, left composing and right composing. The main idea of BECA is to replace the symbols of the middle schema with symbols of the source and target schemas as much as possible. Let the elements that participate in the elimination process be uniformly distributed; the number of virtual elements is determined by step 2 of IMCA. Figure 10 shows the comparison between the number of element symbols eliminated by BECA and the number of virtual elements constructed by IMCA.
VII. CONCLUSION
Mapping composition is an active topic in the research of schema mapping management, the objective of which is to reuse mappings as a model and simplify the process of data exchange. Especially when the number of peers in the network is large, composing mappings can improve data exchange efficiency. In this paper, we give the definition of indirect mappings and discuss their composability. We then propose an indirect mapping composition algorithm using the schema-elements-back strategy. Experimental results show that although IMCA does not improve query performance, the influence of the middle schema can be eliminated.
REFERENCES
---
**Wang Bo**, born in Shenyang, China, on September 18th, 1980, received his Ph.D. from the National University of Defense Technology, Changsha, China, in 2009. His current research interests include heterogeneous information integration and schema mapping.
**Guo Bo**, born in 1963, is a professor and Ph.D. supervisor. His main research interests include system management and integration and information management systems.
Using Python for Scientific Computing
Session 3 - NumPy, SciPy, Matplotlib
Felix Steffenhagen
University of Freiburg
May 4, 2011
Inhalt
1 NumPy
2 SciPy
3 Plotting and Data Visualization
What is NumPy?
- Fundamental package for scientific computing in Python
- Provides multidimensional arrays, matrices and polynomial objects
- Fast operations on arrays through vectorized functions
- Differences to Python sequences:
- Fixed size at creation
- All elements of the same data type
- Greater variety of numerical datatypes (e.g. int8, int32, uint32, float64)
- Highly efficient (implemented in C)
- Base for many other scientific related packages
Python is slow(er) ...
Simple test: Multiply two arrays of length 10,000,000
**pure Python**
```python
import time
l = 10000000
start = time.time()
a, b = range(l), range(l)
c = []
for i in a:
c.append(a[i] * b[i])
t = time.time() - start
print("Duration: %s" % t)
```
Duration: 4.67 s
**Using numpy**
```python
import numpy as np
import time
l = 10000000
start = time.time()
a = np.arange(l)
b = np.arange(l)
c = a * b
t = time.time() - start
print("Duration: %s" % t)
```
Duration: 0.73 s
Creating NumPy arrays
NumPy arrays can be created from Python structures or by using specific array creation functions.
```python
>>> import numpy as np
>>> a = np.array([1.5, 2.2, 3.0, 0.9])
>>> a
array([ 1.5,  2.2,  3. ,  0.9])
>>> zeros = np.zeros(6)
>>> zeros
array([ 0.,  0.,  0.,  0.,  0.,  0.])
>>> ones = np.ones(6)
>>> ones
array([ 1.,  1.,  1.,  1.,  1.,  1.])
>>> a = np.arange(12)
>>> print a
[ 0 1 2 3 4 5 6 7 8 9 10 11]
>>> print a.size, a.ndim, a.shape
12 1 (12,)
>>> m = a.reshape(3, 4)
>>> print m
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
>>> print m.size, m.ndim, m.shape
12 2 (3, 4)
>>> Z = np.zeros((2,3))
>>> print Z
[[ 0. 0. 0.]
[ 0. 0. 0.]]
>>> v = np.linspace(0, 1.0, 5)
>>> v
array([ 0. , 0.25, 0.5 , 0.75, 1. ])
```
Array Creation Functions
- `np.array(seq, dtype)`: Creates an array from `seq` having data type `dtype` (optional)
- `np.ones(shape, dtype), np.zeros(shape, dtype)`: Creates an array of given shape and type, filled with ones/zeros. Default type is `float64`.
- `np.arange([start,] stop[, step], dtype)`: Like the normal `range` function but works also with floats. Returns evenly spaced values within a given interval.
- `np.linspace(start, stop[, num])`: Returns evenly spaced numbers over a specified interval.
**`arange` vs. `linspace`:**
- `np.arange(0.0, 1.0, 0.25)` ⇒ `[0.0, 0.25, 0.5, 0.75]`
- `np.linspace(0.0, 1.0, 5)` ⇒ `[0.0, 0.25, 0.5, 0.75, 1.0]`
Indexing Arrays

**Python Interpreter**

```python
>>> import numpy as np
>>> a = np.arange(20)
>>> a = a.reshape(5,4)
>>> a[3,2]
14
>>> a[1]          # second row
array([4, 5, 6, 7])
>>> a[-2]         # second last row
array([12, 13, 14, 15])
>>> a[:,0]        # first column
array([ 0,  4,  8, 12, 16])
>>> a[1:4, 0:3]   # sub-array
array([[ 4,  5,  6],
       [ 8,  9, 10],
       [12, 13, 14]])
>>> a[::2, ::3]   # skipping indices
array([[ 0,  3],
       [ 8, 11],
       [16, 19]])
```

5 × 4 matrix

$$
\begin{pmatrix}
0 & 1 & 2 & 3 \\
4 & 5 & 6 & 7 \\
8 & 9 & 10 & 11 \\
12 & 13 & 14 & 15 \\
16 & 17 & 18 & 19
\end{pmatrix}
$$
Functions on numpy arrays
- The worst thing you can do is iterate over a numpy array with a `for`-loop.
- That’s why numpy supports several standard functions on arrays.
**Python Interpreter**
```python
>>> import numpy as np
>>> a = np.arange(1, 21)
>>> a
array([ 1,  2,  3,  4, ..., 20])
>>> print a.min(), a.max(), a.mean() # minimum, maximum, arithmetic mean
1 20 10.5
>>> print a.std(), a.var() # standard deviation, variance
5.76 33.25
>>> print a.sum(), a.prod() # sum, product
210 2432902008176640000
>>> print a.any(), a.all() # any True?, all True?
True True
>>> b = np.array([0, 0, 1])
>>> print b.any(), b.all() # any True?, all True?
True False
```
Arithmetic operations on arrays
- NumPy supports arithmetic operations between arrays
- Advantage: No for-loops necessary (looping occurs in C)
- Element-wise operation for arrays of the same shape
Python Interpreter
```python
>>> import numpy as np
>>> a, b = np.arange(1, 11), np.arange(1,11)
>>> a
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
>>> a + 1
array([ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
>>> a * 2
array([ 2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
>>> a + b
array([ 2, 4, 6, 8, 10, 12, 14, 16, 18, 20])
>>> a * b
array([ 1, 4, 9, 16, 25, 36, 49, 64, 81, 100])
```
Things get a little more complicated when arrays have different shapes (see Broadcasting).
Operations on arrays of different shapes
- **broadcasting** describes how numpy treats arrays of different shapes during arithmetic operations
- Two dimensions are compatible when they are equal or one of the dimensions is 1
**Python Interpreter**
```python
>>> import numpy as np
>>> a = np.arange(9.0)
>>> a = a.reshape((3,3))
>>> a
array([[ 0., 1., 2.],
[ 3., 4., 5.],
[ 6., 7., 8.]])
>>> b = np.array([1.0, 2.0, 3.0])
>>> a * b
array([[ 0., 2., 6.],
[ 3., 8., 15.],
[ 6., 14., 24.]])
```
```
a      (2d array): 3 x 3
b      (1d array):     3
Result           : 3 x 3
```
The smaller array is broadcast to match the larger array. Thus, the result is computed by element-wise multiplication of
\[
\begin{pmatrix}
0 & 1 & 2 \\
3 & 4 & 5 \\
6 & 7 & 8 \\
\end{pmatrix}
\cdot
\begin{pmatrix}
1 & 2 & 3 \\
1 & 2 & 3 \\
1 & 2 & 3 \\
\end{pmatrix}
\]
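Broadcasting also stretches a size-1 axis of a column vector across the columns, and raises an error when the shapes are truly incompatible. A minimal sketch (the concrete shapes below are illustrative):

```python
import numpy as np

a = np.arange(9.0).reshape(3, 3)
col = np.array([[10.0], [20.0], [30.0]])   # shape (3, 1)

# (3, 3) * (3, 1): the size-1 axis is stretched across the columns,
# so each row of `a` is scaled by one entry of `col`
result = a * col
print(result)   # rows scaled by 10, 20, 30

# Incompatible trailing dimensions raise a ValueError
try:
    a * np.array([1.0, 2.0])               # shape (2,) vs (3, 3)
except ValueError as e:
    print("broadcast error:", e)
```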
Matrices are special array objects
Always 2-dimensional
Matrix multiplication
special properties of matrices:
- matrix.I (Inverse)
- matrix.T (Transposed)
- matrix.H (Conjugate transpose)
- matrix.A (Array conversion)
Python Interpreter
```python
>>> import numpy as np
>>> m = np.matrix([[1, 2], [3,4]])
>>> m
matrix([[1, 2],
[3, 4]])
>>> m.I
matrix([[-2. ,  1. ],
        [ 1.5, -0.5]])
>>> m.T
matrix([[1, 3],
[2, 4]])
>>> b = np.array([2, 3])
>>> b.shape = (2, 1)
>>> b
array([[2],
[3]])
>>> m * b
matrix([[ 8],
[18]])
```
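A quick numerical check that `m.I` really is the inverse. The plain-array variant with `@` and `np.linalg.inv` is worth knowing, since newer NumPy releases discourage `np.matrix` in favor of ordinary 2-D arrays:

```python
import numpy as np

m = np.matrix([[1, 2], [3, 4]])

# m * m.I should be (numerically) the identity matrix
identity = m * m.I
print(np.allclose(identity, np.eye(2)))   # True

# Equivalent with plain 2-D arrays and the @ operator
a = np.array([[1, 2], [3, 4]])
a_inv = np.linalg.inv(a)
print(np.allclose(a @ a_inv, np.eye(2)))  # True
```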
Felix Steffenhagen (Uni Freiburg)
The `numpy.linalg` submodule provides core linear algebra tools.
```python
>>> import numpy as np
>>> import numpy.linalg as linalg
A = np.matrix([[2, 3, -1],
[1, 3, 1],
[-2, -2, 4]])
>>> A
matrix([[ 2, 3, -1],
[ 1, 3, 1],
[-2, -2, 4]])
>>> b = np.array([1, 2, 4])
>>> linalg.solve(A, b)
array([ 3., -1., 2.])
```
\[
\begin{align*}
2x + 3y - z &= 1 \\
x + 3y + z &= 2 \\
-2x - 2y + 4z &= 4
\end{align*}
\]
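The solution can be verified by substituting it back into the system:

```python
import numpy as np

A = np.array([[2, 3, -1],
              [1, 3, 1],
              [-2, -2, 4]], dtype=float)
b = np.array([1.0, 2.0, 4.0])

x = np.linalg.solve(A, b)
print(x)                      # [ 3. -1.  2.]

# Substituting back should reproduce the right-hand side
print(np.allclose(A @ x, b))  # True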
Polynomials
- NumPy defines a polynomial datatype that supports value evaluation and polynomial arithmetic
- Differentiation, Integration
```python
p = np.poly1d(coefs)
```
- Constructs a polynomial \( p \) from the given coefficient sequence ordered in decreasing power.
```python
p.deriv(m), p.integ(m)
```
- Compute the derivative or anti-derivative of \( p \). The parameter \( m \) gives the order of differentiation or integration.
e.g. \( f(x) = 3x^2 - 2x + 1 \)
```python
>>> import numpy as np
>>> f = np.poly1d([3, -2, 1])
>>> print(f)
2
3 x - 2 x + 1
>>> f(2.5)
14.75
>>> f_1, F = f.deriv(), f.integ()
>>> print(f_1)
6 x - 2
>>> print(F)
   3     2
1 x - 1 x + 1 x
```
Curve fitting
- polynomial regression
- `np.polyfit(x, y, deg)`: Least squares polynomial fit of degree `deg` for coordinate sequences `x` and `y`. Returns array with polynomial coefficients.
Python Interpreter
```python
from numpy import array, poly1d, polyfit
>>> x = array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
>>> y = array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])
# linear fit
>>> coefs = polyfit(x, y, 1)
>>> p1 = poly1d(coefs)
>>> print(p1)
-0.3029 x + 0.7571
# cubical fit
>>> coefs = polyfit(x, y, 3)
>>> p3 = poly1d(coefs)
>>> print(p3)
3 2
0.08704 x - 0.8135 x + 1.693 x - 0.03968
```
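One way to compare the two fits (using the same x and y data as above) is the sum of squared residuals. The cubic must do at least as well as the line, because a degree-3 least-squares fit nests the degree-1 one:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.8, -1.0])

p1 = np.poly1d(np.polyfit(x, y, 1))   # linear fit
p3 = np.poly1d(np.polyfit(x, y, 3))   # cubic fit

# Sum of squared residuals: smaller means a closer fit to the samples
sse1 = np.sum((p1(x) - y) ** 2)
sse3 = np.sum((p3(x) - y) ** 2)
print(sse1 > sse3)                    # True for this data
```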
SciPy package
- Collection of mathematical algorithms and convenience functions
- Built on NumPy
- Organized into sub-packages
Some of the interesting modules:
<table>
<thead>
<tr>
<th>Sub-Module</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>cluster</td>
<td>Clustering algorithms</td>
</tr>
<tr>
<td>constants</td>
<td>Physical Constants</td>
</tr>
<tr>
<td>fftpack</td>
<td>Fast Fourier Transformation</td>
</tr>
<tr>
<td>integrate</td>
<td>Integration and ODE solvers</td>
</tr>
<tr>
<td>interpolate</td>
<td>Interpolation (e.g. Splines)</td>
</tr>
<tr>
<td>special</td>
<td>Special functions (e.g. Bessel functions, Gamma function)</td>
</tr>
<tr>
<td>stats</td>
<td>Statistical Functions and Distributions</td>
</tr>
</tbody>
</table>
SciPy-Safari: Integration
```python
>>> import scipy.integrate as spint
>>> def f(x):
...     return x**2
...
>>> spint.quad(f, 0, 2)   # (integral value, estimated error)
(2.666..., 2.96e-14)
```
SciPy also supports infinite integration limits. See documentation.
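For instance, the Gaussian integral over the whole real line, whose exact value is \( \sqrt{\pi} \):

```python
import numpy as np
import scipy.integrate as spint

# integral of exp(-x^2) from -inf to +inf equals sqrt(pi)
value, abs_err = spint.quad(lambda x: np.exp(-x ** 2), -np.inf, np.inf)
print(value)                                  # ~1.7724538509
print(abs(value - np.sqrt(np.pi)) < 1e-8)     # True
```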
```python
>>> import numpy as np
>>> import scipy.interpolate as spintp
>>> x = np.linspace(0, 2 * np.pi, 10)
>>> y = np.sin(x)
>>> x_spline = np.linspace(0, 2 * np.pi, 100)
>>> y_spline = spintp.spline(x, y, x_spline)
>>> y_spline
array([ 3.851e-16, 6.465e-02, 1.286e-01, 1.917e-01,
        2.538e-01, ...])
```
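Note that `scipy.interpolate.spline` has since been removed from SciPy; `make_interp_spline` is the documented replacement. A sketch of the same computation with the current API:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.linspace(0, 2 * np.pi, 10)
y = np.sin(x)

# Build a cubic B-spline through the sample points and evaluate it densely
spl = make_interp_spline(x, y, k=3)
x_dense = np.linspace(0, 2 * np.pi, 100)
y_dense = spl(x_dense)

# The spline interpolates: it reproduces the original samples exactly
print(np.allclose(spl(x), y))   # True
```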
Random Number Generation
numpy.random sub module (= scipy.random) provides many different functions for random number generation.
- sp.rand(d0, d1, ...):
Create array of given shape filled with uniform random numbers over [0, 1].
- sp.randn(d0, d1, ...):
The same as sp.rand() but generates zero-mean unit-variance Gaussian random numbers.
- sp.random.randint(low, high=None, size=None):
Return random integers x such that low ≤ x < high. If high is None, then 0 ≤ x < low.
- sp.random.binomial(n, p, size=None):
Draw samples from a binomial distribution with n trials and success probability p. Returns an array of the given size containing the number of successes in each sample.
Random Number Generation - Examples
Python Interpreter
```python
>>> from scipy.random import * # import all random functions
>>> rand(2,3) # 2x3 array
array([[ 0.49010722, 0.73308678, 0.5209828 ],
[ 0.54217486, 0.75698016, 0.10697513]])
>>> rnd = randn(100) # 100 norm. distr. numbers
>>> rnd.mean() # mean should be close to 0
0.0789
>>> randint(1, 50, 6) # lottery numbers 6 of 49
array([ 2, 28, 15, 49, 22, 35])
>>> binomial(5, 0.4) # unfair coin flipping
2
>>> binomial(5, 0.4, 10) # 10 games with 5 flips
array([4, 3, 0, 1, 3, 2, 3, 2, 1, 3])
```
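When experiments need to be reproducible, seed the generator first — the same seed yields the same "random" numbers:

```python
import numpy as np

np.random.seed(42)
first = np.random.rand(3)

np.random.seed(42)
second = np.random.rand(3)

# Same seed, same sequence
print(np.array_equal(first, second))   # True
```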
Data Visualization with matplotlib
- **matplotlib** provides 2D data visualization as in MATLAB.
- Publication quality plots
- Export to different file formats
- Embeddable in graphical user interfaces
- Making plots should be easy!
- Heavy use of NumPy and SciPy
- **pylab**: provides a matlab-like environment
(roughly: combines NumPy, SciPy and matplotlib)
<table>
<thead>
<tr>
<th><strong>pylab (matplotlib.pyplot)</strong></th>
<th>(Provides plot functions similar to MATLAB)</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>matplotlib API</strong></td>
<td>(Basic libraries for creating and managing figures, text, lines, ... )</td>
</tr>
<tr>
<td><strong>Backend</strong></td>
<td>(device dependent renderers)</td>
</tr>
</tbody>
</table>
A simple plot
Plots are generated successively. Each plotting function makes changes to the figure.
Python Interpreter
```python
>>> from pylab import *
# Turn on interactive mode
>>> ion()
# 10 norm. distr. rnd numbers
>>> x = randn(10)
>>> plot(x)
# setting axis limits
>>> axis([0, 10, -3, 3])
# grid() toggles the grid on/off
>>> grid()
>>> grid()
# add another plot
>>> y = linspace(-3, 3, 10)
>>> plot(y)
# plot with x and y axis values
>>> x = linspace(0, 9, 100)
>>> plot(x, sin(x))
```
Basic Plotting Functions
- **plot([x,] y):**
Generates simple line plot for x and y values. If x-values are not specified the array index values (0, 1, 2, ...) will be used.
- **axis(v):**
Sets the axis limits to the values $v = [\text{xmin}, \text{xmax}, \text{ymin}, \text{ymax}]$. $v$ can also be a string (e.g. 'off', 'equal', 'auto')
- **xlabel(s), ylabel(s):**
Set labels for x or y axis to s.
- **title(s), suptitle(s):**
Set title for current plot or for the whole figure.
- **show():**
Shows the current figure. Usually the last function to be called in a script after generating a plot.
- **clf():** clear the figure
The `plot` function accepts a pattern string specifying the line and symbol style in the format: "<color><line><symbol>"
**example**
```python
# initialize some values
>>> values = arange(10)
# plot red dotted line with circles
>>> plot(values, "r:o")
# plot green dashed line
>>> plot(values + 5, "g--")
```
<table>
<thead>
<tr>
<th>Line Colors</th>
<th>Line Styles</th>
<th>Marker Symbols</th>
</tr>
</thead>
<tbody>
<tr>
<td>r red</td>
<td>- solid line</td>
<td>o circles</td>
</tr>
<tr>
<td>g green</td>
<td>-- dashed line</td>
<td>s squares</td>
</tr>
<tr>
<td>b blue</td>
<td>-. dash-dot line</td>
<td>x crosses</td>
</tr>
<tr>
<td>w white</td>
<td>: dotted line</td>
<td>+ plus signs</td>
</tr>
<tr>
<td>c cyan</td>
<td></td>
<td>* stars</td>
</tr>
<tr>
<td>m magenta</td>
<td></td>
<td>D diamonds</td>
</tr>
<tr>
<td>y yellow</td>
<td></td>
<td>d thin diamonds</td>
</tr>
<tr>
<td>k black</td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
Adding Labels and Legends
Let’s make a plot having labels, title and a legend.
Python Interpreter
```python
>>> x = linspace(-5, 5, 100)
>>> y_sin, y_cos = sin(x), cos(x)
>>> plot(x, y_sin, "r", label="sine")
>>> plot(x, y_cos, "b", label="cosine")
>>> xlabel("x-value")
>>> ylabel("y-value")
>>> title("Sine and Cosine function")
>>> axis([-5, 5, -2, 2])
>>> grid()
>>> legend(loc="upper center")
>>> annotate(r"$\pi$ around here",
...          xy=(3.1, 1.0), xytext=(-1.0, -1.5),
...          arrowprops=dict(color='black'))
```
Add the annotation.
The complete plot
[Figure: "Sine and Cosine Function" — sine and cosine curves with grid, legend, and the "π around here" annotation]
Annotations and Text functions
- **legend()**: Adds a legend to the current plot. Use keyword parameter `loc` to set the location either by string (e.g. `'upper center'`) or by 2-tuple (e.g. `(2, 3)`).
- **annotate(text, xy=(ax, ay), xytext=(tx, ty))**: Annotate special location `(ax, ay)` and put text at location `(tx, ty)`.
- Optional parameter `arrowprops` is a dictionary of arrow properties. If properties are set, an arrow is drawn in the figure.
- **text(x, y, text)**: Add text at location `(x, y)`.
Wherever text can be added (labels, titles, annotations), you can use TeX formulas (e.g. `r"$\sum_i^n i$"`). `r" "` is a raw string in which backslashes are kept unchanged.
- `matplotlib` provides a lot of plot types
- Lineplots, Scatterplots, Histograms, Timeseries plots, ...
http://matplotlib.sourceforge.net/gallery.html
### Histograms
- `hist(x, bins=10)`
Computes and draws the histogram of `x`. Additional keyword options:
- `normed=[False | True]`: normalize to probability density
- `orientation=["horizontal" | "vertical"]`
#### Python Interpreter
```python
# create some data
>>> mu, sigma = 3, 1.2
>>> values = mu + sigma * randn(100)
# plot histogram
>>> hist(values, normed=True,
...      color="#42da42", ec="black")
# add Norm PDF
>>> p = gca()
>>> x_min, x_max = p.get_xlim()
>>> x = linspace(x_min, x_max, 100)
>>> plot(x, normpdf(x, mu, sigma))
```
`gca()`: get current axes
Bar Plots
- `bar(left, height)`: Make a bar plot with rectangles.
- `xticks(pos, labels)`: Set locations and labels of the xticks.
```python
>>> left = [1, 2, 3]
>>> height = [5, 10, 20]
>>> bar(left, height)
>>> clf()
>>> bar(left, height, align="center")
>>> xticks(left, ("A", "B", "C"))
```

Bar Plots for different groups require the separate plotting of each group.
```python
>>> bar_width = .5
>>> group1 = [20, 25, 18, 29]
>>> group2 = [22, 24, 25, 35]
>>> pos1 = arange(4) + 1
>>> bar(pos1, group1, color="blue")
>>> pos2 = pos1 + bar_width + .1
>>> bar(pos2, group2, color="yellow")
>>> cond = ('C 1', 'C 2', 'C 3', 'C 4')
>>> xticks(pos2, cond)
```
Plotting 2D Arrays (as Images)
- `imshow(X[, cmap])`:
Display the image or float array in X. The parameter `cmap` lets you specify a colormap (e.g. `cmap=cm.gray`)
- `colorbar()`: adds a colorbar to the current plot
**Python Interpreter**
```python
>>> img_dat = rand(30,30)
>>> imshow(img_dat)
>>> colorbar()
>>> gray()
>>> copper()
```
(see `help(colormaps)` for more themes)
Multiple figures and subplots
- matplotlib uses concept of current figures and current plots.
- plot command changes current subplot in current figure.
- arbitrary number of figures and subplots possible
- Plots are arranged in a matrix grid.
[Figure: four subplots arranged in a 2 x 2 grid]
Multiple figures and subplots (contd.)
- Let's create two figures, with two plots in each.
- One aligned horizontally, the other vertically
`subplot(rows, cols, n)` creates or switches to n-th plot in a rows×cols arrangement
Python Interpreter
```python
# get some data
>>> x = randn(100)
# create 1st figure
>>> figure(1)
>>> subplot(2,1,1)
>>> hist(x)
>>> subplot(2,1,2)
>>> plot(x)
# create 2nd figure
>>> figure(2)
>>> subplot(1,2,1)
>>> hist(x)
>>> subplot(1,2,2)
>>> plot(x)
```
More complex layouts
- `subplot` command allows creation of more complex plot arrangements
- limited to matrix arrangements; a plot cannot directly span several cols/rows, though mixing grids of different granularity gives a similar effect
- Plot 1 is first plot in $2 \times 2$ layout
- Plot 2 is second plot in $2 \times 2$ layout
- Plot 3 is second plot in $2 \times 1$ layout
Example
Python Interpreter
```python
# generate some data
>>> x, y = randn(100), randn(100)
# generate 1st subplot
>>> subplot(2,2,1)
>>> hist(x)
# generate 2nd subplot
>>> subplot(2,2,2)
>>> plot(x, y, "bo")
# generate 3rd subplot
>>> subplot(2,1,2)
>>> plot(x)
# switch back to plots
>>> subplot(2,2,1)
>>> title("Histogram")
>>> subplot(2,2,2)
>>> title("Scatter Plot")
```
Figures can be saved from the interactive window or with the function `savefig`.
- `savefig(filename)`:
Saves the current figure (as PNG by default) to `filename`.
Optional keyword parameters:
- `format`: 'png', 'pdf', 'ps', 'eps', 'svg'
- `transparent`: if `True`, makes the figure background transparent
```python
from pylab import *
x = linspace(-3, 3, 100)
y = sin(x)
plot(x, y)
savefig("sineplot", format="pdf")
```
Abstract
Guitar Hero: Nursery Rhyme Edition is a guitar simulation game with a computer keyboard acting as a substitute for the guitar. The objective of the game is to play a nursery rhyme song in tempo with the rhythm by hitting the notes correctly through holding the respective keys and pressing the Shift key as an act of strumming. The game system consists of four major components, the keyboard interface, display, game logic, and audio. The game logic manages the overall functionality of the game by connecting the three other main components.
# Table of Contents
List of Figures........................................................................................................... 2
1 Overview................................................................................................................ 3
2 Description............................................................................................................. 4
2.1 Keyboard Interface............................................................................................. 5
2.1.1 Keyboard Module......................................................................................... 5
2.1.2 PS2 Module.................................................................................................. 5
2.2 Music Memory..................................................................................................... 6
2.2.1 Music ROM................................................................................................. 6
2.2.2 Song Selection Module.................................................................................. 6
2.3 Display............................................................................................................... 6
2.3.1 SVGA Module.............................................................................................. 6
2.3.2 Display FSM Module.................................................................................... 6
2.3.2.1 Character String Display Module......................................................... 8
2.3.2.2 Binary to String Converter Module...................................................... 8
2.3.2.3 Start Menu Module................................................................................ 8
2.3.2.4 Playing Display Module......................................................................... 9
2.3.2.4.1 Blob Module.................................................................................... 10
2.3.2.4.2 Streak Module.................................................................................. 11
2.3.2.4.3 Blob Outline Module....................................................................... 11
2.3.2.5 Song Over Module................................................................................. 12
2.4 Game Logic........................................................................................................ 12
2.4.1 Game Logic Module...................................................................................... 12
2.4.1.1 Divider Module..................................................................................... 13
2.4.1.2 Timer Module....................................................................................... 13
2.5 Audio............................................................................................................... 14
2.5.1 Final Audio Module.................................................................................... 14
2.5.1.1 Direct Digital Synthesizer.................................................................... 14
2.5.2 AC97 Module.............................................................................................. 14
2.5.3 AC97 Commands Module.......................................................................... 14
3 Testing and Debugging......................................................................................... 15
4 Conclusion............................................................................................................ 17
References............................................................................................................... 19
## List of Figures
| Figure | Description |
|--------|-------------|
| 1 | Overall Block Diagram |
| 2 | FSM and Look-up Tables within the Keyboard States |
| 3 | Finite State Machine for the Display |
| 4 | Block Diagram for Display to Create Pixels |
| 5 | Start Menu Display with Highlighted Selector |
| 6 | Playing Display of Scrolling Notes with Various Lengths |
| 7 | Display of Keyboard Fret Buttons Being Pressed |
| 8 | Song Over Display |
| 9 | Data flow to and from AC97 and AC97 Commands modules |
1. Overview
This project implements a version of the popular game Guitar Hero. The project is composed of visual, audio, and interactive parts. The goal of the game is to be able to “play” the notes of a song chosen from a given list by pressing the appropriate buttons on the keyboard to match the ones on the screen.
On the PS2 keyboard, keys 1-8 are used for the fret buttons that represent the notes and the Shift key substitutes for the strumming of the strings of the guitar. This allows the player to hold the keyboard like a guitar. Shift must be pressed for the note to be interpreted as “played.”
Using a direct digital synthesizer, eight different frequencies are created to fill an octave of notes, and a buzzer frequency is created to indicate incorrect notes played by the user. These frequencies are stored in one ROM and are only outputted as necessary. The simple nursery rhyme songs Mary Had a Little Lamb, Twinkle, Twinkle Little Star, Row Your Boat, and Chopsticks are transcribed from sheet music using the respective notes. These songs are stored in a ROM since there is no need to write into memory. Each address in the ROM holds a 12-bit word: the 4 MSBs represent the duration of the note while the other 8 bits determine the notes to be played. The song is heard from the audio as the user plays the correct notes.
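The 12-bit ROM word layout can be modeled in Python (a behavioral sketch; the function names are illustrative, not from the Verilog source):

```python
def pack_note_word(duration, notes):
    """Pack a 12-bit song ROM word: 4-bit duration in the MSBs,
    8-bit per-fret note mask in the LSBs."""
    assert 0 <= duration < 16 and 0 <= notes < 256
    return (duration << 8) | notes

def unpack_note_word(word):
    """Split a 12-bit ROM word back into (duration, note mask)."""
    return (word >> 8) & 0xF, word & 0xFF
```

For example, a half note (duration 2) on fret 1 and fret 5 would be stored as `pack_note_word(2, 0b00010001)`.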
The visual display includes the start menu, the playing display, and the end display. The start menu contains all the songs available and allows the user to select and start a song. The playing display is the interactive game display that includes scrolling notes of the selected song, a matching zone, an indicator for the keys being pressed, and an updated score that keeps track of the points depending on the player’s accuracy. In the playing display, at most 6 notes of the song are displayed at once and scroll to the bottom of the screen as they should be played. The display also represents rhythm with longer rectangles to show notes of longer duration. The note should be played with the keyboard when the note reaches the matching zone at the bottom of the screen. An additional feature informs the user visually of whether the note is played and for how long it is being played. The game also includes different levels of difficulty, which involve increasing the speed of the songs and a smaller detection window to hit the note accurately.
The Keyboard Interface, Music Memory, Display, Game Logic, and Audio are the main underlying components. Figure 1 displays the block diagram that integrates the underlying modules.
Keyboard Interface
A PS2 keyboard that simulates the guitar controller is interfaced with the labkit. The Keyboard interface outputs the keys pressed as a nine-bit piece of data to represent 1-8 and the shift key. The MSB represents the shift key.
Music Memory
- **Music ROM:** The Music ROM will contain all the different song selections that we want to include for the user. It outputs the encoding for the notes to the Display module and it will take an address input from the Display to select the correct notes within the memory each time the address increments.
- **Song Selection**
This module allows the user to decide which song he would like to play. The switches switch[1:0] are used to indicate which song has been selected; this selection is referenced to an assigned song within the module, which indicates the appropriate start and end addresses for that song. The switches are the input, and the start and end addresses are the outputs to the Display.
Display
- **SVGA**: The SVGA generates the necessary signals for video display. These signals include hcount, vcount, hsync, vsync, vclock, and blank, which are sent to the Display.
- **Playing Display**: This module displays the scrolling notes to be played. These notes are received from the Music ROM. The Display is given the inputs of the start address and end address from the Song Selection module. The current note received will be newly displayed at the top of the screen while the previous notes continuously scroll downwards until they reach the bottom of the screen.
Game Logic
This module determines the note to be outputted to the audio module. The output note is based on the input of the keys being pressed and the expected note that would be currently in the designated area of the Display. The Game Logic also keeps track of the total score.
Audio
- **Audio**: This module only has one input coming from the Game Logic. This input is the note encoding that is mapped to the appropriate frequencies. This frequency is then outputted to the AC97 module.
- **AC97**: This module transmits the data received from the Audio module in order to output it to a speaker that is plugged into the labkit.
*Figure 1. Overall Block Diagram*
2. Description
The Guitar Hero: Nursery Rhyme Edition system is made of five major subsystems. These systems include the Keyboard Interface, Music Memory, Display, Game Logic, and Audio.
2.1 KEYBOARD INTERFACE (by Judy Ho)
2.1.1 PS2 Module (C. Terman and I. Chuang) [3]
When keys on a computer keyboard are pressed, certain codes are sent to the central processing unit to determine which keys were actually pressed. Each key has a unique make code, while every key has the same break code to indicate release. When a key is pressed, a make code is sent to indicate the pressed state. When a key is released, the break code is sent, followed by the make code for the key. This allows the system to know not only that a key was released, but which specific key was released. This module was created to read the codes coming in from a PS2 keyboard. The data coming in is put into a FIFO, as it is received.
2.1.2 Keyboard Module (JH)
The keyboard module allows the user to interact with the rest of the game through the keyboard interface. A small, two-state FSM is created in this module to decide whether keys are being pressed or released. Each of these states contains one look-up table that sets the state of each key as on or off, outputting the keys as a nine-bit piece of data to represent 1-8 and the shift key. The MSB represents the shift key.
The FSM uses the data that is being pulled off the FIFO from the PS2 module to help determine the state of each key. The two states of the overall FSM are READY_ALL and READY_RELEASE. In the READY_ALL state, if the make code for a key is detected and found in the look-up table, that key is turned on. If the break code, F0, is pulled off the FIFO, then the overall FSM moves to the READY_RELEASE state. The look-up table within that state will compare the next piece of data from the FIFO with its contents and turn off the appropriate key. The FSM moves back to the READY_ALL state whether or not it received a relevant make code. The lookup table is shown in Figure 2.
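The two-state FSM can be modeled in Python. The scan codes below are the standard PS/2 set-2 make codes for the digits and left shift, used here only for illustration; the project's actual look-up table may differ:

```python
BREAK = 0xF0
# Standard PS/2 set-2 make codes for keys 1-8 and left Shift (illustrative).
MAKE_CODES = {0x16: 0, 0x1E: 1, 0x26: 2, 0x25: 3,
              0x2E: 4, 0x36: 5, 0x3D: 6, 0x3E: 7, 0x12: 8}

def keyboard_fsm(fifo_bytes):
    """Model of the two-state keyboard FSM: consume FIFO bytes and
    return the resulting 9-bit key-state word (MSB = shift)."""
    state = "READY_ALL"
    keys = 0
    for code in fifo_bytes:
        if state == "READY_ALL":
            if code == BREAK:
                state = "READY_RELEASE"
            elif code in MAKE_CODES:
                keys |= 1 << MAKE_CODES[code]      # turn the key on
        else:  # READY_RELEASE: the next code names the released key
            if code in MAKE_CODES:
                keys &= ~(1 << MAKE_CODES[code])   # turn the key off
            state = "READY_ALL"                    # return either way
    return keys
```

For instance, pressing "1" and Shift yields the word `0b100000001`; the F0 break code followed by the key's make code clears its bit.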
*Figure 2. FSM and Look-up Tables within the Keyboard States*
2.2 MUSIC MEMORY (by Judy Ho)
2.2.1 Music ROM
A COE file is first created to be loaded into the ROM. The COE file consists of all the notes that need to be played on each beat. Each piece of data is 12 bits, with the first 4 bits representing the duration of the notes, and the last 8 bits representing which notes are supposed to be played. If the bit is high, then the note needs to be played. A ROM is then generated using the Xilinx tools with this preloaded COE file. The data within the ROM can be read out from the data port while an address selection is inputted through another port.
2.2.2 Song Selection Module
The song selection module allows the user to choose which song he wants to play from the ROM. It also keeps track of the start and end addresses of each song since they are all stored in one ROM. This module takes in switches one and zero as inputs to choose one of the four songs in the ROM. Upon selection of a song, its start and end addresses are sent to the display module to begin displaying the corresponding notes from the ROM.
2.3 DISPLAY (by Emily Hwang)
2.3.1 SVGA Module
To create a video image for the game, an SVGA module was created for an 800x600 pixel resolution at a 60 Hz refresh rate, which requires a 40 MHz pixel clock and thus allows 25 ns of computation time per pixel. Based on the XVGA module, the SVGA creates the hcount, vcount, vsync, hsync, and blank signals using the appropriate active video, front porch, sync pulse, and back porch timings for the horizontal pixels and vertical lines. Hcount counts pixels in a horizontal scan line while vcount counts scan lines in a frame. Hsync indicates the end of each horizontal scan line while vsync indicates the end of the frame. Blank indicates whether a pixel value will be displayed or whether it falls off the screen.
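The counter behavior can be sketched in Python. The porch and sync widths below are the standard VESA 800x600@60 parameters (totals 1056 x 628), which may differ slightly from the project's exact constants:

```python
# Assumed VESA 800x600@60 timing for a 40 MHz pixel clock.
H_ACTIVE, H_FP, H_SYNC, H_BP = 800, 40, 128, 88
V_ACTIVE, V_FP, V_SYNC, V_BP = 600, 1, 4, 23
H_TOTAL = H_ACTIVE + H_FP + H_SYNC + H_BP   # 1056 pixels per line
V_TOTAL = V_ACTIVE + V_FP + V_SYNC + V_BP   # 628 lines per frame

def svga_tick(hcount, vcount):
    """One pixel-clock step of the hcount/vcount counters."""
    hcount += 1
    if hcount == H_TOTAL:                    # end of scan line
        hcount = 0
        vcount = (vcount + 1) % V_TOTAL      # end of frame wraps vcount
    blank = not (hcount < H_ACTIVE and vcount < V_ACTIVE)
    return hcount, vcount, blank
```

With these totals, 40 MHz / (1056 * 628) comes out very close to the 60 Hz refresh rate.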
2.3.2 Display FSM Module
The Display module is an FSM itself, which switches between the Start Menu, Playing Display, and the Song Over Display. At power on and reset, the Start Menu is displayed. This is where the user must select their song with switch[1:0] before pressing the Enter button to start the song.
The FSM switches to the Playing Display when the Enter button, Start Song, is asserted. This state contains the actual interactive game with scrolling notes. The Playing Display contains a Continue_Song signal that is asserted low when the incrementing address reaches the end address specified by the Song Selection. Since Continue_Song goes low as soon as the last note is retrieved and placed at the top of the screen, that note still needs time to scroll down to the bottom while the song finishes playing. Therefore, the Display FSM starts a timer when Continue_Song is asserted low and changes to the Song Over state when that timer expires. A second timer is started when transferring into the Song Over state. In the Song Over state, the congratulations banner is shown and the final score is displayed. When the second timer expires, the state transfers back to the Start Menu. The user is also allowed to press the Start Song button again in the Playing Display or the Song Over display to restart the song. This finite state machine is shown in Figure 3.

*Figure 3. Finite State Machine for the Display*
The Start Menu, Playing Display, and Song Over modules create pixels for each stage while the Binary to String creates the score pixels necessary for the Playing Display and Song Over modules. The Display FSM decides which pixels to display depending on its state to create the current output pixel, as shown in Figure 4.

*Figure 4. Block Diagram for Display to Create Pixels*
2.3.2.1 Character String Display Module (C. Terman and I. Chuang) [3]
The Character String Display Module uses a font ROM to display strings. ASCII encoded character strings are displayed in a video window at a specified x,y pixel location. Each character is 8x12, but the pixels are doubled horizontally and vertically in order to magnify the fonts by 2. The Character String Display Module is used by the Start Menu module, Song Over Module, and the Binary to String Converter Module.
2.3.2.2 Binary to String Converter Module
The score display required calculating the number string for each decimal place from a binary number. Therefore, the Binary to String Converter (hex_to_decimal.v) was created to take in a score and display the string in decimal format. The Binary to String Converter works like an odometer and uses a counter for each decimal place. The ones counter increments the number of times specified by the score; whenever a place rolls over from 9 to 0, the next place increments by 1. Each counter is then mapped to a number string from 0 to 9, so each digit of the score is its own Character String Display.
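The odometer-style conversion can be modeled in Python (the function name and digit width are illustrative assumptions):

```python
def score_digits(score, places=4):
    """Odometer-style binary-to-decimal conversion: increment the ones
    counter `score` times, carrying into the next place on each
    9 -> 0 rollover, just as the hardware counters do."""
    digits = [0] * places            # digits[0] is the ones place
    for _ in range(score):
        i = 0
        while True:
            digits[i] += 1
            if digits[i] < 10:
                break
            digits[i] = 0            # rollover: carry into next place
            i += 1
    return digits[::-1]              # most significant digit first
```

Each returned digit then maps to its own character string for display.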
2.3.2.3 Start Menu Module
The Start Menu primarily uses the Character String Display Module borrowed from the course website. For each of the eight lines of text, a Character String Display is instantiated. As a result, however, eight font ROMs are created, which could be improved in the future. The lines of text inform the user of the name of the game, the songs to select, and the instruction to start the song by pressing Enter.
The user must select the songs using the switch[1:0]. Changing the switch will also change the highlight bar, which is a blue colored blob that changes its y position depending on the selected song chosen through switch[1:0]. Figure 5 displays the strings of the Start Menu and the blue highlight bar that switches positions based on the song selected.
*Figure 5. Start Menu Display with Highlighted Selector*
2.3.2.4 Playing Display Module
The Playing Display module displays the scrolling notes to be played, showing at most six notes at once. These notes are received from the Music ROM. The current note received is newly displayed at the top of the screen while the previous notes continuously scroll downwards until they reach the bottom of the screen. Therefore, the Playing Display increments its address from the start address to the end address specified by Song Selection to retrieve the appropriate notes.
Once the user presses Start_Song, the start address and end address specified by Song Selection are kept on the internal signals address_init and address_final. This is done so that if the selection, switch[1:0], is changed in the middle of the song, the change of the start and end addresses from Song Selection does not affect the current address in the song being played. The address is incremented when a new note is entered at the top so that the next time the note must be changed, the next address retrieves the subsequent note from the Music ROM. This is done until the address reaches the end address.
The Playing Display contains its own FSM, which specifies which register the new note enters. Originally there were 6 registers, but with the addition of rhythm, 9 registers were needed in order to continually display a note up to a whole note. Therefore, there are 9 y positions, one for each of the 9 registers that hold information about the last 9 notes of the song being played. These y positions cycle from top to bottom so that when a new register is needed, the register containing the oldest note is reused to hold the latest note of the song. Nine states are used in order to enter the new note into the correct register. For example, when register8 is filled, register7 will always be filled next. A pulse signal, Change_Note, specifies when the next note should be entered into the newly available register.
The signal that determines when the note must be changed is the Change_Note signal. This signal depends on the y position of the latest note and on the state, which specifies which register the next note should enter. For example, in state3 the newest note was entered into register4, which is specified by y4; Change_Note then asks whether y4 is greater than or equal to the standard y position of the 2nd row. This allows the Change_Note signal to depend on the position, and therefore the speed, of the scrolling notes. It in turn allows the implementation of song difficulty, which increases the speed of the notes and shortens the time to accurately play each note. Difficulty is specified by switch[5:3].
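The register recycling and the Change_Note condition can be sketched in Python (names and the row-height threshold are illustrative, not from the Verilog source):

```python
NUM_REGS = 9

def enter_note(state, registers, note):
    """Put the new note into the register the FSM state points at, then
    step to the next state (register8 -> register7 -> ... -> wrap),
    reusing the register that holds the oldest note."""
    registers[state] = note
    return (state - 1) % NUM_REGS, registers

def change_note(y_newest, row_height):
    """Change_Note pulses once the newest note has scrolled a full row,
    freeing the top of the screen for the next note."""
    return y_newest >= row_height
```

Because the pulse is keyed to the newest note's y position, a faster scroll speed automatically produces notes more often, which is how higher difficulty speeds up the song.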
The game’s scoring is implemented so that the more accurate the user is, the more points will be accumulated. The maximum points obtained from a single note occurs when the squares align perfectly. Otherwise, the more the square strays away from the outline, the less points received. One could still receive at least one point if the very edge is aligned with the outline. The Playing Display module therefore outputs a score_worth signal that informs the Game Logic how many points the score is worth at a given moment. This score_worth depends on the y position of the blob that is crossing the matching zone. Since the bottom of the blob, y+height, could cross the y position of the matching zone outline, this is also taken into consideration. The area of overlap is calculated to create the score_worth so that perfectly matching would require the blob area to entirely overlap the outlined blob. The difficulty is also taken into consideration for the score_worth where higher difficulty will allow for more points to be accumulated.
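A minimal sketch of overlap-proportional scoring, assuming equal blob and outline heights and a simple linear difficulty multiplier (the exact point mapping in the Verilog is not reproduced here):

```python
def score_worth(y_blob, y_zone, height, difficulty=1):
    """Score is proportional to the vertical overlap between the
    scrolling blob [y_blob, y_blob+height) and the matching-zone
    outline [y_zone, y_zone+height); perfect alignment gives the
    full overlap, a bare edge touch gives the minimum."""
    overlap = min(y_blob + height, y_zone + height) - max(y_blob, y_zone)
    if overlap <= 0:
        return 0                    # the blob misses the zone entirely
    return overlap * difficulty     # harder settings are worth more
```

For equal heights this reduces to `height - |y_blob - y_zone|`, so points fall off linearly as the blob strays from the outline.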
In the actual display, the scrolling notes use the Blob module and the matching zone uses a Blob Outline module to outline where the user must hit the note when it scrolls through. Figure 6 displays a screenshot of the Playing Display with scrolling notes of various lengths.
2.3.2.4.1 Blob Module (C. Terman) [3]
With eight total notes to represent the eight frets, each row must contain eight blobs, and with 9 possible rows, there are 72 total blobs. The Blob module was modified to either display the blob's color or hide it by displaying the blob as black. The color is on when the corresponding bits of the note are high, which specify that the note should be played. This allows multiple notes to be displayed at once since each blob of a row is independent of the other blobs. However, every blob of a row depends on the same y position that increments to allow for the scrolling effect.
The display of what key is being pressed helps the user know what fret their finger is located on and assists in playing the notes correctly. Again, eight more blobs are created, but with a smaller width and height to fit into the matching zone since the user will keep their eyes in this zone. The color is turned on based on the keys_held signal from the keyboard interface. The visual display can be seen in Figure 7, where the indication appears in the matching zone because some keys are being pressed.
2.3.2.4.2 Streak Module
The addition of rhythm required a visual indication that a note was longer than a quarter note. Therefore, the Streak Module was designed to add length to a note. The streak is created similarly to the blob, with each blob needing its own streak and the streak height used as an input parameter. The length/height depends on the four most significant bits of the note from the Music ROM that were added for rhythmic purposes. Since 1 represents a quarter note, 2 a half note, and so on, no extra length is wanted for a quarter note, so height = (note_type - 1) * 100. The x and y position of the streak represent the lower left corner instead of the upper left corner of the rectangle. This allows the same x and y positions of the blob to be used to specify the length. A streak can be seen in Figure 6 with the longer-length note.
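The streak geometry follows directly from that formula; a small Python sketch (the function name and width are illustrative):

```python
def streak_rect(x, y, note_type, width=20):
    """Compute the streak rectangle for a note. (x, y) is the streak's
    lower-left corner, shared with the blob; a quarter note
    (note_type == 1) gets no streak at all."""
    height = (note_type - 1) * 100   # per the report's height formula
    top = y - height                 # streak extends upward from the blob
    return x, top, width, height
```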
2.3.2.4.3 Blob Outline Module
The Blob Outline module takes the x and y position to make a square where the x and y position represent the locations of the top left corner of the square. As stated in the name, only the outline of the square has the color of the pixel specified by the color input.
The Blob Outline Module is used for the matching zone area to indicate when to play the notes and to indicate when the notes were hit and therefore being played. Eight outlines were needed for the eight different notes of the matching zone while sixteen outlines were necessary for showing that the note was played.
The visual indication of correctly hitting a note is shown by two outlines surrounding the matching-zone outline of the corresponding note. The color is either black or the corresponding column color depending on the notes_hit signal, which is essentially the note sent to the audio. The notes_hit signal contains eight bits, each representing one of the eight positions, so if a bit is high, that note is being played. The user must also hold the fret buttons (number keys) in order for the note to play its appropriate length, as it would work with a guitar. The user does not have to strum the whole time, but the number keys must continue to match in order to continually play the note, especially when the notes are more than a quarter note long.
2.3.2.5 Song Over Module
The Song Over Module congratulates the user and displays the final score. The congratulations banner uses the Character String Display module again. Its x position decrements so that the congratulations message scrolls from right to left. The x position is reinitialized to the rightmost pixel (799 for the 800x600 resolution) when Start_Song or Reset is asserted, meaning the user will play another song before the Song Over display is shown again. The x position decrements by three pixels until it reaches a specified x position, which stops the banner in the middle of the screen. The score is also constantly displayed in the same manner as in the Playing Display. Figure 8 displays the Song Over Display of the game.
*Figure 8. Song Over Display*
2.4 GAME LOGIC (by Emily Hwang)
2.4.1 Game Logic Module
The Game Logic acts as a segue between the Display module and the Audio module. It determines the note entered into the audio module's lookup table and also acts as a scorekeeper. The Timer and Divider modules assist in ensuring the accurate behavior of the Game Logic.
The Game Logic determines the note to be sent to the audio module by comparing the expected note to be played to the note being played. The output note is based on the input of what keys the user is pressing and the note that is expected to be in the designated matching area of the display. The user must press the shift button to enter the notes of the pressed frets. Therefore, the Game Logic module only computes the note to send and the score and starts the timer for the length of the note at the moment the shift button is pressed.
The Game Logic constantly outputs the “off” sound until the user presses the shift key, at which point the output will be either the correct expected note or the buzzer note. All eight buttons are taken into consideration when checking whether the user pressed the correct keys. The keys being held must match the expected note to output the corresponding note; otherwise, the buzzer note is sounded. The user must also hold down the fret buttons for the note to continue playing if it is a long note, to simulate playing a guitar.
The Game Logic contains a timer in order to send the correct note or the buzzer note for the amount of time the note is specified to last. Currently, there exist quarter notes, half notes, three-quarter notes, and whole notes. The Game Logic sets the value to count down from based on the type of note, which is included in the information of expected_note, and on the difficulty of the game, which establishes the speed of the scrolling notes. Therefore, in easy mode (difficulty = 1), the quarter notes are longer than the quarter notes at difficulty = 3, for example. The quarter, half, three-quarter, and whole notes are also relative to each other in length, so that the half note is twice as long as the quarter note at the same difficulty.
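One way to sketch this in Python, assuming the note-type field encodes 1 = quarter through 4 = whole; the base tick count and the exact difficulty scaling are illustrative assumptions, not the report's constants:

```python
# Assumed ticks a quarter note lasts at the easiest setting.
BASE_QUARTER_TICKS = 4

def note_ticks(note_type, difficulty):
    """Countdown value loaded into the timer: note lengths stay in a
    1:2:3:4 ratio (quarter : half : three-quarter : whole), while a
    higher difficulty shortens every note proportionally."""
    return note_type * BASE_QUARTER_TICKS // difficulty
```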
The Game Logic acts as a scorekeeper by incrementing the total score by the points determined by the Playing Display module through score_worth. At the moment that the user presses the shift key to assert the desire to play the note, the score is incremented depending on the accuracy of the user’s playing skills. This score is sent back to the Display module to be displayed during the game play and also at the end of the song.
2.4.1.1 Divider Module
The Divider was created to convert the 40 MHz clock into the Hz enable signal, which pulses once every quarter second. To create a one-second (1 Hz) enable using a 40 MHz clock, the divider would count to 40,000,000 before sending a high signal; since a shorter period was necessary for the quarter notes, a quarter-second pulse was created by counting to 10,000,000 instead. This enable is used by the Timer module to count down its timer value every quarter second until the value reaches zero. To increase the accuracy of the Timer, the internal counter is restarted when Start_Timer is asserted, so that the Hz enable signal pulses exactly a quarter second after the assertion.
2.4.1.2 Timer Module
The Timer counts down, initially starting at the value specified by the Game Logic module when the signal Start_Timer is asserted. When the timer is started, the Expired signal is low. The counter counts down every quarter second when the Hz enable is asserted. When the counter reaches zero, the Expired signal is asserted high and remains high until the timer is started again. The timer is also used by the Display FSM to change states after a certain amount of time.
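The Divider/Timer pair can be modeled in Python (the function name is illustrative; the count of 10,000,000 is the report's quarter-second threshold at 40 MHz):

```python
QUARTER_SEC_COUNT = 10_000_000   # 40 MHz clock cycles per quarter second

def run_timer(start_value, clock_cycles):
    """Model of the Divider + Timer pair after Start_Timer resets the
    divider's internal counter: one enable pulse per 10,000,000 clocks,
    the timer decrementing on each pulse, Expired raised at zero."""
    pulses = clock_cycles // QUARTER_SEC_COUNT   # enables seen so far
    value = max(start_value - pulses, 0)
    return value, value == 0                     # (remaining, Expired)
```

For example, a timer loaded with 2 expires exactly 20,000,000 clock cycles (half a second) after Start_Timer.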
2.5 AUDIO (by Judy Ho)
2.5.1 Final Audio Module
The Final Audio module instantiates the Direct Digital Synthesizer and selects the appropriate tones to output. There is a look-up table in this module that selects the samples on the different channels and drives them to separate registers. The sample coming from the DDS is 17 bits but is sign-extended during a pipelining stage. If a key is pressed, then the appropriate samples are selected to be outputted; if a key isn't pressed, then zeroes are selected. In the end, the samples for all 8 tones are added together and outputted to the AC97 module. The 8 tones had to be added in a pipelined manner in order for the addition to be completed in one cycle.
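The select-then-sum behavior can be modeled in Python: an adder-tree sum (8 -> 4 -> 2 -> 1, one level per pipeline stage) of sign-extended 17-bit samples gated by the held keys. Function names are illustrative, not from the Verilog source:

```python
def sign_extend(sample, bits=17):
    """Interpret a `bits`-wide two's-complement sample as a signed int."""
    return sample - (1 << bits) if sample & (1 << (bits - 1)) else sample

def mix_tones(samples, keys):
    """Select each channel's sample only when its key bit is held
    (else zero), then sum pairwise as a pipelined adder tree would."""
    vals = [sign_extend(s) if (keys >> i) & 1 else 0
            for i, s in enumerate(samples)]
    while len(vals) > 1:                  # one 'pipeline stage' per level
        vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
    return vals[0]
```

Splitting the sum into pairwise stages is what lets the hardware meet timing, since each stage only adds two values per cycle.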
2.5.1.1 Direct Digital Synthesizer
The DDS is generated using the Xilinx tools available. This is a ROM that already contains all the data samples necessary to generate a tone. It operates on a 48 kHz clock, and outputs one continuous stream of data samples on 9 different channels, one for each of the 8 tones in the octave as well as the buzzer tone. The 48 kHz clock is split equally between the 9 channels, so no output frequency can exceed 5.3 kHz.
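The per-channel rate limit quoted above is just the 48 kHz stream divided over the nine time-multiplexed channels; a one-line check:

```python
SAMPLE_RATE = 48_000   # DDS update rate in Hz
CHANNELS = 9           # 8 octave tones plus the buzzer tone

def channel_sample_rate():
    """The 48 kHz stream is split equally over 9 channels, capping
    each channel's usable output frequency at about 5.3 kHz."""
    return SAMPLE_RATE / CHANNELS
```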
2.5.2 AC97 Module (I. Chuang) [1]
The AC97 module takes in the data samples from the Final Audio module and reconstructs a sine wave based on those samples. This is then outputted to the speakers.
2.5.3 AC97 Commands Module (I. Chuang) [1]
The AC97 Commands module generates the command addresses necessary for the AC97 module to operate. It also controls the volume/amplitude of the sine wave being outputted. The data flow between AC97 and AC97 Commands is shown in Figure 9.
*Figure 9. Data flow to and from AC97 and AC97 Commands modules*
3. Testing and Debugging
Keyboard (JH)
The method for testing the keyboard module was fairly simple. The state of the 8 keys was outputted to the LED lights, with one key corresponding to one LED, and the lights were observed as the keys were pressed. If a key was detected, its LED light turned off. If this did not happen, then it was a matter of making sure the data was being pulled off the FIFO properly.
Music Memory (JH)
In order to make sure that the data was properly loaded into the ROM, a 1Hz enable was used to make a counter. The counter was used as an input for the address. The data was displayed on the LEDs. Therefore, as the counter incremented, the next piece of data would be pulled out of the ROM and displayed on the LEDs every second.
The song selection module was tested by using switches one and zero for the user input as planned, and displaying the start and end addresses on the hex display. For a while, only the end address showed up; after careful inspection, it became clear that the number of bits for the zeroes added to the hex display input was not specified. Once that was cleared up, the start and end addresses showed up properly.
SVGA (EH)
In order to create an 800x600 resolution at a 60 Hz refresh rate, a 40 MHz pixel clock was necessary. Therefore, an SVGA module was made for the 800x600 resolution and tested with the Pong display from one of the previous labs, which had been shown to work at 1024x768 resolution. The Pong display at 800x600 resolution verified that both the 40 MHz pixel clock and the SVGA were running correctly.
Display (EH)
After the SVGA worked correctly, the actual display could be tested. Each component and subsystem of the Display module could only be tested by looking at the display itself.
Originally, the Playing Display was the entire Display module, since it is the actual game itself and the most important aspect of the display. As a simple first test, the outlines of the matching zone were checked to verify that the colors in each column and the outline itself were correct. An important necessary feature was the scrolling of the notes, which was first tested without a ROM and without using addresses to retrieve new notes; notes specified by the switches were used as the input notes. Each note was tested to verify its placement in one of the eight horizontal positions on the screen. When this worked smoothly, the ROM was used to verify the address-based process of retrieving notes. After scrolling worked, other features could be added, such as increased speeds for increased difficulty and multiple simultaneous notes using a song with multiple notes. Other visual aids included the display of blobs in the matching zone to indicate which keys the user is pressing and the display of outlines around the matching outlines to indicate that a note is being hit.
After the Playing Display was created, a Start Menu and a Song Over Display were created. These used the Character String Display module and a font ROM previously created by C. Terman and I. Chuang. The Start Menu displays lines of text that include the title, songs, and instructions on how to play. The Song Over Display shows a “Congratulations” string that scrolls at the end of the song and also a score display.
The score display required the binary/hex score to be displayed in decimal values. Therefore, each digit of the decimal needed to be computed and mapped onto a number string. The Binary to String module was also tested through visual testing and knowledge of the score through the hex display on the labkit.
The problems that occurred were that the pixels sometimes had glitches and, subsequently, some colors were not in the correct locations. The only testing done was to change whether the signals were OR'ed or XOR'ed together. With the correct combination of OR'ing the similar pixels together but XOR'ing the different pixels, such as outline pixels vs. blob pixels, the display soon became free of glitches.
Game Logic (EH)
The Game Logic was first tested individually, then tested while integrated into the system. Initially, without a ROM, one "dummy" note was used as the constant expected note. Without the keyboard, the labkit switches were used to denote the number keys pressed by the user, and one of the buttons acted as the shift key. The output note that would be sent to the audio was displayed depending on the user inputs, which were verified to be correct: the buzzer note was sent on incorrect matches, the expected note on correct matches, and the "off" note in the remaining situations.
When the ROM and keyboard were working, the Game Logic could be tested more fully. Without the audio, the LED lights were used to represent the new note created by the module. The light would light up if it was being sent. Since there are eight notes, the most significant bit was displayed in the hex display. This bit represented the buzzer note. Therefore, it was verified that pressing incorrect notes sent the buzzer note code to the audio.
Divider and Timer (EH)
The Divider and Timer were created in lab 3. The Divider was originally tested by creating a blinking LED light that would blink every second to verify a one Hz enable. The Timer was originally tested with the hex display. The hex displayed whether the system was in countdown or not along with the initial countdown value. The blinking LED light was also displayed to help count the number of seconds that the countdown should count. One of the buttons was used to start the timer. Different time values were set as initial countdown values. The status of the expired signal verified that the Timer module timed the correct time delay.
Audio (JH)
The tones had to be tweaked by listening to them after compilation. The keys output from the keyboard module were fed into the audio module. This way, as the keys were pressed, the corresponding note would be output to the speakers. At first, 9 different tones were created by instantiating 9 different ROMs that took in 9 different phase increments. The sign extension on each of these frequency outputs had to be done correctly.
Initially, there was a lot of static because the data was output directly from the ROM to the AC97. The outputs had to be pipelined through a series of registers to make sure the data was stable by the time it was actually output. In order to get multiple notes to play at the same time, all outputs from the ROMs had to be added together. With this method, only some of the notes would output correctly, while others would be static. The suspicion was that the addition was not completing quickly enough to be output correctly by the end of the clock cycle. To confirm this, one of the notes that was static was output alone to the AC97 without any addition. When this output sounded fine, it was evident that the problem was in the addition. As a result, the addition was pipelined so that it would finish in time to be output.
After this, a single ROM with 9 different channels was generated to save memory. Some
of the debugging for this ROM was similar to the other ones, such as making sure that the sign
extension was done correctly, and that the data was pipelined out of the ROM. The number of
bits for the output from the DDS also had to be changed from 8 bits to 20 bits to make sure it was
audible.
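The per-channel DDS behaviour described above can be sketched in software. The 256-entry sine table, 16-bit phase accumulator, and phase-increment values are illustrative assumptions; in the actual hardware the summation and ROM reads are pipelined across clock cycles:

```python
import math

# Hypothetical 256-entry signed sine ROM (8-bit samples)
SINE_ROM = [round(127 * math.sin(2 * math.pi * i / 256)) for i in range(256)]

def dds_samples(phase_increments, n_samples):
    """Sum one sine channel per active note, one phase accumulator each.
    phase_increments: one tuning word per pressed key (values are made up)."""
    phases = [0] * len(phase_increments)
    out = []
    for _ in range(n_samples):
        sample = 0
        for ch, inc in enumerate(phase_increments):
            sample += SINE_ROM[phases[ch] >> 8]        # top 8 bits address the ROM
            phases[ch] = (phases[ch] + inc) & 0xFFFF   # 16-bit phase wrap-around
        out.append(sample)  # in hardware this sum is sign-extended for the AC97
    return out
```

A larger phase increment steps through the sine table faster, producing a higher pitch; summing channels is exactly the addition that had to be pipelined.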
4. Conclusion
The overall project was completely functional in the end thanks to good design and organized time management. The design of the signal transfers was important in determining the simplicity of each module. The block diagram was updated several times before becoming the simple implementation that integrated the project so well.
At first, the idea of the keyboard seemed very difficult because it wouldn't be easy to
determine whether or not a key was actually pressed in conjunction with the shift key. If it were
implemented that way, it would have required a more complicated FSM. However, after
reviewing this idea, the implementation was changed to have 9 bits represent whether each of the
nine keys were pressed or not and to have the Game Logic module do the actual check to see if
the shift key had been pressed in its computations.
Another similar change that improved the design was the transfer of the note codes
between the Music ROM and Display, the Display and Game Logic, and between the Game
Logic and Audio. Originally the eight notes and the buzzer note would be represented as a 4-bit
code to represent each tone. However, generalizing the signals proved to be easier and allowed
for additional features such as adding multiple notes. The 4-bit code was changed to an 8-bit
code, each bit representing each of the 8 frequencies. The Music ROM did not need to carry
buzzer tones since it only contains the correct notes of each song. When rhythm was later
implemented, the Music ROM also carried this information by adding 4 more bits to represent
whether the notes were quarter, half, three-quarter, or whole notes. The Display also only cared
about which notes were enabled, so it could easily look at each of the eight bits for each of its
eight positions of the corresponding notes. The same note code was then sent to the Game Logic
where the rhythm is used for its timer of how long it should send the note to the Audio. The
transfer of notes between the Game Logic and Audio is a 9-bit signal to include the eight
frequencies in each of the eight bits and also an additional bit for the buzzer frequency.
When no frequencies should be played, the Game Logic sends a 9-bit code of all zeros. This representation of the notes allowed for simple communication between the
main modules.
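A small sketch of the 9-bit note code described above (function and parameter names are illustrative; the report does not give the actual signal names):

```python
def encode_notes(pressed_notes, buzzer=False):
    """Build the 9-bit note code: bits 0-7 each enable one of the eight
    frequencies, and bit 8 enables the buzzer tone."""
    code = 0
    for note in pressed_notes:  # each note is an index 0..7
        code |= 1 << note
    if buzzer:
        code |= 1 << 8
    return code
```

An all-zero code then naturally represents silence, and multiple simultaneous notes simply set multiple bits.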
The design was improved by decreasing the number of ROMs instantiated. A new direct digital synthesizer ROM with 9 channels was created so that each of the nine frequencies was on a separate channel instead of creating 9 ROMs with one channel each. If time had permitted, an enhancement to the Character String Display module would have been to create one ROM instead of instantiating multiple ROMs for each string; the implementation in the project, however, used multiple ROMs. The number of ROMs was nevertheless decreased in the
score display. Since both the Playing Display module and the Song Over Display module display the score and use the Binary to String Converter, where each place in the decimal number is its own string and therefore its own ROM, it was important that the Binary to String Converter was instantiated only once. Therefore, the Display FSM instantiated the score display and showed it in the Playing Display and Song Over Display when necessary, using a multiplexer to determine the position of the score. This change lowered the number of ROMs by six.
Of the Game Logic and Display, the Playing Display took the longest time because many features could be added. Of these additional features, displaying the binary score as a decimal string was difficult. The design could be improved to decrease the number of ROMs used, which would be a good thing to focus on when using strings in the final project.
Something that would have made the project go a lot faster was being aware of the sign extensions and remembering that numbers output to the AC97 are signed. The bit extensions caused other problems when old signals grew in width due to additional features such as adding the rhythm. If not all signals are updated, the new bits will not propagate through the entire system and bugs will occur. Finding such bugs took time, especially because the code had to be recompiled, which is slow, only to find that the new feature was not functional. After finding a bug, it was unfortunate to discover that some signals had simply not been updated to the extended number of bits, which is a simple change. Another simple problem that would have made the project go faster is being aware of how the clocks are generated through multiplication and division. I was unaware that the power of the labkit must be turned off to reset the clock when the 65 MHz clock from lab 5 was tested and the 40 MHz clock was created for the smaller resolution. When reprogramming the FPGA with the new clock, there would be errors. These errors were simply fixed by power-cycling the labkit, but this fact was not known, and time was spent trying to create the 800x600 pixel resolution when it was correct from the start.
The overall project was completely functional in the end, but there were some ways that it could have been improved if there had been more time. More features could have been added, such as a recording mode, which would allow the player to record his own songs and have them play back; a recorded song could also become another one of the challenge songs for the game. Another aspect that could possibly have been added was the ability to play multiple notes at a time while also having them be of different durations.
The Impact of Mislabelling on the Performance and Interpretation of Defect Prediction Models
Chakkrit Tantithamthavorn†, Shane McIntosh‡, Ahmed E. Hassan‡, Akinori Ihara†, Kenichi Matsumoto†
†Graduate School of Information Science, Nara Institute of Science and Technology, Japan.
‡School of Computing, Queen’s University, Canada.
Abstract—The reliability of a prediction model depends on the quality of the data from which it was trained. Therefore, defect prediction models may be unreliable if they are trained using noisy data. Recent research suggests that randomly-injected noise that changes the classification (label) of software modules from defective to clean (and vice versa) can impact the performance of defect models. Yet, in reality, incorrectly labelled (i.e., mislabelled) issue reports are likely non-random. In this paper, we study whether mislabelling is random, and the impact that realistic mislabelling has on the performance and interpretation of defect models. Through a case study of 3,931 manually-curated issue reports from the Apache Jackrabbit and Lucene systems, we find that: (1) issue report mislabelling is not random; (2) precision is rarely impacted by mislabelled issue reports, suggesting that practitioners can rely on the accuracy of modules labelled as defective by models that are trained using noisy data; (3) however, models trained on noisy data typically achieve 56%-68% of the recall of models trained on clean data; and (4) only the metrics in top influence rank of our defect models are robust to the noise introduced by mislabelling, suggesting that the less influential metrics of models that are trained on noisy data should not be interpreted or used to make decisions.
I. INTRODUCTION
Defect models, which identify defect-prone software modules using a variety of software metrics [15, 39, 45], serve two main purposes. First, defect models can be used to predict [1, 11, 17, 27, 33, 34, 36, 42, 51] modules that are likely to be defect-prone. Software Quality Assurance (SQA) teams can use defect models in a prediction setting to effectively allocate their limited resources to the modules that are most likely to be defective. Second, defect models can be used to understand [10, 30, 32, 33, 46, 47] the impact that various software metrics have on the defect-proneness of a module. The insights derived from defect models can help software teams to avoid pitfalls that have often led to defective software modules in the past.
The accuracy of the predictions and insights derived from defect models depends on the quality of the data from which these models are trained. Indeed, Mockus argues that poor data quality can lead to biased conclusions [31]. Defect models are trained using datasets that connect issue reports recorded in an Issue Tracking System (ITS) with the software modules that are impacted by the associated code changes that address these issue reports. The code changes are in turn recorded in a Version Control System (VCS). Thus, the quality of the data recorded in the ITS and VCS impacts the quality of the data used to train defect models [2, 5, 6, 19, 37].
Recent research shows that the noise that is generated by issue report mislabelling, i.e., issue reports that describe defects but were not classified as such (or vice versa), may impact the performance of defect models [26, 41]. Yet, while issue report mislabelling is likely influenced by characteristics of the issue itself — e.g., novice developers may be more likely to mislabel an issue than an experienced developer — the prior work randomly generates mislabelled issues.
In this paper, we set out to investigate whether mislabelled issue reports can be accurately explained using characteristics of the issue reports themselves, and what impact a realistic amount of noise has on the predictions and insights derived from defect models. Using the manually-curated dataset of mislabelled issue reports provided by Herzig et al. [19], we generate three types of defect datasets: (1) realistic noisy datasets that contain mislabelled issue reports as classified manually by Herzig et al., (2) random noisy datasets that contain the same proportion of mislabelled issue reports as contained in the realistic noisy dataset, however the mislabelled issue reports are selected at random, and (3) clean datasets that contain no mislabelled issues. Using these datasets, we address the following three research questions:
(RQ1) Is mislabelling truly random?
Issue report mislabelling is not random. Our models can predict mislabelled issue reports with a mean F-measure that is 4-34 times better than that of random guessing. The tendency of a reporter to mislabel issues in the past is consistently the most influential metric used by our models.
(RQ2) How does mislabelling impact the performance of defect models?
We find that the precision of our defect models is rarely impacted by mislabelling. Hence, practitioners can rely on the accuracy of modules labelled as defective by defect models that are trained using noisy data. However, cleaning the data prior to training the defect models will likely improve their ability to identify all defective modules.
(RQ3) How does mislabelling impact the interpretation of defect models?
We find that 80%-85% of the metrics in the top influence rank of the clean models also appear in the top influence rank of the noisy models, indicating that the most influential metrics are not heavily impacted by issue report mislabelling. On the other hand, as little as 18% of the metrics in the second and third influence rank of the clean models appear in the same rank in the noisy models, which suggests that the less influential metrics are more unstable.
Furthermore, we find that randomly injecting mislabelled defects tends to overestimate the impact that mislabelling truly has on model performance and model interpretation.
Paper organization
The remainder of this paper is organized as follows. Section II situates this paper with respect to the related work. Section III discusses the design of our case study, while Section IV presents the results with respect to our three research questions. Section V discloses the threats to the validity of our work. Finally, Section VI draws conclusions.
II. Related Work & Research Questions
Given a software module, such as a source code file, a defect model classifies it as either likely to be defective or clean. Defect models do so by modelling the relationship between module metrics (e.g., size and complexity), and module class (defective or clean).
As shown in Figure 1, module metrics and classes are typically mined from historical repositories, such as ITSs and VCSs. First, issue reports, which describe defects, feature requests, and general maintenance tasks, are extracted from the ITS. Next, the historical code changes that are recorded in a VCS are extracted. Finally, these issue reports are linked to the code changes that have been performed in order to address them. For example, a module’s class is set to defective if it has been affected by a code change that addresses an issue report that is classified as a defect.
Various data quality issues can arise when constructing defect prediction datasets. Specifically, prior work has investigated data quality issues with respect to the linkage process and the issue reports themselves. We describe the prior work with respect to each data quality issue below.
A. Linkage of Issue Reports with Code Changes
The process of linking issue reports with code changes can generate noise in defect prediction datasets, since the linkage process often depends on manually-entered links that are provided by developers. Bachmann et al. find that the issue reports of several defects are not identified in the commit logs [5], and thus are not visible to the automated linking tools that are used to extract defect datasets. Wu et al. [50] and Nguyen et al. [37] use the textual similarity between issue reports and version control logs to recover the missing links between the ITS and VCS repositories.
The noise generated by missing links in defect prediction datasets introduces bias. Bird et al. find that more experienced developers are more likely to explicitly link issue reports to the corresponding code changes [6]. Nguyen et al. find that such biases also exist in commercial datasets [38], which were suspected to be “near-ideal.” Rahman et al. examined the impact of bias on defect models by generating artificially biased datasets [40], reporting that the size of the generated dataset matters more than the amount of injected bias.
Linkage noise and bias are addressed by modern tools like JIRA1 and IBM Jazz2 that automatically link issue reports with code changes. Nevertheless, recent work by Nguyen et al. shows that even when such modern tools are used, bias still creeps into defect datasets [38]. Hence, techniques are needed to detect and cope with biases in defect prediction datasets.
B. Mislabelled Issue Reports
Even if all of the links between issue reports and code changes are correctly recovered, noise may creep into defect prediction datasets if the issue reports themselves are mislabelled. Aranda and Venolia find that ITS and VCS repositories are noisy sources of data [3]. Antoniol et al. find that textual features can be used to classify issue reports [2], e.g., the term “crash” is more often used in the issue reports of defects than other types of issue reports. Herzig et al. find that 43% of all issue reports are mislabelled, and this mislabelling impacts the ranking of the most defect-prone files [19].
Mislabelled issue reports generate noise that impacts defect prediction models. Yet, little is known about the nature of mislabelling. For example, do mislabelled issue reports truly appear at random throughout defect prediction datasets, or are they explainable using characteristics of code changes and issue reports? Knowledge of the characteristics that lead to mislabelling would help researchers to more effectively filter (or repair) mislabelled issue reports in defect prediction datasets.
---
1https://issues.apache.org/jira/
2http://www.jazz.net/
Kim et al. find that defect models are considerably less accurate when they are trained using datasets that have a 20%-35% mislabelling rate [26]. Seiffert et al. conduct a comprehensive study [44], and the results confirm the prior findings of Kim et al. [26].
However, prior work assumes that issue report mislabelling is random, which is not necessarily true. For example, novice developers may be more likely to mislabel an issue report than experienced developers. Hence, we set out to address the following research question:
(RQ1) Is mislabelling truly random?
Since mislabelled issue reports generate noise in defect datasets, they likely impact the performance of defect models as well. Hence, we formulate the following research question:
(RQ2) How does mislabelling impact the performance of defect models?
In addition to being used for prediction, defect models are also used to understand the characteristics of defect-prone modules. Mockus et al. study the relationship between developer-centric measures of organizational change and the probability of customer-reported defects in the context of a large software system [32]. Cataldo et al. study the impact of software and work dependencies on software quality [10]. Shihab et al. study the characteristics of high-impact and surprise defects [47]. McIntosh et al. study the relationship between software quality and modern code review practices [30]. Such an understanding of defect-proneness is essential to chart quality improvement plans.
Mislabelled issue reports likely impact the interpretation of defect models as well. To investigate this, we formulate the following research question:
(RQ3) How does mislabelling impact the interpretation of defect models?
III. Case Study Design
In this section, we outline our criteria for selecting the studied systems, and our data extraction and analysis approaches.
A. Studied Systems
To address our research questions, we need a dataset of mislabelled issue reports. In selecting the studied systems, we identified two important criteria that needed to be satisfied:
- Criterion 1 — Mislabelled issue report oracle: In order to study the impact that mislabelling has on defect prediction models, we need an oracle of which issues have been mislabelled.
- Criterion 2 — Issue report linking rate: The issue reports for each studied system must be traceable, i.e., an issue report must establish a link to the code change that addresses it. Systems with low rates of traceable issue reports will introduce too many missing links [5, 6], which may impact the performance of our defect models [40].
To satisfy criterion 1, we began our study using the corpus of mislabelled issue reports that was manually-curated by Herzig et al. [20]. Table I provides an overview of the five systems in the corpus.
To satisfy criterion 2, we first select the set of systems in the corpus of Herzig et al. that use the JIRA ITS. JIRA explicitly links code changes to the issue reports that they address. Since Rhino and Tomcat5 do not use JIRA, we removed them from our analysis. Next, we discard systems that do not have a high linkage rate. We discard HTTPClient, since fewer than half of the issue reports could be linked to the code changes that address them.
Table I shows that the Jackrabbit and Lucene systems satisfied our criteria for analysis. Jackrabbit is a digital content repository that stores versioned entries in a hierarchy. Lucene is a library offering common search indexing functionality.
B. Data Extraction
In order to produce the datasets necessary for our study, we first need to extract data from the ITS of each studied system. Next, we need to link the extracted ITS data with entries from the respective VCS repositories, as well as with the oracle of mislabelled issue reports. Figure 2 provides an overview of our data extraction approach, which is further divided into the four steps that we describe below.
(DE 1) Link issue reports to code changes. We first extract the issue reports from the ITS of each studied system. Then, we extract the references to code changes from those issue reports. Finally, we extract the commit information for the referenced code changes from the VCS.
---
1 http://jackrabbit.apache.org/
2 http://lucene.apache.org/
TABLE II
FACTORS USED TO STUDY THE NATURE OF MISLABELLED ISSUE REPORTS (RQ1).
<table>
<thead>
<tr>
<th>Metrics</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Diffusion Dimension</strong></td>
<td></td>
</tr>
<tr>
<td># Files, # Components, # Subsystems</td>
<td>The number of unique files, components, and subsystems that are involved in the code changes that address an issue report.</td>
</tr>
<tr>
<td>Entropy</td>
<td>The dispersion of a change across the involved files.</td>
</tr>
<tr>
<td><strong>Size Dimension</strong></td>
<td></td>
</tr>
<tr>
<td># Commits</td>
<td>The number of commits made to address an issue report.</td>
</tr>
<tr>
<td>Churn</td>
<td>The sum of the added and removed lines in the code changes made to address an issue report.</td>
</tr>
<tr>
<td><strong>History Dimension</strong></td>
<td></td>
</tr>
<tr>
<td>Reporter tendency</td>
<td>The proportion of prior issue reports that were previously filed by the reporter of this issue and that were mislabelled.</td>
</tr>
<tr>
<td>Code tendency</td>
<td>For each file involved in the code changes that address an issue report, we calculate the proportion of its prior issue reports that were mislabelled. For each issue report, we select the maximum of the proportions of each of its files.</td>
</tr>
<tr>
<td><strong>Communication Dimension</strong></td>
<td></td>
</tr>
<tr>
<td>Discussion length</td>
<td>The number of comments that were posted on the issue report.</td>
</tr>
</tbody>
</table>
(DE 2) Integrate oracle of mislabelled issue reports. We link the oracle of mislabelled issue reports with our defect datasets for two purposes. First, we record the mislabelled issues in order to train models that predict and explain the nature of mislabelling (cf. RQ1). Second, we use the oracle to correct mislabelled issues in order to produce clean (mislabel-free) versions of our defect prediction datasets. We use this data to study the impact of mislabelling on the performance and interpretation of our models (cf. RQ2 and RQ3).
(DE 3) Calculate metrics for the prediction of mislabelled issue reports. In order to address RQ1, we train models that classify whether an issue report is mislabelled or not. Table II shows the nine metrics that we use to predict whether an issue report is mislabelled or not. These nine metrics capture four dimensions of an issue report that we briefly describe below.
**Diffusion** metrics measure the dispersion of a change across modules. Since broadly-dispersed code changes may contain several different concepts, they are likely difficult to accurately label. We use four metrics to measure diffusion as described below. The # Subsystems, # Components, and # Files metrics measure the spread of a change at different granularities. For example, for a file `org.apache.lucene/index/values/Reader.java`, the subsystem is `org.apache.lucene.index` and the component is `org/apache/lucene/index/values`. We count the number of unique subsystems, components, and files that are modified by a change by analyzing the file paths as described above. We also measure the entropy (i.e., disorder) of a change. We use the entropy definition of prior work [17, 23], i.e., the entropy of a change $C$ is $H(C) = - \sum_{k=1}^{N} (p_k \times \log_2 p_k)$, where $N$ is the number of files included in a change, $p_k$ is the proportion of change $C$ that impacts file $k$. The larger the entropy value, the more broadly that a change is dispersed among files.
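The entropy definition above can be sketched directly, with each file's share of the changed lines standing in for p_k (how the proportions are measured in practice is our assumption):

```python
import math

def change_entropy(lines_per_file):
    """H(C) = -sum(p_k * log2(p_k)), where p_k is the proportion of the
    change C that impacts file k."""
    total = sum(lines_per_file)
    return -sum((n / total) * math.log2(n / total)
                for n in lines_per_file if n > 0)
```

A change spread evenly over two files gives entropy 1.0; a change concentrated in a single file gives 0.0.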
Size metrics measure how much code change was required to address an issue report. Similar to diffusion, we suspect that larger changes may contain more concepts, which likely makes the task of labelling more difficult. We measure the size of a change by the # commits (i.e., the number of changes in the VCS history that are related to this issue report) and the churn (i.e., the sum of the added and removed lines).
History metrics measure the tendency of files and reporters to be involved with mislabelled issue reports. Files and reporters that have often been involved with mislabelled issue reports in the past are likely to be involved with mislabelled issue reports in the future. The reporter tendency is the proportion of prior issue reports that were created by a given reporter and were mislabelled. To calculate the code tendency for an issue report \( r \), we first compute the tendency of mislabelling for each involved file \( f_k \), i.e., the proportion of prior issue reports that involve \( f_k \) that were mislabelled. We select the maximum of the mislabelling tendencies of \( f_k \) to represent \( r \).
Communication metrics measure the degree of discussion that occurred on an issue report. Issue reports that are discussed more are likely better understood, and hence are less likely to be mislabelled. We represent the communication dimension with the discussion length metric, which counts the number of comments posted on an issue report.
(DE 4) Calculate metrics for the prediction of defect-prone files. In order to address RQ2 and RQ3, we train defect models that identify defect-prone files. Table III shows the ten metrics that are spread across three dimensions that we use to predict defect-prone files. These metrics have been used in several previous defect prediction studies [4, 7, 24, 33–35, 40, 46, 49]. We briefly describe each dimension below.
Process metrics measure the change activity of a file. We count the number of commits, lines added, lines deleted, and churn to measure change activity of each file. Similar to Rahman et al. [40], we normalize the lines added and lines deleted of a file by the total lines added and lines deleted.
Developer metrics measure the size of the team involved in the development of each file [7]. Active developers counts the developers who have made changes to a file during the studied release period. Distinct developers counts the developers who have made changes to a file up to (and including) the studied release period. Minor developers counts the number of developers who have authored less than 5% of the changes to a file in the studied release period.
Ownership metrics measure how much of the change to a file has been contributed by a single author [7]. Ownership ratio is the proportion of the changed lines to a file that have been contributed by the most active author. We measure the experience of an author using the proportion of changed lines in all of the system files that have been contributed by that author. Owner experience is the experience of the most active author of a file. Committer experience is the geometric mean of the experiences of the authors that contributed to a file.
C. Data Analysis
We train models using the datasets that we extracted from each studied system. We then analyze the performance of these models, and measure the influence that each of our metrics has on model predictions. Figure 2 provides an overview of our data analysis approach, which is divided into four steps. We describe each step below.
(DA 1) Generate bootstrap datasets. In order to ensure that the conclusions that we draw about our models are robust, we use the bootstrap resampling technique [12]. The bootstrap randomly samples \( K \) observations with replacement from the original dataset of size \( K \). Using the bootstrap technique, we repeat our experiments several times, i.e., once for each bootstrap sample. We use the implementation of the bootstrap algorithm provided by the `boot` R package [9].
Unlike k-fold cross-validation, the bootstrap technique fits models using the entire dataset. Cross-validation splits the data into \( k \) equal parts, using \( k - 1 \) parts for fitting the model, setting aside 1 fold for testing. The process is repeated \( k \) times, using a different part for testing each time. Notice, however, that models are fit using \( k - 1 \) folds (i.e., a subset) of the dataset. Models fit using the full dataset are not directly tested when using k-fold cross-validation. Previous research demonstrates that the bootstrap leads to considerably more stable results for unseen data points [12, 16]. Moreover, the use of the bootstrap is recommended for highly-skewed datasets [16], as is the case in our defect prediction datasets.
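The bootstrap loop described above (train on each bootstrap sample, test on the full original data) can be sketched as follows. The study uses the `boot` R package, so this Python version is purely illustrative and its names are ours.

```python
import random

def bootstrap_experiments(dataset, experiment, iterations=1000, seed=42):
    """Run experiment(train, test) once per bootstrap iteration: the model is
    trained on the bootstrap sample and tested on the original dataset."""
    rng = random.Random(seed)
    results = []
    for _ in range(iterations):
        # sample K observations with replacement from the original K
        sample = [rng.choice(dataset) for _ in dataset]
        results.append(experiment(sample, dataset))
    return results
```

Repeating the experiment once per bootstrap sample yields a distribution of performance values, from which the confidence intervals reported later are derived.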
(DA 2) Construct models. We train our models using the random forest classification technique [8]. Random forest is an accurate classification technique that is robust to noisy data [22, 48], and has been used in several previous studies [13, 14, 22, 24, 28]. The random forest technique constructs a large number of decision trees at training time. Each node in a decision tree is split using a random subset of all of the metrics. Performing this random split ensures that all of the trees have a low correlation between them. Since each tree in the forest may report a different outcome, the final class of a work item is decided by aggregating the votes from all trees and deciding whether the final score is higher than a chosen threshold. We use the implementation of the random forest technique provided by the `bigrf` R package [29].
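The vote-aggregation step described above can be sketched as follows; this is our own illustration of the final thresholded vote, not the `bigrf` implementation.

```python
def forest_predict(trees, item, threshold=0.5):
    """trees: a list of per-tree classifiers, each mapping an item to 0 or 1.
    The forest's class is 1 iff the share of positive votes exceeds the
    chosen threshold."""
    votes = [tree(item) for tree in trees]
    score = sum(votes) / len(votes)
    return 1 if score > threshold else 0
```

Raising the threshold trades recall for precision: the same forest can be made more conservative without retraining any tree.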
We use the approach described by Harrell Jr. to train and test our models using the bootstrap and original samples [16]. In theory, the relationship between the bootstrap samples and the original data is asymptotically equivalent to the relationship between the original data and its population [16]. Since the population of our datasets is unknown, we cannot train a model on the original dataset and test it on the population. Hence, we use the bootstrap samples to approximate this by using several thousand bootstrap samples to train several models, and test each of them using the original data.
Handling skewed metrics: Analysis of the distributions of our metrics reveals that they are right-skewed. To mitigate this skew, we log-transform each metric prior to training our models (\( \ln(x + 1) \)).
Handling redundant metrics: Correlation analysis reduces collinearity among our metrics; however, it does not detect all of the redundant metrics, i.e., metrics that do not have a unique signal with respect to the other metrics. Redundant metrics will interfere with each other, distorting the modeled relationship between a module's metrics and its class. We, therefore, remove redundant metrics prior to constructing our defect models. In order to detect redundant metrics, we fit preliminary models that explain each metric using the other metrics. We use the $R^2$ value of the preliminary models to measure how well each metric is explained by the others.
We use the implementation of this approach provided by the `redun` function of the `rms` R package. The function builds preliminary models for each metric for each bootstrap iteration. The metric that is most well-explained by the other metrics is iteratively dropped until either: (1) no preliminary model achieves an $R^2$ above a cutoff threshold (for this paper, we use the default threshold of 0.9), or (2) removing a metric would make a previously dropped metric no longer explainable, i.e., its preliminary model would no longer achieve an $R^2$ exceeding our 0.9 threshold.
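The iterative drop described above can be sketched with ordinary least-squares fits. This is a simplified illustration of ours: the actual implementation uses more flexible fits and additionally re-checks previously dropped metrics, which the sketch omits.

```python
import numpy as np

def r_squared(y, X):
    """R^2 of a least-squares fit of y on the columns of X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1 - ss_res / ss_tot if ss_tot > 0 else 0.0

def drop_redundant(data, names, cutoff=0.9):
    """data: (observations x metrics) array. Iteratively drop the metric that
    is best explained by the remaining ones while its R^2 exceeds the cutoff."""
    keep = list(range(data.shape[1]))
    while len(keep) > 1:
        scores = [r_squared(data[:, i], data[:, [j for j in keep if j != i]])
                  for i in keep]
        best = int(np.argmax(scores))
        if scores[best] <= cutoff:
            break
        keep.pop(best)
    return [names[i] for i in keep]
```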
Handling imbalanced categories: Table I shows that our dependent variables are imbalanced, e.g., there are more correctly labelled issue reports than mislabelled ones. If left untreated, the models trained using imbalanced data will favour the majority category, since it offers more predictive power. In our case, the models will more accurately identify correctly-labelled issue reports than mislabelled ones.
To combat the bias of imbalanced categories, we re-balance the training corpus to improve the performance of the minority category. We re-balance the data using a re-sampling technique that removes samples from the majority category (under-sampling) and repeats samples in the minority category (over-sampling). We only apply re-balancing to bootstrap samples (training data) — the original (testing) data is not re-balanced.
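The re-sampling step described above can be sketched as follows; this is our own illustration with hypothetical names, combining under-sampling of the majority category with over-sampling of the minority category.

```python
import random

def rebalance(samples, label_of, seed=0):
    """Return a re-balanced copy of the training data: the majority category
    is randomly under-sampled and the minority category is randomly
    over-sampled until both reach half the original size. Assumes the
    majority category is at least as large as that target."""
    rng = random.Random(seed)
    pos = [s for s in samples if label_of(s)]
    neg = [s for s in samples if not label_of(s)]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    target = (len(minority) + len(majority)) // 2
    down = rng.sample(majority, target)  # under-sample the majority
    up = minority + [rng.choice(minority)
                     for _ in range(target - len(minority))]  # over-sample
    return down + up
```

As noted above, this is applied to the bootstrap (training) samples only; the original testing data is left untouched.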
**TABLE IV**
<table>
<thead>
<tr>
<th colspan="3">(a) Prediction of mislabelled issue reports.</th>
</tr>
<tr>
<th></th>
<th>Classified as Mislabelled</th>
<th>Classified as Correct</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mislabelled</td>
<td>TP</td>
<td>FN</td>
</tr>
<tr>
<td>Correct</td>
<td>FP</td>
<td>TN</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th colspan="3">(b) Prediction of defect-prone files.</th>
</tr>
<tr>
<th></th>
<th>Classified as Defective</th>
<th>Classified as Non-Defective</th>
</tr>
</thead>
<tbody>
<tr>
<td>Defective</td>
<td>TP</td>
<td>FN</td>
</tr>
<tr>
<td>Non-Defective</td>
<td>FP</td>
<td>TN</td>
</tr>
</tbody>
</table>
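The performance values reported below follow the standard definitions over the cells of the confusion matrices in Table IV, with TP, FP, FN, and TN denoting true positives, false positives, false negatives, and true negatives. A minimal sketch of ours:

```python
def precision(tp, fp):
    """Of the modules classified as positive, the fraction that truly are."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of the truly positive modules, the fraction that were classified so."""
    return tp / (tp + fn)

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)
```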
To study the influence that the studied metrics have on our models, we apply the Scott-Knott test [43]. Each metric will have several variable importance scores (i.e., one from each of the releases). The Scott-Knott test will cluster the metrics according to statistically significant differences in their mean variable importance scores ($\alpha = 0.05$). We use the implementation of the Scott-Knott test provided by the `ScottKnott` R package [21]. The Scott-Knott test ranks each metric exactly once; however, several metrics may appear within one rank.
**IV. CASE STUDY RESULTS**
In this section, we present the results of our case study with respect to our three research questions.
(RQ1) Is mislabelling truly random?
To address RQ1, we train models that indicate whether or not an issue report was mislabelled. We build two types of mislabelling models — one to predict issue reports that were incorrectly labelled as defects (defect mislabelling, i.e., false positives), and another to predict issue reports that should have been labelled as defects, but were not (non-defect mislabelling, i.e., false negatives). We then measure the performance of these models (RQ1-a) and study the impact of each of our issue report metrics in Table II (RQ1-b).
**RQ1-a) Model performance.** Figure 3 shows the performance of 1,000 bootstrap-trained models. The error bars indicate the 95% confidence interval of the performance of the bootstrap-trained models, while the height of the bars indicates the mean performance of these models. We compare the performance of our models to random guessing.
Our models achieve a mean F-measure of $0.38-0.73$, which is $4-34$ times better than random guessing. Figure 3 also shows that our models achieve a mean precision of $0.68-0.78$, which is $6-75$ times better than random guessing. Due to the scarcity of non-defect mislabelling (see Table I), we observe broader ranges covered by the confidence intervals of the performance values in Figure 3(b). Nonetheless, the ranges covered by the confidence intervals of the precision and F-measure of all of our models do not overlap with those of random guessing. Given the skewed nature of the distributions at hand, we opt to use a bootstrap t-test, which is distribution independent. The results show that the differences are statistically significant ($\alpha = 0.05$).
Figure 3(b) shows that the only case where our models under-perform with respect to random guessing is the non-defect mislabelling model on the Jackrabbit system. Although the mean recall of our model is lower in this case, the mean precision and F-measure are still much higher than that of random guessing.
**RQ1-b) Influence of metrics.** We calculate the variable importance scores of our metrics in 1,000 bootstrap-trained models, and cluster the results using the Scott-Knott test.
A reporter’s tendency to mislabel issues in the past is the most influential metric for predicting mislabelled issue reports. We find that reporter tendency is the only metric in the top Scott-Knott cluster, indicating that it is consistently the most influential metric for our mislabelling models. Moreover, for defect mislabelling, reporter tendency is the most influential metric in 94% of our bootstrapped Jackrabbit models and 86% of our Lucene models.
Similar to RQ1-a, we find that there is more variability in the influential metrics of our non-defect mislabelling models than our defect mislabelling ones. Nonetheless, reporter tendency is still the only metric in the top Scott-Knott cluster. Furthermore, reporter tendency is the most influential metric in 46% of our Jackrabbit models and 73% of our Lucene models.
*Issue report mislabelling is not random.* Our models can predict mislabelled issue reports with a mean F-measure that is 4-34 times better than that of random guessing. The tendency of a reporter to mislabel issues in the past is consistently the most influential metric used by our models.
**(RQ2) How does mislabelling impact the performance of defect models?**
**Approach.** We use the same high-level approach to address RQ2 and RQ3. Figure 4 provides an overview of the steps in that approach. We describe how we implement each step to address RQ2 in particular below.
**(Step 1) Construct models:** For each bootstrap iteration, we train models using clean, realistic noisy, and random noisy samples. The clean sample is the unmodified bootstrap sample. The realistic noisy sample is generated by re-introducing the mislabelled issue reports in the bootstrap sample. To generate the random noisy sample, we randomly inject mislabelled issue reports in the bootstrap sample until the rate of mislabelled issue reports is the same as the realistic noisy sample. Finally, we train models on each of the three samples.
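The generation of the random noisy sample in Step 1 can be sketched as follows; this is our own illustration with hypothetical names, flipping randomly chosen labels until the target mislabelling rate is reached.

```python
import random

def random_noisy_sample(clean_labels, target_noise_rate, seed=0):
    """clean_labels: list of booleans (defect / not defect). Returns a copy
    with round(target_noise_rate * len) labels flipped at random positions,
    so the mislabelling rate matches that of the realistic noisy sample."""
    rng = random.Random(seed)
    noisy = list(clean_labels)
    n_flip = round(target_noise_rate * len(noisy))
    for i in rng.sample(range(len(noisy)), n_flip):
        noisy[i] = not noisy[i]
    return noisy
```

Matching the noise rate (but not the noise pattern) of the realistic sample is what lets the later analysis isolate the effect of *where* mislabelling occurs.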
**(Step 2) Analyze models:** We want to measure the impact that real mislabelling and random mislabelling have on defect prediction. Thus, we compute the ratio of the performance of models that are trained using the noisy samples to that of the clean sample. Since we have three performance measures, we generate six ratios for each bootstrap iteration, i.e., the precision, recall, and F-measure ratios for realistic noisy and random noisy samples compared to the clean sample.
**(Step 3) Interpret results:** We repeat the bootstrap experiment for each studied release individually. Finally, we compare the distributions of each performance ratio using beanplots [25]. Beanplots are boxplots in which the vertical curves summarize the distributions of different data sets. The horizontal lines indicate the median values. We choose beanplots over boxplots, since beanplots show contours of the data that are hidden by the flat edges of boxplots.
Results. Figure 5 shows the distribution of the ratios of our performance metrics in all of the studied releases.
Similar to RQ1, we perform experiments for defect mislabelling and non-defect mislabelling individually. We find that, likely due to scarcity, non-defect mislabelled issue reports have little impact on our models. Hence, we focus on defect mislabelling for the remainder of this section.
The modules classified as defective by models trained using noisy data are typically as reliable as the modules classified as defective by models trained using clean data. Figure 5 shows that there is a median ratio of one between the precision of models trained using the realistic noisy and clean samples for both of the studied systems. Furthermore, we find that the 95% confidence interval for the distributions are 0.88-1.20 (Jackrabbit) and 0.90-1.19 (Lucene). This tight range of values that are centred at one suggests that the precision of our models is not typically impacted by mislabelled defects.
On the other hand, models trained using noisy data tend to miss more defective modules than models trained using clean data. Figure 5 shows that the median ratio between the recall of models trained using the realistic noisy and clean samples is 0.68 (Jackrabbit) and 0.56 (Lucene). This indicates that models trained using data with mislabelled defects typically achieve 56%-68% of the recall that models trained on clean data would achieve when tested on clean data.
Randomly mislabelling issue reports tends to overestimate the impact that realistic mislabelling has on model performance. Figure 6 shows that while the median ratio between the precision of realistic and random noisy models is 1 for both studied systems, the median recall and F-measure ratios are 0.84-0.90 and 0.88-0.93 respectively. In fact, 64%-66% of the recall and F-measure ratios are below 1 in our studied systems, indicating that models trained using randomly mislabelled issues tend to overestimate the impact that real mislabelling has on the recall and F-measure of our models.

When randomly injecting mislabelled defects, our results suggest that the impact of the mislabelling will be overestimated by 7-16 percentage points.

While defect mislabelling rarely impacts the precision of defect models, the recall is often impacted. Practitioners can rely on the modules classified as defective by defect models trained on noisy data. However, cleaning historical data prior to training defect models will likely improve their recall.

**(RQ3) How does mislabelling impact the interpretation of defect models?**

**Approach.** We again use the high-level approach of Figure 4 to address RQ3. While Step 1 of the approach is identical for RQ2 and RQ3, Steps 2 and 3 are performed differently. We describe the different Steps 2 and 3 below.

**(Step 2) Analyze models:** For each bootstrap iteration, we calculate the variable importance score for each metric in each type of model (i.e., clean, realistic noisy, and random noisy). Hence, the variable importance score for each metric is calculated three times in each bootstrap iteration.

**(Step 3) Interpret results:** We cluster the variable importance scores of metrics in each type of model using Scott-Knott tests to produce statistically distinct ranks of metrics for clean, realistic noisy, and random noisy models. Thus, each metric has a rank for each type of model.

To estimate the impact that random and realistic mislabelling have on model interpretation, we compute the difference in the ranks of the metrics that appear in the top-three ranks of the clean models. For example, if a metric \( m \) appears in the top rank in the clean and realistic noisy models, then the metric would have a rank difference of zero. However, if \( m \) appears in the third rank in the random noisy model, then the rank difference of \( m \) would be negative two.

Similar to RQ2, we repeat the whole experiment for each studied release individually.
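The rank-difference computation of Step 3 can be sketched as follows; this is our own illustration, and the rank dictionaries are hypothetical inputs (rank 1 being the most influential Scott-Knott cluster).

```python
def rank_differences(clean_ranks, noisy_ranks, top=3):
    """For each metric in the top `top` ranks of the clean model, return
    clean rank minus noisy rank. Zero means the rank is unchanged; negative
    values mean the metric lost influence in the noisy model."""
    return {m: clean_ranks[m] - noisy_ranks[m]
            for m, r in clean_ranks.items() if r <= top}
```

For instance, a metric in the top rank of the clean model that falls to the third rank of a noisy model yields a rank difference of negative two, as in the example above.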
Results. Figure 7 shows the rank differences for all of the studied releases. We again perform experiments for defect mislabelling and non-defect mislabelling individually. The scarcity of non-defect mislabelling limits the impact that it can have on model interpretation. Indeed, we find that there are very few rank differences in the non-defect mislabelling results. Hence, we focus on defect mislabelling for the remainder of this section.
The most influential metrics are generally robust to the noise introduced by defect mislabelling. Figure 7 shows that 80% (Lucene) to 85% (Jackrabbit) of the metrics in the top rank of the clean model appear in the top rank of the random noisy model. Moreover, the 15%-20% of metrics in the top rank of the clean model that do not appear in the top rank of the noisy models only decrease by one rank.
Conversely, the metrics in the second and third ranks are less stable. Figure 7 shows that 31% (Jackrabbit) to 75% (Lucene) of the metrics in the second rank and 38% (Jackrabbit) to 82% (Lucene) of the metrics in the third rank of the clean model (most often, the process and developer metrics) do not appear in the same rank of the realistic noisy model, indicating that these metrics are influenced by defect mislabelling. Furthermore, 8%-33% of the second and third rank variables drop by two or more ranks in the noisy models.
Randomly injected mislabelled defects have a more damaging impact on model interpretation than real mislabelled defects do. Figure 7 shows that a smaller percentage of the metrics of the clean models are found at the same rank in the random noisy models than the realistic noisy models.
V. DISCUSSION & THREATS TO VALIDITY
We now discuss the results of our case study with respect to other work on issue report mislabelling, as well as the threats to the validity of our case study.
A. Discussion
In prior work, Herzig et al. show that issue report mislabelling has a drastic impact on the relative order of the most defect-prone files [19] — 16%-40% of the top-10% most defect-prone files do not belong in that group. The impact that issue report mislabelling has on the ordering of the most defect-prone files suggests that defect models (such as the ones that we build in this study) will also be drastically impacted, both in terms of precision and recall.
Yet in this study, we find that issue report mislabelling has little impact on the precision of defect models, which may seem to be incongruent with the prior work. We suspect that the differences in the conclusions that we draw have to do with the differences in our defect prediction experiments.
In the study of Herzig et al., files are ranked according to the number of defect reports that are mapped to a file. The files at the top of this ranked list are the most defect-prone, and would yield the most benefit from additional quality assurance effort [18]. Instability in the top-10% of files in this ranked list occurs if these highly defect-prone files have several mislabelled defects mapped to them.
On the other hand, our defect models classify whether a file is defective or clean. In order for a file to be remapped from defective to clean, all of the defects that are mapped to a file must be mislabelled, reducing the number of defects to zero. Otherwise, a file would still be considered defective. Hence, the instability that Herzig et al. observe with respect to the most defect-prone files may not have as much of an impact on the files that our defect models will consider defective.
B. Threats to Validity
External validity. We focus our study on two subject systems, due to the low number of systems that satisfied our analysis criteria (cf. Section III). The lack of a curated oracle of mislabelled issue reports presented a major challenge. Nonetheless, additional replication studies are needed.
Construct validity. Although the studied datasets have high link rates of issue reports and code changes, we make the implicit assumption that these links are correct. On the other hand, we rely on JIRA links from issue reports to code changes, which others have noted lead to more accurate links than links constructed from code changes to issue reports [40].
Internal validity. We use nine metrics to train models that identify mislabelled issue reports, and ten metrics to train models that identify defective files. We selected metrics that cover a variety of dimensions for each type of model. However, other metrics that we may have overlooked could also improve the performance of our models.
We focus on the random forest classification technique. Although prior studies have also used random forest [13, 14, 22, 24, 28], our findings may be bound to this technique. We plan to explore the impact that issue report mislabelling has on other classification techniques in future work.
VI. CONCLUSIONS
Defect models identify potentially defective software modules. However, the accuracy of the predictions and the insights derived from defect models depend on the quality of the data from which these models are trained. While recent work has shown that issue report mislabelling may impact the performance of defect prediction models [26, 40], the mislabelled issue reports were generated randomly.
In this paper, we study the nature of mislabelled issue reports and the impact that truly mislabelled issue reports have on the performance and interpretation of defect models. Through a case study of two large and successful open source systems, we make the following observations:
- Mislabelling is not random. Models trained to identify mislabelled issue reports achieve a mean F-measure that is 4-34 times better than that of random guessing. A reporter’s tendency to mislabel issues in the past is consistently the most influential metric used by our models.
- Since we observe that the precision of our defect models is rarely impacted by defect mislabelling, practitioners can rely on the accuracy of modules labelled as defective by defect models that are trained using noisy data — the files that are classified as defect-prone by models trained using noisy data are often just as accurate as the defect-prone predictions of models trained using clean data (i.e., mislabel-free). However, cleaning the data prior to training defect models will likely allow them to identify more of the truly defective modules.
- The most influential metrics are generally robust to defect mislabelling. 80%-85% of the most influential metrics from the clean models appear in the top ranks of the noisy models as well.
- On the other hand, the second- and third-most influential metrics are more unstable than the most influential ones. As little as 18% of the metrics in the second and third influence rank of the clean models also appear in the same rank in the noisy models.
- Randomly injecting mislabelled defects tends to overestimate the impact that defect mislabelling has on the performance and interpretation of defect models.
ACKNOWLEDGMENTS
This work would not have been possible without the manually-curated oracle of mislabelled issue reports provided by Herzig et al. [19]. This work was conducted as a part of the Program for Advancing Strategic International Networks to Accelerate the Circulation of Talented Researchers; and the Japan Society for the Promotion of Science, Grant-in-Aid for Young Scientists (B: 25730045) and Scientific Research (B: 23300009). This work was also supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
REFERENCES
On Simultaneously Determinizing and Complementing
\(\omega\)-Automata\(^\dagger\)
(Extended Abstract)
E.Allen EMERSON\(^{1,2}\) and Charanjit S. JUTLA\(^1\)
1. Department of Computer Sciences, The University of Texas at Austin, USA
2. Mathematics and Computing Science Department, Technical University of Eindhoven, Netherlands
Abstract: We give a construction to simultaneously determinize and complement a Buchi automaton on infinite strings, with an exponential blowup in the number of states and a linear blowup in the number of pairs. An exponential lower bound is already known. The previous best construction was double exponential (Safra 88). This permits exponentially improved, essentially optimal decision procedures for various modal logics of programs. The new construction also gives exponentially improved conversions between various kinds of \(\omega\)-automata.
1. Introduction
Historically, automata on infinite strings were introduced by Buchi [Bu62] and slightly later by Muller [Mu63] in apparently unrelated areas: Buchi was interested in giving decision procedures for \(S1S\) (the monadic second order theory of one successor) and Muller was interested in describing behaviors of non-stabilizing asynchronous circuits. Over the years, not only have such automata helped remove this diversity, but they now lie near the center of those areas of Computer Science where non-terminating computations are involved ([Pn77],[Pa78],[Pr79],[St81],[VW84],[VS85],[ESi84],[Sa88],[EJ88],[PR89]).
Just as in the theory of automata on finite strings, the basic theorem in automata on infinite strings relates two characterizations of languages of infinite strings: one in terms of acceptance by an automaton, the other in terms of generation by some mechanism. The second concept, that of a regular \(\omega\)-event, occurred in both [Bu62] and [Mu63]. A language of infinite strings (or \(\omega\)-strings) \(L\) is \(\omega\)-regular iff there are regular sets \(\alpha_1...\alpha_n, \beta_1...\beta_n\) such that \(L = \alpha_1\beta_1^\omega \cup ... \cup \alpha_n\beta_n^\omega\), where \(\alpha^\omega\) for a regular set \(\alpha\) denotes the set of all infinite strings obtained by concatenating infinitely many members of \(\alpha\).
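As a concrete instance (our example, not from the original text): over the alphabet \(\{a, b\}\), the \(\omega\)-strings containing infinitely many \(b\)'s form an \(\omega\)-regular language.

```latex
% Take n = 1, \alpha_1 = \{\epsilon\}, and \beta_1 = a^{*} b; then
L \;=\; \alpha_1 \beta_1^{\omega} \;=\; (a^{*} b)^{\omega}.
% Each concatenated block of \beta_1 contributes exactly one b, so every
% member of L contains infinitely many b's; conversely, any \omega-string
% with infinitely many b's factors into such blocks.
```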
The first notion, i.e. in terms of acceptance by finite automata, involves a complication since in contrast to automata on finite strings where acceptance can be defined in terms of a final state, automata accepting infinite strings do not reach a final state. If we allow non-determinism, then acceptance can be defined in terms of a privileged subset of states, such that an automaton accepts an \(\omega\)-string \(w\) iff the automaton non-deterministically visits some privileged state infinitely often while testing \(w\). Such an automaton is called a \textit{Buchi Automaton}. It is well known that Buchi Automata are expressively equivalent to \(\omega\)-regular languages.
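The acceptance condition can be made concrete on ultimately-periodic inputs \(u v^\omega\). The sketch below is ours and, for simplicity, assumes a deterministic transition function (even though nondeterminism is essential for the full expressive power of Buchi automata): the run on \(u v^\omega\) is eventually periodic, so it suffices to detect the cycle of states reached after whole copies of \(v\) and check whether a privileged (accepting) state is visited inside it.

```python
def buchi_accepts(delta, q0, accepting, u, v):
    """Does the deterministic Buchi automaton (delta: (state, symbol) -> state,
    start q0, accepting set) accept the ultimately-periodic word u v^omega?"""
    q = q0
    for a in u:                      # consume the finite prefix u
        q = delta[(q, a)]
    seen = {}                        # state at the start of a v-block -> block index
    block_visits = []                # did block i visit an accepting state?
    while q not in seen:
        seen[q] = len(block_visits)
        hit = False
        for a in v:                  # run one full copy of v
            q = delta[(q, a)]
            hit = hit or (q in accepting)
        block_visits.append(hit)
    # blocks from seen[q] onward repeat forever; accept iff one of them
    # visits an accepting state (i.e., some accepting state recurs)
    return any(block_visits[seen[q]:])
```

For example, with the two-state automaton that remembers the last symbol read and whose privileged state is "last symbol was b", the word \(a (ab)^\omega\) is accepted while \(ab\, a^\omega\) is rejected.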
As already mentioned, Buchi's motivation for studying such automata was to give a decision procedure for \(S1S\). He showed that formulae of \(S1S\) corresponded in a natural way to these automata. The decidability result required showing that such automata are closed under conjunction, disjunction and complementation. To complement a Buchi automaton, a natural way, as in the case of finite strings, seemed to be to show that Buchi automata are equivalent to some deterministic form of automaton on infinite strings. Although proofs without such determinization have existed ([Bu62],[SVW87]), determinization has indeed turned out to be the natural way, as shown by Safra ([Sa88]).
\(^\dagger\)This work was supported in part by NSF grant DCR-8511354, ONR URI contract N00014-86-K-0763, and Netherlands NWO grant f3/nfb 62-500.
But the real importance of this determinization process is revealed in obtaining optimal decision procedures for Branching Modal and Temporal Logics of Programs. Semantics of such logics are given with respect to infinite computation tree models. Streett [St82] showed (for one such logic, PDL-delta) that a formula of PDL-delta is satisfiable iff an automaton (effectively obtained from the formula) accepting infinite trees is non-empty. Most of these automata require checking (apart from other things) that each path of the tree belongs to some \( \omega \)-regular set. As already mentioned, it is easy to obtain a non-deterministic Buchi automaton equivalent to these \( \omega \)-regular languages. But there seems to be no way (at present) to obtain a tree automaton which checks that all paths are in this language, without first determinizing the Buchi automaton. If we can obtain a deterministic version of this Buchi automaton, then this automaton can easily be modified to check all paths.
A very clever notion of acceptance was given by Muller [Mu63] in his original paper, using which he defined a deterministic automaton on infinite strings. McNaughton [Mc66] modified this acceptance condition (which was formalized by Rabin [Ra69] as the Pairs Automaton) to prove that non-deterministic Buchi Automata are expressively equivalent to deterministic Muller (or Rabin's Pairs) Automata. This proof was very intricate and required that the deterministic automaton obtained from the Buchi Automaton have a number of states double exponential in the number of original states. Safra [Sa88] gave a more transparent proof of the same theorem requiring only a single exponential state-space blowup. Safra's Construction has not only helped improve our understanding of the theory of \( \omega \)-automata, it has also been used, along with the non-emptiness testing algorithms for tree automata, to obtain essentially tight decision procedures for several logics of programs [EJ88] (cf. [PR89]).
If we take a closer look at the way Streett obtained the decision procedure for PDL-delta, it turns out that the tree automaton requires checking that each path of the tree belongs to the complement of some \( \omega \)-regular language. Such is also the case with most other Logics of Programs. Thus to obtain the tree automaton, it is not only required to determinize the Buchi Automaton corresponding to this \( \omega \)-regular set, but also to complement it. But, so far no single exponential blowup reduction was known which gives a deterministic Pairs automaton accepting the complement of the original non-deterministic Buchi automaton. The reductions known so far required a double exponential blowup ([VS85],[Sa88]). One solution to this problem was to give a new acceptance condition. This acceptance condition, known as the Complemented Pairs acceptance condition [St82], is basically the complement of the pairs condition for string automata. For tree automata, this acceptance condition requires that along all paths of the input tree the complement of the pairs condition hold. Given a deterministic pairs automaton on strings, the same automaton with the acceptance condition now viewed as the Complemented Pairs acceptance condition accepts the complement of the original automaton. In [EJ88], an algorithm to test non-emptiness of Complemented Pairs automata was also given, using which essentially optimal decision procedures were obtained for PDL-delta and the Mu-Calculus [Ko83].
However, there are logics for which both the determinization process and the determinization of the complement seem to be required. A trivial example of such a logic is as follows: the atomic formulae are of the form "along all paths" (and "there exists a path") "the path is in the language accepted by a given non-deterministic Buchi Automaton". The formulae of the logic are boolean combinations of such atomic formulae. For such a logic, an optimal decision procedure (in this case deterministic single exponential time) uses both a single exponential determinization of a Buchi Automaton and a single exponential determinization of the complement of a Buchi Automaton. As mentioned earlier, no such reduction was known for the latter. This problem was stated open in [Sa88].
In this paper, we build on Safra's Construction to give a single exponential blowup construction which simultaneously determinizes and complements a Buchi Automaton. More specifically, given a non-deterministic Buchi automaton, our construction gives a deterministic Pairs Automaton which accepts the complement of the original automaton. This is essentially an optimal conversion since a single exponential lower bound is known for complementing Buchi Automata.
The result is of particular interest for obtaining decision procedures for Modal and Temporal Logics. In particular, using the non-emptiness testing algorithm for Pairs Tree Automata in [EJ88] and our new Construction we obtain an essentially optimal non-emptiness testing algorithm for Pairs Hybrid Tree Automata [VS85]. In [VS85], for several Modal Logics of Programs the satisfiability problem was reduced to the non-emptiness problem of Hybrid Automata, including YAPL of Vardi and Wolper [VW83], Parikh's Game Logic [Pa83] (with the dual operator) and Streett's Delta-Converse-PDL [St82]. It follows that YAPL has a deterministic double exponential time decision procedure (which is essentially optimal). Similarly, Propositional Game Logic (with the dual operator) and Delta-Converse-PDL have deterministic single exponential time decision procedures.
Our Construction also helps us obtain essentially optimal (exponential) translations between Deterministic Complemented Pairs and Deterministic Pairs Automata and vice-versa. The previous best known translations were again double exponential. Moreover, we can now complement a non-deterministic Complemented Pairs Automaton with double exponential blowup. The previous best known result caused a triple exponential blowup.
Finally, we expect our construction to further our understanding of the theory of Automata on Infinite Objects, just as Safra's Determinization Construction has done. The rest of the paper is organized as follows. Section 2 gives preliminary definitions and notations. In Section 3 we give our main Construction and its corollaries for translations between different kinds of Automata. In Section 4 we give its corollary for testing Non-emptiness of Pairs Hybrid Automata, and for obtaining decision procedures for several Logics of Programs. In the concluding Section 5 we discuss related work.
2. Preliminaries
2.1 \( \omega \)-Automata
An \( \omega \)-Automaton over alphabet \( \Sigma \) accepts a language which is a subset of \( \Sigma^\omega \), i.e. the set of all infinite sequences of elements from \( \Sigma \). For \( \sigma \in \Sigma^\omega \), we let \( \sigma_i \), \( i \geq 0 \), denote the \((i+1)th \) element in \( \sigma \).
An \( \omega \)-Automaton \( A \) consists of a tuple \( (\Sigma, S, \delta, s_0) \) plus an acceptance condition (described subsequently) where
- \( \Sigma \) is the input alphabet,
- \( S \) is the set of states of the automaton,
- \( \delta : S \times \Sigma \rightarrow \text{Powerset}(S) \) is the non-deterministic transition function, and
- \( s_0 \in S \) is the start state.
\( A \) is a deterministic automaton iff for each state \( s \) and each input symbol \( a : |\delta(s, a)| \leq 1 \). A run \( \rho \) of \( A \) on the input string \( \sigma \in \Sigma^\omega \) is an infinite sequence of states from \( S \), such that \( \rho_0 = s_0 \), and \( \rho_{i+1} \in \delta(\rho_i, \sigma_i) \).
We say that \( A \) accepts input string \( \sigma \) iff there exists a run \( \rho \) of \( A \) on \( \sigma \) such that \( \rho \) satisfies the acceptance condition (as below). Define \( L(A) \) to be \( \{\sigma | \sigma \in \Sigma^\omega \text{ is accepted by } A \} \).
For an infinite sequence of states \( \rho \in S^\omega \), \( In(\rho) \) is the set \( \{s \mid \rho_i = s \text{ for infinitely many } i \} \). For a Buchi Automaton ([Bu62]) \( A \), acceptance is defined in terms of a subset \( F \subseteq S \). \( \rho \) satisfies the Buchi acceptance condition iff \( In(\rho) \cap F \neq \phi \). Informally, if we traverse \( \rho \) starting from \( \rho_0 \), and flash a green light whenever we reach a state in \( F \), then \( \rho \) satisfies the Buchi acceptance condition iff the green light flashes infinitely often (for short, i.o.) on traversing \( \rho \).
For a Pairs Automaton (Rabin [Ra69]) acceptance is defined in terms of a finite list \( ((\text{RED}_1, \text{GREEN}_1), \ldots, (\text{RED}_k, \text{GREEN}_k)) \) of pairs of sets of states (think of them as pairs of colored lights where \( A \) flashes the red light of the first pair upon entering any state of the set \( \text{RED}_1 \), etc.): \( \rho \) satisfies the pairs condition iff there exists a pair \( i \in [1..k] \) such that \( \text{RED}_i \) flashes finitely often and \( \text{GREEN}_i \) flashes infinitely often. More precisely, \( \exists i \in [1..k] : In(\rho) \cap \text{RED}_i = \phi \land In(\rho) \cap \text{GREEN}_i \neq \phi \). Finally, a Complemented Pairs (Streett [St81]) automaton is defined by the above condition being false, i.e. for all pairs \( i \in [1..k] \), if \( \text{GREEN}_i \) flashes infinitely often then \( \text{RED}_i \) flashes infinitely often too, i.e. \( \forall i \in [1..k] : In(\rho) \cap \text{GREEN}_i \neq \phi \Rightarrow In(\rho) \cap \text{RED}_i \neq \phi \).
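For an ultimately periodic run \( \rho = u \cdot v^\omega \), \( In(\rho) \) is exactly the set of states occurring in the loop \( v \), so all three acceptance conditions become finite checks. A minimal sketch in Python (our own illustration; the function names are ours, not from the text):

```python
# Illustrative sketch (not from the paper): acceptance conditions evaluated
# on an ultimately periodic run rho = prefix . loop^omega, where In(rho) is
# exactly the set of states occurring in the loop.

def inf_states(loop):
    """In(rho) for rho = prefix . loop^omega."""
    return set(loop)

def buchi_accepts(loop, F):
    # Buchi: In(rho) intersected with F is non-empty.
    return bool(inf_states(loop) & set(F))

def pairs_accepts(loop, pairs):
    # Rabin pairs: some i with In(rho) & RED_i empty and In(rho) & GREEN_i non-empty.
    inf = inf_states(loop)
    return any(not (inf & set(red)) and bool(inf & set(green))
               for red, green in pairs)

def streett_accepts(loop, pairs):
    # Complemented pairs (Streett): for all i,
    # In(rho) & GREEN_i non-empty implies In(rho) & RED_i non-empty.
    inf = inf_states(loop)
    return all((not (inf & set(green))) or bool(inf & set(red))
               for red, green in pairs)
```

Note that on any fixed run the Streett condition is the negation of the pairs condition, as in the text.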
2.2 Hybrid Tree Automata
For notational simplicity, we only consider finite automata on infinite binary trees. An infinite binary \( \Sigma \)-tree \( T \) is a mapping \( T : \{0,1\}^* \rightarrow \Sigma \). A path starting at a node \( v \in \{0,1\}^* \) is the infinite sequence \( x = v_0, v_1, \ldots \) where \( v_0 = v \) and \( v_{i+1} \) is either \( v_i \cdot 0 \) or \( v_i \cdot 1 \). We let \( T[x] \) denote the infinite sequence of elements of \( \Sigma \) which occur in \( T \) along path \( x \), i.e. the infinite sequence \( T(v_0), T(v_1), \ldots \).
A finite automaton \( A \) on infinite binary \( \Sigma \)-trees consists of a tuple \( (\Sigma, S, \delta, s_0) \) plus an acceptance condition (described subsequently) where
- \( \Sigma \) is the input alphabet labeling the nodes of the input tree,
- \( S \) is the set of states of the automaton,
- \( \delta : S \times \Sigma \rightarrow \text{Powerset}(S^2) \) is the non-deterministic transition function, and
- \( s_0 \in S \) is the start state of the automaton.
A run of \( A \) on the input \( \Sigma \)-tree \( T \) is a function \( \rho : \{0,1\}^* \rightarrow S \) such that for all \( v \in \{0,1\}^* \), \((\rho(v0), \rho(v1)) \in \delta(\rho(v), T(v)) \) and \( \rho(\lambda) = s_0 \). We say that \( A \) accepts input tree \( T \) iff there exists a run \( \rho \) of \( A \) on \( T \) such that for all paths \( x \) starting at the root of \( T \) if \( r = \rho(x) \), the sequence of states \( A \) goes through along path \( x \), then the acceptance condition (as below) holds along \( r \).
As for Automata on infinite strings, for Buchi tree automata acceptance is defined in terms of a subset \( F \subseteq S \). \( r \) satisfies the Buchi acceptance condition iff \( In(r) \cap F \neq \phi \). For a Pairs automaton (Rabin [Ra69]) acceptance is defined in terms of a finite list \(( (RED_1, GREEN_1), \ldots, (RED_k, GREEN_k)) \) of pairs of sets of states. \( r \) satisfies the pairs condition iff there exists a pair \( i \in [1..k] \) such that \( In(r) \cap RED_i = \phi \) and \( In(r) \cap GREEN_i \neq \phi \).
A Hybrid Tree Automaton ([VS85]) \( H \) is a pair \((A, B)\), where \( A \) is a Pairs Tree Automaton and \( B \) is a Buchi \( \omega \)-Automaton, both over the same alphabet \( \Sigma \). \( H \) accepts a tree \( T \) iff \( T \) is accepted by \( A \) and, for every infinite path \( x \) starting at \( \lambda \), \( B \) does not accept (i.e. rejects) the infinite sequence \( T[x] \).
3. Technical Results
Theorem 3.1: Given a non-deterministic Buchi automaton (NB, for short) \( A = (\Sigma, Q, q_0, \delta, F) \), a deterministic Pairs automaton (DR, for short) \( CD = (\Sigma, Q', q_0', \delta', \Omega) \) can be constructed such that \( L(CD) \) is the complement of \( L(A) \), \( |Q'| = 2^{O(|Q|^2)} \), and the no. of pairs in \( \Omega \) is \( O(|Q|) \).
Introduction: Safra [Sa88] showed how to determinize an NB automaton with exponential blowup in the size of the automaton and \( O(n) \) pairs, where \( n \) is the size of the original NB automaton. In other words, for an NB automaton \( A \), Safra's construction gives a deterministic Rabin automaton (DR, for short) \( D \), such that \( \exists \text{ Run } \rho \text{ of } A \text{ on an } \omega \text{-string } s : In(\rho) \cap F \neq \phi \) iff \( \exists \text{ pair } i \text{ in } D : \text{ for the unique Run } \rho' \text{ of } D \text{ on } s : In(\rho') \cap GREEN_i \neq \phi \land In(\rho') \cap RED_i = \phi \).
In our construction, we require \( CD \) to be such that
- \( \neg\exists \text{ Run } \rho \text{ of } A \text{ on } s : In(\rho) \cap F \neq \phi \)
iff
- \( \forall \text{ Runs } \rho \text{ of } A \text{ on } s : In(\rho) \cap F = \phi \)
iff
- \( \exists \text{ pair } i \in CD : \text{ on the unique Run } \rho' \text{ of } CD \text{ on } s : In(\rho') \cap GREEN_i \neq \phi \land In(\rho') \cap RED_i = \phi \).
The following five paragraphs require familiarity with Safra’s Construction, however, the subsequent construction and the proof are complete in themselves. The states in \( CD \) will be ordered trees as in Safra’s Construction, with the nodes labelled by subsets of \( Q \) and with two additional sets which we describe subsequently.
It is easy to see that if we imitate Safra's Construction, flashing a red instead of a green (i.e. whenever Safra's Construction flashes a green), then an eternal node (i.e. a node which is eventually never removed from the tree) in the unique run of \( CD \) flashes red infinitely often iff there exists a run \( \rho \) of \( A \) such that \( \text{In}(\rho) \cap F \neq \emptyset \).
The hard part, now, is to make sure that in case there exists a run \( \rho \) of \( A \) s.t. \( \text{In}(\rho) \cap F \neq \emptyset \), then for no pair \( j \) in \( CD \): (infinitely often \( \text{GREEN}_j \) flashes and finitely often \( \text{RED}_j \) flashes). And if such is not the case (i.e. \( A \) rejects the input string), then there exists a pair \( j \) s.t. (i.o. \( \text{GREEN}_j \) flashes and f.o. \( \text{RED}_j \) flashes). For the second case (Completeness), by the previous paragraph it suffices to have some eternal node flash green i.o. For the first case (Soundness), according to the preceding paragraph, if an eternal node \( v \) flashes red i.o., then every other eternal node \( v' \) should be blocked from flashing green i.o. unless \( v' \) flashes red i.o. So, in our Construction, whenever a node \( v \) flashes red, it blocks (at least temporarily) other nodes from flashing green. But, since it is not possible to determine a priori whether \( v \) is going to flash red i.o., \( v \) itself starts flashing green until some other interesting event happens (i.e. some other node flashes red), in which case this new node takes over the responsibility of flashing green.
This way Soundness is ensured, i.e. if an eternal node in \( CD \) flashes red i.o., then \( CD \) rejects. But for Completeness, the concern is that, although \( v \) on flashing red has blocked other nodes from flashing green, \( v \) may not be eternal. The naive approach of backing up using a push-down stack doesn't work.
For example, consider a node \( u1 \) which flashes red once, and never again. So, after the last red flash, it starts flashing green. Now another node \( u2 \) flashes red, and the stack becomes \( | - u1 - u2 \), where \( | \) is the bottom. \( u2 \) starts flashing green, and soon \( u3 \) flashes red, and the stack becomes \( | - u1 - u2 - u3 \). While \( u3 \) is flashing green \( u2 \) dies, and the stack has the form \( | - u1 - u3 \). Soon \( u2 \) reappears as a new node flashing red, and gets on to the top of the stack. This time, \( u3 \) dies, and reappears again, and this process continues ad infinitum. Clearly, no eternal node flashes red i.o. (because \( u1 \) is the only eternal node), and hence by the completeness proof of Safra's Construction there is no accepting run in \( A \). Hence the input should be accepted by \( CD \) (\( L(CD) \) is the complement of \( L(A) \)). But, in this push-down approach no eternal node satisfies (i.o. Green and f.o. Red), failing completeness of \( CD \).
In our construction, each node \( v \) that has been blocked keeps track of all nodes which flashed red between the last two times \( v \) flashed green (say \( t_{-2} \leq t < t_{-1} \)). When (if at all) all these nodes die (say at time \( t_0 \)), \( v \) flashes green. \( v \) remains blocked from flashing any more greens, except if every node (if any) which flashed red during the interval \( [t_{-1}, t_0] \) has died, in which case \( v \) is unblocked and \( v \) keeps flashing green until it is blocked again by some node flashing red.
In the actual construction, each node \( v \) will have two additional labels A-set and R-set. R-set will maintain all alive nodes which flashed red between \( [t_{-1}, t_0] \), and A-set will maintain all alive nodes which flashed red between \( [t_{-2}, t_{-1}] \).
Construction: The states of \( CD \) are labelled ordered trees as in Safra's construction. An ordered tree \( T \) is a structure \( T = (N, r, p : N \rightarrow N, S) \) where
\( N \) is a set of nodes,
\( r \) is the root node,
\( p : N \rightarrow N \) is the parenthood function defined over \( N - \{r\} \), and defining for each \( v \in N - \{r\} \), its parent \( p(v) \in N \).
\( S \) is a partial order defining "older than" on siblings (i.e. children of the same node).
For nodes \( v \) and \( v' \), if \( p(v) = v' \), then we say \( v \) is a "child of" \( v' \). "Descendant of" is the transitive closure of "child of".
A Labelled Ordered Tree is a tuple \( (T, S, A, R) \), where \( T \) is an ordered tree as above, and \( S, A, R \) are three labellings of nodes of \( T \), \( S : N \rightarrow 2^Q \), \( A : N \rightarrow 2^N \), \( R : N \rightarrow 2^N \). The label of a node \( v \) given by \( S, A, R \) will be called S-label\(_v\), A-set\(_v\), and R-set\(_v\) resp. Moreover, each node in the tree will be marked with an auxiliary color (red, green or white).
A state(-tree) of \( CD \) is a Labelled Ordered Tree in which the S-labels enjoy the following properties:
1. The union of the S-labels of the children of a node \( v \) is a proper subset of the S-label of \( v \).
2. The S-labels of two nodes, neither of which is an ancestor of the other, are disjoint.
The initial state \( q_0' \) is the tree consisting of a single node labelled: S-label = Initial states of \( A \), A-set = \( \phi \), R-set = \( \phi \). The deterministic transition function \( \delta' \) transforms a state-tree, given an input \( a \in \Sigma \), by performing the following:
1. Set the color of all nodes to white.
2. For every node \( w \) with S-label \( Q' \), replace \( Q' \) by \( \delta(Q', a) \).
3. For every node \( w \) with S-label \( Q' \) s.t. \( Q' \cap F \neq \phi \), create a new node \( \tilde{w} \) which becomes the youngest son of \( w \). Mark \( \tilde{w} \) red, and set S-label\(_{\tilde{w}}\) = \( Q' \cap F \), A-set\(_{\tilde{w}}\) = R-set\(_{\tilde{w}}\) = \( \phi \).
4. For every node \( w \) with S-label \( Q' \) and state \( q \in Q' \) such that \( q \) also belongs to the S-label of an older sibling \( w' \) of \( w \), remove \( q \) from the S-label of \( w \) and of all descendants of \( w \).
5. Remove all nodes with empty S-labels.
6. For every node \( w \) whose S-label is equal to the union of the S-labels of its sons, remove all the descendants of \( w \) and color \( w \) red. Moreover, set A-set\(_w\) = \( \phi \) and R-set\(_w\) = \( \phi \).
7. For all nodes which are removed in (5) and (6), delete these nodes from the A-sets and R-sets of all other nodes. In other words, if \( V \) is the set of nodes removed in either (5) or (6), then for every node \( w \) set A-set\(_w\) = A-set\(_w\) \( - V \), R-set\(_w\) = R-set\(_w\) \( - V \).
8. If the A-set of a node \( v \) is empty and \( v \) is not marked red, then color \( v \) green and set A-set\(_v\) = R-set\(_v\), R-set\(_v\) = \( \phi \).
9. For every node \( w \) which is not colored red, set R-set\(_w\) = R-set\(_w\) \( \cup \{ n \mid \text{node } n \text{ is colored red} \} \). (Note: Nodes colored red (i.e. due to steps (3) or (6)) have R-set and A-set empty (see (3) and (6)).)
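As a data-structure sketch of the state-trees just described (an illustrative rendering under our own naming, not the paper's notation): each node carries an S-label (a subset of \( Q \)), an A-set and an R-set (sets of node names), a per-round color, and an ordered list of children.

```python
# Illustrative sketch (our naming, not the paper's): one node of a CD
# state-tree, plus the initial state-tree of the construction.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: int                       # the pair index assigned to this node
    s_label: frozenset              # S-label: a subset of Q
    a_set: frozenset = frozenset()  # A-set: names of alive nodes
    r_set: frozenset = frozenset()  # R-set: names of alive nodes
    color: str = "white"            # "white", "red" or "green" (per round)
    children: list = field(default_factory=list)  # ordered, oldest first

def initial_state(initial_states_of_A):
    # The initial state: a single node with S-label = initial states of A
    # and empty A-set and R-set.
    return Node(name=0, s_label=frozenset(initial_states_of_A))
```

The transition steps (1)-(9) above would then be implemented as in-place updates of such a tree, one round per input symbol.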
Let \( n = |Q| \). A state \( q \in Q \) is specific to a node \( v \) if \( q \) is in the S-label of \( v \) and is not in the S-label of any other node which is not an ancestor of \( v \). It is easy to see that a state can be specific to at most one node in the state-tree. Hence, each state-tree has at most \( n \) nodes. The acceptance condition \( \Omega \) of \( CD \) will have \( n \) pairs. Whenever a new node is created in the state-tree it is assigned a pair which is not already assigned to some node in the tree. When a node is removed from the state-tree, the pair assigned to that node becomes free, and can be reassigned.
Acceptance Condition: At the end of the above round (transition) if a node is colored red, flash the red light corresponding to the pair assigned to the node.
If a node is colored green, flash the green light.
If a node is removed in (5) or (6), again flash the red light corresponding to the pair assigned to the node.
Correctness: Note that steps (1) to (6) are similar to Safra's construction, except that now in step (6) we label \( v \) red instead of green. More specifically, the creation, deletion of nodes and maintenance of S-labels of nodes in the state-tree is exactly the same.
Completeness: We show that if there is no accepting run of \( A \) on the input, i.e. no run of \( A \) visits \( F \) i.o., then there exists a pair \( i \) in \( CD \) s.t. (i.o. Green\(_i\) and f.o. Red\(_i\)).
When a node flashes green we call it the green checkpoint of the node, and when it flashes red we call it the red checkpoint.
Observation 1: If an eternal node has a last checkpoint then it must be a green checkpoint, because after the last red checkpoint (3 or 6) A-set is empty, and hence in the next round case (8) will hold.
Observation 2: Since there is no accepting run in \( A \), if a node \( v \) is eternal, then it will not have infinitely many red checkpoints. This follows from the Soundness of Safra's Construction.
Observation 3: There is at least one eternal node (root node). W.l.o.g. we can assume that all runs of \( A \) are infinite.
We will show that at least one eternal node flashes
green i.o. Since each eternal node has finitely many red checkpoints, consider the eternal node $v$ which flashed red last among the eternal nodes. After the last red checkpoint of $v$ both $A$-set$_v$ and $R$-set$_v$ become empty. And since no eternal node flashes red after this instant, no eternal node will be in the $A$-set of $v$ after this instant. And hence $A$-set of $v$ will be empty infinitely often (At each Green Checkpoint of $v$, $A$-set$_v$ is set to some finite set of nodes, all of which eventually die, making $A$-set$_v$ empty). Thus $v$ flashes green i.o. and red f.o.
**Soundness:** Suppose there is a pair which flashes green i.o. and red f.o. Then there must be an eternal node $v$ to which this pair gets assigned eventually. Let $Red_v$ be the last time $v$ flashes red (or the time of its creation, if it doesn't flash red at all).
Claim: After $Red_v$ no eternal node flashes red. First note that the $A$-set of a node $v$ can become empty either because all nodes in the $A$-set die off (7) or at the red checkpoint of $v$ (6). If after $Red_v$ any eternal node $v'$ flashes red, then $v'$ will be in $R$-set$_v$ eventually. Since $A$-set$_v$ becomes empty i.o. (because $v$ flashes green i.o.) and since there are no more red checkpoints of $v$, $v'$ will be in $A$-set$_v$ eventually (i.e. at the next green checkpoint of $v$ after $v'$ is in $R$-set$_v$). After that $A$-set$_v$ cannot become empty because $v'$ is eternal and there are no more red checkpoints of $v$, contradicting that $v$ flashes green i.o.
Thus, no eternal node flashes red i.o. Hence by Completeness of Safra's Construction there is no accepting run in $A$. □
For completeness' sake we reprove Safra's result here. We say that a run $\psi$ at time $t$ is in node $v$ iff $\psi_t$ is in the $S$-label of $v$ at time $t$.
For completeness of Safra's Construction we prove that if there is an accepting run in $A$, then there is a node in the state-tree run of $CD$ s.t. the red light of the pair corresponding to that node flashes i.o. (note that our construction modifies Safra's only in that it flashes red instead of green). Let $\psi$ be an accepting run in $A$. Let $u$ be the (unique) deepest node in the state-tree s.t. $u$ is eternal and eventually $\psi$ is always in $u$ (i.e. no child of $u$ has that property). Such a node exists because the state of $\psi$ is always in the root node, which is eternal, and the depth of the state-tree is bounded. Moreover there is a unique such node because, as reasoned earlier, each state in $Q$ is specific to at most one node in the state-tree. Since $u$ is the deepest eternal node in which $\psi$ remains forever, no child of $u$ is eternal, because if some child of $u$ were eternal then some eternal child of $u$ would eventually always have $\psi$, contradicting that $u$ is the deepest.
To see the last step, note that since $\psi$ is accepting, infinitely often $u$ has a child which contains $\psi$. Then the oldest child $v$ of $u$ which ever has $\psi$ in it will have $\psi$ in it forever (because the only way $\psi$ can be removed from a node is by (4) or by (6), each of which leads to a contradiction). Thus $v$ contradicts that $u$ is the deepest. Thus $u$ flashes red i.o. (see (6), which is the only way all children of a node die).
For soundness, we prove that if an eternal node $u$ in the state-tree run of $CD$ flashes red i.o. due to step (6), then there is a good (accepting) run in $A$. Let $x$ and $y$ be any two instants when $u$ flashes red due to step (6), s.t. at no instant strictly between $x$ and $y$ does $u$ flash red due to step (6). Let the $S$-label of $u$ at $x$ and $y$ be $R_u(x)$ and $R_u(y)$. Then there must be runs in $A$ s.t. for each $s$ in $R_u(y)$ there is a run-segment, beginning at a state in $R_u(x)$ and ending at $s$, which visited a state in $F$. Using these run-segments and applying Koenig's Lemma, it can easily be shown that there is an accepting run in $A$.
**Complexity:** At the end of each move of the construction, there are at most $n = |Q|$ nodes in a state-tree. Therefore there are at most $2^{O(n)}$ such unlabelled trees, and even with $S$-labels there are at most $2^{O(n \log n)}$ state-trees. But each node is additionally labelled with an $A$-set and an $R$-set, each a subset of the at most $n$ node names, giving $2^{O(n)}$ labels per node and $2^{O(n^2)}$ label combinations in all. Hence the size of $CD$ is $2^{O(n^2)}$, with $n$ pairs. □
Lemma 3.2: For a deterministic Pairs Automaton $D = (\Sigma, Q, q_0, \delta, \Omega)$ with \(|Q| = n \) and \( m \) pairs in \( \Omega \), an equivalent non-deterministic Buchi automaton \( \mathcal{N} = (\Sigma, Q', q_0', \delta', F) \) can be constructed such that \(|Q'| = O(mn)\).
Proof Sketch: The construction is a standard exercise in \( \omega \)-Automaton programming. Essentially, \( \mathcal{N} \) simultaneously guesses a pair \( i \) in \( \Omega \) and a time after which \( \text{Red}_i \) never flashes. After making this guess this run of \( \mathcal{N} \) flashes Green (i.e. goes to a state in \( F \)) whenever it visits a state in \( \text{GREEN}_i \), and terminates when it visits a state in \( \text{RED}_i \).
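The Lemma 3.2 construction can be sketched as follows (an illustrative Python rendering under our own naming, not from the paper; a "sim" state simulates \( D \), and a "chk" state records the guessed pair index \( i \)):

```python
# Illustrative sketch (our naming): given a deterministic pairs automaton D,
# build a non-deterministic Buchi automaton N that guesses a pair i and a
# point after which RED_i is never visited again, flashing green on GREEN_i.

def pairs_to_buchi(Q, delta, q0, pairs):
    """delta: dict (q, letter) -> q'.  pairs: list of (RED_i, GREEN_i) sets.
    Returns (states, ndelta, start, F) of the Buchi automaton N."""
    k = len(pairs)
    states = ({("sim", q) for q in Q} |
              {("chk", q, i) for q in Q for i in range(k)})   # O(mn) states
    start = ("sim", q0)
    # Accepting states: checking pair i while in a GREEN_i state of D.
    F = {("chk", q, i) for i, (red, green) in enumerate(pairs)
         for q in Q if q in green}
    def ndelta(state, letter):
        if state[0] == "sim":
            q2 = delta[(state[1], letter)]
            # Keep simulating, or guess that pair i holds from now on.
            return {("sim", q2)} | {("chk", q2, i) for i in range(k)}
        _, q, i = state
        q2 = delta[(q, letter)]
        red, _green = pairs[i]
        # The run dies if it ever re-enters RED_i after the guess.
        return set() if q2 in red else {("chk", q2, i)}
    return states, ndelta, start, F
```

A run of \( \mathcal{N} \) thus visits \( F \) i.o. exactly when the guessed pair \( i \) witnesses the pairs condition on the corresponding run of \( D \).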
The following Lemma is due to Vardi (see [Sa88]).
**Lemma 3.3:** For any Streett automaton with \( n \) states and \( m \) accepting pairs, an equivalent nondeterministic Buchi automaton of size \( n2^{O(m)} \) can be constructed.
**Corollary 3.4:** For any Streett automaton \( \mathcal{A} \) with \( n \) states and \( m \) accepting pairs, a deterministic Pairs automaton \( \mathcal{D} \) with \( 2^{n^2 2^{O(m)}} \) states and \( n2^{O(m)} \) pairs can be constructed which accepts the complement of \( L(\mathcal{A}) \).
Proof: Apply Lemma 3.3 followed by Theorem 3.1.\( \square \)
The previous best known simultaneous determinization and complementation (in fact, even just complementation) of a Streett automaton was triple exponential. The above Corollary also gives a double exponential translation from a non-deterministic Streett Automaton to a deterministic Streett Automaton. A double exponential translation from a non-deterministic Streett Automaton to a deterministic Pairs Automaton is already known ([Sa88]).
We next show that there is a single exponential translation from deterministic Streett to Deterministic Pairs and vice-versa. An exponential blowup lower bound can easily be shown ([Sa88]) for these two translations. The previous best known translations were double exponential ([Sa88]).
**Corollary 3.5:** For any deterministic Streett automaton (Pairs Automaton) with \( n \) states and \( m \) pairs, an equivalent deterministic Pairs Automaton (Streett Automaton resp.) with \( n2^{O(m^2)} \) states and \( O(m) \) pairs can be constructed.
Proof: Given a deterministic pairs string Automata \( \mathcal{A} \) with \( n \) states and \( m \) pairs, we can design a non-deterministic Buchi Automaton \( \mathcal{B} \) that accepts exactly those strings of states of \( \mathcal{A} \) which meet \( \mathcal{A} \)'s pairs condition. \( \mathcal{B} \) operates as follows: it guesses the pair index \( i \) certifying that the \( i \)th pair condition holds and then guesses the position along the input string of states after which no state in \( \text{Red}_i \) is ever seen again. \( \mathcal{B} \) flashes green whenever the input state is in \( \text{Green}_i \).
Therefore, \( \mathcal{B} \) can be implemented with \( O(m) \) states. By Theorem 3.1 (viewing the resulting deterministic Pairs automaton for the complement of \( L(\mathcal{B}) \) as a complemented pairs automaton), there is a deterministic complemented pairs Automaton \( B_1 \) with \( 2^{O(m^2)} \) states and \( O(m) \) pairs that accepts the same language as \( \mathcal{B} \).
We now define a deterministic complemented pairs automaton \( \mathcal{A}_1 \) which is equivalent to \( \mathcal{A} \) by taking the “product” of the transition table of \( \mathcal{A} \) with \( B_1 \): on an input string over the alphabet of \( \mathcal{A} \), \( \mathcal{A}_1 \) reads the string, going through states of \( \mathcal{A} \) according to the transition table of \( \mathcal{A} \). The resulting string of states of \( \mathcal{A} \) (the run of \( \mathcal{A} \)) is fed as input (as it is generated) to \( B_1 \), whose acceptance condition determines if the run of \( \mathcal{A} \) met the pairs condition of \( \mathcal{A} \), and \( \mathcal{A}_1 \) accepts iff \( B_1 \) accepts. Note that \( \mathcal{A}_1 \) can be implemented with \( n2^{O(m^2)} \) states and \( O(m) \) pairs.
If \( \mathcal{A} \) is a deterministic Streett Automaton, it can be viewed as a deterministic Pairs Automaton for the complement of \( L(\mathcal{A}) \). Applying the above construction to get a deterministic Streett automaton \( \mathcal{A}_1 \) for the complement of \( L(\mathcal{A}) \), and viewing \( \mathcal{A}_1 \) as a deterministic Pairs Automaton for \( L(\mathcal{A}) \), gives the conversion from deterministic Streett Automaton to deterministic Pairs Automaton.
\( \square \)
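The “product” used in the proof of Corollary 3.5 can be sketched as follows (an illustrative rendering with our own naming): \( B_1 \) consumes the run of \( \mathcal{A} \) as it is generated, i.e. at each step it reads the state that \( \mathcal{A} \) has just entered.

```python
# Illustrative sketch (our naming): the transition function of the product
# automaton A_1 of a deterministic automaton A with an automaton B1 that
# reads the run of A (its sequence of states) as that run is generated.

def product_delta(delta_a, delta_b):
    """delta_a: dict (qa, letter) -> qa'; delta_b: dict (qb, qa') -> qb'."""
    def d(state, letter):
        qa, qb = state
        qa2 = delta_a[(qa, letter)]   # A takes a step on the input letter
        qb2 = delta_b[(qb, qa2)]      # B1 reads the new state of A
        return (qa2, qb2)
    return d
```

The acceptance condition of \( \mathcal{A}_1 \) is then inherited entirely from the \( B_1 \)-component of the product state.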
4. **Complexity of Hybrid Tree Automata**
**Theorem 4.1**[EJ88] (also see [PR89]): Non-emptiness of a pairs tree automaton with \( m \) states and \( n \) pairs can be tested in deterministic time \( O((mn)^{3n}) \).
**Corollary 4.2:** Non-emptiness of a Hybrid Automaton \( \mathcal{H} = (\mathcal{A}, \mathcal{B}) \), such that \( \mathcal{A} \) has \( m_1 \) states and \( n \) pairs, and \( \mathcal{B} \) has \( m_2 \) states, can be tested in deterministic time \( (m_1 2^{O(m_2^2 + n)})^{O(n m_2)} \).
Proof: Using Theorem 3.1, obtain a deterministic pairs string automaton \( C \), such that \( L(C) \) is the complement of \( L(\mathcal{B}) \), the no. of states in \( C \) is \( 2^{O(m_2^2)} \), and the no. of pairs is \( m_2^2 \). Since \( C \) is deterministic, it can easily be converted into a pairs tree automaton \( D \) which accepts the language: all paths of the tree are in \( L(C) \). Now, we use the standard cross-product construction to obtain a pairs tree automaton which accepts the language which is the intersection of \( L(\mathcal{A}) \) and \( L(D) \). For tree automata with \( q_1 \) and \( q_2 \) states, and \( r_1 \) and \( r_2 \) pairs respectively, such a cross-product construction gives a tree automaton with \( O(q_1 q_2 2^{q_1+q_2}) \) states and \( r_1 r_2 \) pairs. Thus, we obtain a tree automaton \( E \) such that \( L(E) = L(\mathcal{A}) \cap L(D) \), with \( n m_2^2 \) pairs and a number of states singly exponential in the sizes of \( \mathcal{A} \) and \( \mathcal{B} \). Moreover, by definition of \( \mathcal{H} \), \( L(E) = L(\mathcal{H}) \). Thus, using Theorem 4.1, we can test non-emptiness of \( E \), and hence of \( \mathcal{H} \), in deterministic exponential time.
It is already known ([VS85]) that the emptiness problem of Hybrid Tree Automata is logspace hard for deterministic exponential time. Thus the above algorithm is essentially optimal.
The above Theorem can be used to give essentially optimal decision procedures for several modal logics of programs. In particular, using the reduction from satisfiability of Delta-Converse-PDL to emptiness of Hybrid Automata ([Va85], [VS85]), we now obtain a deterministic exponential time algorithm for Delta-Converse-PDL. Essentially, given a formula \( f \) of Delta-Converse-PDL, a hybrid automaton \( H_f = (A, B) \) can be constructed such that \( H_f \) accepts exactly the models (with Hintikka labels) of \( f \). The automaton obtained has the property that the no. of states in \( A \) is exponential in \( |f| \), the no. of pairs in \( A \) is polynomial in \( |f| \), and the no. of states in \( B \) is polynomial in \( |f| \). Using Corollary 4.2, we get a deterministic exponential time algorithm. Similarly, satisfiability of Parikh's Game Logic ([Pa83]) can be reduced to emptiness of Tree Automata, and then using Corollary 4.2 we obtain a deterministic exponential time decision procedure for Game Logic. The above upper bounds match the known lower bounds for these logics (namely the exponential time lower bound on the decision procedure for PDL ([FL79]), which is subsumed by both these logics). The previous best known decision procedures for both these logics were of complexity non-deterministic exponential time ([Pa83] for Game Logic, and [VS85] for both the logics).
On the other hand, YAPL ([VW83]) has a deterministic double exponential lower bound ([VS85]). When we reduce satisfiability of YAPL formulae to emptiness of Hybrid Automata, the size of the string automaton turns out to be exponential in the length of the original formula (in contrast to the above logics, in which the string automata are of size polynomial in the length of the original formula). This causes the decision procedure for YAPL (obtained using Corollary 4.2) to run in deterministic double exponential time. The previous best known algorithm for YAPL was of complexity non-deterministic double exponential time ([VS85]).
5. **Conclusion**
We have exhibited a construction to simultaneously determinize and complement Buchi automata on infinite strings, and illustrated its application to decision procedures for Modal Logics of Programs.
Vardi [Var] has pointed out that using the reduction of satisfiability for YAPL, Game Logic, and PDL-Delta-Converse to "weak" (Buchi) Hybrid Tree Automata (in lieu of Pairs Hybrid Automata) reported in [VS85], together with results of [EJ88] and [Sa88], it is possible to obtain upper bounds for these logics similar to those above.
Finally, another single exponential determinization and complementation of Buchi string-Automata has been obtained recently by Safra [Sa].
6. **Acknowledgements**
We would like to thank the class of "Automata and Formal Language Theory", Fall 88, at the University of Texas at Austin, for helpful comments and suggestions. We also thank Moshe Vardi for helpful comments on the preliminary version of the paper.
References
[Var] M.Y. Vardi, personal communication
[Sa] S. Safra, "On Streett and Rabin Deterministic Automata", manuscript
Modeling work distribution mechanisms using colored petri nets
Pesic, M.; van der Aalst, W.M.P.
Published: 01/01/2005
Modeling Work Distribution Mechanisms Using Colored Petri Nets
M. Pesic and W.M.P. van der Aalst
Department of Technology Management, Eindhoven University of Technology, P.O.Box 513, NL-5600 MB, Eindhoven, The Netherlands.
m.pesic@tm.tue.nl, w.m.p.v.d.aalst@tm.tue.nl
Abstract. Workflow management systems support business processes and are driven by their models. These models cover different perspectives including the control-flow, resource, and data perspectives. This paper focuses on the resource perspective, i.e., the way the system distributes work based on the structure of the organization and capabilities/qualifications of people. Contemporary workflow management systems offer a wide variety of mechanisms to support the resource perspective. Because the resource perspective is essential for the applicability of such systems, it is important to better understand the mechanisms and their interactions. Our goal is not to evaluate and compare what different systems do, but to understand how they do it. We use Colored Petri Nets (CPNs) to model work distribution mechanisms. First, we provide a basic model that can be seen as the “greatest common denominator” of existing workflow management systems. This model is then extended for three specific systems (Staffware, FileNet, and FLOWer). Moreover, we show how more advanced work distribution mechanisms, referred to as resource patterns, can be modelled and analyzed.
Key words: Work distribution, workflow management, business process management, resource patterns, colored Petri nets.
1 Introduction
Workflow management systems are process-aware information systems [5, 20], which are used in companies as a means for the computerized structuring and driving of complex business processes. Workflow management systems implement business process models, and use them for driving the flow of work by allocating the right employees to the right tasks at the right times. The system manages the work of employees. It will determine which tasks an employee has to execute and when, which documents will be used, which information will be available during work, etc. Typically, a workflow management system offers several mechanisms to distribute work. Nevertheless, we believe that existing systems are too limited in this respect. The goal of this paper is not to propose advanced work distribution mechanisms. Instead, we focus on the analysis of functionality in existing systems. The goal is not to evaluate these systems, but to understand how they offer specific functionality. Since work distribution defines the quality of work, it is important to consider research from the field of social sciences, e.g., social-technical design [13, 17, 21, 54]. We believe that only by combining both technical and social approaches, one can truly grasp certain phenomena. A deeper understanding of particular aspects of work distribution is essential for developing a new breed of more user-centric systems.
The work reported in this paper can be seen as an extension of the workflow patterns initiative [6] (cf. www.workflowpatterns.com). Within the context of this initiative 43 resource patterns [50, 48] have been defined. Using a patterns approach, work distribution is evaluated from the perspective of the end-user as a dynamic property of workflow management systems. The work reported in this paper adds to a better understanding of these mechanisms by providing explicit process models for these patterns, i.e., the descriptive models are augmented with executable models. Most work reported in literature (cf. Section 4) uses static models to describe work distribution. Consider for example the meta modeling approaches presented in [8, 40–42, 47]. These approaches use
static models (e.g., UML class diagrams) to discuss work distribution concepts. This paper takes a truly dynamic model – a Colored Petri Net model – as a starting point, thus clearly differentiating our contribution from existing work reported in literature.
Colored Petri Nets (CPNs) [31, 34] are a natural extension of the classical Petri net [45]. There are several reasons for selecting CPNs as the language for modeling work distribution in the context of workflow management. First of all, CPNs have formal semantics and allow for different types of analysis, e.g., state-space analysis and invariants [32]. Second, CPNs are executable and allow for rapid prototyping, gaming, and simulation. Third, CPNs are graphical and their notation is similar to existing workflow languages. Finally, the CPN language is supported by CPN Tools\(^1\) – a graphical environment to model, enact and analyze CPNs.
In this paper, we provide a basic CPN model that can be seen as the “greatest common denominator” of existing workflow management systems. The model will incorporate concepts of a task, case, user, work item, role and group. This model should be seen as a starting point towards a more comprehensive reference model for work distribution. The basic CPN model is extended and specialized for three specific systems: Staffware [53], FileNet [24], and FLOWer [43]. These three models are used to investigate differences between and similarities among different work distribution mechanisms in order to gain a deeper understanding of these mechanisms. In addition, advanced resource patterns that are not supported by these three systems are modeled by extending the basic CPN model.
The remainder of this paper is organized as follows. Section 2 presents the basic CPN model, which should be considered as the “greatest common denominator” of existing workflow management systems. Section 3 extends this model in two directions: (1) Section 3.1 specializes the model for three different systems (i.e., Staffware, FileNet, and FLOWer), and (2) Section 3.2 extends the basic model for selected resource patterns. An overview of related work is given in Section 4. Section 5 discusses our findings and, finally, Section 6 concludes the paper.
2 Basic Model
Different workflow management systems tend to use not only different work distribution concepts, but also completely different terminologies. This makes it difficult to compare these systems. Therefore, we will not start by developing CPN models for different systems and see how these can be unified, but, instead, start with modeling the “greatest common denominator” of existing systems. This model can assist in comparing systems and unifying concepts and terminology. We will use the term Basic Model to refer to this “greatest common denominator” and represent it in terms of a CPN model.
In the introduction we already motivated the use of CPNs as a modeling language [31, 34]. A CPN consists of places and transitions connected by arcs. The network structure is static but places can hold tokens thus representing the state of the model. The number of tokens per place can vary over time. Moreover, unlike the classical Petri net, tokens can have both a value and a time-stamp. The time-stamps indicate the availability of tokens and can be used to model delays, processing times, timeouts, etc. The value of a token indicates the properties of the object represented by this token. Places (represented by ovals) are typed, i.e., the tokens in a place have values of a particular type (or color in CPN jargon). These types are a subset of the data types in Standard ML such as the primitive types integer and string and compositional types such as tuple, list and record. Each place can hold tokens with values of a certain type. Transitions (represented by rectangles) may consume and produce tokens. Since tokens have values, arc inscriptions are needed to specify the input-output relations. Besides the extension with token colors and time-stamps, CPN models allow for hierarchy. Complex models may be decomposed into sub-pages, also referred to as sub-processes or modules, to obtain a layered hierarchical description. A more detailed discussion of the CPN concepts is beyond the scope of this paper. In the remainder, we assume that the reader is familiar with the CPN language and refer to [31, 34] for more details.
\(^1\) CPN Tools can be downloaded from wiki.daimi.au.dk/cpntools/.
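The CPN ingredients just described (typed places holding valued tokens, transitions consuming and producing tokens according to arc inscriptions) can be illustrated in plain Python; this is a sketch of the concepts only, not CPN Tools or Standard ML syntax:

```python
# Minimal illustration (assumed names): places hold lists of colored
# tokens; a transition consumes a token from its input place and
# produces it on its output place, as an arc inscription would specify.

places = {
    "new work items": [(1, "write article"), (1, "read article")],
    "offered work items": [],
}

def fire_offer():
    """Consume one token from 'new work items', produce it as offered."""
    token = places["new work items"].pop(0)     # consume from input place
    places["offered work items"].append(token)  # produce on output place

fire_offer()
print(places["offered work items"])  # [(1, 'write article')]
```

In a real CPN the token would also carry a time-stamp and the transition a guard; both are omitted here for brevity.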
The Basic Model represents a workflow management system where the business process is defined as a set of tasks. Before the process can be initiated and executed, it has to be instantiated. One (executable) instance of a process is referred to as a case. Each case traverses the process. If a task is enabled for a specific case, a work item, i.e., a concrete piece of work, is created. There is a set of users that can execute work items. The users are embedded in the organizational structure on the basis of their roles, and the groups they belong to. Group is an organizational unit (e.g., sales, purchasing, production, etc.), while role represents a capability of the user (e.g., manager, software developer, accountant, etc.). These concepts are mapped onto CPN types as shown in Table 1. As indicated, CPN uses Standard ML types (e.g., string and int) and type constructors such as product to create pairs and other complex constructs (e.g., \((1, \text{"taskA"})\) represents a value of type WI).
During work distribution, work items change state. The state of a work item depends on previous actions and determines the next possible actions of users and of the distribution mechanism. A model of the life cycle of a work item shows how the work item changes state during work distribution. For more detailed life cycle models we refer the reader to literature, e.g., [5, 18, 20, 30, 37, 41]. We have developed and used life cycle models as an aid to describe the work distribution mechanisms of each of the workflow systems we have modeled in CPN. The Basic Model uses a simple life cycle of work items, covering only the general, rather simplified, behavior of workflow management systems (e.g., errors and aborts are not considered). Figure 1 shows the life cycle of a work item in the Basic Model. After a new work item has arrived, it is automatically enabled and taken into distribution (i.e., state initiated). Next, the work item is offered to the user(s). Once a user selects the work item, it is assigned to him/her, and (s)he can start executing it. After the execution, the work item is considered completed, and the user can begin working on the next work item.
Table 1. Basic Workflow Concepts
<table>
<thead>
<tr>
<th>Concept</th>
<th>CPN color</th>
</tr>
</thead>
<tbody>
<tr>
<td>Task</td>
<td>color Task = string;</td>
</tr>
<tr>
<td>Case</td>
<td>color Case = int;</td>
</tr>
<tr>
<td>WI</td>
<td>color WI = product Case * Task;</td>
</tr>
<tr>
<td>User</td>
<td>color User = string;</td>
</tr>
<tr>
<td>Role</td>
<td>color Role = string;</td>
</tr>
<tr>
<td>Group</td>
<td>color Group = string;</td>
</tr>
</tbody>
</table>
Fig. 1. Basic Model - Work Item Life Cycle
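The life cycle of Figure 1 can be rendered as a small state machine; the state names follow the text above, while the transition table itself is our sketch:

```python
# Allowed state transitions of a work item (our rendering of Figure 1).
LIFE_CYCLE = {
    "initiated": ["offered"],      # taken into distribution, then offered
    "offered": ["assigned"],       # a user selects the work item
    "assigned": ["in progress"],   # the user starts executing it
    "in progress": ["completed"],  # the user finishes
    "completed": [],
}

def advance(state, target):
    """Move a work item along its life cycle, rejecting illegal jumps."""
    if target not in LIFE_CYCLE[state]:
        raise ValueError(f"illegal transition {state} -> {target}")
    return target

s = "initiated"
for nxt in ["offered", "assigned", "in progress", "completed"]:
    s = advance(s, nxt)
print(s)  # completed
```

An attempt to jump, say, from offered straight to completed raises an error, mirroring the fact that the distribution mechanism only permits the steps of Figure 1.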
For the simulation (execution) of the work distribution in the model it is necessary to initiate the model by defining input elements. Table 2 shows that four elements are required for the simulation of the Basic Model. For every input element, Table 2 shows the element name (i.e., “system users”, “new work items”, “task maps” and “user maps”). Below the name there is a short description of the element, the color in the CPN model that represents the element, and a simple example of the initial element value. In this example, there are two work items available for the case “1”: “write article” and “read article” (new work items). The authorization (task maps) of these two tasks is specified in such a way that the task “write article” is mapped to the user who has the role “student”, and is in the group “Information Systems”. The task “read article” is mapped to the user with the role “professor”, from the group “Information Systems”. The organizational structure (user maps) contains two users. First, there is “Mary” who has the role of “student” in the group “Information Systems”. Second, user “Joe” has the role “professor” and he works in the groups “Information Systems” and “Mathematics”.
As a model of an abstract workflow management system, we have developed the Basic Model on the basis of predefined assumptions: (1) we abstract from the process perspective (i.e., splits, joins, creation of work items), (2) we only consider the “normal” behavior (i.e., work items are completed successfully; errors and aborts are not included), and (3) we abstract from the user interface.
Table 2. Input For The Basic Model
<table>
<thead>
<tr>
<th>Input element</th>
<th>Description</th>
<th>CPN color</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>1. system users</td>
<td>a set of available users</td>
<td>color Users = list User;</td>
<td>iUser = 1'"Mary" ++ 1'"Joe";</td>
</tr>
<tr>
<td>2. new work items</td>
<td>work items that have arrived and are ready to be distributed to users</td>
<td>color WI = product Case * Task;</td>
<td>iWI = 1'(1,"write article") ++ 1'(1,"read article");</td>
</tr>
<tr>
<td>3. task maps</td>
<td>the decision about which work items can be executed by which users is made based on the authorizations given for every task in the process definition</td>
<td>color TMap = product Task * Role * Group;</td>
<td>iTMaps = [("write article","student","Information Systems"), ("read article","professor","Information Systems")];</td>
</tr>
<tr>
<td>4. user maps</td>
<td>the organizational structure is used to map users to the authorization of tasks</td>
<td>color UMap = product User * Roles * Groups; (color Roles = list Role; color Groups = list Group;)</td>
<td>iUMaps = [("Mary",["student"],["Information Systems"]), ("Joe",["professor"],["Mathematics","Information Systems"])];</td>
</tr>
</tbody>
</table>
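Using the example values of Table 2, the allocation step of the Basic Model (offer a work item to every user whose roles and groups match its task map) can be sketched as follows; the function name offer follows the model, while the Python rendering is ours:

```python
# Example data from Table 2 (task maps and user maps).
tmaps = [("write article", "student", "Information Systems"),
         ("read article", "professor", "Information Systems")]
umaps = [("Mary", ["student"], ["Information Systems"]),
         ("Joe", ["professor"], ["Mathematics", "Information Systems"])]

def offer(wi):
    """Offer a work item to every user matching its task map entries."""
    case, task = wi
    hits = []
    for t, role, group in tmaps:
        if t != task:
            continue
        for user, roles, groups in umaps:
            if role in roles and group in groups:
                hits.append((user, wi))
    return hits

print(offer((1, "write article")))  # [('Mary', (1, 'write article'))]
```

With the Table 2 data, “write article” reaches only Mary (role student in Information Systems) and “read article” reaches only Joe.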
The Basic Model is organized into two sub-systems: the Work Distribution and the Work Lists module. The CPN language allows for the decomposition of complex nets into sub-pages, which are also referred to as sub-systems, sub-processes or modules. By using such modules we obtain a layered hierarchical description. Figure 2 shows the modular structure of the Basic Model. The two sub-modules communicate by exchanging messages via six places. These messages contain information about a user and a work item. Every message place is of the type (i.e., the CPN color) “user work item” (color UWI = product User * WI), which is a combination of a user and a work item. Table 3 shows the description of the semantics of each of the messages that can be exchanged in the model.
Table 3. Messages Between Modules
<table>
<thead>
<tr>
<th>Place</th>
<th>Message</th>
</tr>
</thead>
<tbody>
<tr>
<td>to be offered</td>
<td>A work item is offered to the user.</td>
</tr>
<tr>
<td>withdrawn offer</td>
<td>Withdraw the offered work item from the user.</td>
</tr>
<tr>
<td>selected</td>
<td>The user requests to select the work item.</td>
</tr>
<tr>
<td>approved</td>
<td>Allow the user to select the work item.</td>
</tr>
<tr>
<td>rejected</td>
<td>Do not allow the user to select the work item.</td>
</tr>
<tr>
<td>completed</td>
<td>The user has completed executing the work item.</td>
</tr>
</tbody>
</table>
Fig. 2. Basic Model - Main
Work Distribution. The Work Distribution module manages the distribution of work items by managing the process of work execution and making sure that work items are executed correctly. It determines the users to whom new work items should be offered, based on authorization (TMap) and organization (UMap) data. Three (out of four) input elements are placed in this module: new work items, user maps and task maps. The variables used in this module are shown in Table 4.
Table 4. Basic Model - Variables in Work Distribution Module
<table>
<tbody>
<tr><td>var tmaps: TMaps;</td></tr>
<tr><td>var umaps: UMaps;</td></tr>
<tr><td>var wi: WI;</td></tr>
<tr><td>var wis: WIs; (color WIs = list WI;)</td></tr>
<tr><td>var uwi: UWI;</td></tr>
</tbody>
</table>
Figure 3(a) shows the Work Distribution module. The allocation function offer contains the allocation rules (allocation algorithm) of the specific distribution mechanism. Work items that are offered to users are stored in the place offered work items. After receiving a request from a user to select a work item, the decision is made whether to allow the user to select the item (and thus execute it) or to reject the request. This decision is based on the assumption that at any moment only one user can work on a work item. If the work item has already been selected (i.e., it is not in the place offered work items), then the model rejects the request. If nobody has selected the work item yet, the approval is sent to the user and the work item is moved to the place assigned work items. A work item that has been moved to the place assigned work items cannot be selected again.
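The approve/reject decision just described (only one user may select a work item; the first request wins) can be sketched as follows; the function name request_select is assumed, and the real model exchanges these messages via the places of Table 3:

```python
# Places of the sketch: 'offered' holds selectable work items,
# 'assigned' records which user obtained each work item.
offered = {(1, "write article"), (1, "read article")}
assigned = {}

def request_select(user, wi):
    """Approve iff the work item is still offered; first user wins."""
    if wi in offered:
        offered.remove(wi)   # leave the 'offered work items' place
        assigned[wi] = user  # enter the 'assigned work items' place
        return "approved"
    return "rejected"

print(request_select("Mary", (1, "write article")))  # approved
print(request_select("Joe", (1, "write article")))   # rejected
```

Removing the token from offered before answering is what makes the check race-free in the CPN: a transition consumes the token atomically.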
Work Lists. Figure 3(b) shows the Work Lists module. This module receives messages from the Work Distribution module about which work items are to be offered to which users. The Work Lists module further manages events associated with the activities of users. It is decomposed into four units, which correspond to the basic actions users can perform: log on and off (cf. Figure 3(c)), select work (cf. Figure 3(d)), start work (cf. Figure 3(e)), and stop work (cf. Figure 3(f)). Once a work item has been offered to users, they can select it. When a user selects the work item, a request is sent to the Work Distribution module. If the request is rejected, the action is aborted. If the Work Distribution module approves the request, the user can start working on the work item. Once the user has started working, the work item is considered to be in progress. Next, the user can stop working, and the work item is completed. In order to perform any of these actions, the user must be logged on in the system.
3 Work Distribution Models
The Basic Model presented in previous section (Section 2) is used as a reference for different extensions and specializations of work distribution. In this section, we first extend and specialize the Basic Model for Staffware, FileNet and FLOWer (Section 3.1). In Section 3.2 we select four of the more advanced resource patterns reported in [48, 50]. These four patterns are not supported by Staffware, FileNet and FLOWer, but we will show that it is easy to extend the Basic Model to adequately address the patterns.
3.1 Workflow Management Systems
We have modelled the work distribution mechanisms of three commercial workflow management systems: Staffware, FileNet and FLOWer. FileNet and Staffware are examples of two widely used traditional workflow management systems. FLOWer is based on the case-handling paradigm, which can be characterized as “the more flexible approach” [3, 9]. Each of the models we have developed will be described in the remainder of this section.
(* function "offer" takes new work items, and offers them to users, based on task maps and user maps. *)
offer(wi; tmaps; umaps)
(* work item cannot be selected more than once *)
[not(elt(wi,wis))]
(* prevent users to select the work item again, after someone has selected it *)
offer(wi; tmaps; umaps)
(* allow user to select the work item *)
select(wi; umaps)
(* input *)
(* work item cannot be selectd more than once *)
(* allow user to select the work item *)
(* prevent users to select the work item again, after someone has selected it *)
(* send request for the work item *)
insert(wi; u; tmaps; umaps)
(* request has been sent, wait for the response *)
(* the user is executing the work item *)
(* request approvement for executing the work item *)
(* the work item is assigned to the user *)
(* the request is approved *)
(* only the user wich is logged on can work *)
(* the user is executing the work item *)
(* the request has been sent, wait for the response *)
(* the request is approved *)
(* only the user wich is logged on can work *)
(* the user is executing the work item *)
(* request approved *)
(* request rejected *)
(* request approved *)
(* request rejected *)
(* request approvement for executing the work item *)
(* the work item is assigned to the user *)
(* the request is approved *)
(* only the user wich is logged on can work *)
(* the user is executing the work item *)
(* the request has been sent, wait for the response *)
(* the request is approved *)
(* only the user wich is logged on can work *)
(* the user is executing the work item *)
(* request approvement for executing the work item *)
(* the work item is assigned to the user *)
(* the request is approved *)
(* only the user wich is logged on can work *)
(* the user is executing the work item *)
(* request approvement for executing the work item *)
(* the work item is assigned to the user *)
(* the request is approved *)
(* only the user wich is logged on can work *)
(* the user is executing the work item *)
(* request approvement for executing the work item *)
(* the work item is assigned to the user *)
(* the request is approved *)
(* only the user wich is logged on can work *)
**Staffware** The Basic Model is upgraded to represent the work distribution of Staffware. The way of modelling the organizational structure and the resource allocation algorithm are changed, while the concept of work queues and the possibility for the user to forward and suspend a work item are added to the model.
**Organizational Structure.** A simple organizational structure can be created in Staffware using the notions of groups and roles. The notion of group is defined as in the Basic Model, i.e., one group can contain several users, and one user can be a member of several groups. However, specific for Staffware is that a role can be defined for only one user. This feature does not require any changes in the model itself. It changes the way the initial value for the user maps should be defined – one role should be assigned to only one user.
**Work Queues.** Groups are used to model a set of users that share common rights. A work item can be allocated to a whole group, instead of listing the names of the users that can execute it. Staffware introduces a work queue for every group. The work queue is accessible to all members of the group. Single users are also considered to be groups that contain only one member. Thus, one work queue is also created for every user, and this personal queue is accessible only by that single user. From the perspective of a user, (s)he has access to the personal work queue and to the work queues of all the groups (s)he is a member of. Table 5 shows which colors are added to the model to represent work queues in Staffware. While the Basic Model (Section 2) offers work items directly to users, Staffware offers items on two levels. First, a work item is offered to work queues (color $WQ$); we refer to such a work item as a queue work item (color $QWI$). Second, after a queue work item is offered to a group (work queue), it is offered to each of its members, and only one member will execute the queue work item. We refer to a queue work item that is offered to a member as a user work item (color $UWI$).
<table>
<thead>
<tr>
<th>Table 5. Staffware - “Work Queue” Colors</th>
</tr>
</thead>
<tbody>
<tr>
<td>color $WQ$ = string;</td>
</tr>
<tr>
<td>color $QWI$ = product $WI * WQ$;</td>
</tr>
<tr>
<td>color $UWI$ = product $User *QWI$;</td>
</tr>
</tbody>
</table>
Figures 4 and 5 show that we create two levels in the Work Distribution module to support the two-level distribution of Staffware:
1. In the module itself a new work item is offered to work queues (as a queue work item). The new work item is completed when each of its queue work items is executed. Thus, if a new work item is offered to multiple work queues, it is executed multiple times.
2. In the sub-module Offering to Users every queue work item is offered to the queue members (user work item). A queue work item is completed when one of the members executes the user work item.
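As an illustration outside the CPN model, the two-level distribution can be sketched in Python (a simplified sketch under our own naming; the tuples mirror the colors $QWI$ and $UWI$):

```python
# Sketch of Staffware's two-level work distribution (illustrative only).
# A new work item is offered to one or more work queues; each resulting
# queue work item is offered to every member of that queue, and is
# completed as soon as one member executes it.

def offer(work_item, queues, members_of):
    """Level 1: fan a work item out to work queues as queue work items;
    level 2: fan each queue work item out to the queue members."""
    queue_items = [(work_item, q) for q in queues]          # color QWI = WI * WQ
    user_items = [(u, qwi) for qwi in queue_items
                  for u in members_of[qwi[1]]]              # color UWI = User * QWI
    return queue_items, user_items

def execute(user_item, user_items):
    """One member executes a queue work item; the sibling offers to the
    other members of the same queue are withdrawn."""
    _, qwi = user_item
    return [ui for ui in user_items if ui[1] != qwi]

members = {"Information Systems": ["Joe", "Mary"]}
qwis, uwis = offer("read article", ["Information Systems"], members)
remaining = execute(("Joe", qwis[0]), uwis)   # Mary's offer is withdrawn
```

Executing a user work item withdraws the sibling offers for the same queue work item, which matches the rule that only one member executes each queue work item.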
**Resource Allocation.** We have changed the allocation function offer to represent the allocation algorithm in Staffware. Just like the Basic Model, Staffware searches for possible users based on roles and groups. In addition, in Staffware users can be allocated by their user-names and by data fields in the process. In user maps we use the fields reserved for groups when we want to specify the allocation for user-names. We do this by assuming that every user-name refers to a group with only one member – the specified user. The second addition in Staffware is that resource allocation can also be done at “run-time” on the basis of data fields in task maps (cf. Table 6). This kind of allocation is referred to as dynamic work allocation. Every field has a unique name, e.g., next user. During the execution of the process, every field is assigned a value, and this value may change (e.g., users can assign values to fields). Staffware assumes that the value of the assigned data field is a group name, a role name or a user name. If the field next user (which, for example, has the value “John Smith” assigned) is specified in the task map of a task, then the actual value of the field is assigned to the task map entry at the moment the task becomes enabled. Thus, “John Smith” will be used in the allocation.
<table>
<thead>
<tr>
<th colspan="2">Table 6. Staffware - Dynamic Work Allocation</th>
</tr>
</thead>
<tbody>
<tr>
<td>color Field = string;</td>
<td>color Fields = list Field;</td>
</tr>
<tr>
<td>color FValue = string;</td>
<td>color FMaps = list FMap;</td>
</tr>
<tr>
<td>color FMap = product Field * FValue;</td>
<td>color TMap = product Task * Users * Roles * Groups * Fields;</td>
</tr>
</tbody>
</table>
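How a data field is resolved when the task becomes enabled can be sketched as follows (our own Python illustration; the task name "approve claim" and the field name "next user" are examples, and we simply treat the resolved value as an additional group entry, in line with the user-name-as-group convention described above):

```python
# Sketch of Staffware's dynamic work allocation (illustrative only).
# A task map may name a data field (e.g. "next user"); when the task
# becomes enabled, the field's current value is substituted into the
# allocation, and the value is then treated as a group, role or user name.

def resolve_task_map(task_map, field_values):
    """Replace each field name in the task map by the field's current value."""
    task, users, roles, groups, fields = task_map
    resolved = [field_values[f] for f in fields]
    # Here we append the resolved values to the group entries, following
    # the convention that a user name denotes a one-member group.
    return (task, users, roles, groups + resolved, [])

tmap = ("approve claim", [], [], [], ["next user"])
fields = {"next user": "John Smith"}   # value assigned at run time
print(resolve_task_map(tmap, fields))
# ('approve claim', [], [], ['John Smith'], [])
```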
Table 7 represents the difference between the work distribution and the allocation functions offer in the Basic Model and Staffware. Work distribution in Staffware starts with distribution to work queues. Thus, allocation of resources starts with the allocation to work queues: (1) if the allocation refers to group names, the work item is allocated to group work queues, and (2) if the allocation refers to user names or roles, the work item is allocated to personal work queues. Allocation for every task is specified in the type TMap, as Table 6 shows. If we look at the example from Table 2, we can see that the task “read article” should be allocated to users who are from the group “Information Systems” and have the role “professor”. Figure 6 shows how this can be done in Staffware. The Basic Model allocates this task to users that are from the group “Information Systems” and have the role “professor”, i.e., to the user “Joe”. Staffware allocates this task to the work queue of the group “Information Systems” and the personal queue of the user who has the role “professor” (cf. Figure 7). In the Staffware model, this work item will be executed two times: (1) by one member of the group “Information Systems”, i.e., by “Joe” or “Mary”, and (2) by the user with the role “professor”, i.e., by “Joe”. Thus, in the Staffware model the task “read article” will either be executed (1) twice by “Joe” or (2) once by “Joe” and once by “Mary”.
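The same example can be worked through in code. The sketch below (ours, not part of the CPN models) contrasts the single offer of the Basic Model with Staffware's one-offer-per-queue behaviour; the user map follows the Table 2 example, with Joe as the only professor (Mary's empty role list is our assumption):

```python
# Illustrative comparison of the allocation of task "read article"
# (group "Information Systems", role "professor").
# user map: user -> (roles, groups)
umap = {"Joe":  (["professor"], ["Information Systems"]),
        "Mary": ([],            ["Information Systems"])}

# Basic Model: one offer, to users matching BOTH the group and the role.
basic = [u for u, (roles, groups) in umap.items()
         if "professor" in roles and "Information Systems" in groups]

# Staffware: one queue work item per queue -- the group's work queue plus
# the personal queue of each user holding the role -- so the task is
# executed once per queue.
staffware_queues = (["Information Systems"] +
                    [u for u, (roles, _) in umap.items() if "professor" in roles])

print(basic)             # ['Joe'] -> executed once
print(staffware_queues)  # ['Information Systems', 'Joe'] -> executed twice
```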
**Forward and Suspend.** In the Basic Model, once the user selects the work item, (s)he can start working on it and complete it. Figure 8 shows that Staffware offers a more realistic and somewhat more complex model of the life cycle of a work item. After the user starts the work item, (s)he can either execute it or forward it to another user. Forwarding transfers the work item to the state offered, because it is automatically offered to the new user. If the user chooses to execute the work item, (s)he can complete or suspend it. When a work item is suspended it is transferred back to the state initiated. After this, the system offers the work item again, so that other users are able to select it.
Forwarding and suspending of work items add two messages that are exchanged between the Work Distribution and Work Lists modules in the Staffware model. Figures 4 and 5 show two new places – forward and suspend. These two new actions are triggered in the Work List module by the user. Figure 9(a) shows that in the module Start Work the user can choose to select or forward (to another work queue) the work item. Figure 9(b) shows that in the module Stop Work the user can choose to complete or suspend the work item. The Work Distribution module handles forwarding and suspending in the Offering to Users sub-module. Figure 9(c) shows how: (1) in case of forwarding the work item is automatically cancelled for the current work queue and offered to the new work queue, and (2) in case of suspending the work item is cancelled for the current work queue and re-offered as a new work item.
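Our reading of the Figure 8 life cycle, including forwarding and suspension, can be sketched as a small transition table (an illustration of our own; the state and action names follow the text, the intermediate state "executing" is our shorthand):

```python
# Sketch of the Staffware work item life cycle (our reading of Figure 8).
TRANSITIONS = {
    ("offered",   "select"):   "started",
    ("started",   "forward"):  "offered",    # re-offered to the new user
    ("started",   "execute"):  "executing",
    ("executing", "suspend"):  "initiated",  # will be offered again to all users
    ("executing", "complete"): "completed",
}

def step(state, action):
    """Return the state reached by performing `action` in `state`."""
    return TRANSITIONS[(state, action)]

s = "offered"
for a in ["select", "execute", "suspend"]:
    s = step(s, a)
print(s)  # 'initiated' -- the item will be offered again
```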
**FileNet** Like Staffware, FileNet is a widely used traditional process-oriented workflow management system. In this section we describe the FileNet CPN model that we developed, using the Basic Model as a starting point.
**Organization.** The organizational model in FileNet does not allow for modelling roles. Table 8 shows which colors are added to the CPN model to represent the two types of organizational groups:
1. Administrators define work queues (color \( WQ \)) and assign their members in the FileNet system. Work queues are valid for every process (workflow) definition.
2. Process modelers can define workflow groups (color \( WG \)) in every process model. Thus, workflow groups are valid only in the process (workflow) model in which they are defined. Workflow groups represent teams in FileNet. While executing a task of a process definition, users have the possibility to change the structure of the workflow groups that are defined in that process. Figure 10 shows that users can alter workflow groups (add and remove members) only when a work item is in progress.
Table 8. FileNet - “Work Queue” Colors
<table>
<thead>
<tr>
<th>color WQ = string;</th>
<th>color WG = string;</th>
</tr>
</thead>
<tbody>
<tr>
<td>color WQs = list WQ;</td>
<td>color WGs = list WG;</td>
</tr>
<tr>
<td>color UMap = product User * WGs * WQs;</td>
<td></td>
</tr>
</tbody>
</table>
Fig. 10. FileNet - Work Lists
**Queues.** Work queues and personal queues are the two types of pools from which users can select and execute work items. A work queue can have a number of members, while a personal queue has only one member. When a work item is offered to a queue, one of the queue members can select and execute it. Table 9 shows which colors are added to the FileNet model to represent queues. FileNet distributes work in two levels using queues. First, the work item is offered to queues as a queue work item (color QWI). Second, the queue work item is offered to the members of the queue as a user work item (color UWI).
Table 9. FileNet - “Queue Work Item” Colors
<table>
<thead>
<tr>
<th>color Q = string;</th>
</tr>
</thead>
<tbody>
<tr>
<td>color QWI = product WI * Q;</td>
</tr>
<tr>
<td>color UWI = product User * QWI;</td>
</tr>
</tbody>
</table>
Figures 11 and 12 show that the model of the two-level work distribution in FileNet is similar to the Staffware model. For a more detailed description of this kind of distribution we refer the reader to the Staffware description in Section 3.1.
**Resource Allocation.** FileNet allocates work using work queues and lists of participants. Figure 13 shows that a task in FileNet can be allocated either to a work queue or to a list of participants. Users and workflow groups can be entries of a list of participants. In the FileNet model task maps are defined as a combination of a task, a list of workflow groups, and a work queue (color TMap = product Task * WGs * WQ;). It is necessary to highlight that, when defining the input value for a task map, only the work queue or the list of workflow groups should be initialized.
If the task is allocated to a work queue, FileNet offers the work item to that work queue. If the task is allocated to a list of participants, then it is offered to the personal queues of all users that are listed as participants or are members of the workflow groups that are listed. Allocation via participants is introduced to support team work in FileNet, via so-called “process voting”. During the execution of a task, all participants vote for the specified decision. The work distribution mechanism uses their decisions to determine which work items will be executed next. Since our models abstract from the process perspective, we did not model process voting in the FileNet model.
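The allocation rule can be sketched in Python (our own illustration, following the model's task map color TMap = product Task * WGs * WQ; individually listed users are omitted, and the sample data is invented):

```python
# Sketch of FileNet's allocation rule: a task map names either a shared
# work queue or a list of workflow groups (participants), never both.
def target_queues(tmap, wg_members):
    task, wgroups, wqueue = tmap
    if wqueue:
        # allocated to a shared work queue
        return [wqueue]
    # allocated via participants: the personal queue of every member of
    # the listed workflow groups
    return [u for wg in wgroups for u in wg_members.get(wg, [])]

wg_members = {"claims team": ["Joe", "Mary"]}
print(target_queues(("assess claim", [], "back office"), wg_members))
# ['back office']
print(target_queues(("assess claim", ["claims team"], ""), wg_members))
# ['Joe', 'Mary']
```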
**Forward and Suspend.** Users can forward and suspend work items in FileNet. When the user selects a work item, (s)he can start working on it or forward it to another user. In this case FileNet automatically offers the work item to the new user. When the user is executing a work item, (s)he can complete or suspend it. In this case FileNet needs to apply the distribution mechanism again and offer the work item to all allocated users. Figure 14 shows the life cycle of a work item in FileNet. If the life cycle models of FileNet and Staffware (cf. Section 3.1) are compared, it can be seen that they are identical. Therefore, we use the same adjustments in the FileNet and Staffware models to implement forwarding and suspension: the modules Start Work and Stop Work are changed and the sub-module Suspend and Forward is added in the Work Distribution module, as Figure 15 shows. For a detailed description we refer the reader to the Staffware description in Section 3.1.
Fig. 14. FileNet - Work Item Life Cycle

Fig. 15. FileNet - Forward and Suspend

**FLOWer** FLOWer is a case handling system. Case handling systems differ in their perspective from traditional process-oriented workflow management systems because they focus on the case, instead of the process [3, 9]. The user is offered the whole case by offering all available work items from the case, and (s)he does not have to follow the predefined order of tasks in the process definition.

When modelling FLOWer, we upgraded the Basic Model in such a way that (1) it handles case-handling distribution (instead of the process-oriented one), (2) it enables the complex authorization and distribution specifications that FLOWer has, and (3) it enables users to execute, open, skip and redo work items.
**Case Handling.** Table 10 shows which colors are used to model FLOWer as a case-handling system. Every process definition in FLOWer is referred to as a case type. One case represents an instance of a case type and is identified by the case identification (color CaseID). Figures 16 and 17 show that FLOWer distributes work in two levels:
1. The case is distributed to users (color *UCase*). Only one user can select and open a case at a time. Figure 16 shows that in the FLOWer Work Distribution model a *case* becomes the object of distribution instead of a *work item*.
2. The selected case is opened for the user in the *Case Distribution* sub-module. Work items from the case are offered to the user, based on the authorization and distribution rules. The user can execute, open, skip and redo work items from the selected case. The *Case Distribution* sub-module and the authorization and distribution rules are described in the remainder of this section.
<table>
<thead>
<tr>
<th>Table 10. FLOWer - Basic Colors</th>
</tr>
</thead>
<tbody>
<tr>
<td>color CaseType = string;</td>
</tr>
<tr>
<td>color Tasks = list Task;</td>
</tr>
<tr>
<td>color Process = product CaseType * Tasks;</td>
</tr>
<tr>
<td>color CaseID = INT;</td>
</tr>
<tr>
<td>color Case = product CaseID * CaseType;</td>
</tr>
<tr>
<td>color WI = product Case * Task;</td>
</tr>
<tr>
<td>color UCase = product User * Case;</td>
</tr>
</tbody>
</table>
Fig. 16. FLOWer - Work Distribution

Fig. 17. FLOWer - Work Lists

**Authorization Rights.** When designing the process for a case type it is necessary to define case-type-specific roles and to assign each role authorization rights for the tasks in the process. The authorization rights determine what users *can do*. These rights are applied by the distribution mechanism when opening the case for the user. The user is allowed to work only on tasks for which (s)he has the authorized roles. Information about the authorization is stored in task maps (i.e., color TMap = product Task * Role * CaseType;).
**Distribution Rights.** Distribution rights define what users should do. These rights are used to model the organizational structure and to assign authorization rights from the process definitions (case types) to users. *Function profiles* and *work profiles* define distribution rights. Table 11 shows which colors are added to represent these profiles. *Function profile* (FP) assigns authorization roles to users. If, for example, there are two case types (two processes) that both have “secretary” as an authorization role, the function profile “secretary” includes both authorization roles. When we assign the function profile “secretary” to a user, we indirectly assign both authorization roles from two processes. *Work profiles* (WP) assign function profile(s) to users and they can be used to structure organization into groups, departments or units.
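To make the indirection concrete, the resolution from work profiles via function profiles to process-specific authorization roles can be sketched as follows (a Python sketch of our own; the sample profile data is invented):

```python
# Sketch of how distribution rights resolve to authorization roles.
# A function profile (FP) maps a name to process-specific roles
# (Role, CaseType); a work profile (WP) assigns function profiles to users.
fps = {"secretary": [("secretary", "claims"), ("secretary", "invoices")]}
wps = [("admin unit", ["secretary"], ["Mary"])]

def roles_of(user):
    """Collect the (role, case type) pairs a user holds via work profiles."""
    roles = []
    for _, fns, users in wps:
        if user in users:
            for fn in fns:
                roles.extend(fps[fn])
    return roles

print(roles_of("Mary"))
# [('secretary', 'claims'), ('secretary', 'invoices')]
```

Assigning the function profile "secretary" to Mary thus indirectly grants her the "secretary" authorization role in both case types, as described above.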
**Open, Execute, Skip and Redo.** Although the tasks in the process definition of a case type have an execution order that is suggested to the user, (s)he is not obliged to follow it. When working with an open case in FLOWer, users can: (1) *Execute* the work item which is next in the process definition; (2) *Open* for execution a work item that is not yet ready for execution according to the process definition; (3) *Skip* a work item by choosing not to execute the work item which is next according to the process definition; or (4) *Redo* a work item by executing again a work item which has already been executed. Figures 16 and 17 show that four new places are added to the model to represent these four actions. In order to implement these possibilities in the FLOWer model it is necessary to keep information about the case state, i.e., about the work items that are (1) waiting to be enabled, (2) active (i.e., enabled and ready to be executed), (3) finished (executed), and (4) skipped. Table 12 shows that an open case keeps information about the case state in four lists of work items (waiting, active, finished and skipped).
**Table 11. FLOWer - Distribution Rights in CPN Colors**
| color PRole = product Role * CaseType; |
| color FN = STRING; |
| color FP = product FN * PRoles; |
| color WN = STRING; |
| color WP = product WN * FNs * Users; |
In FLOWer users work with the interface tool “Wave Front” [43], where they can see the state of the open case. Users can see which work items are waiting, active, finished and skipped. Figure 18 shows one example of an open case in the “Wave Front”. The first two tasks (“Claim Start” and “Register Claim”) are finished work items and they are marked with a “check” symbol. The third work item (“Get Medical Report”) was skipped, as can be seen from the “arrow” symbol. Thus, finished and skipped work items are presented after the “Wave Front” line. The three active work items on the Wave Front are “Get Police Report”, “Assign Loss Adjuster” and “Witness Statements”. Finally, the last two work items (“Policy Holder Liable” and “Close Case”) are waiting before the Wave Front to become active.
Figure 19 shows the sub-module *Action* (in the FLOWer Work List module) where we model how the user performs the actions to execute, open, skip and redo work items. In FLOWer users can choose work items at their own discretion, but (due to the complexity of the model) we model this selection as a random function. When the user wants to:
1. **open** an item (s)he selects a work item from the list of *waiting items*;
2. **execute** an item (s)he selects a work item from the list of *active items*;
3. **skip** an item (s)he selects a work item from the lists of *waiting and active items*;
4. **redo** an item (s)he selects a work item from the lists of *finished and skipped items*.
Each of the four actions the user performs changes the state of the open case. For example, opening a work item transfers it to the state active (and, therefore, it is transferred to the list of active items). Figure 20 shows that the Case Distribution module responds in a different way (functions *execute item*, *open item*, *skip item*, and *redo item*) when each of the four actions is performed. When an action is performed on a work item, the state of the work item changes, as shown in Table 13. The four actions are listed in the column “action”. The column “work item becomes” shows how the action changes the state of the work item. It often happens that an action performed on a selected work item also affects other items; this is described in the column “side effects”.
In FLOWer a work item has a different life cycle than in Staffware and FileNet. First, the moment of enabling is different. It is not necessary for a new work item to be enabled in order to be initiated, offered, selected and finally assigned. Second, FLOWer adds more possibilities for switching between the states assigned, enabled, started, executed and completed. In the model of the FLOWer work item life cycle (cf. Figure 21), the additional actions skip, open and redo are added to the basic life cycle model. A path that is marked with brackets (e.g., “(* redo *)”) is a side effect of an action taken on another work item (e.g., a work item can be directly transferred from the state completed to the state assigned if the action redo is performed on a preceding work item).
### Table 13. FLOWer - The Four Actions
<table>
<thead>
<tr>
<th>action</th>
<th>work item becomes</th>
<th>side effects</th>
</tr>
</thead>
<tbody>
<tr>
<td>open</td>
<td>active</td>
<td>Items from <em>waiting</em> that precede it become <em>skipped</em>.</td>
</tr>
<tr>
<td>execute</td>
<td>finished</td>
<td>The direct successors in <em>waiting</em> become <em>active</em>.</td>
</tr>
<tr>
<td>skip</td>
<td>skipped</td>
<td>Items from <em>waiting</em> that precede it become <em>skipped</em>. The direct successors in <em>waiting</em> become <em>active</em>.</td>
</tr>
<tr>
<td>redo</td>
<td>active</td>
<td>Subsequent items from (<em>skipped & finished</em>) become <em>waiting</em>.</td>
</tr>
</tbody>
</table>
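The actions and side effects listed in Table 13 can be sketched in Python (a simplification of our own, assuming a linear process definition so that "preceding" and "successor" are well defined; FLOWer processes are more general):

```python
# Sketch of Table 13: each action moves a work item between the four
# case-state lists and applies the listed side effects.
def apply_action(state, action, item, order):
    """state: dict with lists 'waiting', 'active', 'finished', 'skipped';
    order: the (here: linear) task order of the process definition."""
    idx = {t: i for i, t in enumerate(order)}
    w, a, f, s = (list(state[k]) for k in
                  ("waiting", "active", "finished", "skipped"))
    if action in ("open", "skip"):
        # side effect: waiting items that precede the chosen one are skipped
        s += [t for t in w if idx[t] < idx[item]]
        w = [t for t in w if idx[t] >= idx[item]]
    if action == "open":
        w.remove(item)
        a.append(item)
    elif action == "execute":
        a.remove(item)
        f.append(item)
        nxt = [t for t in w if idx[t] == idx[item] + 1]  # direct successors
        w = [t for t in w if t not in nxt]
        a += nxt
    elif action == "skip":
        (w if item in w else a).remove(item)
        s.append(item)
        nxt = [t for t in w if idx[t] == idx[item] + 1]
        w = [t for t in w if t not in nxt]
        a += nxt
    elif action == "redo":
        (f if item in f else s).remove(item)
        a.append(item)
        # side effect: subsequent finished/skipped items wait again
        back = [t for t in f + s if idx[t] > idx[item]]
        f = [t for t in f if t not in back]
        s = [t for t in s if t not in back]
        w += back
    return {"waiting": w, "active": a, "finished": f, "skipped": s}

order = ["register", "assess", "pay"]
state = {"waiting": ["assess", "pay"], "active": [],
         "finished": ["register"], "skipped": []}
state = apply_action(state, "open", "pay", order)
print(state["skipped"])  # ['assess'] -- side effect of opening 'pay'
```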
### 3.2 Resource Patterns
Instead of extending the Basic Model for more systems, we also looked at a more systematic way of work distribution. As indicated, similar concepts are often named and presented differently in different workflow management systems. Therefore, it is interesting to define these concepts in a system-independent manner. We have used 43 documented resource patterns [48, 50]. These patterns can be used as representative examples for analyzing, evaluating and comparing different workflow management systems with respect to work distribution. Resource patterns are grouped into a number of categories: creation patterns, push patterns, pull patterns, detour patterns, auto-start patterns, visibility patterns, and multiple resource patterns. Each of these patterns can be modeled in terms of a CPN model.
Table 14 shows an overview of the patterns. It also shows whether a pattern is directly supported by the three systems (SW = Staffware, FN = FileNet, FW = FLOWer) and the Basic Model (BM). The Basic Model supports fewer patterns than any of the three systems. This makes sense since each of the system-specific models can be seen as an extension of the Basic Model. It is interesting to see that existing systems typically support fewer than half of the patterns directly. This reveals typical limitations of contemporary products. Some of the patterns are considered out-of-scope for our models (marked with “o”). These are typically patterns directly depending on control-flow functionality, while we prefer to focus exclusively on work distribution. Each of the patterns not marked with “o” can easily be added to the Basic Model separately.
We cannot elaborate on each of the patterns, but we discuss four to illustrate our work. None of the systems supports Pattern 16: Round Robin, Pattern 17: Shortest Queue, Pattern 38: Piled Execution, and Pattern 39: Chained Execution. Patterns 16 and 17 are push patterns, i.e., patterns that push work to a specific user. As auto-start patterns, Patterns 38 and 39 enable the automatic start of the execution of the next work item once the previous one has been completed.
**Round Robin and Shortest Queue.** Round Robin and Shortest Queue push the work item to one user out of all users that qualify. Round Robin allocates work on a cyclic basis, and Shortest Queue allocates it to the user with the shortest queue. This implies that each user has a counter: (1) to count the sequence of allocations in Round Robin, and (2) to count the number of pending work items in Shortest Queue. Tables 15 and 16 show which colors and variables are used to implement counters in the models of these two patterns. As Figures 22 and 23 show, these two patterns are implemented in a similar way in the Work Distribution module. The required changes to the Basic Model are minimal: a counter is introduced for each user (token in place `available`) and the functions `round_robin` and `shortest_queue` are used to select one user from the set of possible users based on these counters. The model for Shortest Queue has an additional connection (two arcs) that updates the counter when a work item is completed, to remove it from the queue. Similarly, most of the other patterns can be realized quite easily.
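The two selection functions can be sketched in Python (our own simplification of the per-user counters; in particular, realizing the cyclic order by always picking the smallest counter is our reading of the pattern):

```python
# Sketch of the two push patterns: each user carries a counter
# (cf. color RRCounter / SQCounter = product User*INT) and the selection
# functions pick one user from the set of qualifying users.
def round_robin(qualified, counters):
    """Pick the qualified user allocated least often so far, then
    advance that user's counter (yields a cyclic order)."""
    user = min(qualified, key=lambda u: counters[u])
    counters[user] += 1
    return user

def shortest_queue(qualified, pending):
    """Pick the qualified user with the fewest pending work items; the
    counter is incremented on allocation and decremented on completion."""
    user = min(qualified, key=lambda u: pending[u])
    pending[user] += 1
    return user

counters = {"Joe": 0, "Mary": 0}
assert [round_robin(["Joe", "Mary"], counters) for _ in range(4)] == \
       ["Joe", "Mary", "Joe", "Mary"]
```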
Table 14. Support for Resource Patterns in 3 Workflow Systems and Basic Model
(+ = direct support, – = no direct support, +/- = partial support, o = out-of-scope)
<table>
<thead>
<tr>
<th>Nr</th>
<th>Pattern</th>
<th>SW</th>
<th>FN</th>
<th>FW</th>
<th>BM</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Direct Allocation</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+/-</td>
</tr>
<tr>
<td>2</td>
<td>Role-based Allocation</td>
<td>+</td>
<td>+/-</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>3</td>
<td>Deferred Allocation</td>
<td>+</td>
<td>+</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>4</td>
<td>Authorization</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>5</td>
<td>Separation of Duties</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>6</td>
<td>Case Handling</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>7</td>
<td>Retain Familiar</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>8</td>
<td>Capability-based Allocation</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>9</td>
<td>History-based Allocation</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>10</td>
<td>Organizational Allocation</td>
<td>+/-</td>
<td>+/-</td>
<td>+</td>
<td>+/-</td>
</tr>
<tr>
<td>11</td>
<td>Automatic Execution</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>o</td>
</tr>
<tr>
<td>12</td>
<td>Distribution by Offer – Single Resource</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>13</td>
<td>Distribution by Offer – Multiple Resources</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>14</td>
<td>Distribution by Allocation – Single Resource</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>15</td>
<td>Random Allocation</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>16</td>
<td>Round Robin Allocation</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>17</td>
<td>Shortest Queue</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>18</td>
<td>Early Distribution</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>19</td>
<td>Distribution on Enablement</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>20</td>
<td>Late Distribution</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>21</td>
<td>Resource-Initiated Allocation</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>22</td>
<td>Resource-Initiated Execution – Allocated Work Item</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>23</td>
<td>Resource-Initiated Execution – Offered Work Item</td>
<td>+</td>
<td>+</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>24</td>
<td>System-Determined Work List Management</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>o</td>
</tr>
<tr>
<td>25</td>
<td>Resource-Determined Work List Management</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>o</td>
</tr>
<tr>
<td>26</td>
<td>Selection Autonomy</td>
<td>+</td>
<td>+</td>
<td>+</td>
<td>+</td>
</tr>
<tr>
<td>27</td>
<td>Delegation</td>
<td>+</td>
<td>+</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>28</td>
<td>Escalation</td>
<td>+</td>
<td>+</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>29</td>
<td>Deallocation</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>30</td>
<td>Stateful Reallocation</td>
<td>+/-</td>
<td>+/-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>31</td>
<td>Stateless Reallocation</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>32</td>
<td>Suspension/Resumption</td>
<td>+/-</td>
<td>+/-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>33</td>
<td>Skip</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>o</td>
</tr>
<tr>
<td>34</td>
<td>Redo</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>o</td>
</tr>
<tr>
<td>35</td>
<td>Pre-Do</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>o</td>
</tr>
<tr>
<td>36</td>
<td>Commencement on Creation</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>37</td>
<td>Commencement on Allocation</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>38</td>
<td>Piled Execution</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td>39</td>
<td>Chained Execution</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>-</td>
</tr>
<tr>
<td>40</td>
<td>Configurable Unallocated Work Item Visibility</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>o</td>
</tr>
<tr>
<td>41</td>
<td>Configurable Allocated Work Item Visibility</td>
<td>-</td>
<td>-</td>
<td>+</td>
<td>o</td>
</tr>
<tr>
<td>42</td>
<td>Simultaneous Execution</td>
<td>+</td>
<td>+</td>
<td>+/-</td>
<td>+</td>
</tr>
<tr>
<td>43</td>
<td>Additional Resources</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>
Table 15. Round Robin - Colors and Variables
| color RRCounter = product User*INT; |
| color RRCounters = list RRCounter; |
| var count, i: INT; |
| var rrc: RRCounter; |
| var rrcs: RRCounters; |
Table 16. Shortest Queue - Colors and Variables
| color SQCounter = product User*INT; |
| color SQCounters = list SQCounter; |
| var count, i: INT; |
| var sqc: SQCounter; |
| var sqcs: SQCounters; |
Fig. 22. Push Patterns - Round Robin
**Piled and Chained Execution.** Piled and Chained Execution are auto-start patterns, i.e., when the user completes the execution of the current work item, the next work item starts automatically. When working in Chained Execution, the next work item will be for the same case as the completed one (the user works on different tasks for one case). Similarly, if the user works in Piled Execution, the next work item will be for the same task as the completed one (the user works on one task for different cases). Figures 24 and 25 show how Piled and Chained Execution are implemented similarly in the Stop Work sub-module. Users can choose to work in the normal mode or in the auto-start mode (which is represented by the token in place special mode). The function select is implemented to search for the next work item for the same: (1) task in Piled Execution and (2) case in Chained Execution.
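The auto-start search can be sketched as follows (a Python illustration of our own; the function mirrors the role of select described above, with work items as (case, task) pairs):

```python
# Sketch of the auto-start `select` function: after completing
# (case, task), pick the next offered work item with the same task
# (Piled Execution) or the same case (Chained Execution).
def select(completed, offered, mode):
    case, task = completed
    for wi in offered:                        # work item = (case, task)
        if mode == "piled" and wi[1] == task:
            return wi
        if mode == "chained" and wi[0] == case:
            return wi
    return None                               # nothing to auto-start

offered = [(2, "assess"), (1, "pay")]
print(select((1, "assess"), offered, "piled"))    # (2, 'assess')
print(select((1, "assess"), offered, "chained"))  # (1, 'pay')
```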
4 Related Work
Since the early nineties workflow technology has matured [26] and several textbooks have been published, e.g., [5, 20, 30, 37, 41]. During this period many languages for modelling workflows have been proposed, i.e., languages ranging from generic Petri-net-based languages to tailor-made domain-specific languages. The Workflow Management Coalition (WfMC) has tried to standardize workflow languages since 1994 but failed to do so [25]. XPDL, the language proposed by the WfMC, has semantic problems [2] and is rarely used. In a way BPEL [11] succeeded in doing what the WfMC was aiming at. However, both BPEL and XPDL focus on the control-flow rather than the resource perspective.
Despite the central role that resources play in workflow management systems, there is a surprisingly small body of research into resource and organizational modelling in the workflow context [1, 35]. In early work, Bussler and Jablonski [15] identified a number of shortcomings of workflow management systems when modelling organizational and policy issues. In subsequent work [30], they presented one of the first broad attempts to model the various perspectives of workflow management systems in an integrated manner, including detailed consideration of the organizational/resource view.
One line of research into resource modelling and enactment in a workflow context has focused on the characterization of resource managers that can manage organizational resources and enforce resource policies. In [19], the design of a resource manager is presented for a workflow management system. This work includes a high-level resource model together with proposals for resource definition, query and policy languages. Similarly, in [36], an abstract resource model is presented in the context of a workflow management system, although the focus is more on the efficient management of resources in a workflow context than on the specific ways in which work is allocated to them. In [29], a proposal is presented for handling resource policies in a workflow context. Three types of policy – qualification, requirement and substitution – are described together with a means for efficiently implementing them when allocating resources to activities.
Another area of investigation has been into ensuring that only appropriate users are selected to execute a given work item. The RBAC (Role-Based Access Control) model [23] presents an approach for doing this. RBAC models are effective but they tend to focus on security considerations and neglect other organizational aspects such as resource availability.
Several researchers have developed meta-models, i.e., object models describing the relation between workflow concepts, which include work allocation aspects, cf. [8, 40–42, 47]. However, these meta-models tend to focus on the structural description of resource properties and typically do not describe the dynamics aspects of work distribution.
Flexibility has been a research topic in the workflow literature since the late nineties [4, 7, 9, 10, 16, 22, 28, 33, 44, 46, 55]. Flexibility triggers all kinds of interesting research questions, e.g., if a process changes, how should this influence the running cases? [7]. Examples of qualitative analysis of the flexibility of workflow management systems can be found in [13] and [27]. One way of allowing for more flexibility is to use the case handling concept as defined in [3, 9]. FLOWer [12, 43] can be seen as a reference implementation of the case handling concept; therefore, its resource perspective was modeled in this paper. Besides FLOWer there are a few other case handling tools: E.C.H.O. (Electronic Case-Handling for Offices), a predecessor of FLOWer; the Staffware Case Handler [52] and the COSA Activity Manager [51], both based on the generic solution of BPi [14]; Vectus [38, 39]; and the open-source system concern (http://con-cern.org/).
The work reported in this paper can be seen as an extension of the workflow patterns initiative (cf. www.workflowpatterns.com). Besides a variety of control-flow [6] and data [49] patterns, 43 resource patterns [48, 50] have been defined. This paper complements the resource patterns [48, 50] by providing executable models for work distribution mechanisms.
5 Discussion
Workflow management systems should provide flexible work distribution mechanisms for users. This will increase the work satisfaction of users and improve their ability to deal with unpredictable situations at work. Therefore, work distribution is investigated as the functionality provided for the user – workflow management systems are tested in laboratories [48, 50] or observed (in empirical research) in companies [13]. This kind of research observes systems externally and provides insights into what systems do. Analysis of the systems from an internal perspective can explain how systems provide different work distribution mechanisms. Due to the complexity of workflow management systems as software products, internal analysis starts with developing a model of the system. Unlike static models (e.g., UML models), dynamic models (e.g., CPN models) allow for interactive investigation of work distribution as a dynamic feature. CPN models can be used to investigate both what systems do and how they do it.
Workflow management systems often provide different features or use different names for the same features. Investigation of work distribution requires analysis, evaluation and comparison of models of several systems. For models of different systems to be comparable, it is necessary to start by developing a common framework – a reference model. We have developed the Basic Model as a reference model for work distribution mechanisms in workflow management systems. The models of Staffware, FileNet, FLOWer and the resource patterns are comparable because all models are developed as upgrades of this reference model (the Basic Model).
The model of a workflow system is structured into two modules (sub-models). The Work Distribution module represents the core of the system, often called the “workflow engine”. The Work Lists module represents the so-called “work list handler” of a workflow system and serves as an interface between the workflow engine and users. The interface between the two modules (i.e., the messages that are exchanged between them) should expose as little as possible about how work items are managed inside each module. The way work items are created, allocated and offered in the Work Distribution module should be hidden from the Work Lists module. The reverse also holds: how work items are actually processed by users is implemented in the Work Lists module. Once a proper interface is defined, it is easy to implement various ways of work distribution by adding or removing simple features in either one of the modules. For example, push patterns (Round Robin and Shortest Queue) are implemented in the Work Distribution module and auto-start resource patterns (Chained and Piled Execution) in the Work Lists module.
The flexibility of a work distribution mechanism can be observed through the model of the life cycle of a work item. The Basic Model has a simple, straightforward life cycle model in which the work item (and the user) follows a fixed predefined path. Work items in the Staffware and FileNet life cycle models have more freedom to “walk back”, thus allowing for implicit cycles in the model (e.g., forwarding and suspending). FLOWer, as the most flexible system, has many alternative paths in the life cycle model. A more complex model of the life cycle of a work item adds messages between the Work Distribution and Work Lists modules. The new messages correspond to new states and paths in the life cycle model.
Both the system-based and the patterns-based CPN models showed that one of the core elements of work distribution is the “allocation algorithm”. This algorithm includes the “rules” for work distribution. It is implemented in the Work Distribution module as the function offer, which allocates work based on (1) new work items, (2) the process definition, and (3) the organizational model. This function should be analyzed further in order to discover an advanced allocation algorithm that is more configurable and less system-dependent.
Every system has its own method of modelling organizational structure. Staffware models groups and roles. In FileNet the organizational model includes groups of users and teams, but does not model roles. FLOWer groups users based on a hierarchy of roles, function profiles and work profiles. Thus, each of the systems offers a unique predefined type of organizational structure. Since every allocation mechanism uses elements of the organizational model, limitations of the organizational model can have a negative impact on work distribution in the system. For example, because in Staffware one role can be assigned to only one user, it would not be possible to offer a work item to a set of “call center operator”s.
Each of the three models of workflow systems distributes work using two hierarchy levels. Staffware and FileNet use two levels of work distribution: queue work items are first distributed to work queues, and then work items are distributed within each of the work queues. The FLOWer model starts with the case distribution and then distributes the work items of the whole case. Although all three systems distribute work at two levels, they have unique distribution algorithms (the set of allocation rules implemented in the function offer) and objects of distribution (work items, queue work items, cases).
Models of resource patterns [48, 50] show that push patterns (Round Robin and Shortest Queue) can be implemented “on top of” the pull mechanism, as a filter. Once the pull mechanism determines the set of allocated users, the “push” allocation function extracts only one user from this set. Auto-start patterns turned out to be remarkably straightforward to model, raising the question of why they are not supported by systems like Staffware and FileNet (FLOWer supports Chained Execution in a limited form).
6 Conclusions
This paper focused on the resource perspective, i.e., the way workflow management systems distribute work based on the structure of the organization and capabilities/qualifications of people. To understand work distribution, we used the CPN language and CPN Tools to model and analyze different work distribution mechanisms. To serve as a reference model, we provided a model that can be seen as the “greatest common denominator” of existing workflow management systems. This model was upgraded for models of three workflow management systems – Staffware, FileNet, and FLOWer. Although the reference model already captures many of the resource patterns, we also modelled four more advanced patterns by extending the reference model. In contrast to existing research that mainly uses static models (e.g., UML class diagrams), we focused on the dynamics of work distribution. Our experiences revealed that it is relatively easy to model and analyze the workflow systems and resource patterns using CPN Tools. This suggests that the CPN language and the basic CPN model are a good basis for future research. We plan to test completely new ways of work distribution using the approach presented in this paper. The goal is to design and implement distribution mechanisms that overcome the limitations of existing systems. An important ingredient will be to use insights from socio-technical design [13, 17, 21, 54] as mentioned in the introduction.
Control Statements:
**if statements**: An *if* statement allows code to be executed or not based on the result of a comparison. If the condition evaluates to True, then the statements of the **indented body** are executed. If the condition is False, then the body is skipped. The syntax of *if* statements is:
```
if <condition>:
    statement_1
    statement_2
    statement_3
```
```
if <condition>:
    statement_1
    statement_2
else:
    statement_1
    statement_2
```
```
if <condition>:
    statement_1
    statement_2
elif <condition2>:
    statement_1
    statement_2
else:
    statement_1
    statement_2
```
Typically, the condition involves comparing “stuff” using relational operators ( <, >, ==, <=, >=, != ). Complex conditions might involve several comparisons combined using Boolean operators: not, or, and. For example, we might want to print “Your grade is B.” if the variable score is less than 90, but greater than or equal to 80.
```python
if score < 90 and score >= 80:
    print "Your grade is B."
```
The precedence of mathematical operators, Boolean operators, and comparisons is given in the table, from highest to lowest:
<table>
<thead>
<tr>
<th>Operator(s)</th>
<th>Precedence</th>
</tr>
</thead>
<tbody>
<tr>
<td>** (exponential)</td>
<td>highest</td>
</tr>
<tr>
<td>+, - (unary pos. &amp; neg.)</td>
<td></td>
</tr>
<tr>
<td>*, /, % (remainder)</td>
<td></td>
</tr>
<tr>
<td>+, - (add, sub)</td>
<td></td>
</tr>
<tr>
<td>&lt;, &gt;, ==, &lt;=, &gt;=, !=, &lt;&gt;</td>
<td></td>
</tr>
<tr>
<td>not</td>
<td></td>
</tr>
<tr>
<td>and</td>
<td></td>
</tr>
<tr>
<td>or</td>
<td></td>
</tr>
<tr>
<td>= (assignment)</td>
<td>lowest</td>
</tr>
</tbody>
</table>
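For example (shown in Python 3 `print()` syntax, unlike the Python 2 print statements used elsewhere in this handout), because comparisons bind tighter than the Boolean operators, the grade test needs no extra parentheses; Python also allows an equivalent chained comparison:

```python
score = 85

# Explicit Boolean combination, as in the grade example above:
if score < 90 and score >= 80:
    grade = "B"

# Equivalent chained comparison, which reads like mathematics:
if 80 <= score < 90:
    grade_chained = "B"

print(grade, grade_chained)   # B B
```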
**for loop**: The *for* loop iterates once for each item in some sequence type (e.g., list, tuple, string).
```
for value in [1, 3, 9, 7]:
    print value
```
```
for character in 'house':
    print character
```
Often the *for* loop iterates over a list generated by the built-in *range* function which has the syntax of:
```
range([start,] end [, step])
```
where `[]` denotes optional parameters. Some examples:
- `range(5)` generates the list [0, 1, 2, 3, 4]
- `range(2,7)` generates the list [2, 3, 4, 5, 6]
- `range(10,2,-1)` generates the list [10, 9, 8, 7, 6, 5, 4, 3]
Since the list generated by the *range* function needs to be stored in memory, the more efficient *xrange* function is typically used in *for* loops to generate each value one at a time for each iteration of the loop. For example:
```
for count in xrange(1,6):
    print count,
print "\nDone"
```
```
1 2 3 4 5
Done
```
**while loop:** A *while* statement allows code to be executed repeatedly (zero or more times) as long as the condition evaluates to True. The syntax of a *while* statement is:
```python
while <condition>:
    statement_1
    statement_2
    statement_3
```
An **infinite loop** is one that would loop forever. (FYI, in a Python shell `ctrl-c (^c)` can be used to kill the running program.) Most infinite loops are caused by programmer error, but sometimes they are intentional. The following “sentinel-controlled” code uses an infinite loop and a `break` statement that immediately causes control to exit the loop.
```python
total = 0
counter = 0
while True:   # an infinite loop
    score = input("Enter a score (or negative value to exit): ")
    if score < 0:
        break
    total += score
    counter += 1
print "Average is", float(total)/counter
```
**Strings:** Strings in Python are sequential collections of only characters. Strings are immutable (i.e., cannot be changed), so new strings are generated by string operations. The examples below assume `myString = 'Hello!!!'` and `aString = 'cat'`. Operations on strings (or any sequence collection) include:
<table>
<thead>
<tr>
<th>Operation</th>
<th>Operator</th>
<th>Explanation</th>
<th>Example</th>
<th>Result of Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Indexing</td>
<td>[ &lt;index&gt; ]</td>
<td>Access the element specified by the index</td>
<td><code>myString[1]</code></td>
<td>‘e’</td>
</tr>
<tr>
<td>Slicing</td>
<td>[ : ]</td>
<td>Extract a part of the string</td>
<td><code>myString[ 1:5 ]</code></td>
<td>‘ello’</td>
</tr>
<tr>
<td>Concatenation</td>
<td>+</td>
<td>Combine strings together</td>
<td><code>myString + aString</code></td>
<td>‘Hello!!!cat’</td>
</tr>
<tr>
<td>Repetition</td>
<td>*</td>
<td>Concatenate a repeated number of times</td>
<td><code>aString * 3</code></td>
<td>‘catcatcat’</td>
</tr>
<tr>
<td>Membership</td>
<td>in</td>
<td>Ask whether a substring is in a string</td>
<td>‘ell’ in <code>myString</code></td>
<td>True</td>
</tr>
<tr>
<td>Length</td>
<td>len(string)</td>
<td>How many items are in the string?</td>
<td><code>len( myString )</code></td>
<td>8</td>
</tr>
</tbody>
</table>
Indexing of strings starts with 0 on the left end, and -1 on the right end:
```
                   1111
         01234567890123
cheer = 'GO Panthers!!!'
```
An omitted start index in a slice means “from the beginning.” For example, `cheer[:4]` generates ‘GO P’. An omitted end index means “through the end.” For example, `cheer[-4:]` generates ‘s!!!’.
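Putting indexing and slicing together on the `cheer` string (Python 3 `print()` syntax):

```python
cheer = 'GO Panthers!!!'

first  = cheer[0]      # 'G'  -- left end starts at index 0
last   = cheer[-1]     # '!'  -- right end starts at index -1
word   = cheer[3:11]   # 'Panthers'
prefix = cheer[:4]     # 'GO P' -- omitted start index
suffix = cheer[-4:]    # 's!!!' -- omitted end index
print(first, last, word, prefix, suffix)
```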
String objects also have the following methods: (the `string` module can be imported to provide more operations.)
<table>
<thead>
<tr>
<th>Method</th>
<th>Usage</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>center</td>
<td><code>myString.center(w)</code></td>
<td>Returns a string with <code>myString</code> centered in a field of size <code>w</code></td>
</tr>
<tr>
<td>ljust</td>
<td><code>myString.ljust(w)</code></td>
<td>Returns a string with <code>myString</code> left-justified in a field of size <code>w</code></td>
</tr>
<tr>
<td>rjust</td>
<td><code>myString.rjust(w)</code></td>
<td>Returns a string with <code>myString</code> right-justified in a field of size <code>w</code></td>
</tr>
<tr>
<td>upper</td>
<td><code>myString.upper()</code></td>
<td>Returns a string with <code>myString</code> in all upper-case characters</td>
</tr>
<tr>
<td>lower</td>
<td><code>myString.lower()</code></td>
<td>Returns a string with <code>myString</code> in all lower-case characters</td>
</tr>
<tr>
<td>strip</td>
<td><code>myString.strip()</code></td>
<td>Returns a string with leading and trailing whitespace (space, tab, new-line) chars. removed. An optional string parameter can be used to supply characters to strip instead of whitespace.</td>
</tr>
<tr>
<td>count</td>
<td><code>myString.count(sub)</code></td>
<td>Returns the number of occurrences of <code>sub</code> in <code>myString</code>. (Optional parameters: <code>myString.count(sub [, start [, end] ] )</code>)</td>
</tr>
<tr>
<td>endswith</td>
<td><code>myString.endswith(sub)</code></td>
<td>Returns True if <code>myString</code> ends with the substring <code>sub</code>; otherwise it returns False</td>
</tr>
<tr>
<td>startswith</td>
<td><code>myString.startswith(sub)</code></td>
<td>Returns True if <code>myString</code> starts with the substring <code>sub</code>; otherwise it returns False</td>
</tr>
<tr>
<td>isdigit</td>
<td><code>myString.isdigit()</code></td>
<td>Returns True if <code>myString</code> contains only digits; otherwise it returns False</td>
</tr>
<tr>
<td>isalpha</td>
<td><code>myString.isalpha()</code></td>
<td>Returns True if <code>myString</code> contains only letters; otherwise it returns False</td>
</tr>
<tr>
<td>split</td>
<td><code>myString.split()</code></td>
<td>Returns a list of the substrings of <code>myString</code> split at whitespace characters. An optional string parameter can supply characters to split on.</td>
</tr>
<tr>
<td>find</td>
<td><code>myString.find(sub)</code></td>
<td>Returns the starting index of the first occurrence of <code>sub</code>, or -1 if <code>sub</code> is not found. (Optional parameters: <code>myString.find(sub [, start [, end] ] )</code>)</td>
</tr>
<tr>
<td>replace</td>
<td><code>myString.replace(old,new)</code></td>
<td>Returns a string with all occurrences of substring <code>old</code> replaced by substring <code>new</code>. An additional integer parameter can specify the number of replacements to perform, e.g., <code>myString.replace(old,new,3)</code></td>
</tr>
</tbody>
</table>
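A few of these methods in action (Python 3 `print()` syntax; since strings are immutable, each call returns a new string):

```python
raw = '  Hello!!!  '
s = raw.strip()                 # 'Hello!!!' -- whitespace removed
print(s.upper())                # HELLO!!!
print(s.count('l'))             # 2
print(s.replace('!', '?'))      # Hello???
print(s.find('ell'))            # 1
words = 'GO Panthers'.split()   # split at whitespace
print(words)                    # ['GO', 'Panthers']
```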
**Lists:** A Python list is also a sequence collection, but a list can contain items of any type (e.g., characters, strings, integers, floats, other lists, etc.), and lists are mutable. Lists are represented by comma-separated values enclosed in square brackets (`[`, `]`). Operations on lists (or any sequence collection, e.g., strings) include:
<table>
<thead>
<tr>
<th>Operation</th>
<th>Operator</th>
<th>Explanation</th>
<th>Example</th>
<th>Result of Example</th>
</tr>
</thead>
<tbody>
<tr>
<td colspan="5">(Examples assume <code>myList = [5, 6, 7, 8]</code> and <code>ListB = [8, 9]</code>)</td>
</tr>
<tr>
<td>Slicing</td>
<td>[ : ]</td>
<td>Extract a part of the list</td>
<td>myList[ 1:3 ]</td>
<td>[6, 7]</td>
</tr>
<tr>
<td>Concatenation</td>
<td>+</td>
<td>Combine lists together</td>
<td>myList + ListB</td>
<td>[5, 6, 7, 8, 8, 9]</td>
</tr>
<tr>
<td>Repetition</td>
<td>*</td>
<td>Concatenate a repeated number of times</td>
<td>ListB * 3</td>
<td>[8, 9, 8, 9, 8, 9]</td>
</tr>
<tr>
<td>Membership</td>
<td>in</td>
<td>Ask whether an item is in a list</td>
<td>3 in myList</td>
<td>False</td>
</tr>
<tr>
<td>Length</td>
<td>len(list)</td>
<td>How many items are in the list?</td>
<td>len( myList )</td>
<td>4</td>
</tr>
</tbody>
</table>
The following list methods are provided by Python:
<table>
<thead>
<tr>
<th>Method</th>
<th>Usage</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>append</td>
<td>myList.append(item)</td>
<td>Adds item to the end of myList</td>
</tr>
<tr>
<td>extend</td>
<td>myList.extend(otherList)</td>
<td>Extends myList by adding all items in otherList to myList’s end</td>
</tr>
<tr>
<td>insert</td>
<td>myList.insert(i, item)</td>
<td>Insert item in myList at index i</td>
</tr>
<tr>
<td>pop</td>
<td>myList.pop()</td>
<td>Remove and return the last item in myList</td>
</tr>
<tr>
<td>pop(i)</td>
<td>myList.pop(i)</td>
<td>Remove and return the ith item in myList</td>
</tr>
<tr>
<td>del</td>
<td>del myList[i]</td>
<td>Deletes the item in the ith position of myList</td>
</tr>
<tr>
<td>remove</td>
<td>myList.remove(item)</td>
<td>Removes the first occurrence of item in myList</td>
</tr>
<tr>
<td>index</td>
<td>myList.index(item)</td>
<td>Returns the index of the first occurrence of item in myList</td>
</tr>
<tr>
<td>count</td>
<td>myList.count(item)</td>
<td>Returns the number of occurrences of item in myList</td>
</tr>
<tr>
<td>sort</td>
<td>myList.sort( )</td>
<td>Modifies myList to be sorted</td>
</tr>
<tr>
<td>reverse</td>
<td>myList.reverse( )</td>
<td>Modifies myList to be in reverse order</td>
</tr>
</tbody>
</table>
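Unlike the string operations above, most of these list methods modify the list in place (Python 3 `print()` syntax):

```python
myList = [5, 6, 7, 8]
myList.append(9)       # [5, 6, 7, 8, 9]
myList.insert(0, 4)    # [4, 5, 6, 7, 8, 9]
last = myList.pop()    # removes and returns 9
myList.remove(4)       # [5, 6, 7, 8] -- first occurrence of 4 removed
myList.reverse()       # [8, 7, 6, 5]
myList.sort()          # [5, 6, 7, 8]
print(myList, last)    # [5, 6, 7, 8] 9
```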
Tuples: A tuple is another sequence data type, so the sequence operations of indexing, slicing, concatenation, repetition, membership (in), and len() work on tuples too. Tuples are very similar to lists, i.e., comma-separated items enclosed in parentheses. The main difference is that tuples are immutable (cannot be modified).
Create two tuples as:
`student1 = ('Bob', 123456, 'Jr', 3.12)`
`student2 = 'Sally', 654321, 'Fr', 0.0`
In addition to indexing, “fields” of a tuple can be unpacked using a single assignment statement as:
`name, idnum, rank, gpa = student1`
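For example (Python 3 `print()` syntax), unpacking the student1 tuple defined above:

```python
student1 = ('Bob', 123456, 'Jr', 3.12)
name, idnum, rank, gpa = student1   # one field per variable, in order
print(name, idnum, rank, gpa)       # Bob 123456 Jr 3.12
# Tuples are immutable: student1[3] = 4.0 would raise a TypeError.
```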
Dictionaries: A dictionary is an unordered set of key-value pairs (written as key:value). Keys must be unique and immutable (e.g., numerics, strings, tuples of immutable objects). Dictionaries are typically used to look up the value corresponding to a specified key. Dictionaries can be written as comma-separated key:value pairs enclosed in curly braces. For example,
`phoneNumbers = {'fienup': 35918, 'gray': 35917, 'east': 32939, 'drake': 35811, 'schafer': 32187}`
Access to individual key:value pairs looks syntactically like a sequence lookup using a key instead of an index. For example, `phoneNumbers['east']` returns 32939, and a new key:value pair can be added by
`phoneNumbers['wallingford'] = 35919`
Additional methods on dictionaries are:
<table>
<thead>
<tr>
<th>Method</th>
<th>Usage</th>
<th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>keys</td>
<td>myDictionary.keys()</td>
<td>Returns a list of keys in myDictionary</td>
</tr>
<tr>
<td>values</td>
<td>myDictionary.values()</td>
<td>Returns a list of values in myDictionary</td>
</tr>
<tr>
<td>items</td>
<td>myDictionary.items()</td>
<td>Returns a list of key:value tuples in myDictionary</td>
</tr>
<tr>
<td>get</td>
<td>myDictionary.get(myKey)</td>
<td>Returns the value associated with myKey; otherwise None</td>
</tr>
<tr>
<td>get</td>
<td>myDictionary.get(myKey, alt)</td>
<td>Returns the value associated with myKey; otherwise alt</td>
</tr>
<tr>
<td>in</td>
<td>myKey in myDictionary</td>
<td>Returns True if myKey is in myDictionary; otherwise False</td>
</tr>
<tr>
<td>has_key</td>
<td>myDictionary.has_key(myKey)</td>
<td>Same as <code>in</code>: returns True if myKey is in myDictionary; otherwise False</td>
</tr>
<tr>
<td>del</td>
<td>del myDictionary[myKey]</td>
<td>Deletes the key:value pair whose key is myKey</td>
</tr>
</tbody>
</table>
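A short demonstration (Python 3 `print()` syntax; note that `has_key` exists only in Python 2 — in Python 3 use the `in` operator instead):

```python
phoneNumbers = {'fienup': 35918, 'gray': 35917, 'east': 32939}
print(phoneNumbers['east'])                  # 32939 -- lookup by key
phoneNumbers['wallingford'] = 35919          # add a new key:value pair
print(phoneNumbers.get('smith', 'unknown'))  # unknown -- key is absent
print('gray' in phoneNumbers)                # True
del phoneNumbers['east']                     # delete a key:value pair
print(sorted(phoneNumbers.keys()))           # ['fienup', 'gray', 'wallingford']
```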
Functions:
A function is a procedural abstraction, i.e., a named body of code that performs some task when it is called/invoked. Often a function will have one or more parameters that allow it to perform a more general (variable) task. For example, the cube function below can be called with any numeric value, with the corresponding cube of that number being returned.
```python
# Function to calculate the cube of a number
def cube(num):
    num_squared = num * num
    return num_squared * num

# call the function
value = 2
print 'The value', value, 'raised to the power 3 is', cube(value)
print 'The value 3 raised to the power 3 is', cube(3)
```
Terminology:
- a **formal parameter** is the name of the variable used in the function definition. It receives a value when the function is called. In the function cube, `num` is the formal parameter. Formal parameters are only known inside of the function definition. The section of a program where a variable is known is called its scope, so the scope of a formal parameter (and of other local variables defined in the function, such as `num_squared`) is limited to the function in which it is defined.
- an **actual parameter/argument** is the value used in the function call that is sent to the function. In the call to function cube, the variable `value` supplies the actual parameter value of 2.
- a **global variable** is created outside all functions and is known throughout the whole program file, e.g. `value`.
It is helpful to understand the “rules of the game” when a function is called. Memory is used to store the current program and the data associated with it. The memory used to store the data is divided as shown below.
- Global memory is used to store the global variables (and constants).
- The heap is used to store dynamically allocated objects as the program runs, e.g. lists and objects
- The run-time stack is used to store call-frames (or activation records) that get pushed on the stack when a function is called, and popped off the stack when a function returns.
When a function is called, the section of code doing the calling is temporarily suspended, and a new call-frame gets pushed on top of the stack before execution of the function body. The call-frame contains the following information about the function being called:
- the **return address** -- the spot in code where the call to the function occurred. This is needed so execution (control) can return there when the end of the function is reached or a return statement executes.
- room to store the formal parameters used by the function. In Python, parameters are passed by value, which means that the value of each actual parameter in the function call is assigned to the corresponding formal parameter in the function definition before the function starts executing. However, for strings, lists, dictionaries, tuples, objects, etc., the memory locations for actual parameters contain only references to the heap
- room to store the local variables defined in the function.
When a function returns, execution resumes at the function call (which is specified by the return address). A function typically sends back a value to the caller by specifying an expression after return in the return statement. In Python, if no expression is specified, then the special object `None` is returned.
    def play(myInt, myLongInt, myList, myString):
        print 'START OF play Function'
        print 'myInt=',myInt,'myLongInt=',myLongInt
        print 'myList=',myList,'myString=',myString
        myInt += 1
        myLongInt += 1
        myList.append(1)
        myString += 'a'
        print 'END OF play Function'
        print 'myInt=',myInt,'myLongInt=',myLongInt
        print 'myList=',myList,'myString=',myString
        return

    anInt = 10
    aLongInt = 123456789012345678901234567890L
    aList = range(5)
    aString = 'hello'
    print 'BEFORE CALL'
    print 'anInt=',anInt,'aLongInt=',aLongInt
    print 'aList=',aList,'aString=',aString
    play(anInt, aLongInt, aList, aString)
    print 'AFTER CALL'
    print 'anInt=',anInt,'aLongInt=',aLongInt
    print 'aList=',aList,'aString=',aString
Output of complete program:
```
BEFORE CALL
anInt= 10 aLongInt= 123456789012345678901234567890
aList= [0, 1, 2, 3, 4] aString= hello
START OF play Function
myInt= 10 myLongInt= 123456789012345678901234567890
myList= [0, 1, 2, 3, 4] myString= hello
END OF play Function
myInt= 11 myLongInt= 123456789012345678901234567891
myList= [0, 1, 2, 3, 4, 1] myString= helloa
AFTER CALL
anInt= 10 aLongInt= 123456789012345678901234567890
aList= [0, 1, 2, 3, 4, 1] aString= hello
```
Text Files: Below is a summary of the important text-file operations in Python.
<table>
<thead>
<tr>
<th>General syntax</th>
<th>Example</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>open(filename [, mode])</td>
<td><code>f = open('data.txt', 'w')</code></td>
<td>Modes: ‘r’ read only; ‘w’ write only; ‘a’ append; ‘r+’ both reading and writing. Default mode is ‘r’.</td>
</tr>
<tr>
<td>f.close()</td>
<td><code>f.close()</code></td>
<td>Close the file to free up system resources.</td>
</tr>
<tr>
<td>f.read()</td>
<td><code>all = f.read()</code></td>
<td>Returns the whole file as a string.</td>
</tr>
<tr>
<td>f.read(size)</td>
<td><code>chunk = f.read(100)</code></td>
<td>Returns a string of at most 100 (size) bytes. If the file has been completely read, an empty string is returned.</td>
</tr>
<tr>
<td>f.readline()</td>
<td><code>nextLine = f.readline()</code></td>
<td>Returns the next line from the file. The newline (‘\n’) character is left at the end of the string, unless it is the last line of a file which does not end in a newline character.</td>
</tr>
<tr>
<td>f.readlines()</td>
<td><code>allLines = f.readlines()</code></td>
<td>Returns a list containing all the lines of the file.</td>
</tr>
<tr>
<td>f.readlines(size)</td>
<td><code>someLines = f.readlines(5000)</code></td>
<td>Returns a list of the next complete lines totalling approximately 5000 (size) bytes. Only complete lines will be returned.</td>
</tr>
<tr>
<td>f.write(string)</td>
<td><code>f.write('cats and dogs')</code></td>
<td>Writes the string to the file.</td>
</tr>
<tr>
<td>loop over the file object</td>
<td><code>for line in f: print line,</code></td>
<td>Memory efficient, fast and simple code to loop over each line in the file.</td>
</tr>
</tbody>
</table>
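Writing a file and reading it back (Python 3 `print()` syntax; the file name is a throwaway example created in a temporary directory):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'data.txt')

f = open(path, 'w')          # open for writing
f.write('cats and dogs\n')
f.write('birds\n')
f.close()                    # free up system resources

f = open(path, 'r')          # default mode is 'r' anyway
lines = f.readlines()        # all lines, newline characters kept
f.close()
print(lines)                 # ['cats and dogs\n', 'birds\n']
```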
Classes: A class definition is like a blueprint (recipe) for each of the objects of that class
- A class specifies a set of data attributes and methods for the objects of that class
- The values of the data attributes of a given object make up its state
- The behavior of an object depends on its current state and on the methods that manipulate this state
- The set of a class’s methods is called its interface
The general syntax of class definition is:
```python
class MyClass [( superClass1 [, superClass2 ]* ) ]:
    '''Document comment which becomes the __doc__ attribute for the class'''
    def __init__(self [, param [, param ]* ]):
        '''Document comment for the constructor method, with self referencing the object itself'''
        __init__body
    # defs of other class methods and assignments to class attributes
# end class MyClass
```
Classes in Python have the following characteristics:
- all class attributes (data attributes and methods) are public by default, unless the identifier starts with a single underscore, e.g. `self._numSides`
- all data types are objects, so they can be used as inherited base classes
- most built-in operators (+, -, *, <, >, ==, etc.) can be redefined for a class. This makes programming with objects a lot more intuitive. For example suppose we have two Die objects: `die1` & `die2`, and we want to add up their combined rolls. We could use accessor methods to do this:
```python
diceTotal = die1.getRoll() + die2.getRoll()
```
Here, the `getRoll` method returns an integer (type `int`), so the `+` operator being used above is the one for `ints`. But, it might be nice to “overload” the `+` operator by defining an `__add__` method as part of the `Die` class, so the programmer could add dice directly as in:
```python
diceTotal = die1 + die2
```
- **objects are passed by reference when used as parameters to functions**
- all classes are provided with a set of standard methods (`__str__`, `__doc__`, etc.), but these may not work properly until overridden for the class
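The operator-overloading idea above can be shown with a stripped-down class (the fixed roll values here make the result predictable; the real Die class below rolls randomly):

```python
class FixedDie(object):
    """A die frozen at one value, just to illustrate operator overloading."""
    def __init__(self, value):
        self._currentRoll = value

    def getRoll(self):
        return self._currentRoll

    def __add__(self, other):
        # Called for: fixedDie1 + fixedDie2
        return self._currentRoll + other._currentRoll

die1 = FixedDie(3)
die2 = FixedDie(4)
diceTotal = die1 + die2   # uses __add__ instead of explicit getRoll() calls
```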
The three most important features of *Object-Oriented Programming* (OOP) to simplify programs and make them maintainable are:
1. **encapsulation** - restricts access to an object's data to access only by its methods
- helps to prevent indiscriminate changes that might cause an invalid object state (e.g., a 6-sided die with a roll of 8)
2. **inheritance** - allows one class (the *subclass*) to pick up data attributes and methods of other class(es) (the *parents*)
- helps code reuse since the subclass can extend its parent class(es) by adding additional data attributes and/or methods, or overriding (through polymorphism) a parent's methods
3. **polymorphism** - allows methods in several different classes to have the same names, but be tailored for each class
- helps reduce the need to learn new names for standard operations (or invent strange names to make them unique)
```python
# File: die_simple.py
"""This module defines the Die class."""
from random import randint

class Die(object):
    """This class represents a six-sided die."""
    def __init__(self):
        """The initial face of the die."""
        self._currentRoll = randint(1, 6)

    def roll(self):
        """Resets the die's value to a random number between 1 and 6."""
        self._currentRoll = randint(1, 6)

    def getRoll(self):
        """Returns the face value of the die."""
        return self._currentRoll

    def __str__(self):
        """Returns the string representation of the die."""
        return str(self._currentRoll)
```
Consider the interface for a generalized AdvancedDie class that can have any number of sides.
<table>
<thead>
<tr>
<th>Method</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>__init__</code></td>
<td><code>myDie = AdvancedDie(8)</code>. Constructs a die with a specified number of sides and randomly rolls it (default of 6 sides if no argument is supplied).</td>
</tr>
<tr>
<td><code>__cmp__</code></td>
<td><code>if myDie == otherDie:</code>. Allows the comparison operations (&gt;, &lt;, ==, etc.) to work correctly for AdvancedDie objects.</td>
</tr>
<tr>
<td><code>__add__</code></td>
<td><code>sum = myDie + otherDie</code>. Allows the direct addition of AdvancedDie objects, and returns the integer sum of their current values.</td>
</tr>
<tr>
<td><code>__str__</code></td>
<td>Directly as <code>myDie.__str__()</code> or <code>str(myDie)</code>, or indirectly as <code>print myDie</code>. Returns a string representation for the AdvancedDie. Overriding the <code>__str__</code> method of the Die class makes the “print” statement work correctly with an AdvancedDie.</td>
</tr>
<tr>
<td>roll</td>
<td><code>myDie.roll()</code>. Rolls the die randomly and returns the value rolled.</td>
</tr>
<tr>
<td>getRoll</td>
<td><code>myDie.getRoll()</code>. Returns the current roll of the die.</td>
</tr>
<tr>
<td>getSides</td>
<td><code>myDie.getSides()</code>. Returns the number of sides on the die.</td>
</tr>
<tr>
<td>show</td>
<td><code>myDie.show()</code>. Displays the die’s value to standard output.</td>
</tr>
</tbody>
</table>
Consider the following script and associated output:
```python
# testDie.py - script to test AdvancedDie class
from advanced_die import AdvancedDie

die1 = AdvancedDie(100)
die2 = AdvancedDie(100)
die3 = AdvancedDie()
print 'die1 =', die1          # calls __str__
print 'die2 =', die2
print 'die3 =', die3
print 'die1.show() = ', die1.show()
print 'die1.getRoll() = ', die1.getRoll()
print 'die1.roll() = ', die1.roll()
print 'die1.getRoll() = ', die1.getRoll()
print 'die1.getRoll() = ', die1.getRoll()
print 'die1.getRoll() = ', die1.getRoll()
print 'die1 == die2:', die1 == die2
print 'die1 < die2:', die1 < die2
print 'die1 > die2:', die1 > die2
print 'die1 <= die2:', die1 <= die2
print 'die1 >= die2:', die1 >= die2
print 'die1 != die2:', die1 != die2
print 'die1.__str__():', die1
print 'currentRoll = ', die1._currentRoll
```

Sample output (roll values vary from run to run):

```
die1 = 59
die2 = 49
die3 = 1
die1.show() = 59
die1.getRoll() = 59
die1.roll() = 53
die1.getRoll() = 53
die1.getRoll() = 53
die1 == die2: False
die1 < die2: False
die1 > die2: True
die1 <= die2: False
die1 >= die2: True
die1 != die2: True
die1.__str__(): # Sides=100 Roll=65
```
Notice that the testDie.py script needed to import AdvancedDie, but not the Die class.
The AdvancedDie class, which inherits from the Die superclass (file advanced_die.py):
```python
# File: advanced_die.py
"""Provides an AdvancedDie class that allows for any number of sides.
Inherits from the parent class Die in module die_simple."""
from die_simple import Die
from random import randint

class AdvancedDie(Die):
    """Advanced die class that allows for any number of sides"""
    def __init__(self, *args):
        """Constructor for an any-sided Die that takes the number of sides
        as a parameter; if no parameter is given, the default is 6 sides."""
        # call Die parent class constructor
        Die.__init__(self)
        if len(args) == 0:
            self._numSides = 6
        elif len(args) == 1 and isinstance(args[0], int):
            self._numSides = args[0]
        else:
            print "Usage: Die() or Die(numberOfSides)"
            return None
        self._currentRoll = randint(1, self._numSides)

    def roll(self):
        """Causes a die to roll itself -- overrides Die class roll"""
        self._currentRoll = randint(1, self._numSides)
        return self._currentRoll

    def show(self):
        """Displays a Die by printing it"""
        print self._currentRoll

    def __cmp__(self, rhs_Die):
        """Overrides the '__cmp__' operator for Dies, to allow for
        a deep comparison of two Dice"""
        if self._currentRoll < rhs_Die._currentRoll:
            return -1
        elif self._currentRoll == rhs_Die._currentRoll:
            return 0
        else:
            return 1

    def __add__(self, rhs_Die):
        """Returns the sum of two dice rolls"""
        return self._currentRoll + rhs_Die._currentRoll

    def __str__(self):
        """Returns the string representation of the AdvancedDie."""
        return '# Sides=' + str(self._numSides) + ' Roll=' + str(self._currentRoll)

    def getSides(self):
        """Returns the number of sides on the die."""
        return self._numSides
```
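Note that `__cmp__` (and the `print` statement used above) are Python 2 features. In Python 3 the same comparisons are written as rich-comparison methods; a minimal stand-in class sketches the idea (the class name `AdvancedDie3` is invented here, and `functools.total_ordering` fills in the remaining operators from `__eq__` and `__lt__`):

```python
from functools import total_ordering
from random import randint

@total_ordering
class AdvancedDie3(object):
    """Minimal Python 3 stand-in comparing dice by current roll value."""
    def __init__(self, numSides=6):
        self._numSides = numSides
        self._currentRoll = randint(1, numSides)

    def __eq__(self, rhs):
        return self._currentRoll == rhs._currentRoll

    def __lt__(self, rhs):
        return self._currentRoll < rhs._currentRoll

    def __add__(self, rhs):
        return self._currentRoll + rhs._currentRoll

    def __str__(self):
        return '# Sides=%d Roll=%d' % (self._numSides, self._currentRoll)

d1, d2 = AdvancedDie3(100), AdvancedDie3(100)
ordered = (d1 < d2) or (d1 >= d2)   # total_ordering derives >=, >, <= for us
```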
---
Synthesis of Self-Timed Circuits
by Program Transformation
Steven M. Burns
and
Alain J. Martin
Computer Science Department
California Institute of Technology
Pasadena, CA 91125 USA
5253: TR: 87
Self-timed circuits can be synthesized from concurrent programs in two logically separate phases. First, through a series of program transformations, the source program is decomposed into an equivalent program constructed entirely from instances of basic processes. These basic processes correspond to the syntactic constructs of the source language. The remainder of the synthesis procedure consists of compiling each of the basic processes into a self-timed circuit using techniques described in earlier papers. These compilations need to be done only once. This paper describes in detail the program transformations used in an automated synthesis procedure developed at Caltech. The transformations used are applications of process decomposition, a simple technique that is easy to verify. The circuits synthesized by these program transformations are correct by construction; thus, this technique provides a simple method for constructing provably correct circuits from a high-level specification.
We propose a method for developing VLSI circuits from an abstract specification. The programmer designs a concurrent program that meets this specification, and then an automatic mechanism transforms the program into a circuit. The programmer's proof obligation is limited to verifying that the concurrent program is an implementation of the specification. The program-to-circuit transformation is verified separately.
Program transformations within the source language provide a powerful tool for deriving implementations of programs. By working at an abstract level in a language with a well-defined semantics, transformations, which are complex if performed at the circuit level, are reduced to trivial syntactic manipulations. Such transformations are both easy to perform and easy to verify. They form the core of an automatic compiler for synthesizing self-timed circuits.
We have constructed a set of program transformation rules that, when applied to any program in the source language, transform it into an equivalent program of a very simple form. This form is composed only of simple basic processes that have already been compiled into circuits. In this paper, we describe in detail this set of program transformations. In addition, we show the compiled circuits for each of the basic processes and the resulting syntax-directed translation rules. We also introduce and compare various schemes for guard evaluation and then apply these schemes to a simple example.
1. ⟨program⟩ ::= ( ⟨process⟩ { ‖ ⟨process⟩ } ) { ⟨channel⟩ }
2. ⟨channel⟩ ::= channel ( ⟨NAME⟩ , ⟨NAME⟩ )
3. ⟨port⟩ ::= ( passive | active ) ⟨NAME⟩ ( ⟨INT⟩ , ⟨INT⟩ )
4. ⟨var⟩ ::= boolean ⟨NAME⟩ = ( true | false )
5. ⟨sequence⟩ ::= ⟨statement⟩ [ ; ⟨sequence⟩ ]
6. ⟨statement⟩ ::= skip
7. ⟨statement⟩ ::= ⟨NAME⟩ ( up | down )
8. ⟨statement⟩ ::= ⟨NAME⟩ ( ⟨INT⟩ ) : [ ⟨responses⟩ ]
9. ⟨responses⟩ ::= ⟨response⟩ [ | ⟨responses⟩ ]
10. ⟨response⟩ ::= ⟨INT⟩ → ⟨sequence⟩
11. ⟨gcs⟩ ::= ⟨gc⟩ { | ⟨gc⟩ }
12. ⟨gc⟩ ::= ⟨expr⟩ → ⟨sequence⟩
13. ⟨expr⟩ ::= ⟨conjunct⟩ [ or ⟨expr⟩ ]
14. ⟨conjunct⟩ ::= ⟨primary⟩ [ and ⟨conjunct⟩ ]
15. ⟨primary⟩ ::= not ⟨primary⟩
16.–21. (garbled in this copy: the selection and repetition statement forms and the remaining ⟨primary⟩ alternatives)
Figure 1: Backus-Naur Form (BNF) for Source Language
# 1 Source Language
The source language is based on CSP[3], with the addition of the probe[6] and a new communication construct. A complete description of the language syntax is given in Figure 1. We shall refer to this figure when deriving the individual transformation rules.
A program in this language consists of a set of sequential processes with interconnecting channels. Associated with each sequential process is a set of ports, a set of private variables, and a list of statements to be executed sequentially. Ports that do not connect to another process connect to the environment.
Only boolean variables are allowed. Variables are changed by assignment to true (x up) or to false (x down). The selection ([⟨gcs⟩]) and repetition (*[⟨gcs⟩]) constructs are based on guarded commands. We use *[⟨sequence⟩] as an abbreviation for *[true → ⟨sequence⟩].
Synchronization between two processes is accomplished by zero-slack communication actions across channels denoted by pairs of ports. Of the two ports that make up a channel, one is declared active and the other is declared passive. The process that owns the passive port can determine whether the other process is waiting for a communication on this channel by evaluating a boolean condition called a probe. Probes may be used in arbitrary boolean expressions.
Though concurrently operating processes may not share variables, processes may communicate data by exchanging values from small sets during a synchronization action.
When declaring a port, we specify both the send and receive sets of values, each set being represented by a single integer. For example,
\[ \text{passive } L(3,2) \]
declares a passive port \( L \) with send set \( \{0,1,2\} \) and receive set \( \{0,1\} \). The syntactic construct for a communication action allows different sequences of commands to be executed based on the value received during a communication. An execution of the communication action (on the same port, \( L \))
\[ L(1) : [0 \rightarrow x\ \text{down} \mid 1 \rightarrow x\ \text{up}] , \]
sends the value 1 and simultaneously receives either a 1 or a 0. If a 0 is received, \( x \) is set to false; if a 1 is received, \( x \) is set to true. We allow two abbreviations in the specification of a communication action: The output value may be omitted if the port has only one send value, and the receive value selection may be omitted if the port has only one receive value.
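This dispatch-on-received-value semantics can be mimicked in ordinary sequential code. The sketch below is only an illustrative model of the branch selection (the helper names are invented here); it is not the circuit compilation the paper describes:

```python
# Model the action  L(1) : [0 -> x down | 1 -> x up]  as a dispatch table.
state = {'x': None}

def x_down():
    state['x'] = False

def x_up():
    state['x'] = True

def communicate(sent, received, responses):
    """Send `sent`, then run the command selected by the received value."""
    responses[received]()        # pick the branch for the value received
    return sent

responses = {0: x_down, 1: x_up}
communicate(1, 0, responses)     # a 0 arrives: x is set to false
after_zero = state['x']
communicate(1, 1, responses)     # a 1 arrives: x is set to true
after_one = state['x']
```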
# 2 Target Language — Self-Timed Circuits
The target of the compilation is a self-timed circuit—a set of circuit variables (nodes) interconnected by a set of operators (gates). These circuits are designed to function correctly regardless of the internal delays of the operators. The required operator types include the combinational elements, \( WIRE, AND, \) and \( OR \); and the state-holding elements shown in Figure 2. Each operator is defined in terms of a set of production rules[4,5]. A production rule is a simple transition rule of the form \( G \leftarrow S \), where \( G \) is a boolean expression and \( S \) is an assignment to true or false. All references to a circuit variable are assumed to have the same value (isochronic forks)[1,4]. A synchronizer, which cannot be represented in terms of production rules, is included to allow the implementation of programs with negated probes. The synchronizer, as well as the other operators, have been implemented as CMOS standard cells.
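Production rules of the form G ↦ S can be simulated directly in software. The rules below are those commonly given for a Muller C-element, a typical state-holding operator (the paper's Figure 2 is not reproduced here), and the helper names are invented for illustration:

```python
# Each production rule pairs a guard over the circuit state with an
# assignment; firing a rule whose guard is true updates one variable.
# Example rules (Muller C-element): a & b -> c up ; ~a & ~b -> c down.

def c_element_rules():
    return [
        (lambda s: s['a'] and s['b'],          ('c', True)),
        (lambda s: not s['a'] and not s['b'],  ('c', False)),
    ]

def fire_enabled(rules, state):
    """Fire every rule whose guard holds; returns the updated state."""
    for guard, (var, value) in rules:
        if guard(state):
            state[var] = value
    return state

s = {'a': True, 'b': True, 'c': False}
fire_enabled(c_element_rules(), s)      # both inputs high: c goes up
c_after_high = s['c']
s['a'] = False                          # inputs disagree: no guard holds,
fire_enabled(c_element_rules(), s)      # so c keeps its state
c_holds = s['c']
```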
Self-timed circuit implementations of concurrent programs are generated by implementing each sequential process as a separate sub-circuit. The sub-circuits are connected (by wire operators) only to implement communication actions. The simultaneity required in the zero-slack communications is implemented using a four-phase handshaking protocol. In order to implement general communication actions (those in which data is transmitted) the usual request/acknowledge pair of wires is replaced by one wire for each send value and one wire for each receive value.
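The four-phase protocol on a request/acknowledge wire pair can be sketched as an explicit state walk (a toy simulation under the usual two-wire reading of the protocol, not the compiled circuit):

```python
def four_phase_handshake(trace):
    """One complete four-phase cycle on a req/ack wire pair.
    Appends each (req, ack) state to `trace` and returns it."""
    req = ack = 0
    trace.append((req, ack))
    req = 1; trace.append((req, ack))   # phase 1: active side raises request
    ack = 1; trace.append((req, ack))   # phase 2: passive side acknowledges
    req = 0; trace.append((req, ack))   # phase 3: request withdrawn
    ack = 0; trace.append((req, ack))   # phase 4: acknowledge withdrawn
    return trace

states = four_phase_handshake([])
```

The wires return to their initial (0, 0) state, which is what allows the next communication on the channel to reuse the same pair.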
# 3 Syntax-Directed Compilation
An arbitrary program in the source concurrent language is compiled into a target self-timed circuit by a syntax-directed translator, similar to that used in standard
program-to-machine-code compilers. Such a translator requires a set of BNF rules describing the syntax of the source language and a set of translation rules describing how to construct objects in the target language. In this application, an object in the target language is a self-timed circuit. The translation rules specify how to generate and connect circuits corresponding to the syntactic constructs. The translation rules are derived in two logically separate phases: program transformation and basic process compilation.
### 3.1 Process Decomposition
Process decomposition is the most commonly used program transformation. An arbitrary program part $\beta$ is replaced by a single active communication and a separate process implementing $\beta$:
$$\alpha;\beta;\gamma \triangleright \text{active } A' \text{ passive } A \ (\alpha;A';\gamma \ || \ \ast[[\overline{A} \rightarrow \beta;A]]) \text{ channel } (A',A)\ .$$
(Read ‘$\triangleright$‘ as “is replaced by”.) Process decomposition does not introduce concurrency; the active communication $A'$ cannot finish until $A$ and, thus, $\beta$ complete. The original process and the new process may share variables and ports. These two processes are never active concurrently; thus, exclusive access to each variable and port is ensured. In the following, we do not explicitly declare the ports and channel used in a process decomposition. The two ports of a channel are denoted by the same capital letter. The primed letter represents the active port. We write the above process decomposition as:
$$\alpha;\beta;\gamma \triangleright \alpha;A';\gamma \ || \ (A/\beta) \ .$$
A more general form of process decomposition is used to implement constructs involving guard evaluation. An evaluation construct may be implemented in a separate process, and if this is the case, the multi-valued result of the evaluation is communicated back to the original process by a general communication action. Such a decomposition
is of the form:
\[ [\gamma_0 \rightarrow \beta_0 \mid \cdots \mid \gamma_{n-1} \rightarrow \beta_{n-1}] \]
\[ \triangleright \quad \text{active } G'(1,n) \quad \text{passive } G(n,1) \]
\[ (\, G' : [0 \rightarrow \beta_0 \mid \cdots \mid n-1 \rightarrow \beta_{n-1}] \]
\[ \parallel \ast[[\,\overline{G} \rightarrow [\gamma_0 \rightarrow G(0) \mid \cdots \mid \gamma_{n-1} \rightarrow G(n-1)]\,]] \]
\[ ) \quad \text{channel } (G', G) . \]
Notice that each new process is less complicated than the original. The first process performs a general communication action, while the second process evaluates the guard set. Again, no concurrency is introduced by this transformation. The evaluation of the guards \( \gamma_i \) in the second process completes before a statement \( \beta_i \) initiates. This follows from the semantics of the general communication action.
To precisely describe the transformations that follow, we use quantification instead of abbreviated enumeration to denote structures of indefinite size. Using quantification notation, the above decomposition becomes:
\[ [\langle | i : 0 \leq i < n :: \gamma_i \rightarrow \beta_i \rangle] \]
\[ \triangleright \quad \text{active } G'(1,n) \quad \text{passive } G(n,1) \]
\[ (\, G' : [\langle | i : 0 \leq i < n :: i \rightarrow \beta_i \rangle] \]
\[ \parallel \ast[[\,\overline{G} \rightarrow [\langle | i : 0 \leq i < n :: \gamma_i \rightarrow G(i) \rangle]\,]] \]
\[ ) \quad \text{channel } (G', G) . \]
Again, we write the final form of this decomposition as:
\[ G' : [\langle | i : 0 \leq i < n :: i \rightarrow \beta_i \rangle] \parallel (G(n,1)/\langle i : 0 \leq i < n :: \gamma_i \rangle) . \]
The \( G(n,1) \) denotes the name and size of the passive port used in the decomposition. In this case, the number of output values is \( n \) (one per guard) and the number of input values is 1.
### 3.2 Target Language of the Transformations
The target language of the program transformations is a slight extension of the above source language. Because of process decomposition, a restricted form of shared variables and shared ports is allowed. Processes may share ports and variables if references to these objects are not made concurrently. Also, concurrent execution of multiple statements is allowed and denoted by the comma. For example, \( \alpha ; \beta_1, \beta_2; \gamma \) denotes the execution of \( \alpha \), followed by the concurrent execution of \( \beta_1 \) and \( \beta_2 \), and, finally, the execution of \( \gamma \).
Programs translated into this language are written in a different typeface than source language programs. This distinction is not necessary, but serves as an aid in describing which syntactic forms have already been or have yet to be translated. For compatibility with the notation of previous papers [4,5], we use overlines to denote probes (\( \overline{X} \)) and up and down arrows to denote assignment (\( \uparrow \), \( \downarrow \)).
2. Process: \( Q' \)
3. Sequence: \( \ast[[\,\overline{Q} \rightarrow A'_1; A'_2; Q\,]] \)
4. Skip: \( \ast[[Q]] \)
5. Assignment: \( \ast[[\,\overline{Q} \rightarrow x\uparrow; Q\,]] \)
6. Com: \( \ast[[\,\overline{Q} \rightarrow L(j) : [\langle | k : 0 \leq k < n :: k \rightarrow A'_k \rangle]; Q\,]] \)
7.–13. Selection, Control, Seq guards, Con guards, Conjunction, Seq AND, Con AND: guard-evaluation processes (their programs are garbled in this copy)
14. Negation: \( \ast[[\,\overline{Q} \rightarrow G' : [1 \rightarrow Q(0) \mid 0 \rightarrow Q(1)]\,]] \)
15. Variable: \( \ast[[\,\overline{Q} \wedge x \rightarrow Q(1) \mid \overline{Q} \wedge \lnot x \rightarrow Q(0)\,]] \)
16. Probe: \( \ast[[\,\overline{Q} \wedge \overline{X} \rightarrow Q(1) \mid \overline{Q} \wedge \lnot\overline{X} \rightarrow Q(0)\,]] \)
17. True: \( \ast[[Q(1)]] \)
Figure 3: The above basic process types are generated as the result of the program transformations. Each process corresponds to a syntactic construct and is readily compiled into a self-timed circuit.
### 3.3 Compilation of the Basic Constructs
Figure 3 displays all of the basic processes produced by the program transformations described in the next chapter. The remaining step is to compile these basic processes into self-timed circuits. These compilations are straightforward applications of the methods described in [4,5]. When possible, reshuffling is performed on passive communications introduced by process decomposition. The complete compilations are described in [1].
Both the program transformations and the resulting circuits for each basic process are described together succinctly as translation rules in circuit form. These rules are shown later in the text as Figures 4, 5, and 6.
# 4 Transformation Rules
We now derive the program transformations corresponding to each syntactic construct. The equation numbers used to identify the transformations correspond to the numbers used in Figures 1 and 3. Several of the BNF rules are only used to define precedence. We do not define program transformations corresponding to these rules.
### 4.1 Processes, Declarations, and Channels
No transformations are applied to the parallel composition of processes or to the declaration of ports and variables. The only transformation needed in the first five BNF rules involves rule 2. For compatibility with the following transformations, one process decomposition is applied so that all ⟨sequence⟩ forms are guarded by a passive communication:
\[
\begin{align*}
\langle \text{process} \rangle & \triangleright \{ \langle \text{port} \rangle \} \{ \langle \text{var} \rangle \} \langle \text{sequence} \rangle \\
& \triangleright \{ \langle \text{port} \rangle \} \{ \langle \text{var} \rangle \}\ Q' \parallel (Q/\langle \text{sequence} \rangle).
\end{align*}
\]
The basic process \( Q' \) performs exactly one active communication; thus, ⟨sequence⟩ is also executed exactly once.
### 4.2 Sequencing
The sequential composition of a (statement) and a (sequence) is transformed by process decomposition into the sequence of two active communications and a process implementing each statement:
\[
\begin{align*}
(Q/\langle \text{sequence} \rangle) & \triangleright (Q/\langle \text{statement} \rangle_1; \langle \text{sequence} \rangle_2) \\
& \triangleright \ast[[\,\overline{Q} \rightarrow A'_1; A'_2; Q\,]] \parallel (A_1/\langle \text{statement} \rangle_1) \parallel (A_2/\langle \text{sequence} \rangle_2).
\end{align*}
\]
### 4.3 Skip
The skip statement is implemented as the infinite repetition of a passive communication:
\[
\begin{align*}
(Q/\langle \text{statement} \rangle) & \triangleright (Q/\text{skip}) \triangleright \ast[[\,\overline{Q} \rightarrow \text{skip}; Q\,]] \triangleright \ast[[\,\overline{Q} \rightarrow Q\,]] \triangleright \ast[[Q]].
\end{align*}
\]
The probe of a passive communication is always a precondition to performing the action, so we may remove the selection statement with guard \( Q \).
### 4.4 Assignment
The assignment statement decomposes into
\[
\begin{align*}
(P/\langle \text{statement} \rangle) & \triangleright (P/\langle \text{NAME} \rangle\ \text{up}) \triangleright \ast[[\,\overline{P} \rightarrow x\uparrow; P\,]],
\end{align*}
\]
a simple process implementing a register. The name \( x \) represents an arbitrary ⟨NAME⟩. A similar decomposition is applied to assignments of \texttt{false}.
### 4.5 Communication
By applying the BNF rules corresponding to communication, we get
\[
\begin{align*}
(Q/\langle \text{statement} \rangle) \triangleright (Q/L(j) : [\langle | k : 0 \leq k < n :: k \rightarrow \langle \text{sequence} \rangle_k \rangle]),
\end{align*}
\]
where \( L \) and \( j \) represent an arbitrary \( \langle \text{NAME} \rangle \) and \( \langle \text{INT} \rangle \), respectively (rules 9, 11 and 12). Process decomposition produces new processes to implement each \( \langle \text{sequence} \rangle \), yielding:
\[
\ast[[\,\overline{Q} \longrightarrow L(j) : [\langle | k : 0 \leq k < n :: k \longrightarrow A'_k \rangle]; Q\,]] \\
\parallel \langle \| k : 0 \leq k < n :: (A_k/\langle \text{sequence} \rangle_k) \rangle.
\]
4.6 Selection and Repetition
To derive the implementation of the control structures, we first review the semantics of these constructs. Operationally, the execution of the selection statement can be described as: Repetitively evaluate each guard until one or more is true, then pick a true one and execute the corresponding command. The program transformation,
\[
\ast[[\overline{Q} \longrightarrow [\gamma_1 \longrightarrow \beta_1 | \ldots | \gamma_n \longrightarrow \beta_n]; Q]] \\
\triangleright \ast[[\overline{Q} \longrightarrow [\gamma_1 \longrightarrow \beta_1; Q | \ldots | \gamma_n \longrightarrow \beta_n; Q | \langle \wedge i :: \neg\gamma_i \rangle \longrightarrow \text{skip}]]],
\]
does not change the meaning of the selection statement, but makes it easier to implement because at least one guard will evaluate to true. Similarly, the repetition statement may be transformed into:
\[
\ast[[\overline{Q} \longrightarrow \ast[\gamma_1 \longrightarrow \beta_1 | \ldots | \gamma_n \longrightarrow \beta_n]; Q]] \\
\triangleright \ast[[\overline{Q} \longrightarrow [\gamma_1 \longrightarrow \beta_1 | \ldots | \gamma_n \longrightarrow \beta_n | \langle \wedge i :: \neg\gamma_i \rangle \longrightarrow Q]]].
\]
The new forms for selection and repetition are similar. Only the position of the communication \( Q \) is different. We can perform a general process decomposition on both the selection and repetition forms and use the same implementation of a guarded command set:
\[
(Q/\langle \text{statement} \rangle) \triangleright (Q/\langle \text{gcs} \rangle) \| (G(2,1)/\langle \text{gcs} \rangle),
\]
where
\[
(G(2,1)/\langle \text{gcs} \rangle) \triangleright (G(2,1)/\langle | i : 1 \leq i \leq n :: \langle \text{expr} \rangle_i \longrightarrow \langle \text{sequence} \rangle_i \rangle) \\
\triangleright \ast[[\overline{G} \longrightarrow [\langle | i : 1 \leq i \leq n :: \langle \text{expr} \rangle_i \longrightarrow \langle \text{sequence} \rangle_i; G(1) \rangle \\
| \langle \wedge i : 1 \leq i \leq n :: \neg\langle \text{expr} \rangle_i \rangle \longrightarrow G(0)]]].
\]
The value returned by the communication \( G \) denotes the result of evaluating the disjunction of the guards within the guarded command set. Notice that in the selection statement, the guarded command set is reevaluated if a false value is returned. Repetition is the opposite of selection. Reevaluation occurs if the guarded command set returns true as described by the basic process:
\[
\ast[[\overline{Q} \longrightarrow G' : [0 \longrightarrow Q \mid 1 \longrightarrow \text{skip}]]].
\]
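As an illustrative aside (not part of the translation method itself), the operational difference between the two transformed forms can be mimicked in Python, with guards modeled as parameterless predicates and commands as thunks: selection re-evaluates on an all-false outcome, while repetition terminates on it.

```python
import random

def select(guarded_cmds):
    """Selection [g1 -> c1 | ... | gn -> cn]: repeatedly evaluate the
    guards until at least one is true, then run one true branch."""
    while True:
        ready = [cmd for guard, cmd in guarded_cmds if guard()]
        if ready:
            random.choice(ready)()  # non-deterministic choice among true guards
            return
        # all guards false -> "skip" and re-evaluate

def repeat(guarded_cmds):
    """Repetition *[g1 -> c1 | ... | gn -> cn]: run true branches until
    every guard is false, then terminate (perform Q)."""
    while True:
        ready = [cmd for guard, cmd in guarded_cmds if guard()]
        if not ready:
            return
        random.choice(ready)()
```

The `random.choice` stands in for the non-deterministic pick of a true guard; the circuit implementation makes this choice by arbitration, not randomness.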
4.7 Guarded Command Sets
The guarded command set process is decomposed into a control process that sequences guard evaluation and the associated command execution, a set of processes that implement the commands and a process that evaluates the guard set:
\[
(Q(2,1)/\langle \text{gcs} \rangle) \triangleright (Q(2,1)/\langle | i : 1 \leq i \leq n :: \langle \text{expr} \rangle_i \rightarrow \langle \text{sequence} \rangle_i \rangle) \\
\triangleright \ast[[\overline{Q} \rightarrow G' : [0 \rightarrow Q(0) \mid \langle | i : 1 \leq i \leq n :: i \rightarrow A'_i; Q(1) \rangle]]] \\
\parallel \langle \| i : 1 \leq i \leq n :: (A_i/\langle \text{sequence} \rangle_i) \rangle \\
\parallel (G(n+1,1)/\langle | i : 0 \leq i \leq n :: \langle \text{expr} \rangle_i \rangle). \quad (13)
\]
(Rules 13 and 14 are applied here.) The control process provides a separation between the issues of guard evaluation and statement execution by storing the guard that evaluated to true. This process distinguishes between the program state prior to the guarded command set and the program states following the arrows in each guarded command. Guard evaluation completes before subsequent statements change variable values. The guard set process includes \( \langle \text{expr} \rangle_0 \), the negation of the disjunction of all the other expressions.
The non-trivial translation rules corresponding to the BNF rules 1–12 are shown in Figure 4. The remainder of the paper is concerned with the compilation of the guard set process.
4.8 Guard Set Evaluation
The semantics of the language does not specify the order in which to evaluate the guards. Whereas the other constructs require a strict ordering between command executions, concurrency may be exploited in guard evaluation. Because of the potential gains of concurrency, there is no single best scheme for guard evaluation. Instead, depending on the syntactic structure of the guard set and on invariant properties of the original program, different evaluation schemes will produce the most efficient implementation. Of the four schemes we describe, one is entirely sequential, while the other three represent different methods for using and controlling concurrency.
All four decomposition schemes require that the guard sets be in special forms. The special forms consist of both syntactic and invariant properties. For each property, we define a program transformation that, from an arbitrary guard set, produces an equivalent set in which the property holds. We choose to define both the properties and transformations because often a programmer can establish the properties (in particular, the invariants) by more subtle transformations. An automatic compiler can bypass these transformations if the programmer specifies in the source program that the desired properties are satisfied. We now define some properties and transformations on the guard set process,
\[
(Q(n+1,1)/\langle | i : 0 \leq i \leq n :: \langle \text{expr} \rangle_i \rangle).
\]
Figure 4: Translation Rules Excluding Those for Guard Set Evaluation
4.8.1 Syntactic and Invariant Properties
Mutual Exclusion A guard set is exclusive if, when it is evaluated, at most one guard is true. This property is expressed by the invariant,
$$\neg \overline{Q} \lor \langle \wedge i, j : 0 \leq i, j \leq n \wedge i \neq j :: \neg\langle\text{expr}\rangle_i \lor \neg\langle\text{expr}\rangle_j \rangle .$$
The invariant can always be achieved by successive strengthenings of the guards, which will produce an entirely deterministic implementation of the original non-deterministic guard set. Precisely, for $1 \leq i \leq n$,
$$\langle\text{expr}\rangle'_i \equiv \langle \wedge j : 1 \leq j < i :: \neg\langle\text{expr}\rangle_j \rangle \wedge \langle\text{expr}\rangle_i$$
and
$$\langle\text{expr}\rangle'_0 \equiv \langle\text{expr}\rangle_0 .$$
(Read '≡' as "is defined as".)
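The successive strengthening can be stated as a small Python sketch (our own illustration, with guards modeled as predicates over an environment): guard \(i\) is conjoined with the negation of every earlier guard, so at most one strengthened guard is true in any state.

```python
def strengthen(guards):
    """Make a guard list mutually exclusive by successive strengthening:
    guard i becomes (not g_1 and ... and not g_{i-1}) and g_i."""
    def make(i):
        return lambda env: guards[i](env) and not any(g(env) for g in guards[:i])
    return [make(i) for i in range(len(guards))]
```

Any state satisfying some original guard still satisfies exactly one strengthened guard (the first true one), which is why the result is an entirely deterministic refinement of the original set.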
Disjoint Disjunctive Form Two of the implementation schemes require that each guard be in so-called "disjoint disjunctive form". Each guard must be expressed in AND-OR form, and when the guard set is evaluated, at most one AND term is true. That is, a guard set is ddf, if for each expression $\langle\text{expr}\rangle_i$, $0 \leq i \leq n$,
$$\neg \overline{Q} \lor \langle \wedge j, k : 0 \leq j, k < m_i \wedge j \neq k :: \neg\langle\text{conj}\rangle_i^j \lor \neg\langle\text{conj}\rangle_i^k \rangle ,$$
where $\langle\text{expr}\rangle_i \equiv \langle \lor j : 0 \leq j < m_i :: \langle\text{conj}\rangle_i^j \rangle$ and each $\langle\text{conj}\rangle_i^j$ is a simple conjunction of possibly negated variables and probes.
The program transformation to achieve the ddf property is similar to the transformation of an arbitrary expression into disjunctive normal form, except that the conjuncts must be successively strengthened to achieve disjointness. See [1] for details. Notice that this transformation potentially increases the size of a guard set to an exponential in the number of its variables.
Negated Probes An expression is stable if once it becomes true, it remains true. The underlying compilation method allows only restricted implementations of non-stable expressions. Since process decomposition does not introduce concurrency, the guard set process is not active concurrently with any processes modifying variables; thus, all variables are stable in the guard set process. All positive probes are, as well, but negated probes are not. A negated probe may change asynchronously from true to false. We call a guard set noneg if it contains no negated probes.
Any expression containing a negated probe is potentially non-stable. We define a transformation that stabilizes all negated probes. Each probe in the guard set is evaluated and assigned to a local variable before the boolean expressions are evaluated. References of a probe's value within the guard set refer to the corresponding variable's value.
Let $\mathcal{X}$ be the set of all negated probes named in the guard set. To transform the guard set into \texttt{noneg} form,
$$
(Q(n+1,1)/\langle | i : 0 \leq i \leq n :: \langle \text{expr}\rangle_i \rangle) \\
\triangleright \ast[[\overline{Q} \longrightarrow \langle ; X : \overline{X} \in \mathcal{X} :: [\overline{X} \longrightarrow x{\uparrow} \mid \neg\overline{X} \longrightarrow x{\downarrow}] \rangle; \\
G' : [\langle | i : 0 \leq i \leq n :: i \longrightarrow Q(i) \rangle]]] \\
\parallel (G(n+1,1)/\langle | i : 0 \leq i \leq n :: \langle \text{expr}\rangle'_i \rangle),
$$
where, for $0 \leq i \leq n$,
$$
\langle \text{expr}\rangle'_i \equiv \langle X : \overline{X} \in \mathcal{X} :: \text{replace } \overline{X} \text{ by } x \rangle \, \langle \text{expr}\rangle_i .
$$
\textbf{Non-atomic Evaluation} If a guard set is not evaluated atomically, expressions that change value during the evaluation cause special problems. For example, the expression $\overline{X} \lor \neg\overline{X}$ may evaluate to false if different values for $\overline{X}$ are used in the two subexpressions. A guard set is \texttt{nonatomic} if the subexpressions within it can be evaluated in any order. This property is achieved if each probe in the guard set is named only once. The same transformation used to achieve the \texttt{noneg} property will put an arbitrary guard set in this form if we let $\mathcal{X}$ be the set of all probes named more than once.
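The snapshot idea behind these transformations — sample each unstable input once, then evaluate everything against the frozen copy — can be sketched in Python (an illustration with names of our choosing, not the circuit construction):

```python
def stabilize_and_eval(probes, exprs):
    """Latch each possibly-changing probe into a variable once, then
    evaluate all expressions against the frozen snapshot, so every
    occurrence of a probe sees the same value."""
    snapshot = {name: probe() for name, probe in probes.items()}
    return [expr(snapshot) for expr in exprs]
```

Without the snapshot, an expression naming the same probe twice could observe two different values; with it, each probe is sampled exactly once per evaluation round.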
4.8.2 Evaluation Schemes
\textbf{Sequential Guard Evaluation} This scheme requires that the guard set fulfill the \texttt{nonatomic} property. The guards are evaluated one by one until one evaluates to true. If none evaluate to true, the communication $Q(0)$ is performed. Process decomposition for this scheme may be defined recursively. If $n > 1$,
$$
(Q(n+1,1)/\langle | i : 0 \leq i \leq n :: \langle \text{expr}\rangle_i \rangle) \\
\triangleright \ast[[\overline{Q} \longrightarrow G' : [1 \longrightarrow Q(1) \\
\mid 0 \longrightarrow P' : [\langle | i : 2 \leq i \leq n :: i-1 \longrightarrow Q(i) \rangle \mid 0 \longrightarrow Q(0)]]]] \\
\parallel (G(2,1)/\langle \text{expr}\rangle_1) \parallel (P(n,1)/\langle | i : 2 \leq i \leq n :: \langle \text{expr}\rangle_i \rangle)
$$
and, if $n = 1$,
$$
(Q(2,1)/\langle | i : 0 \leq i \leq 1 :: \langle \text{expr}\rangle_i \rangle) \triangleright (Q(2,1)/\langle \text{expr}\rangle_1).
$$
We note that the \texttt{exclusive} property is not required for this decomposition. Conditional evaluation ensures mutual exclusion among the guards. For the same reason, $\langle \text{expr}\rangle_0$ is not used.
Evaluation of each individual guard is implemented conditionally. Sequential evaluation of the and connective starts by evaluating the first sub-expression. If the first sub-expression is true, the value of the second sub-expression determines the value of the conjunction. Otherwise, the value of the conjunction is false:
\[
(Q(2,1)/\langle\text{conjunct}\rangle) \triangleright (Q(2,1)/\langle\text{primary}\rangle_1 \text{ and } \langle\text{conjunct}\rangle_2) \\
\triangleright \ast[[\overline{Q} \rightarrow G'_1 : [1 \rightarrow G'_2 : [1 \rightarrow Q(1) \mid 0 \rightarrow Q(0)] \mid 0 \rightarrow Q(0)]]] \\
\parallel (G_1(2,1)/\langle\text{primary}\rangle_1) \parallel (G_2(2,1)/\langle\text{conjunct}\rangle_2).
\]
The sequential scheme and the next scheme (concurrent-all) share the same decompositions for the remaining expression constructs. De Morgan’s Law allows the or connective to be defined in terms of and and not:
\[
(Q(2,1)/\langle\text{expr}\rangle) \triangleright (Q(2,1)/\langle\text{conjunct}\rangle_1 \text{ or } \langle\text{expr}\rangle_2) \\
\triangleright (Q(2,1)/\text{not } (\text{not } \langle\text{conjunct}\rangle_1 \text{ and not } \langle\text{expr}\rangle_2)). \quad (15)
\]
Similarly, false is defined in terms of true and not. Negation exchanges the results of the evaluation:
\[
(Q(2,1)/\langle\text{primary}\rangle) \triangleright (Q(2,1)/\text{not } \langle\text{primary}\rangle_1) \\
\triangleright \ast[[\overline{Q} \rightarrow G' : [0 \rightarrow Q(1) \mid 1 \rightarrow Q(0)]]] \parallel (G(2,1)/\langle\text{primary}\rangle_1). \quad (17)
\]
The evaluation of a simple variable is implemented by the process:
\[
(Q(2,1)/\langle\text{primary}\rangle) \triangleright (Q(2,1)/\langle\text{NAME}\rangle) \\
\triangleright \ast[[\overline{Q} \rightarrow [x \rightarrow Q(1) \mid \neg x \rightarrow Q(0)]]]. \quad (19)
\]
If a probe is named only once in a guard set, the evaluation of a probe is implemented by the process:
\[
(Q(2,1)/\langle\text{primary}\rangle) \triangleright (Q(2,1)/\text{probe } \langle\text{NAME}\rangle) \\
\triangleright \ast[[\overline{Q} \rightarrow [\overline{X} \rightarrow Q(1) \mid \neg\overline{X} \rightarrow Q(0)]]]. \quad (20)
\]
The evaluation of true has an implementation similar to that of skip:
\[
(Q(2,1)/\langle\text{primary}\rangle) \triangleright (Q(2,1)/\text{true}) \\
\triangleright \ast[[\overline{Q} \rightarrow [\text{true} \rightarrow Q(1) \mid \text{false} \rightarrow Q(0)]]] \triangleright \ast[Q(1)]. \quad (21)
\]
Figure 5 shows all the translation rules used in the sequential guard evaluation scheme.
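The behavior these rules implement can be checked against an ordinary recursive evaluator (a Python model of our own, not the circuit semantics): only and, not, variables, and true are primitive; or is rewritten by De Morgan's law as in rule (15), and false in terms of true and not.

```python
def evaluate(expr, env):
    """Evaluate an expression tree using only the sequential-scheme
    primitives; 'or' and 'false' are rewritten into them."""
    op = expr[0]
    if op == "true":
        return True
    if op == "false":                       # false == not true
        return evaluate(("not", ("true",)), env)
    if op == "var":
        return env[expr[1]]
    if op == "not":
        return not evaluate(expr[1], env)
    if op == "and":                         # second operand evaluated only if first is true
        return evaluate(expr[1], env) and evaluate(expr[2], env)
    if op == "or":                          # De Morgan: a or b == not(not a and not b)
        rewritten = ("not", ("and", ("not", expr[1]), ("not", expr[2])))
        return evaluate(rewritten, env)
    raise ValueError("unknown operator: %r" % op)
```

The conditional `and` mirrors the nested `G'_1`/`G'_2` probes above: the second sub-expression is only consulted when the first returns true.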
Concurrent-all Guard Evaluation This scheme requires that the guard set fulfill both the exclusive and the nonatomic properties. Each guard is evaluated separately and concurrently. The variable corresponding to the true guard (exactly one because the exclusive property holds) is raised. When all guards have been evaluated, the communication corresponding to the set variable is performed, and then the variable is reset:
\[
(Q(n+1,1)/\langle | i : 0 \leq i \leq n :: \langle\text{expr}\rangle_i \rangle) \\
\triangleright \ast[[\overline{Q} \rightarrow \langle , i : 0 \leq i \leq n :: G'_i : [1 \rightarrow x_i{\uparrow} \mid 0 \rightarrow \text{skip}] \rangle; \\
[\langle | i : 0 \leq i \leq n :: x_i \rightarrow Q(i); x_i{\downarrow} \rangle]]] \parallel \langle \| i : 0 \leq i \leq n :: (G_i(2,1)/\langle\text{expr}\rangle_i) \rangle.
\]
To evaluate the and connective, both sub-expressions are evaluated concurrently and the results stored in variables. The conjunction of these values is returned by the communication $Q$:
$$(Q(2,1)/\langle\text{conjunct}\rangle) \triangleright (Q(2,1)/\langle\text{primary}\rangle_1 \text{ and } \langle\text{conjunct}\rangle_2)$$
$$\triangleright \ast[[\overline{Q} \rightarrow G'_1 : [1 \rightarrow x_1{\uparrow} \mid 0 \rightarrow x_1{\downarrow}], G'_2 : [1 \rightarrow x_2{\uparrow} \mid 0 \rightarrow x_2{\downarrow}]; [x_1 \wedge x_2 \rightarrow Q(1) \mid \neg x_1 \lor \neg x_2 \rightarrow Q(0)]]] \parallel (G_1(2,1)/\langle\text{primary}\rangle_1) \parallel (G_2(2,1)/\langle\text{conjunct}\rangle_2) .$$
Figure 6 shows the translation rules corresponding to concurrent-all guard evaluation.
**Concurrent-one Guard Evaluation** In this scheme, all guards are evaluated simultaneously. The evaluation of each guard (in fact, each conjunct of each guard) is implemented by a separate process. For this decomposition to be valid, no two of these processes may operate concurrently. This is ensured by both the exclusive and the ddf properties. After decomposition, each remaining basic process implements a simple conjunction of variables and probes:
$$(Q(n+1,1)/\langle | i : 0 \leq i \leq n :: \langle \lor j : 0 \leq j < m_i :: \langle\text{conj}\rangle_i^j \rangle \rangle) \triangleright \langle \| i : 0 \leq i \leq n :: \langle \| j : 0 \leq j < m_i :: \ast[[\overline{Q} \wedge \langle\text{conj}\rangle_i^j \rightarrow Q(i)]] \rangle \rangle .$$
The noneg property is required for the implementation of each conjunction.
**Concurrent-one-wait Guard Evaluation** In special cases, guard evaluation may be implemented as above without performing the all-false communication $Q(0)$. This evaluation scheme is possible when: i) the guard set is part of a selection statement (no repetitions); ii) the guard set has the exclusive and ddf properties; and iii) after replacing $\langle\text{expr}\rangle_0$ by false, no negated probes are named in the guard set.
4.8.3 Applying the Guard Evaluation Schemes to an Example
We illustrate the four different schemes for decomposing guard sets by applying these schemes to a small program fragment. Consider the program fragment:
\[ \ldots; [\overline{X} \land s \rightarrow \ldots \mid \overline{Y} \rightarrow \ldots]; \ldots \]
Syntax-directed application of the program transformations results in an intermediate form containing the process:
\[ (Q(3,1)/\overline{X} \land s \mid \overline{Y} \mid \neg(\overline{X} \land s \lor \overline{Y})) \]
We construct implementations of this guard set using the four different evaluation schemes. These circuits are shown in Figure 7. The number of two-input operators required for each implementation is used as a general space comparison between the schemes. For operators with more than two inputs, each extra input adds \( \frac{1}{2} \) to the operator count.
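The operator-counting convention is a one-line computation; the following helper (name ours) is a direct transcription of the rule above, taking the fan-in of each operator in a circuit:

```python
def operator_cost(fan_ins):
    """Space metric from the text: a two-input operator counts 1, and
    each input beyond the second adds 1/2 to the count."""
    return sum(1 + (k - 2) / 2 for k in fan_ins)
```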
**Sequential** Since the guard set has the nonatomic property (excluding the all-false expression), the decomposition is straightforward, requiring no initial transformation of the guard set. The resulting circuit requires 2 AND, 2 SYNC and 1 OR operators.
**Concurrent-all** We first put the guard set into a form that satisfies the exclusive property:
\[ \overline{X} \land s \mid \neg(\overline{X} \land s) \land \overline{Y} \mid \neg(\overline{X} \land s) \land \neg\overline{Y}. \]
Since \( \overline{X} \) and \( \overline{Y} \) are named more than once, this transformed guard set does not have the nonatomic property. However, in this scheme, the evaluation of common subexpressions can be shared between guards. In particular, probes need only be evaluated once per guard set; thus, the nonatomic property is satisfied. The resulting circuit requires 2 AND, 2 SYNC, 16\( \frac{1}{2} \) C, and 7\( \frac{1}{2} \) OR operators.
Concurrent-one In this case, the guard set must satisfy both the exclusive and the ddf properties. The transformed guard set is:
\[ \overline{X} \land s \mid \neg\overline{X} \land \overline{Y} \lor \overline{X} \land \neg s \land \overline{Y} \mid \neg\overline{X} \land \neg\overline{Y} \lor \overline{X} \land \neg s \land \neg\overline{Y}. \]
Notice the extra literals needed to ensure disjointness among the conjuncts. Since negated probes are named in this guard set, the value of each probe is assigned to a variable, thus satisfying the noneg property. The resulting circuit requires 2 SYNC, 2 FF, 1 OR and 12 \(\frac{1}{2}\) AND operators.
Concurrent-one-wait In order to use this scheme, the exclusive property must hold on the original guard set, that is, \(\neg\overline{Q} \lor \neg\overline{X} \lor \neg s \lor \neg\overline{Y}\) must be an invariant of the original program. If this is the case, the guard set
\[ \overline{X} \land s \mid \overline{Y} \mid \text{false} \]
can be implemented directly without any transformations, resulting in a simple circuit requiring only 2 \(\frac{1}{2}\) AND operators.
4.8.4 Comparison of the Guard Evaluation Schemes
In the above example, the sequential and the concurrent-one-wait schemes produce the most efficient circuits. This is not always the case. For other guard sets, each of the schemes can produce the best implementation. We discuss pros and cons of the different schemes.
Sequential The sequential scheme provides a straightforward implementation of an arbitrary guard set in space that is proportional to the size of the guard set’s representation in the source language. Unfortunately, because of its sequential nature, the time needed for evaluation is also linearly related to its size.
Concurrent-all This evaluation scheme offers several potential advantages over the previous one. For guard sets with many guards, the time needed to evaluate the guard set is proportional to the logarithm of the number of guards. The ability to do common sub-expression elimination at no cost is an added benefit; however, the basic processes have larger implementations. While this scheme has the best asymptotic area-time performance, we have yet to find an application large enough to reap its benefits.
Concurrent-one While exponential blow-up may occur when transforming pathological guard sets into disjoint disjunctive form, this scheme produces the smallest and fastest implementations of most expressions that do not contain probes.
Figure 7: The four circuits show four different guard evaluation schemes applied to the guard set $\overline{X} \land s | \overline{Y}$. From top to bottom, the circuits were produced by the sequential, concurrent-all, concurrent-one, and concurrent-one-wait schemes. In the circuit produced by the concurrent-all scheme, the boxes enclosing the $\land$ symbol represent the circuit implementing the and construct, as shown in Figure 6.
Concurrent-one-wait In the cases when the programmer can prove the exclusive property without introducing negated probes, this scheme applies and provides a non-polling implementation that does not dissipate any static power. Again, exponential blow-up may occur, but typical implementations are much smaller and much faster than the other schemes.
5 Automatic Compiler
We have constructed an automatic compiler which applies the translation rules derived in this paper. The self-timed circuit description produced by the compiler is then used as input by an automatic place-and-route tool which produces a standard cell implementation of the circuit in VLSI. Using this completely automatic design method, we have fabricated a functionally correct chip implementing a worm-hole message routing system [2].
The translation method produces correct, self-timed implementations of arbitrarily large concurrent programs, and because each translation rule is of fixed size, the size of the implementation is no worse than linearly related to the size of the source program. The translation method and the compiler provide a constructive proof that this design methodology, based on programs, represents a practical approach to the design of VLSI systems.
Acknowledgments and References
We wish to thank Pieter Hazewindus for his comments on early versions of this manuscript and Andy Fyfe for his POSTSCRIPT expertise. This research is sponsored by the Defense Advanced Research Projects Agency, ARPA Order number 6202, and monitored by the Office of Naval Research under contract number N00014-87-K-0745.
STRENGTHEN AND SUPPORT THE MAINTENANCE OF OBJECT-ORIENTED SOFTWARE
Ming-Chi Lee
Dept. of Business Administration, National Ping Tung Institute of Commerce
Taiwan, R.O.C.
E-mail: lmc@sun1.npic.edu.tw
Timothy K. Shih and Teh-Sheng Huang
Dept. of Computer Science and Information Engineering, Tamkang University
Taiwan, R.O.C.
ABSTRACT
Inheritance is one of the most common features of object-oriented languages and has been widely applied to develop large and complex software systems. However, designing a suitable inheritance hierarchy is a difficult task: hierarchies containing redundant inheritance easily suffer from name conflicts and repeated inheritance, which are error-prone and difficult to test. In this paper, we explain how redundant inheritance makes object-oriented programs difficult to test and maintain, and we give a concrete example of the problems that arise. We show that the difficulty lies in the fact that we lack an effective detection tool suited to inheritance problems. Therefore, a formal checking mechanism is proposed to detect and resolve redundant inheritance. Furthermore, this checking mechanism can easily be incorporated into an object-oriented CASE tool to enhance software quality.
Keywords: Inheritance Hierarchy, Redundant Inheritance, Repeated Inheritance, Inference Rule, Object-Oriented Program
INTRODUCTION
Inheritance is a relationship among classes wherein one class shares the structure or behavior defined in one (single inheritance) or more (multiple inheritance) other classes. A class hierarchy consists of a set of ordered inheritance relationships and is often denoted as a directed acyclic graph. It plays a vital role in, and forms the fabric of, object-oriented design (OOD). However, designing a suitable inheritance hierarchy, especially with multiple inheritance, is a difficult task (Moises et al, 1992), because several problems associated with multiple inheritance are still in debate, such as name collisions and repeated inheritance (Cardelli, 1984; Meyer, 1988). Booch (1991) showed that we have never been able to define a class hierarchy right the first time except for trivially small cases. In practice, the design of a class hierarchy is an incremental and iterative process. If a language allows multiple inheritance, then sooner or later someone is going to write a redundant inheritance. For example, given a class hierarchy \( \Omega = \{ \alpha_1 \rightarrow \alpha_2, \alpha_2 \rightarrow \alpha_3, \alpha_3 \rightarrow \alpha_4, \alpha_4 \rightarrow \alpha_5 \} \), if we add a new inheritance relationship \( \alpha_1 \rightarrow \alpha_4 \), then \( \alpha_1 \rightarrow \alpha_4 \) is called a redundant inheritance because \( \alpha_4 \) already inherits indirectly from \( \alpha_1 \). After adding \( \alpha_1 \rightarrow \alpha_4 \), \( \alpha_4 \) inherits twice (or more) from \( \alpha_1 \). This situation suffers from the name-conflict problem and ambiguous method invocation. As a consequence, many implicit software faults that are very difficult to test are generated (Chung & Lee, 1997). Although some object-oriented programming languages permit such constructs, it is necessary to detect and refine them before coding (Meyer, 1988).
We argue that an inheritance-based checking mechanism is essential for effective testing and maintenance of object-oriented programs. In addition, it is worth noting that an inheritance hierarchy is dynamic rather than static during the lifetime of OO software development. It is therefore almost impossible to maintain an unambiguous and nonredundant inheritance hierarchy without a checking mechanism.
In Section 2, we introduce the redundant inheritance problem associated with repeated inheritance and name conflicts. In Section 3, inference rules and inheritance constraints are introduced, and we show how to compute the closure set of an inheritance hierarchy by inference rule. In Section 4, the concept of a minimal class hierarchy is proposed to address the issue of redundant inheritance, together with a simple but delicate method to compute the closure of a set of classes with respect to a class hierarchy. In Section 5, we derive an algorithm based on the concept of minimal class hierarchy to determine the redundant inheritances in an inheritance hierarchy, and three approaches to resolving redundant inheritances are presented. Finally, Section 6 presents conclusions and future research.
REDUNDANT INHERITANCE AND BASIC WORKS
Basically, inheritance relationships have two different structures. One is single inheritance, which allows every class to inherit from at most one superclass. In contrast, multiple inheritance allows a class to inherit from more than one superclass. In OOD, designing a suitable class hierarchy involving multiple inheritance is a difficult task. Two problems present themselves when we have multiple inheritance: how to deal with name conflicts from different superclasses, and how to handle redundant inheritance. Name conflicts are possible when two or more different superclasses use the same name for some element of their interfaces, such as instance
variables and methods (Carre & Geib, 1990). In part (a) of Fig. 1, class US-Driver and class China-Driver both have a method named traffic-violation, representing the number of traffic violations in the US or China, respectively. If someone holds driver licenses in both the US and China, a class US-China-Driver is declared that inherits from both of these classes; what does it mean to inherit two traffic-violation methods with the same name? There are basically three approaches to resolving this clash. First, the language semantics might regard a name conflict as illegal and reject the compilation of the class. This is the approach taken by Smalltalk and Eiffel (Borning & Ingalls, 1982); in Eiffel, however, it is possible to rename items so that there is no ambiguity. Second, the language semantics might regard the same name introduced by different classes as referring to the same traffic-violation, which is the approach taken by CLOS. Third, the language semantics might permit the conflict but require that all references to the name fully qualify the source of its declaration. This is the approach taken by C++.
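For illustration (in a language outside those discussed in the text), Python resolves the same clash in the CLOS style: superclasses are linearized by the C3 method resolution order, so a diamond contributes its shared base only once. A hypothetical Python rendering of the classes from the example:

```python
class Driver:
    def traffic_violation(self):
        return "Driver"

class USDriver(Driver):
    pass

class ChinaDriver(Driver):
    def traffic_violation(self):
        return "ChinaDriver"

class USChinaDriver(USDriver, ChinaDriver):
    pass

# Repeated inheritance of Driver is linearized: Driver appears once in
# the MRO, and the name clash is resolved by linearization order rather
# than being rejected at compile time.
assert USChinaDriver.__mro__.count(Driver) == 1
assert USChinaDriver().traffic_violation() == "ChinaDriver"
```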
The second problem is redundant inheritance, which arises from repeated inheritance, where a class is an ancestor of another in more than one way. Consider the inheritance graph in Fig.1-b. A base class Driver is declared to be the parent class of both class US-Driver and class China-Driver, so class US-China-Driver inherits twice from class Driver. This situation is called repeated inheritance. If, furthermore, the class Driver is declared to be a direct parent class of class US-China-Driver, then this inheritance is called a redundant inheritance, because class US-China-Driver could already inherit all the values and behaviors of class Driver through class US-Driver or class China-Driver (see Fig.1-c). From the programming viewpoint, redundant inheritance is not only unnecessary but also error-prone: in such a graph, class US-China-Driver inherits twice (or more) from class Driver. Redundant inheritance is often confused with repeated inheritance. To clarify the difference formally, repeated and redundant inheritance are defined as follows:
Definition 1: Given two classes αi and αj in some class hierarchy Ω, if there exist two or more inheritance paths between αi and αj, all the classes on these inheritance paths form a repeated inheritance.
Definition 2: Given a class hierarchy Ω with n inheritances, if for an inheritance relation αi→αj ∈ Ω there exist k other inheritances (1 < k < n) such that αi→αi1→αi2→…→αik→αj, then αi→αj is called a redundant inheritance.
Basically, redundant inheritance arises as an extension of repeated inheritance. In this paper, we call a repeated inheritance graph that contains a redundant inheritance a redundant inheritance graph; consequently, a redundant inheritance graph must also be a repeated inheritance graph. Apparently, Fig.1-c is just such a redundant inheritance graph. From the above analysis, we can derive a mathematical expression relating multiple inheritance graphs, repeated inheritance graphs, and redundant inheritance graphs. This relation is helpful to reduce the occurrence of controversial and error-prone inheritances.
redundant inheritance graph \( \subseteq \) repeated inheritance graph \( \subseteq \) multiple inheritance graph (1)
For a large OOD system, the detection of redundant inheritance is a time-consuming task. In the next section, inference rules are used to address redundant inheritance. These inference rules can be easily programmed and maintained in OO software development.
INHERITANCE CONSTRAINTS AND INFERENCE RULES
One of the key issues in object-oriented design is certainly the specification of inheritance constraints. In the object-oriented design (OOD) phase, the system designer declares inheritance constraints initially and modifies them later. Since inheritance constraints must be neither ambiguous nor redundant, some external checking mechanism must be provided, and the constraints detected during OO design should be embedded in a way that preserves unambiguity and non-redundancy. The classification of inheritance constraints is as follows:
1. Single Inheritance Constraints (SIC): Given two classes, \( \alpha \), and \( \beta \), \( \alpha \rightarrow \beta \) means that \( \beta \) is a subclass of \( \alpha \), and that every instance in \( \beta \) is also an instance in \( \alpha \).
2. Multiple Inheritance Constraints (MIC): Given three classes, \( \alpha \), \( \beta \), and \( \gamma \), \( \alpha \beta \rightarrow \gamma \) means that \( \gamma \) is a subclass of \( \alpha \) and also a subclass of \( \beta \). \( \alpha \beta \rightarrow \gamma \) implies that \( \alpha \rightarrow \gamma \wedge \beta \rightarrow \gamma \).
We can characterize the constraint membership problem in an inference rule system. In what follows, we assume that we are given a set of classes denoted by \( \Psi \) and a class hierarchy denoted by \( \Omega \), involving only classes in \( \Psi \). The inference rules are:
**Axiom 1:** The inference rules for single inheritance constraints are
1. Transitivity: given three classes, \( \alpha \), \( \beta \), and \( \gamma \in \Psi \), if \( \alpha \rightarrow \beta \) and \( \beta \rightarrow \gamma \) hold, then \( \alpha \rightarrow \gamma \).
2. Union: if \( \alpha \rightarrow \beta \) and \( \alpha \rightarrow \gamma \) hold, then \( \alpha \rightarrow \beta \gamma \).
3. Decomposition: if \( \alpha \rightarrow \beta \gamma \) holds, then \( \alpha \rightarrow \beta \) and \( \alpha \rightarrow \gamma \).
There are several other inference rules that follow from Axiom 1. We state two of them in the next theorem.
**Theorem 1:** The inference rules for multiple inheritance constraints are
1. The composition rule: given classes \( \beta \), \( \alpha_1 \), \( \alpha_2 \), ..., \( \alpha_n \in \Psi \), if \( \alpha_i \rightarrow \beta \) for every \( i=1, ..., n \), then \( \alpha_1 \alpha_2 ... \alpha_n \rightarrow \beta \).
2. The pseudotransitivity rule: given classes \( \alpha \), \( \beta \), \( \gamma \) and \( \phi \), if \( \alpha \rightarrow \beta \) and \( \phi \beta \rightarrow \gamma \) hold, then \( \alpha \phi \rightarrow \gamma \) holds.
**Proof:**
1): We are given \( \alpha_i \rightarrow \beta \) for \( i = 1, \ldots, n \). For \( n = 2 \), a class \( \beta \) that is a subclass of both \( \alpha_1 \) and \( \alpha_2 \) satisfies \( \alpha_1 \alpha_2 \rightarrow \beta \) by the definition of a multiple inheritance constraint, so \( \alpha_1 \rightarrow \beta \) and \( \alpha_2 \rightarrow \beta \) deduce \( \alpha_1 \alpha_2 \rightarrow \beta \). By induction on \( n \), \( \alpha_1 \alpha_2 \ldots \alpha_n \rightarrow \beta \) holds.
2): By the definition of a multiple inheritance constraint, \( \phi \beta \rightarrow \gamma \) tells us that \( \phi \rightarrow \gamma \) and \( \beta \rightarrow \gamma \) hold. From \( \alpha \rightarrow \beta \) and \( \beta \rightarrow \gamma \), transitivity gives \( \alpha \rightarrow \gamma \). By the composition rule, \( \alpha \rightarrow \gamma \) and \( \phi \rightarrow \gamma \) imply \( \alpha \phi \rightarrow \gamma \).
Before tackling the main issues in this paper, it is important to introduce transitivity rule and the closure computation with respect to a given class hierarchy.
**A. Transitivity Rule for Inheritance Hierarchy**
Given a class hierarchy \( \Omega \) consisting of \( \alpha \rightarrow \beta \) and \( \beta \rightarrow \gamma \), we can claim that \( \alpha \rightarrow \gamma \) must also be derivable from \( \Omega \).
This is easily proved by the transitivity property. In general, let \( \Omega \) be a set of inheritance relations. We say \( \xi \rightarrow \eta \) is indirectly inherited from \( \Omega \), written \( \Omega \Rightarrow \xi \rightarrow \eta \). We saw above that if \( \Omega \) contains \( \alpha \rightarrow \beta \) and \( \beta \rightarrow \gamma \), then \( \alpha \rightarrow \gamma \) is indirectly inherited from \( \Omega \); that is, \( \{ \alpha \rightarrow \beta, \beta \rightarrow \gamma \} \Rightarrow \alpha \rightarrow \gamma \). Let \( \Omega^* \), the closure of \( \Omega \), be the set of inheritance relations that are indirectly inherited from \( \Omega \), i.e., \( \Omega^* = \{ \xi \rightarrow \eta \mid \Omega \Rightarrow \xi \rightarrow \eta \} \).
For example, let \( \Psi = \{ \alpha, \beta, \gamma \} \) and \( \Omega = \{ \alpha \rightarrow \beta, \beta \rightarrow \gamma \} \). Then \( \Omega^* \) consists of all those inheritances \( \xi \rightarrow \eta \) such that either
1. $\xi$ contains $\alpha$, e.g., $\alpha \beta \rightarrow \gamma$, $\alpha \rightarrow \beta$, or $\alpha \gamma \rightarrow \beta$,
2. $\xi$ contains $\beta$ but not $\alpha$, and $\eta$ does not contain $\alpha$, e.g., $\beta \rightarrow \gamma$ or $\beta \gamma \rightarrow \gamma$, and
3. $\xi \rightarrow \eta$ is the trivial $\gamma \rightarrow \gamma$.
It turns out that computing the closure set for a class hierarchy $\Omega$ is a time-consuming task in general, simply because the set of relationships in $\Omega^*$ can be large even though $\Omega$ itself is small. Consider the set $\Omega = \{ \alpha \rightarrow \beta_1, \alpha \rightarrow \beta_2, \ldots, \alpha \rightarrow \beta_n \}$. Then $\Omega^*$ includes the inheritance $\alpha \rightarrow \psi$ for every nonempty subset $\psi$ of $\{ \beta_1, \beta_2, \ldots, \beta_n \}$; the number of such derived inheritances is $C(n,1) + C(n,2) + \ldots + C(n,n) = 2^n - 1$. As there are exponentially many such inheritances, we cannot expect to list $\Omega^*$ conveniently, even for a reasonably sized $n$. By contrast, computing $\xi^*$, for a set of classes $\xi$, is not hard; it takes time proportional to the length of all inheritances in $\Omega$, written out. Instead of computing the tedious $\Omega^*$, $\xi^*$ is informative enough to tackle the cyclic and redundant inheritance issues. In the next section, an algorithm to compute $\xi^*$ will be presented.
B. Equivalence of Two Class Hierarchies
Let $\Omega$ and $\Theta$ be two class hierarchies; $\Omega$ and $\Theta$ are said to be equivalent if and only if $\Omega^* = \Theta^*$. To test whether $\Omega$ and $\Theta$ are equivalent, we must verify whether both $\Omega^* \subseteq \Theta^*$ and $\Theta^* \subseteq \Omega^*$ hold. The verification of the equivalence of two class hierarchies by inference rules is relatively time-consuming. An illustration is shown as follows:
**Example A:**
Given a class hierarchy $\Omega$ consisting of two multiple inheritance constraints (MIC) and two single inheritance constraints (SIC) as follows:
\[
\Omega = \begin{cases}
SIC: & \alpha_1 \rightarrow \alpha_2 \alpha_3 \alpha_4, \quad \alpha_6 \rightarrow \alpha_5 \\
MIC: & \alpha_2 \alpha_3 \rightarrow \alpha_5, \quad \alpha_3 \alpha_4 \rightarrow \alpha_6
\end{cases}
\]
We can find other equivalent class hierarchies whose closure is equal to $\Omega^*$. Suppose $\Theta$ is an equivalent one which contains four single inheritance constraints and two multiple inheritance constraints as follows:
\[
\Theta = \begin{cases}
SIC: & \alpha_1 \rightarrow \alpha_2, \quad \alpha_1 \rightarrow \alpha_3, \quad \alpha_1 \rightarrow \alpha_4, \quad \alpha_6 \rightarrow \alpha_5 \\
MIC: & \alpha_2 \alpha_3 \rightarrow \alpha_5, \quad \alpha_3 \alpha_4 \rightarrow \alpha_6
\end{cases}
\]
By use of Axiom 1 and Theorem 1, we can verify that $\Omega$ and $\Theta$ are equivalent by checking whether $\Omega^*$ is equal to $\Theta^*$. However, we cannot expect to list $\Omega^*$ and $\Theta^*$ conveniently. A useful alternative equivalence is addressed in the next theorem.
**Theorem 2:** For each class hierarchy $\Omega$, there is an equivalent class hierarchy $\Theta$ in which neither side of any inheritance relation has more than one class.
**Proof:**
(1) Replace each inheritance $\alpha \rightarrow \psi$ in $\Omega$ whose right side $\psi = \phi_1 \ldots \phi_m$ contains more than one class by $\alpha \rightarrow \phi_1, \ldots, \alpha \rightarrow \phi_m$. Each $\alpha \rightarrow \phi_i$ follows from $\alpha \rightarrow \psi$ by the decomposition rule, so $\Theta \subseteq \Omega^*$; conversely, $\alpha \rightarrow \psi$ follows from $\alpha \rightarrow \phi_1, \ldots, \alpha \rightarrow \phi_m$ by the union rule, so $\Omega \subseteq \Theta^*$.
(2) Replace each inheritance $\psi \rightarrow \beta$ in $\Omega$ whose left side $\psi = \phi_1 \ldots \phi_n$ contains more than one class by $\phi_1 \rightarrow \beta, \ldots, \phi_n \rightarrow \beta$. Each $\phi_i \rightarrow \beta$ follows from $\psi \rightarrow \beta$ by the definition of a multiple inheritance constraint, and $\psi \rightarrow \beta$ follows from $\phi_1 \rightarrow \beta, \ldots, \phi_n \rightarrow \beta$ by the composition rule; hence again $\Theta$ and $\Omega$ are equivalent.
**Example B**
Consider the class hierarchy $\Omega$ in Example A again. By Theorem 2, we can find an equivalent class hierarchy $\Theta$ in which each side of every inheritance relation contains only one class:
\[
\Omega = \begin{cases}
SIC: & \alpha_1 \rightarrow \alpha_2 \alpha_3 \alpha_4, \quad \alpha_6 \rightarrow \alpha_5 \\
MIC: & \alpha_2 \alpha_3 \rightarrow \alpha_5, \quad \alpha_3 \alpha_4 \rightarrow \alpha_6
\end{cases}
\quad \xRightarrow{\text{equivalent}} \quad
\Theta = \begin{cases}
\alpha_1 \rightarrow \alpha_2, \quad \alpha_1 \rightarrow \alpha_3, \quad \alpha_1 \rightarrow \alpha_4, \quad \alpha_6 \rightarrow \alpha_5 \\
\alpha_2 \rightarrow \alpha_5, \quad \alpha_3 \rightarrow \alpha_5, \quad \alpha_3 \rightarrow \alpha_6, \quad \alpha_4 \rightarrow \alpha_6
\end{cases}
\]
It turns out to be useful, when we develop a class hierarchy design, to consider a stronger restriction on equivalence than requiring that each side contain only one class.
**MINIMAL CLASS HIERARCHY**
In this section, we propose a useful concept called *minimal class hierarchy* to address redundant inheritance. This minimal class hierarchy plays a vital role in our checking mechanism for redundant inheritance.
*Definition 3:* A class hierarchy is said to be *minimal* if:
1. Each side of every inheritance relation in $\Omega$ is a single class.
2. There is no $\xi \rightarrow \eta \in \Omega$ such that $(\Omega - \{\xi \rightarrow \eta\})$ is equivalent to $\Omega$.
*Theorem 3:* If a class hierarchy $\Omega$ contains redundant inheritance relationships, then there is a *minimal* class hierarchy $\Theta$ such that $\Theta^\ast = \Omega^\ast$.
*Proof:* We must show that such a $\Theta$ satisfies the two conditions of Definition 3.
1. By Theorem 2, all the inheritance relationships in $\Omega$ can be decomposed into an equivalent class hierarchy in which neither side of any relation has more than one class.
2. Because $\Omega$ contains redundant inheritance relationships, there exists at least one $\xi \rightarrow \eta$ such that $(\Omega - \{\xi \rightarrow \eta\})^\ast = \Omega^\ast$, so $\xi \rightarrow \eta$ can be removed. Repeating this removal until no such relation remains yields $\Theta$.
*Example C:* Consider the class hierarchy $\Theta$ in *Example B* again. We apply Theorem 3 to $\Theta$, then get a minimal class hierarchy denoted $\Delta$ equivalent to $\Theta$ shown as follows.
$$
\Theta = \left\{ \begin{array}{cccc}
\alpha_1 \rightarrow \alpha_2 & \alpha_1 \rightarrow \alpha_3 & \alpha_1 \rightarrow \alpha_4 & \alpha_6 \rightarrow \alpha_5 \\
\alpha_2 \rightarrow \alpha_5 & \alpha_3 \rightarrow \alpha_5 & \alpha_3 \rightarrow \alpha_6 & \alpha_4 \rightarrow \alpha_6
\end{array} \right\}
\quad \Rightarrow \quad
\Delta = \left\{ \begin{array}{cccc}
\alpha_1 \rightarrow \alpha_2 & \alpha_1 \rightarrow \alpha_3 & \alpha_1 \rightarrow \alpha_4 & \alpha_6 \rightarrow \alpha_5 \\
\alpha_2 \rightarrow \alpha_5 & \alpha_3 \rightarrow \alpha_6 & \alpha_4 \rightarrow \alpha_6
\end{array} \right\}
$$
The redundant relation $\alpha_3 \rightarrow \alpha_5$ is removed, since $\alpha_3 \rightarrow \alpha_6$ and $\alpha_6 \rightarrow \alpha_5$ already imply it.
By comparing *Example A* and *Example C*, we can conclude that $\Omega$, $\Theta$, and $\Delta$ are mutually equivalent (i.e., $\Omega^\ast = \Theta^\ast = \Delta^\ast$). Theorem 3 yields a useful property: a minimal class hierarchy is itself a non-redundant class hierarchy, which is just the non-redundancy we are seeking. A redundant inheritance detection algorithm follows straightforwardly from Theorem 3: for each inheritance $\xi \rightarrow \eta \in \Omega$, if $\Omega^\ast$ is equal to $(\Omega - \{\xi \rightarrow \eta\})^\ast$, then $\xi \rightarrow \eta$ is a redundant inheritance.
*Algorithm 1:* Find out Redundant Inheritances with respect to a class hierarchy
**Input:** A class hierarchy $\Omega=\{\epsilon_1, \epsilon_2, \ldots, \epsilon_n\}$ with $n$ inheritances.
**Output:** A redundant inheritance set $\Gamma$
1. $\Gamma = \emptyset$; initialize $\Gamma$ as the empty set
2. for each $\epsilon_i \in \Omega$, $i = 1,2,\ldots,n$
   - if $(\Omega - \{\epsilon_i\})^\ast = \Omega^\ast$ then add $\epsilon_i$ to $\Gamma$
However, this algorithm is not efficient because computing $\Omega^\ast$ takes exponential time. Fortunately, there is an alternative. At the other extreme, computing $\xi^\ast$, for a set of classes $\xi$, is not hard; it takes time proportional to the length of all inheritances in $\Omega$, written out. A more efficient algorithm based on $\xi^\ast$ will be proposed in the next section. Now we define $\xi^\ast$ formally as follows:
*Definition 4:* Let $\Omega$ be a class hierarchy on a set of classes denoted by $\Psi$, and let $\xi$ be a subset of $\Psi$. Then the closure of $\xi$ with respect to $\Omega$, denoted by $\xi^\ast$, is defined as the set of classes $\eta$ such that $\xi \rightarrow \eta$ can be deduced from $\Omega$ by inference rule.
The redundant inheritance detection could be achieved by checking whether $\xi^\ast$ still contains $\eta$ after removing $\xi \rightarrow \eta$. If it does, then $\xi \rightarrow \eta$ is a redundant inheritance relationship. A simple algorithm to compute $\xi^\ast$ is shown in the following.
Algorithm 2: Computation of the Closure of a set of classes with respect to a class hierarchy
Input: A finite set of classes \( \Psi \), a class hierarchy \( \Omega \) on \( \Psi \), and a set \( \xi \subseteq \Psi \).
Output: \( \xi^* \), the closure of \( \xi \) with respect to \( \Omega \).
Method: We compute a sequence of sets of classes \( \xi^{(0)}, \xi^{(1)}, \ldots \) by the rules:
1. \( \xi^{(0)} \leftarrow \xi \)
2. \( \xi^{(i+1)} = \xi^{(i)} \cup \{ \eta \mid \text{there is some inheritance } Y \rightarrow Z \in \Omega, \eta \in Z, \text{ and } Y \subseteq \xi^{(i)} \} \).
3. Repeat step 2 until \( \xi^{(i+1)} = \xi^{(i)} \) for some \( i \); then \( \xi^* = \xi^{(i)} \).
In the following, we use an example to illustrate the algorithm. To illustrate it clearly, a complex class hierarchy is chosen instead of a trivial one. Although this hierarchy is complex and even contains cyclic inheritance, which is semantically controversial, the goal here is only to demonstrate the execution of the algorithm; we do not require that the hierarchy be practical to program.
Example: Let \( \Omega \) consist of the following inheritances:
\[
\begin{align*}
\alpha_1 \rightarrow \alpha_3, & \quad \alpha_3 \rightarrow \alpha_2, \quad \alpha_2 \rightarrow \alpha_4, \\
\alpha_4 \rightarrow \alpha_5 \alpha_6, & \quad \alpha_5 \rightarrow \alpha_3, \\
\alpha_6 \rightarrow \alpha_2 \alpha_4, & \quad \alpha_2 \rightarrow \alpha_6
\end{align*}
\]
and let \( \xi = \alpha_2 \alpha_4 \). To apply Algorithm 2, we let \( \xi^{(0)} = \alpha_2 \alpha_4 \). To compute \( \xi^{(1)} \), we look for inheritances in \( \Omega \) whose left side is contained in \( \xi^{(0)} \), i.e., is \( \alpha_2 \) or \( \alpha_4 \). These are \( \alpha_2 \rightarrow \alpha_4 \), \( \alpha_2 \rightarrow \alpha_6 \), and \( \alpha_4 \rightarrow \alpha_5 \alpha_6 \), so we adjoin \( \alpha_5 \) and \( \alpha_6 \) and make \( \xi^{(1)} = \alpha_2 \alpha_4 \alpha_5 \alpha_6 \). For \( \xi^{(2)} \), we look for left sides contained in \( \xi^{(1)} \) and find, in addition, \( \alpha_5 \rightarrow \alpha_3 \) and \( \alpha_6 \rightarrow \alpha_2 \alpha_4 \). Thus \( \xi^{(2)} = \alpha_2 \alpha_3 \alpha_4 \alpha_5 \alpha_6 \). For \( \xi^{(3)} \), the only newly applicable inheritance is \( \alpha_3 \rightarrow \alpha_2 \), which adds nothing, so \( \xi^{(3)} = \xi^{(2)} \). It therefore comes as no surprise that \( \xi^{(4)} = \xi^{(5)} = \ldots \), and \( \xi^* = \alpha_2 \alpha_3 \alpha_4 \alpha_5 \alpha_6 \). (Note that \( \alpha_1 \) never appears on a right side, so it cannot enter the closure.)
Time Complexity and Data Structure Analysis for Algorithm 2
Algorithm 2 can be implemented to run in time proportional to the sum of the lengths of the inheritances in \( \Omega \). For each inheritance \( Y \rightarrow Z \) we keep a count of the number of classes in \( Y \) that are not yet in \( \xi^{(i)} \), and we maintain \( \xi^{(i)} \) as a Boolean array indexed by class number. Whenever a class is added to \( \xi^{(i)} \), we decrement by one the count of every inheritance whose left side contains that class. When the count for \( Y \rightarrow Z \) becomes 0, we know \( Y \subseteq \xi^{(i)} \), so \( \xi \rightarrow Y \) and, by transitivity, the classes of \( Z \) can be adjoined to \( \xi^{(i)} \); this takes time proportional to the size of \( Z \). When computing \( \xi^{(i+1)} \) from \( \xi^{(i)} \) we have only to set to true the entries of the array corresponding to the classes added; there is no need to copy \( \xi^{(i)} \).
Now the problem of proving that algorithm 2 is correct must be addressed. It is easy to prove that every class placed in some \( \xi^{(k)} \) belongs in \( \xi^* \), but harder to show that every class in \( \xi^* \) is placed in some \( \xi^{(k)} \).
Theorem 4: Algorithm 2 correctly computes \( \xi^* \).
Proof: First we show by induction on \( k \) that if a class \( \eta \) is placed in \( \xi^{(k)} \), then \( \eta \) is in \( \xi^* \).
Basis: \( k = 0 \). Then \( \eta \) is in \( \xi^{(0)} = \xi \), so by reflexivity, \( \xi \rightarrow \eta \).
Induction: Let \( k > 0 \) and assume that \( \xi^{(k-1)} \) consists only of classes in \( \xi^* \). Suppose \( \eta \) is placed in \( \xi^{(k)} \) because there is an inheritance \( Y \rightarrow Z \in \Omega \) with \( \eta \in Z \) and \( Y \subseteq \xi^{(k-1)} \). By the inductive hypothesis, \( \xi \rightarrow \phi \) holds for every class \( \phi \in Y \), so \( \xi \rightarrow Y \) by the union rule. By transitivity, \( \xi \rightarrow Y \) and \( Y \rightarrow Z \) imply \( \xi \rightarrow Z \), and by the decomposition rule \( \xi \rightarrow \eta \). Thus \( \eta \) is in \( \xi^* \).
REDUNDANT INHERITANCE DETECTION AND RESOLUTION
In this section, a detection algorithm for redundant inheritance will be presented, together with three approaches to resolving redundant inheritances. In Section 3 we showed that computing the closure set of a class hierarchy, for example \( \Omega \), is very time-consuming, whereas computing \( \xi^* \), for a set of classes \( \xi \), is not hard. Previously, \( \xi^* \) was defined as the closure of a set of classes \( \xi \) on some class hierarchy. Because an OO design may contain several different class hierarchies, this notation does not specify to which hierarchy the closure refers; for precision, the closure of $\xi$ on $\Omega$ is denoted $\xi^*\Omega$. The idea of the algorithm is that, for each $\xi \rightarrow \eta \in \Omega$, if $\eta \in \xi^*(\Omega - \{\xi \rightarrow \eta\})$, then $\xi \rightarrow \eta$ is a redundant inheritance.
**Algorithm 3:** Find out Redundant Inheritances
Input: A class hierarchy $\Omega$
Output: A set of redundant inheritance $\Gamma$
1. Apply Theorem 2 to $\Omega$ to get an equivalent class hierarchy $\Theta$
2. For each $\xi \rightarrow \eta \in \Theta$ do
(1) Apply Algorithm 2 to compute $\xi^*$ with respect to $\Theta - \{\xi \rightarrow \eta\}$.
(2) If $\eta \in \xi^*$ then add $\xi \rightarrow \eta$ to $\Gamma$
**Example D:** Given a class hierarchy $\Omega$, find out its redundant inheritances. Let $\Omega$ consist of the following inheritances:
$$\Omega = \{ \alpha_1 \rightarrow \alpha_2 \alpha_3, \alpha_2 \rightarrow \alpha_3, \alpha_3 \rightarrow \alpha_4 \}$$
Step 1. Applying Theorem 2 to $\Omega$, we get a class hierarchy $\Theta$ as follows:
$$\Theta = \{ \alpha_1 \rightarrow \alpha_3, \alpha_1 \rightarrow \alpha_2, \alpha_2 \rightarrow \alpha_3, \alpha_3 \rightarrow \alpha_4 \}$$
Step 2. For each inheritance $\xi \rightarrow \eta$ in $\Theta$ we do:
1. $\alpha_1 \rightarrow \alpha_3$ is chosen: we apply Algorithm 2 to compute the closure of $\alpha_1$ on ($\Theta - \{\alpha_1 \rightarrow \alpha_3\}$), and we get $\alpha_1^* = \{\alpha_2, \alpha_3, \alpha_4\}$. We find that $\alpha_3 \in \alpha_1^*$ holds, then we adjoin $\alpha_1 \rightarrow \alpha_3$ to $\Gamma$.
2. $\alpha_1 \rightarrow \alpha_2$ is chosen: we compute the closure of $\alpha_1$ on ($\Theta - \{\alpha_1 \rightarrow \alpha_2\}$), and we get $\alpha_1^* = \{\alpha_3, \alpha_4\}$. We find that $\alpha_2 \in \alpha_1^*$ does not hold, then discard $\alpha_1 \rightarrow \alpha_2$.
3. $\alpha_2 \rightarrow \alpha_3$ is chosen: we compute the closure of $\alpha_2$ on ($\Theta - \{\alpha_2 \rightarrow \alpha_3\}$), and we get $\alpha_2^* = \emptyset$. We find that $\alpha_3 \in \alpha_2^*$ does not hold, then discard $\alpha_2 \rightarrow \alpha_3$.
4. $\alpha_3 \rightarrow \alpha_4$ is chosen: we compute the closure of $\alpha_3$ on ($\Theta - \{\alpha_3 \rightarrow \alpha_4\}$), and we get $\alpha_3^* = \emptyset$. We find that $\alpha_4 \in \alpha_3^*$ does not hold, then discard $\alpha_3 \rightarrow \alpha_4$.
After the execution of Algorithm 3, $\Gamma$ contains the redundant inheritance set $\{\alpha_1 \rightarrow \alpha_3\}$. This redundant inheritance means that class $\alpha_3$ inherits from $\alpha_1$ both directly and indirectly (through $\alpha_2$); as Equation (1) in Section 2 suggests, $\alpha_3$ would inherit twice (or more) from $\alpha_1$. To reduce the occurrence of implicit errors caused by this redundant inheritance, we had better refine all the ancestor classes of class $\alpha_3$. The depth-first or breadth-first traversal algorithm can be used to determine these ancestor classes. The refinement depends on the actual inheritance relations; we can combine related classes to remove the unnecessary duplicates. This behavior is also called class aggregation (Hendler 1986).
**Analysis for Algorithm 3**
Algorithm 3 basically consists of two parts. The first part applies Theorem 2 to a given class hierarchy: as in the proof of Theorem 2, the decomposition and union rules are used to obtain a class hierarchy in which each side of every inheritance contains only one class. For an inheritance $\xi \rightarrow \eta$, the decomposition rule can be implemented to run in time proportional to the length of $\eta$, and the union rule in time proportional to the length of $\xi$. The second part applies Algorithm 2 to compute the closure set of $\xi$ on $\Theta$ for each inheritance $\xi \rightarrow \eta \in \Theta$; the analysis of Algorithm 2 has been given above. Combining the two parts, Algorithm 3 runs in time proportional to the sum of the lengths of the inheritances in the given class hierarchy.
Resolutions for Redundant Inheritance
After the redundant inheritances have been identified, it is important to resolve them before coding. Basically, there are three approaches to dealing with the problem of redundant inheritance. First, we can treat occurrences of redundant inheritance as illegal. This is the approach taken by Java, Smalltalk and Eiffel (with Eiffel permitting renaming to disambiguate the duplicate references). Second, we can permit duplication of superclasses, but require the use of fully qualified names to refer to members of a specific copy. This is one of the approaches taken by C++. Third, we can treat multiple references to the same class as denoting the same class. This is the approach taken by C++ when the repeated superclass is introduced as a virtual base class. A virtual base class exists when a subclass names another class as its superclass and marks that superclass as virtual, to indicate that it is a shared class. The redundant inheritance in Fig.1-c can be resolved by declaring superclass Driver as a virtual base class:
```c++
// Hyphens are not legal in C++ identifiers, so the class names are written
// with underscores here.
class Driver { /* ... */ };
class US_Driver : virtual public Driver { /* ... */ };
class China_Driver : virtual public Driver { /* ... */ };
class US_China_Driver : public US_Driver, public China_Driver { /* ... */ };
```
Similarly, in CLOS repeated classes are shared, using a mechanism called the class precedence list. This list, calculated whenever a new class is introduced, includes the class itself and all of its superclasses, without duplication, and is based upon the following rules:
1. A class always has precedence over its superclass
2. Each class sets the precedence order of its direct superclasses
In this approach, the inheritance graph is flattened, duplicates are removed, and the resulting hierarchy is resolved using single inheritance (Dussud 1993). This is akin to the computation of a topological sort of classes. If a total ordering of classes can be calculated, then the class that introduces the redundant inheritance is accepted. Note that this total ordering may be unique, or there may be several possible orderings. If no ordering can be found (for example, when cyclic inheritance exists), the class is rejected. In Fig.1-c, the class US-China-Driver would be accepted, because there is a unique ordering of superclasses; the superclass hierarchy includes exactly one (shared) appearance of the class Driver.
Invoking a Method in Redundant Inheritance
In traditional programming languages, invoking a subprogram is a completely static activity. In Pascal, for example, for a statement that calls the subprogram P, the compiler can generate code that creates a new stack frame, places the proper arguments on the stack, and then changes the flow of control to begin executing the code associated with P. However, in OO languages that support polymorphism, invoking a method is dynamic because the class of the object being operated upon may not be known until runtime. Matters are even more complicated when we add redundant inheritance to this situation: redundant inheritance with polymorphism requires a much more sophisticated technique. Consider the class hierarchy shown in Fig.1-c, which shows the base class Driver along with its two subclasses US-Driver and China-Driver, both of which have the common subclass US-China-Driver. In class Driver, the method birthday-check is common to all subclasses and therefore need not be redefined. However, the method traffic-violation must be redefined by each of the subclasses, since the numbers of violations in China and in the US are mutually independent. Thus, since the class Driver is an abstract class¹, traffic-violation has an empty implementation (it is a pure virtual function, in C++ terminology).
In C++, the developer can decide whether a particular method is to be bound late by declaring it virtual; all other methods are bound early, and thus the compiler can statically resolve the method call to a simple subprogram call. In this redundant inheritance, we might have declared traffic-violation as a virtual member function and birthday-check as nonvirtual, because anyone's birthday is unique and never changes; birthday-check therefore need not be redefined.

---

¹ A class with no instances is called an abstract class.
CONCLUSION
In this paper, we propose a checking mechanism to determine redundant inheritance in a given class hierarchy. An inference rule system is used to specify the inheritance constraints. A minimal class hierarchy concept is presented to characterize the occurrence of redundant inheritance. An algorithm based on the minimal class hierarchy is derived to detect redundant inheritances. This work will contribute to object-oriented software testing and maintenance. It is noteworthy that this checking mechanism can easily be incorporated into OO CASE tools to improve OO software development and quality. The occurrence of cyclic inheritance, which causes self-inheritance and endless type-checking, is another problem associated with multiple inheritance in class hierarchy design. In most theoretical papers it is strictly prohibited and assumed never to happen. In practice, however, cyclic inheritance arising from careless design or for specific purposes cannot be ruled out. Due to space limitations, we leave it to future discussion and research.
REFERENCES
## Contents
1 **Introduction**
   1.1 Always the right resources
   1.2 Optimization
   1.3 Smart Caching
   1.4 Powerful Deployment
   1.5 Compatible
2 **Quickstart**
   2.1 A simple WSGI application
   2.2 Including resources without Fanstatic
   2.3 Including resources with Fanstatic
   2.4 Wrapping your app with Fanstatic
3 **Concepts**
   3.1 Library
   3.2 Resource inclusion
   3.3 Resource definitions
   3.4 Resource requirements
4 **Creating a Resource Library**
   4.1 Your project
   4.2 Making Fanstatic available in your project
   4.3 Adding the resource directory
   4.4 Declaring the Library
   4.5 Hooking it up to an entry point
   4.6 Declaring resources for inclusion
   4.7 Depending on resources
   4.8 An example
   4.9 Bonus: shipping the library
   4.10 Bonus: dependencies between resources
   4.11 Bonus: a minified version
   4.12 Bonus: bundling of resources
5 **Optimization**
6 **Configuration options**
   6.1 versioning
   6.2 recompute_hashes
   6.3 bottom
CHAPTER 1
Introduction
Fanstatic is a small but powerful framework for the automatic publication of resources on a web page. Think Javascript and CSS. It just serves static content, but it does it really well.
Can you use it in your project? If you use Python, yes: Fanstatic is web-framework agnostic, and will work with any web framework that supports WSGI. Fanstatic is issued under the BSD license.
Why would you need something like Fanstatic? Can’t you just add your static resources to some statically served directory and forget about them? For small projects this is certainly sufficient. But so much more is possible and useful in this modern Javascript-heavy world. Fanstatic is able to offer a lot of powerful features for projects both small and large.
Fanstatic has a lot of cool features:
### 1.1 Always the right resources
- **Import Javascript as easily as Python**: Javascript dependencies are a Python import statement away. Importing Python code is easy, why should it be harder to import Javascript code?
- **Depend in the right place**: do you have a lot of server-side code that assembles a web page? Want your datetime widget to pull in a datetime Javascript library, but only when that widget is on the page? Fanstatic lets you do that with one line of Python code.
- **Dependency tracking**: use a Javascript or CSS file that uses another one that in turn uses another one? Fanstatic knows about dependencies and will make sure all dependencies will appear on your page automatically. Have minified or rolled up versions available? Fanstatic can automatically serve those too.
- **Declare dependencies**: want to publish your own Javascript library? Have your own CSS? Does it depend on other stuff? Fanstatic lets you declare dependencies with a few lines of Python code.
### 1.2 Optimization
- **Serve the right version**: have alternative versions of your resource available? Want to serve minified versions of your Javascript during deployment? Debug versions during development? It’s one configuration option away.
- **Bundle up resources**: roll up multiple resources into one and serve the combined resource to optimize page load time. Bundled resources can be generated automatically, or can be served automatically when available.
- **Optimize load times**: Fanstatic knows about tricks to optimize the load time of your Javascript, for instance by including script tags at the bottom of your web page instead of in the head section.
### 1.3 Smart Caching
- **Infinite caching**: Fanstatic can publish static resources on unique URLs, so that the cache duration can be set to infinity. This means that browsers will hold on to your static resources: the web server gets each resource request only once per user and no more. If a front-end cache is in use, you reduce that to once per resource; the cache will handle all other hits.
- **Automatic deployment cache invalidation**: Fanstatic can automatically update all your resource URLs if new versions of resources are released in an application update. No longer do you need to instruct your user to use shift-reload in their application to refresh their resources.
- **Automatic development cache invalidation**: you can instruct Fanstatic to run in development mode. It will automatically use new URLs whenever you change your code now. No longer do you as a developer need to do shift-reload whenever you change a resource; just reload the page.
### 1.4 Powerful Deployment
- **Automated deployment**: no longer do you need to tell people in separate instructions to publish Javascript libraries on a certain URL: Fanstatic can publish these for you automatically and transparently.
- **Pre-packaged libraries**: A lot of pre-packaged Javascript libraries are available on PyPI and are maintained by the Fanstatic community. These can be installed into your project right away using easy_install, pip or buildout. No more complicated installation instructions, just reuse a Javascript library like you reuse Python libraries.
### 1.5 Compatible
- **Fits your web framework**: Fanstatic integrates with any WSGI-compliant Python web framework.
- **Roll your own**: Not happy with the details of how Fanstatic works? We’ve already split the Fanstatic WSGI component into separately usable components so you can mix and match and roll your own.
CHAPTER 2
Quickstart
This quickstart will demonstrate how you can integrate Fanstatic with a WSGI-based web application.
In this example, we will use Python to hook up Fanstatic to your WSGI application, but you could also use a WSGI configuration framework like Paste Deploy. For more information about this, see our Paste Deploy documentation.
### 2.1 A simple WSGI application
A simple WSGI application will stand in for your web application:
```python
def app(environ, start_response):
    start_response('200 OK', [])
    return ['<html><head></head><body></body></html>']
```
As you can see, it simply produces the following web page, no matter what kind of request it receives:
```
<html><head></head><body></body></html>
```
You can also include some code to start and run the WSGI application. Python includes wsgiref, a WSGI server implementation:
```python
if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    server = make_server('127.0.0.1', 8080, app)
    server.serve_forever()
```
For real-world uses you would likely want to use a more capable WSGI server, such as Paste Deploy as mentioned before, or for instance mod_wsgi.
### 2.2 Including resources without Fanstatic
Let’s say we want to start using jQuery in this application. The way to do this without Fanstatic would be:
- download jQuery somewhere and publish it somewhere as a static resource. Alternatively use a URL to jQuery already published somewhere on the web using a content distribution network (CDN).
- modify the `<head>` section of the HTML in your code to add a `<script>` tag that references jQuery, in all HTML pages that need jQuery.
This is fine for simple requirements, but gets hairy once you have a lot of pages that need a variety of Javascript libraries (which may change dynamically), or if you need a larger selection of Javascript libraries with a more involved
dependency structure. Soon you find yourself juggling HTML templates with lots of `<script>` tags, puzzling over what depends on what, and organizing a large variety of static resources.
### 2.3 Including resources with Fanstatic
How would we do this with Fanstatic? Like this:
```python
from js.jquery import jquery

def app(environ, start_response):
    start_response('200 OK', [])
    jquery.need()
    return ['<html><head></head><body></body></html>']
```
You need to make sure that `js.jquery` is available in your project using a familiar Python library installation system such as `pip`, `easy_install` or `buildout`. This will automatically make the Javascript code available on your system.
### 2.4 Wrapping your app with Fanstatic
To use Fanstatic, you need to configure your application so that Fanstatic can do two things for you:
- automatically inject resource inclusion requirements (the `<script>` tag) into your web page.
- serve the static resources (such as jQuery.js) when a request to a resource is made.
Fanstatic provides a WSGI framework component called `Fanstatic` that does both of these things for you. Here is how you use it:
```python
from fanstatic import Fanstatic
fanstatic_app = Fanstatic(app)
```
When you use `fanstatic_app`, Fanstatic will take care of serving static resources for you, and will include them on web pages when needed. You can import and `need` resources all through your application’s code, and Fanstatic will make sure that they are served correctly and that the right script tags appear on your web page.
If you used `wsgiref` for instance, this is what you’d write to use the Fanstatic wrapped app:
```python
if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    server = make_server('127.0.0.1', 8080, fanstatic_app)
    server.serve_forever()
```
The resulting HTML looks like this:
```html
<html>
  <head>
    <script type="text/javascript" src="/fanstatic/jquery/jquery.js"></script>
  </head>
  <body></body>
</html>
```
Now you’re off and running with Fanstatic!
CHAPTER 3
Concepts
To understand Fanstatic, it’s useful to understand the following concepts.
### 3.1 Library
Static resources are files that are used in the display of a web page, such as CSS files, Javascript files and images. Often resources are packaged as a collection of resources; we call this a library of resources.
### 3.2 Resource inclusion
Resources can be included in a web page in several ways.
Common forms of inclusion in HTML are Javascript files, which are included using the `<script>` tag, for instance like this:
```html
<script type="text/javascript" src="/something.js"></script>
```
and CSS files, which are included using a `<link>` tag, like this:
```html
<link rel="stylesheet" href="/something.css" type="text/css" />
```
A common place to include Javascript and CSS is the head section of an HTML page. Javascript can also be included in script tags elsewhere on the page, such as at the bottom.
Fanstatic can generate these resource inclusions automatically for you and insert them into your web page.
Fanstatic doesn’t do anything special for the inclusion of image or file resources, which could be included by the `<img>` or `<a>` tag. While Fanstatic can serve these resources for you, and also knows how to generate URLs to them, Fanstatic does not automatically insert them into your web pages: that’s up to your application.
### 3.3 Resource definitions
Fanstatic lets you define resources and their dependencies to make the automated rendering of resource inclusions possible.
You can see a resource inclusion as a Python import: when you import a module, you import a particular file in a particular package, and a resource inclusion is the inclusion of a particular resource (.js file, .css file) in a particular library.
A resource may depend on other resources. A Javascript resource may for instance require another Javascript resource. An example of this is jQuery UI, which requires the inclusion of jQuery on the page as well in order to work.
Fanstatic takes care of inserting these resources inclusions on your web page for you. It makes sure that resources with dependencies have their dependencies inserted as well.
### 3.4 Resource requirements
How do you tell Fanstatic that you’d like to include jQuery on a web page? You do this by making a resource requirement in Python: you state that you need a resource.
It is common to construct complex web pages on the server with cooperating components. A datetime widget may for instance expect a particular datetime Javascript library to be loaded. Both pages and sub-page components on the server may have inclusion requirements; you can effectively make resource requirements anywhere on the server side, as long as the code is executed at some point during the request that produces the page.
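As a simplified stand-in for this mechanism (not fanstatic’s actual implementation: real fanstatic tracks the needed resources per request behind the scenes, so its `need()` takes no arguments), the idea can be sketched as a per-request collection that any component may add to, with dependencies expanded automatically:

```python
class Resource:
    def __init__(self, path, depends=()):
        self.path, self.depends = path, list(depends)

    def need(self, needed):
        # Depth-first: dependencies are included before the
        # resource itself, each resource at most once.
        for dep in self.depends:
            dep.need(needed)
        if self not in needed:
            needed.append(self)

jquery = Resource('jquery.js')
datetime_widget = Resource('datetime.js', depends=[jquery])

# Anywhere in the server-side code that runs during the request:
needed = []                     # one fresh list per request
datetime_widget.need(needed)
print([r.path for r in needed])  # → ['jquery.js', 'datetime.js']
```

The page-rendering step would then turn this list into `<script>` tags, which is the part fanstatic automates for you.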
CHAPTER 4
Creating a Resource Library
We’ve seen how to reuse existing resources, but how do you publish your own resources using Fanstatic? Here’s how:
### 4.1 Your project
So, you’re developing a Python project. It’s set up in the standard Python way, along these lines:
```
fooproject/
    setup.py
    foo/
        __init__.py
```
### 4.2 Making Fanstatic available in your project
In order to be able to import from `fanstatic` in your project, you need to make it available first. The standard way is to include it in `setup.py`, like this:
```
install_requires=[
    'fanstatic',
]
```
### 4.3 Adding the resource directory
You need to place the resources in a subdirectory somewhere in your Python code. Imagine you have some resources in a directory called `bar_resources`. You simply place this in your package:
```
fooproject/
    setup.py
    foo/
        __init__.py
        bar_resources/
            a.css
            b.js
```
Note that `bar_resources` isn’t a Python package, so it doesn’t have an `__init__.py`. It’s just a directory.
### 4.4 Declaring the Library
You need to declare a `Library` for `bar`. In `__init__.py` (or any module in the package), write the following:
```python
from fanstatic import Library
bar_library = Library('bar', 'bar_resources')
```
Here we construct a fanstatic Library named `bar`, and we point to the subdirectory `bar_resources` to find them.
### 4.5 Hooking it up to an entry point
To let Fanstatic know that this library exists so it will automatically publish it, we need to add an `entry point` for the library to your project’s `setup.py`. Add this to the `setup()` function:
```python
entry_points={
    'fanstatic.libraries': [
        'bar = foo:bar_library',
    ],
}
```
This tells Fanstatic that there is a `Library` instance in the `foo` package. What if you had defined the library not in `__init__.py` but in a module, such as `foo.qux`? You would have referred to it using `foo.qux:bar_library`.
At this stage, Fanstatic can serve the resources in your library. The default URLs are:
- `/fanstatic/bar/a.css`
- `/fanstatic/bar/b.js`
### 4.6 Declaring resources for inclusion
While now the resources can be served, we can’t actually yet `.need()` them, so that we can have Fanstatic include them on web pages for us. For this, we need to create `Resource` instances. Let’s modify our original `__init__.py` to read like this:
```python
from fanstatic import Library, Resource
bar_library = Library('bar', 'bar_resources')
a = Resource(bar_library, 'a.css')
b = Resource(bar_library, 'b.js')
```
Now we’re done!
### 4.7 Depending on resources
We can start using the resources in our code now. To make sure `b.js` is included in our web page, we can do this anywhere in our code:
```python
from foo import b

...

def somewhere_deep_in_our_code():
    b.need()
```
### 4.8 An example
Need an example where it's all put together? We maintain a Fanstatic package called js.jquery that wraps jQuery this way:
http://bitbucket.org/fanstatic/js.jquery/src
It's also available on PyPI:
http://pypi.python.org/pypi/js.jquery
### 4.9 Bonus: shipping the library
You can declare any number of libraries and resources in your application. What if you want to reuse a library in multiple applications? That's easy too: you just put your library, its entry point, resource definitions and resource files in a separate Python project, and use that from your application projects. If it's useful to others as well, you can also publish it on PyPI! The various js.* projects that we maintain for Fanstatic, such as js.jquery, are examples of this.
### 4.10 Bonus: dependencies between resources
What if we really want to include a.css whenever we pull in b.js, as code in b.js depends on it? Change your code to this:
```python
from fanstatic import Library, Resource

bar_library = Library('bar', 'bar_resources')

a = Resource(bar_library, 'a.css')
b = Resource(bar_library, 'b.js', depends=[a])
```
Whenever you .need() b now, you'll also get a included on your page.
You can also use a Group to group Resources together:
```python
from fanstatic import Group

c = Group([a, b])
```
### 4.11 Bonus: a minified version
What if you have a minified version of your b.js Javascript called b.min.js available in the bar_resources directory and you want to let Fanstatic know about it? You just write this:
```python
from fanstatic import Library, Resource

bar_library = Library('bar', 'bar_resources')

a = Resource(bar_library, 'a.css')
b = Resource(bar_library, 'b.js', minified='b.min.js')
```
If you now configure Fanstatic to use the minified mode, it will automatically pull in b.min.js instead of b.js whenever you do b.need().
### 4.12 Bonus: bundling of resources
Bundling of resources minimizes the amount of HTTP requests from a web page. Resources from the same Library can be bundled up into one, when they have the same renderer. Bundling is disabled by default. If you want bundling, set bundle to True:
```python
import fanstatic
from fanstatic import Library, Resource

qux_library = Library('qux', 'qux_resources')

a = Resource(qux_library, 'a.css')
b = Resource(qux_library, 'b.css')

fanstatic.init_needed(bundle=True)
a.need()
b.need()
```
The resulting URL looks like this:
http://localhost/fanstatic/qux/:bundle:a.css;b.css
The fanstatic publisher knows about bundle URLs and serves a bundle of the two files.
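Following the URL format shown above, the bundle path can be assembled mechanically; `bundle_url` here is our own illustrative helper, not part of fanstatic’s API:

```python
def bundle_url(base, library, filenames):
    # Multiple resources from one library collapse into a single
    # ':bundle:' URL, with the filenames separated by ';'.
    return f"{base}/{library}/:bundle:{';'.join(filenames)}"

print(bundle_url('http://localhost/fanstatic', 'qux', ['a.css', 'b.css']))
# → http://localhost/fanstatic/qux/:bundle:a.css;b.css
```

The publisher parses this path back into its component filenames and serves the concatenated content.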
If you don’t want your Resource to be bundled, give it the dont_bundle argument:
```python
c = Resource(qux_library, 'a.css', dont_bundle=True)
```
Resources are bundled based on their Library. This means that bundles don’t span Libraries: if we were to allow bundles that span Libraries, we would get inefficient bundles. Consider the following example situation:
```python
from fanstatic import Library, Resource
foo = Library('foo', 'foo')
bar = Library('bar', 'bar')
a = Resource(foo, 'a.js')
b = Resource(bar, 'b.js', depends=[a])
c = Resource(bar, 'c.js', depends=[a])
```
If we need() resource b in page 1 of our application and would allow cross-library bundling, we would get a bundle of a + b. If we then need only resource c in page 2 of our application, we would render a bundle of a + c. In this example we see that cross-library bundling can lead to inefficient bundles, as the client downloads 2 * a + b + c. Fanstatic doesn’t do cross-library bundling, so the client downloads a + b + c.
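The per-library rule can be sketched as a grouping step (a simplified model, not fanstatic’s code): consecutive needed resources from the same library are merged into one bundle, and bundles never cross library boundaries:

```python
from itertools import groupby

def bundles(needed):
    # needed: list of (library, filename) pairs, in inclusion order.
    # Consecutive resources from the same library are merged into
    # one bundle; nothing is merged across libraries.
    return [(lib, [f for _, f in group])
            for lib, group in groupby(needed, key=lambda r: r[0])]

# need() of resource b pulls in a (from library foo) first:
page = [('foo', 'a.js'), ('bar', 'b.js'), ('bar', 'c.js')]
print(bundles(page))
# → [('foo', ['a.js']), ('bar', ['b.js', 'c.js'])]
```

Because `a.js` is served on its own, a page that later needs only `c.js` reuses the cached `a.js` instead of downloading it again inside a second cross-library bundle.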
When bundling resources, things could go haywire with regard to relative URLs in CSS files. Fanstatic prevents this by taking the dirname of the Resource into account:
```python
from fanstatic import Library, Resource
foo = Library('foo', 'foo')
a = Resource(foo, 'a.css')
b = Resource(foo, 'sub/sub/b.css')
```
Fanstatic won’t bundle `a` and `b`, as `b` may have relative URLs that the browser would not be able to resolve. We *could* rewrite the CSS and inject URLs to the proper resources in order to have more efficient bundles, but we choose to leave the CSS unaltered.
CHAPTER 5
Optimization
There are various optimizations for resource inclusion that Fanstatic supports. Because some optimizations can make debugging more difficult, they are disabled by default.
We will summarize the optimization features that Fanstatic offers here. See the configuration section and the API documentation for more details.
- minified resources. Resources can specify minified versions using the mode system. You can then configure Fanstatic to preferentially serve resources in a certain mode, such as minified.
- rolling up of resources. Resource libraries can specify rollup resources that combine multiple resources into one. This reduces the number of server requests to be made by the web browser, and can help with caching. This can be controlled with the rollup configuration parameter.
- bundling of resources. Resource bundles combine multiple resources into one. This reduces the number of server requests to be made by the web browser, and can help with caching. This can be controlled with the bundle configuration parameter.
- infinite caching. Fanstatic can serve resources declaring that they should be cached forever by the web browser (or proxy cache), reducing the amount of hits on the server. Fanstatic makes this safe even when you upgrade or modify resources by its versioning technology. This can be controlled with the versioning and recompute_hashes configuration parameters.
- Javascript inclusions at the bottom of the web page. This can speed up the time web pages render, as the browser can start displaying the web page before all Javascript resources are loaded. This can be controlled using the bottom and force_bottom configuration parameters.
To find out more about these and other optimizations, please read this best practices article that describes some common optimizations to speed up page load times.
CHAPTER 6
Configuration options
Fanstatic makes available a number of configuration options. These can be passed to the Fanstatic WSGI component as keyword arguments. They can also be configured using Paste Deploy configuration patterns (see our Paste Deploy documentation for more information on that).
### 6.1 versioning
If you turn on versioning, Fanstatic will automatically include a version identifier in the resource URLs it generates and injects into web pages. This means that for each version of your Javascript resource its URL will be unique. The Fanstatic publisher will set cache headers for versioned resource URLs so that they will be cached forever by web browsers and caching proxies.
By default, versioning is disabled, because it needs some extra explanation. We highly recommend enabling it, however, as the performance benefits are potentially huge and it’s usually entirely safe to do so. See also recompute_hashes if you want to use versioning during development.
The benefit of versioning is that all resources will be cached forever by web browsers. This means that a web browser will never talk to the server to request a resource again once it has retrieved it, as long as it is still in its cache. This puts less load on your web application: it only needs to publish the resource once per user, as long as the resource remains in that user’s cache.
If you use a server-side cache such as Squid or Varnish, the situation is even better: these will hold on to the cached resources as well, meaning that your web application needs to serve the resource exactly once. The cache will serve them after that.
But what if you change a resource? Won’t users now get the wrong, old versions of the changed resource? No: with versioning enabled, when you change a resource, a new URL to that resource will be automatically generated. You never will have to instruct users of your web application to do a “shift-reload” to force all resources to reload – the browser will see the resource URL has changed and will automatically load a new one.
How does this work? There are two schemes: explicit versioning and an automatically calculated hash-based versioning. An explicit version looks like this (from the js.jquery package):
```
/fanstatic/jquery/:version:1.4.4/jquery.js
```
A hash-based version looks like this:
```
/fanstatic/my_library/:version:d41d8cd98f00b204e9800998ecf8427e/my_resource.js
```
The version of a Resource depends on the version of the Python package in which the Library is defined: it takes the explicit version information from this. If no version information can be found, or if the Python package is installed in development mode, we still want to be able to create a unique version that changes whenever the content of the resources changes. (And when we say resources are cached “forever”: well, for 10 years into the future at least.)
To this end, the most recent modification time from the files and directories in the Library directory is taken. Whenever you make any changes to a resource in the library, the hash version will be automatically recalculated.
The benefit of calculating a hash for the whole Library directory is that resource URLs change whenever a referenced resource changes: if resource A (e.g. logo.png) in a library is referenced by resource B (e.g. style.css) and A changes, the URL for resource B changes as well, not because B itself changed, but because the contents of the library to which A and B belong have changed.
Fanstatic also provides an MD5-based algorithm for the Library version calculation. This algorithm is slower, but you may use it if you don’t trust your filesystem. Enable it through the `versioning_use_md5` parameter.
### 6.2 recompute_hashes
If you enable `versioning`, Fanstatic will automatically calculate a resource hash for each of the resource directories for which no version is found.
During development you want the hashes to be recalculated each time you make a change, without having to restart the application all the time, and a small performance impact is no problem. The default behavior is therefore to recompute hashes for every request.
Calculating a resource hash is a relatively expensive operation, however, so in production you want Fanstatic to calculate the resource hash only once per library, by setting `recompute_hashes` to false. Hashes will then only be recalculated after you restart the application.
### 6.3 bottom
While CSS resources can only be included in the `<head>` section of a web page, Javascript resources can be included in `<script>` tags anywhere on the web page. Sometimes it pays off to do so: by including Javascript resources at the bottom of a web page (just before the `</body>` closing tag), the page can already load and partially render for the user before the Javascript files have been loaded, and this may lead to a better user experience.
Not all Javascript files can be loaded at this time however: some depend on being included as early as possible. You can mark a `Resource` as “bottom safe” if it is safe to load at the bottom of the web page. If you then enable `bottom`, those Javascript resources will be loaded there. If `bottom` is turned off (the default), all Javascript resources will be included in the `<head>` section.
### 6.4 force_bottom
If you enable `force_bottom` (disabled by default) in combination with `bottom`, all Javascript resources will be included at the bottom of a web page, even if they’re not marked “bottom safe”.
### 6.5 minified and debug
By default, the resource URLs included will be in the normal human-readable (and debuggable) format for that resource.
When creating `Resource` instances, you can specify alternative modes for the resource, such as minified and debug versions. The argument to `minified` or `debug` is a resource path or a `Resource` instance that represents the resource in that alternative mode.
You can configure Fanstatic so that it prefers a certain mode when creating resource URLs, such as `minified`. In this case Fanstatic will preferentially serve minified alternatives for resources, if available. If no minified version is available, the default resource will be served.
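The mode-preference rule can be illustrated with a small stdlib-only sketch (the dict layout and function name are hypothetical, not part of Fanstatic’s API):

```python
# Sketch of mode preference: if the preferred mode (e.g. "minified")
# is available for a resource, serve that alternative; otherwise fall
# back to the default resource.

def pick_mode(resource, preferred=None):
    """Return the path to serve for a resource in the preferred mode.

    `resource` is a plain dict such as:
        {"path": "jquery.js", "modes": {"minified": "jquery.min.js"}}
    """
    if preferred and preferred in resource["modes"]:
        return resource["modes"][preferred]
    # No alternative available: fall back to the default resource.
    return resource["path"]

jquery = {"path": "jquery.js", "modes": {"minified": "jquery.min.js"}}
plugin = {"path": "plugin.js", "modes": {}}  # has no minified version
```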
### 6.6 rollup
A performance optimization to reduce the amount of requests sent by a client is to roll up several resources into a bundle, so that all those resources are retrieved in a single request. This way a whole collection of resources can be served in one go.
You can create special `Resource` instances that declare they supersede a collection of other resources. If `rollup` is enabled, Fanstatic will serve a combined resource if it finds out that all individual resources that it supersedes are needed. If you also declare that a resource is an `eager_superseder`, the rolled up resource will actually always be served, even if only some of the superseded resources are needed.
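The rollup decision described above can be sketched like this (a stdlib-only model with made-up names such as `rollup.js`; Fanstatic’s real implementation works on `Resource` objects):

```python
# Sketch of the rollup rule: a rolled-up resource replaces its parts
# only when every superseded resource is needed -- unless it is an
# eager superseder, in which case one needed part is enough.

def apply_rollup(needed, rollup_parts, eager=False):
    """Return the set of resource names to serve.

    needed       -- set of individually needed resource names
    rollup_parts -- set of resource names the rollup supersedes
    """
    hit = needed & rollup_parts
    if hit and (eager or rollup_parts <= needed):
        # Serve the combined resource instead of the individual parts.
        return (needed - rollup_parts) | {"rollup.js"}
    return needed
```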
### 6.7 base_url
The `base_url` URL will be prefixed in front of all resource URLs. This can be useful if your web framework wants the resources to be published on a sub-URL. By default, there is no `base_url`, and resources are served in the script root.
Note that this can also be set using the `set_base_url` method on a `NeededResources` instance during run-time, as this URL is generally not known when `NeededResources` is instantiated.
### 6.8 publisher_signature
The default publisher signature is `fanstatic`. What this means is that the `Fanstatic()` WSGI component will look for the string `/fanstatic/` in the URL path, and if it’s there, will take over to publish resources. If you would like the root for resource publication to be something else in your application (such as `resources`), you can change this to another string.
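How a component could recognize “serve-able” URLs by their path segments can be sketched as follows (illustrative only; the real work is done by Fanstatic’s Delegator):

```python
# Sketch: a URL is "serve-able" if the publisher signature occurs as
# one of its path segments, exactly as described above.

def is_resource_url(path, publisher_signature="fanstatic"):
    return publisher_signature in path.strip("/").split("/")
```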
### 6.9 bundle
Bundling of resources minimizes HTTP requests from the client by finding efficient bundles of resources. In order to configure bundling of resources, set the `bundle` argument to `True`.
Fanstatic has support for Paste Deployment, a system for configuring WSGI applications and servers. You can configure the Fanstatic WSGI components using Paste Deploy.
### 7.1 Fanstatic WSGI component
If you have configured your application with Paste, you will already have a configuration .ini file, say deploy.ini. You can now wrap your application in the `Fanstatic()` WSGI component:
```ini
[server:main]
use = egg:Paste#http
[app:my_application]
use = egg:myapplication
[pipeline:main]
pipeline = fanstatic my_application
[filter:fanstatic]
use = egg:fanstatic#fanstatic
```
The `Fanstatic()` WSGI framework component itself combines three separate WSGI components - the Injector, the Delegator and the Publisher - into one convenient component.
The [filter:fanstatic] section accepts several configuration directives (see also the configuration documentation):
- Turn recomputing of hashes on or off with “true” or “false”:
```ini
recompute_hashes = true
```
- Turn versioning on or off with “true” or “false”:
```ini
versioning = true
```
You can also configure the URL segment that is used in generating URLs to resources and to recognize “serve-able” resource URLs:
```ini
publisher_signature = foo
```
To allow for bottom inclusion of resources:
```ini
bottom = true
```
To force all Javascript to be included at the bottom:
```ini
force_bottom = true
```
To serve minified resources where available:
```ini
minified = true
```
To serve debug resources where available:
```ini
debug = true
```
To use rolled up resources where possible and where they are available:
```ini
rollup = true
```
To use bundling of resources:
```ini
bundle = true
```
A complete [filter:fanstatic] section could look like this:
```ini
[filter:fanstatic]
use = egg:fanstatic#fanstatic
recompute_hashes = false
versioning = true
bottom = true
minified = true
```
The Fanstatic WSGI component is all you should need for normal use cases. Next, we will go into the details of the sub-components that this component consists of. These are only needed in particular use cases where you want to take over some of the tasks of Fanstatic itself.
### 7.2 Injector WSGI component
If you don’t want to use the Publisher component as you want to serve the libraries yourself, you can still take care of injecting URLs by configuring the **Injector** WSGI component separately:
```ini
[server:main]
use = egg:Paste#http
[app:my_application]
use = egg:myapplication
[pipeline:main]
pipeline = injector my_application
[filter:injector]
use = egg:fanstatic#injector
```
The [filter:injector] section accepts the same set of configuration parameters as the [filter:fanstatic] section. A complete section could therefore look like this:
```ini
[filter:injector]
use = egg:fanstatic#injector
recompute_hashes = false
```
### 7.3 Publisher WSGI component
It is also possible to set up the Publisher component separately. The publisher framework component is actually a combination of a Delegator and a Publisher component. The delegator is responsible for recognizing what URLs are in fact URLs to “serve-able” resources, passing along all other URLs to be handled by your application.
URLs that contain the publisher_signature as a path segment are recognized by the delegator as “serve-able”. Configuring only the publisher component for your application implies that there is some other mechanism that injects the correct resource URLs into, for example, web pages.
The publisher component accepts one configuration directive, the publisher_signature (set to `fanstatic` by default):
```ini
[server:main]
use = egg:Paste#http
[app:my_application]
use = egg:myapplication
[pipeline:main]
pipeline = publisher my_application
[filter:publisher]
use = egg:fanstatic#publisher
publisher_signature = bar
```
### 7.4 Combining the publisher and the injector
As explained before, the Fanstatic() component combines the publisher and injector components. An equivalent configuration using the separate components would look like this:
```ini
[server:main]
use = egg:Paste#http
[app:my_application]
use = egg:myapplication
[pipeline:main]
pipeline = publisher injector my_application
[filter:publisher]
use = egg:fanstatic#publisher
publisher_signature = baz
[filter:injector]
use = egg:fanstatic#injector
recompute_hashes = false
versioning = true
bottom = true
minified = true
publisher_signature = baz
```
Serf: A standalone Fanstatic WSGI application
During development of Javascript code it can be useful to test your Javascript code in a very simple HTML page. Fanstatic contains a very simple WSGI application that allows you to do this: Serf.
The `Serf` class is a WSGI application that serves a very simple HTML page with a `<head>` and `<body>` section. You can give the Serf class a single resource to include. If you wrap the Serf WSGI application in a Fanstatic WSGI framework component, the resource and all its dependencies will be included on the web page.
### 8.1 Paste Deployment of Serf
Serf is mostly useful in combination with Paste Deployment, as this makes it very easy to configure a little test web application. You configure Fanstatic as discussed in our Paste Deploy documentation section. You then add a serf app in an `app:` section and tell it what resource to include using the `py:<dotted_name>` notation.
A dotted name is a string that refers to a Python object. It consists of packages, modules and objects joined together by dots, much as you write them in Python `import` statements. `js.jquery.jquery` for instance refers to the `jquery` resource in the `js.jquery` package. This way you can refer to any package on your Python path (controlled by buildout or virtualenv).
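Resolving such a dotted name can be sketched with the standard library alone (a hypothetical helper, not the code Fanstatic or Paste Deploy actually uses): import the longest importable prefix, then walk the remaining attributes.

```python
# Sketch of resolving a `py:<dotted_name>` specification: try to import
# ever-shorter prefixes as modules, then follow the leftover names as
# attributes on the imported module.

import importlib

def resolve(spec):
    if spec.startswith("py:"):
        spec = spec[3:]
    parts = spec.split(".")
    for i in range(len(parts), 0, -1):
        try:
            obj = importlib.import_module(".".join(parts[:i]))
        except ImportError:
            continue  # prefix is not a module; try a shorter one
        for attr in parts[i:]:
            obj = getattr(obj, attr)
        return obj
    raise ImportError(spec)
```

For example, `resolve("py:os.path.join")` imports `os.path` and then looks up the `join` attribute on it.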
Finally, you must also include the Serf application in the WSGI pipeline.
Here is a full example which includes the jquery resource on an HTML page:
```ini
[server:main]
use = egg:Paste#http
[app:serf]
use = egg:fanstatic#serf
resource = py:js.jquery.jquery
[filter:fanstatic]
use = egg:fanstatic#fanstatic
[pipeline:main]
pipeline = fanstatic serf
```
### 9.1 WSGI components
```python
fanstatic.Fanstatic(app, publisher_signature='fanstatic', **config)
```
Fanstatic WSGI framework component.
**Parameters**
- **app** – The WSGI app to wrap with Fanstatic.
- **publisher_signature** – Optional argument to define the signature of the publisher in a URL. The default is `fanstatic`.
- `**config` – Optional keyword arguments. These are passed to `NeededResources` when it is constructed.
```python
fanstatic.Serf(resource)
```
Serf WSGI application.
Serve a very simple HTML page while needing a resource. Can be configured behind the `Fanstatic()` WSGI framework component to let the resource be included.
**Parameters**
- **resource** – The `Resource` to include.
```python
class fanstatic.Injector(app, **config)
```
Fanstatic injector WSGI framework component.
This WSGI component takes care of injecting the proper resource inclusions into HTML when needed.
This WSGI component is used automatically by the `Fanstatic()` WSGI framework component, but can also be used independently if you need more control.
**Parameters**
- **app** – The WSGI app to wrap with the injector.
- `**config` – Optional keyword arguments. These are passed to `NeededResources` when it is constructed. The injector also makes sure that, when initialized, it isn’t given any configuration parameters that cannot be passed to `NeededResources`.
```python
class fanstatic.Publisher(library_registry)
```
Fanstatic publisher WSGI application.
This WSGI application serves Fanstatic `Library` instances. Libraries are published as `<library_name>/<optional_version>/path/to/resource.js`.
All static resources contained in the libraries will be published to the web. If a step prefixed with `:version:` appears in the URL, it will be automatically skipped, and the HTTP response will indicate that the resource can be cached forever.
This WSGI component is used automatically by the `Fanstatic()` WSGI framework component, but can also be used independently if you need more control.
**Parameters**
- **library_registry** – an instance of `LibraryRegistry` with those resource libraries that should be published.
```python
class fanstatic.LibraryPublisher(library)
```
Fanstatic directory publisher WSGI application.
This WSGI application serves a directory of static resources to the web.
This WSGI component is used automatically by the `Fanstatic()` WSGI framework component, but can also be used independently if you need more control.
**Parameters**
- **library** – The fanstatic `Library` instance.
```python
class fanstatic.Delegator(app, publisher, publisher_signature='fanstatic')
```
Fanstatic delegator WSGI framework component.
This WSGI component recognizes URLs that point to Fanstatic libraries, and delegates them to the `Publisher` WSGI application.
In order to recognize such URLs it looks for occurrences of the `publisher_signature` parameter as a URL step. By default it looks for `/fanstatic/`.
This WSGI component is used automatically by the `Fanstatic()` WSGI framework component, but can also be used independently if you need more control.
**Parameters**
- **app** – The WSGI app to wrap with the delegator.
- **publisher** – An instance of the `Publisher` component.
- **publisher_signature** – Optional argument to define the signature of the publisher in a URL. The default is `fanstatic`.
### 9.2 Python components
```python
class fanstatic.Library(name, rootpath, ignores=None, version=None)
```
The resource library.
This object defines which directory is published and can be referred to by `Resource` objects to describe these resources.
**Parameters**
- **name** – A string that uniquely identifies this library.
- **rootpath** – An absolute or relative path to the directory that contains the static resources this library publishes. If relative, it will be relative to the directory of the module that initializes the library.
- **ignores** – A list of globs used to determine which files and directories not to publish.
- **version** – Optionally, a version string for this library. If set, it is used as the library’s signature instead of a computed hash (see `signature()`).
**init_library_nr**()
This can only be called once all resources are known, i.e. once `sort_resources` has been called. Library numbers are calculated only once; after that, this call returns very quickly.
**path = None**
The absolute path to the directory which contains the static resources this library publishes.
**register**(resource)
Register a Resource with this Library.
A Resource knows about its Library. After a Resource has registered itself with its Library, the Library knows about the Resources associated to it.
**signature**(recompute_hashes=False, version_method=None)
Get a unique signature for this Library.
If a version has been defined, we return the version.
If no version is defined, a hash of the contents of the directory indicated by `path` is calculated. If `recompute_hashes` is set to `True`, the signature will be recalculated each time, which is useful during development when changing Javascript/css code and images.
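The signature logic can be sketched with the standard library (an illustrative model only: here the directory contents are hashed with md5, as with `versioning_use_md5`; Fanstatic’s default version method is based on modification times):

```python
# Sketch of Library.signature(): return the explicit version if one is
# set, otherwise hash the contents of the directory so the signature
# changes whenever any published file changes.

import hashlib
import os

def signature(rootpath, version=None):
    if version is not None:
        return version
    digest = hashlib.md5()
    # Walk the tree in a deterministic order so equal content always
    # yields the same signature.
    for dirpath, dirnames, filenames in sorted(os.walk(rootpath)):
        for name in sorted(filenames):
            digest.update(name.encode())
            with open(os.path.join(dirpath, name), "rb") as f:
                digest.update(f.read())
    return digest.hexdigest()
```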
**class fanstatic.Resource**(library, relpath, depends=None, supersedes=None, bottom=False, renderer=None, debug=None, dont_bundle=False, minified=None)
A resource.
A resource specifies a single resource in a library so that it can be included in a web page. This is useful for Javascript and CSS resources in particular. Other static resources, such as images, are not included in this way and therefore do not have to be defined like this.
**Parameters**
- **library** – the `Library` this resource is in.
- **relpath** – the relative path (from the root of the library path) that indicates the actual resource file.
- **depends** – optionally, a list of resources that this resource depends on. Entries in the list are `Resource` instances.
- **supersedes** – optionally, a list of `Resource` instances that this resource supersedes as a rollup resource. If all these resources are required for rendering a page, the superseding resource will be included instead.
- **bottom** – indicate that this resource is “bottom safe”: it can be safely included on the bottom of the page (just before `</body>`). This can be used to improve the performance of page loads when Javascript resources are in use. Not all Javascript-based resources can however be safely included that way, so you have to set this explicitly (or use the `force_bottom` option on `NeededResources`).
- **renderer** – optionally, a callable that accepts a URL argument and returns a rendered HTML snippet for this resource. If no renderer is provided, a renderer is looked up based on the resource’s filename extension.
- **debug** – optionally, a debug version of the resource. The argument is a `Resource` instance, or a string that indicates a relative path to the resource. In the latter case a `Resource` instance is constructed that has the same library as the resource.
- **dont_bundle** – Don’t bundle this resource in any bundles (if bundling is enabled).
- **minified** – optionally, a minified version of the resource. The argument is a `Resource` instance, or a string that indicates a relative path to the resource. In the latter case a `Resource` instance is constructed that has the same library as the resource.
**mode** *(mode)*
Get Resource in another mode.
If the mode is `None` or if the mode cannot be found, this `Resource` instance is returned instead.
**Parameters**
- **mode** – a string indicating the mode, or `None`.
**need** *(slots=None)*
Declare that the application needs this resource.
If you call `.need()` on `Resource` sometime during the rendering process of your web page, this resource and all its dependencies will be inserted as inclusions into the web page.
**Parameters**
- **slots** – an optional dictionary mapping from `Slot` instances to `Resource` instances. This dictionary describes how to fill in the slots that this resource might depend on (directly or indirectly). If a slot is required, the dictionary must contain an entry for it.
---
**class** `fanstatic.Slot` *(library, extension, depends=None, required=<object object>, default=None)*
A resource slot.
Sometimes only the application has knowledge on how to fill in a dependency for a resource, and this cannot be known at resource definition time. In this case you can define a slot, and make your resource depend on that. This slot can then be filled in with a real resource by the application when you `.need()` that resource (or when you need something that depends on the slot indirectly).
**Parameters**
- **library** – the `Library` this slot is in.
- **extension** – the extension of the slot, for instance `.js`. This determines what kind of resources can be slotted in here.
- **required** – a boolean indicating whether this slot is required to be filled in when a resource that depends on a slot is needed, or whether it’s optional. By default filling in a slot is required.
- **depends** – optionally, a list of resources that this slot depends on. Resources that are slotted in here need to have the same dependencies as that of the slot, or a strict subset.
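The slot-filling rules (same extension, dependencies a subset of the slot’s) can be sketched as a small validation function; the dict shapes and the function name are hypothetical (see also `SlotError` below):

```python
# Sketch of slot-fill validation: the filling resource must share the
# slot's extension, and its dependencies must be a subset of the
# slot's dependencies.

def check_slot_fill(slot, resource):
    """slot/resource are dicts: {"ext": ".js", "depends": set_of_names}"""
    if resource["ext"] != slot["ext"]:
        raise ValueError("extension mismatch")
    if not resource["depends"] <= slot["depends"]:
        raise ValueError("resource requires dependencies the slot lacks")
    return True
```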
---
**class** `fanstatic.Group` *(depends)*
A resource used to group resources together.
It doesn’t define a resource file itself, but instead depends on other resources. When a Group is depended on, all the resources grouped together will be included.
**Parameters**
- **depends** – a list of resources that this resource depends on. Entries in the list can be `Resource` instances, or `Group` instances.
**need** *(slots=None)*
Need this group resource.
If you call `.need()` on `Group` sometime during the rendering process of your web page, all dependencies of this group resources will be inserted into the web page.
**Parameters**
- **slots** – an optional dictionary mapping from `Slot` instances to `Resource` instances. This dictionary describes how to fill in the slots that this resource might depend on (directly or indirectly). If a slot is required, the dictionary must contain an entry for it.
---
**class** `fanstatic.NeededResources` *(versioning=False, versioning_use_md5=False, recompute_hashes=True, bottom=False, force_bottom=False, minified=False, debug=False, rollup=False, base_url=None, script_name=None, publisher_signature='fanstatic', bundle=False, resources=None)*

The current selection of needed resources.
The `NeededResources` instance maintains a set of needed resources for a particular web page.
**Parameters**
- **versioning** – If True, Fanstatic will automatically include a version identifier in all URLs pointing to resources. Since the version identifier will change when you update a resource, the URLs can both be infinitely cached and the resources will always be up to date. See also the `recompute_hashes` parameter.
- **versioning_use_md5** – If True, Fanstatic will use an md5 algorithm instead of an algorithm based on the last modification time of the Resource files to compute versions. Use md5 if you don’t trust your filesystem.
- **recompute_hashes** – If True and versioning is enabled, Fanstatic will recalculate hash URLs on the fly whenever you make changes, even without restarting the server. This is useful during development, but slower, so should be turned off during deployment. If set to False, the hash URLs will only be calculated once after server startup.
- **bottom** – If set to True, Fanstatic will include any resource that has been marked as “bottom safe” at the bottom of the web page, at the end of `<body>`, as opposed to in the `<head>` section. This is useful for optimizing the load-time of Javascript resources.
- **force_bottom** – If set to True and `bottom` is set to True as well, all Javascript resources will be included at the bottom of a web page, even if they aren’t marked bottom safe.
- **minified** – If set to True, Fanstatic will include all resources in minified form. If a Resource instance does not provide a minified mode, the “main” (non-named) mode is used.
- **debug** – If set to True, Fanstatic will include all resources in debug form. If a Resource instance does not provide a debug mode, the “main” (non-named) mode is used. An exception is raised when both the debug and minified parameters are True.
- **rollup** – If set to True (default is False) rolled up combined resources will be served if they exist and supersede existing resources that are needed.
- **base_url** – This URL will be prefixed in front of all resource URLs. This can be useful if your web framework wants the resources to be published on a sub-URL. By default, there is no base_url, and resources are served in the script root. Note that this can also be set with the set_base_url method on a `NeededResources` instance.
- **script_name** – The script_name is a fallback for computing library URLs. The base_url parameter should be honoured if it is provided.
- **publisher_signature** – The name under which resource libraries should be served in the URL. By default this is `fanstatic`, so URLs to resources will start with `/fanstatic/`.
- **bundle** – If set to True, Fanstatic will attempt to bundle resources that fit together into larger Bundle objects. These can then be rendered as single URLs to these bundles.
- **resources** – Optionally, a list of resources we want to include. Normally you specify resources to include by calling `.need()` on them, or alternatively by calling `.need()` on an instance of this class.
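The core behaviour of `NeededResources` - collect resources, order them so dependencies come first, render `<script>`/`<link>` tags - can be sketched with plain dicts (illustrative only; the real class also handles modes, bundling, slots and much more):

```python
# Sketch of needed-resource expansion and rendering: a depth-first walk
# guarantees that dependencies come earlier in the list than the
# resources that depend on them, matching the contract of resources().

def expand(resources):
    seen, ordered = set(), []

    def visit(res):
        for dep in res.get("depends", []):
            visit(dep)
        if res["path"] not in seen:
            seen.add(res["path"])
            ordered.append(res)

    for res in resources:
        visit(res)
    return ordered

def render(resources, base_url=""):
    tags = []
    for res in expand(resources):
        url = base_url + "/fanstatic/" + res["path"]
        if res["path"].endswith(".css"):
            tags.append('<link rel="stylesheet" href="%s" />' % url)
        else:
            tags.append('<script src="%s"></script>' % url)
    return "\n".join(tags)

jquery = {"path": "lib/jquery.js"}
plugin = {"path": "lib/plugin.js", "depends": [jquery]}
style = {"path": "css/style.css"}
```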
**has_base_url**()
Returns True if base_url has been set.
**has_resources**()
Returns True if any resources are needed.

**library_url**(library)
Construct the URL to a library.
This constructs a URL to a library, obeying the versioning and base_url configuration.
**Parameters**
- **library** – A `Library` instance.

**need**(resource, slots=None)
Add a particular resource to the needed resources.
This is an alternative to calling `.need()` on the resource directly.
**Parameters**
- **resource** – A `Resource` instance.
- **slots** – an optional dictionary mapping from `Slot` instances to `Resource` instances. This dictionary describes how to fill in the slots that the given resource might depend on (directly or indirectly). If a slot is required, the dictionary must contain an entry for it.

**render**()
Render needed resource inclusions.
This returns a string with the rendered resource inclusions (`<script>` and `<link>` tags), suitable for including in the `<head>` section of a web page.

**render_inclusions**(resources)
Render a set of resources as inclusions.
This renders the listed inclusions and their dependencies as HTML (`<script>` and `<link>` tags), suitable for inclusion on a web page.
**Parameters**
- **resources** – A list of `Resource` instances.

**render_into_html**(html)
Render needed resource inclusions into HTML.
**Parameters**
- **html** – A string with HTML to render the resource inclusions into. This string must have a `<head>` section.

**render_topbottom**()
Render resource inclusions separately into top and bottom fragments.
Returns a tuple of two HTML snippets, top and bottom. The top one is to be included in a `<head>` section, and the bottom one is to be included at the end of the `<body>` section. Only bottom safe resources are included in the bottom section, unless `force_bottom` is enabled, in which case all Javascript resources will be included in the bottom.

**render_topbottom_into_html**(html)
Render needed resource inclusions into HTML.
Only bottom safe resources are included in the bottom section, unless `force_bottom` is enabled, in which case all Javascript resources will be included in the bottom, just before the `</body>` tag.
**Parameters**
- **html** – The HTML string in which to insert the rendered resource inclusions. This string must have a `<head>` and a `<body>` section.

**resources**()
Retrieve the list of resources needed.
This returns the needed `Resource` instances. Resources are guaranteed to come earlier in the list than those resources that depend on them.
Resources are also sorted by extension.
**set_base_url**(url)
Set the base_url. The base_url can only be set (1) if it has not been set in the NeededResources configuration and (2) if it has not been set before using this method.
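The set-once contract of `set_base_url` can be sketched like this (a hypothetical class; how the real implementation reports a second assignment is not specified here, so this sketch simply raises):

```python
# Sketch of the set_base_url contract: the base URL may only be set if
# it was not already configured and has not been set before.

class NeededResourcesSketch:
    def __init__(self, base_url=None):
        self._base_url = base_url

    def has_base_url(self):
        return self._base_url is not None

    def set_base_url(self, url):
        if self.has_base_url():
            raise ValueError("base_url is already set")
        self._base_url = url
```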
```python
class fanstatic.LibraryRegistry(libraries)
```
A dictionary-like registry of libraries.
This is a dictionary that maintains libraries. A value is a `Library` instance, and a key is its library name.
Normally there is only a single global LibraryRegistry, obtained by calling `get_library_registry()`.
**Parameters**
- **libraries** – a sequence of libraries
**add**(library)
Add a Library instance to the registry.
**Parameters**
- **library** – The `Library` instance to add to the registry.
```python
class fanstatic.ConfigurationError
```
Impossible or illegal configuration.
```python
class fanstatic.UnknownResourceError
```
Resource refers to non-existent resource file.
```python
class fanstatic.UnknownResourceExtensionError
```
A resource has an unrecognized extension.
```python
class fanstatic.LibraryDependencyCycleError
```
Dependency cycles between libraries aren’t allowed.
A dependency cycle between libraries occurs when the file in one library depends on a file in another library, while that library depends on a file in the first library.
```python
class fanstatic.SlotError
```
A slot was filled in incorrectly.
If a slot is required, it must be filled in by passing an extra dictionary parameter to the `.need` method, containing a mapping from the required `Slot` to `Resource`.
When a slot is filled, the resource filled in should have the same dependencies as the slot, or a subset of the dependencies of the slot. It should also have the same extension as the slot. If this is not the case, it is an error.
### 9.3 Functions
```python
fanstatic.get_library_registry()
```
Get the global `LibraryRegistry`.
It gets filled with the libraries registered using the fanstatic entry point.
You can also add libraries to it later.
fanstatic.register_inclusion_renderer(extension, renderer, order=None)
Register a renderer function for a given filename extension.
Parameters
- **extension** – the filename extension to register the renderer for.
- **renderer** – a callable that should accept a URL argument and return a rendered HTML snippet for this resource.
- **order** – optionally, to control the order in which the snippets are included in the HTML document. If no order is given, the resource will be included after all other resource inclusions. The lower the order number, the earlier in the rendering the inclusion will appear.
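An extension-based renderer registry with ordering can be sketched as follows (a stdlib-only model; `render_urls` and the registry dict are made up for illustration and are not Fanstatic’s internals):

```python
# Sketch of an inclusion-renderer registry: renderers are looked up by
# filename extension; a missing order means "after everything else".

import os

_renderers = {}

def register_inclusion_renderer(extension, renderer, order=None):
    _renderers[extension] = (float("inf") if order is None else order,
                             renderer)

def render_urls(urls):
    """Render each URL with its extension's renderer, lowest order first."""
    def order_of(url):
        return _renderers[os.path.splitext(url)[1]][0]
    return [_renderers[os.path.splitext(u)[1]][1](u)
            for u in sorted(urls, key=order_of)]

register_inclusion_renderer(
    ".css", lambda url: '<link rel="stylesheet" href="%s" />' % url, order=10)
register_inclusion_renderer(
    ".js", lambda url: '<script src="%s"></script>' % url, order=20)
```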
fanstatic.set_resource_file_existence_checking(v)
Set resource file existence checking to True or False.
By default, this is set to True, so that resources that point to non-existent files will result in an error. We recommend you keep it at this value when using Fanstatic. An **UnknownResourceError** will then be raised if you accidentally refer to a non-existent resource.
When running tests it’s often useful to make fake resources that don’t really have a filesystem representation, so the Fanstatic tests temporarily set this to False. Inside a test for this particular feature, it can temporarily be set back to True.
Pre-packaged libraries
A lot of pre-packaged CSS and Javascript libraries are available on PyPI and are maintained by the Fanstatic community. These can be installed into your project right away using `easy_install`, `pip` or `buildout`. No more complicated installation instructions - just reuse a CSS or Javascript library like you reuse Python libraries.
Here’s a list of currently available libraries:
<table>
<thead>
<tr>
<th>package</th>
<th>library</th>
<th>source</th>
</tr>
</thead>
<tbody>
<tr>
<td>js.backbone</td>
<td>Backbone</td>
<td>Github</td>
</tr>
<tr>
<td>css.css3githubbuttons</td>
<td>CSS3 GitHub Buttons</td>
<td>GitHub</td>
</tr>
<tr>
<td>js.ace</td>
<td>Ajax.org Cloud9 Editor</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.amcharts</td>
<td>amCharts</td>
<td>GitHub</td>
</tr>
<tr>
<td>js.bootstrap</td>
<td>Bootstrap, from Twitter</td>
<td>GitHub</td>
</tr>
<tr>
<td>js.chosen</td>
<td>Chosen</td>
<td>?</td>
</tr>
<tr>
<td>js.ckeditor</td>
<td>CKEditor</td>
<td>?</td>
</tr>
<tr>
<td>js.classy</td>
<td>Classy - Classes for JavaScript</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.extjs</td>
<td>ExtJS: <a href="http://www.sencha.com/products/js/">http://www.sencha.com/products/js/</a></td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.gallerific</td>
<td>Gallerific</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_datalink</td>
<td>the jQuery plugin Datalink</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_datatables</td>
<td>the jQuery plugin DataTable</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_expandbox</td>
<td>jQuery.expandBox</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_form</td>
<td>the jQuery plugin Form</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_jgrowl</td>
<td>jGrowl</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_jqote2</td>
<td>jQuery.jqote2</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_json</td>
<td>the jQuery plugin jquery-json</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_jstree</td>
<td>the jQuery plugin JsTree</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_metadata</td>
<td>jQuery Metadata</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_qtip</td>
<td>jQuery.qtip</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_qunit</td>
<td>the jQuery plugin QUnit</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_slimbox</td>
<td>the jQuery plugin Slimbox</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_tablesorter</td>
<td>the jQuery plugin tablesorter</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_textchildren</td>
<td>the jQuery plugin Text Children</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_tinyscrollbar</td>
<td>the jQuery plugin Tiny Scrollbar</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_tools</td>
<td>jQuery tools</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_tooltip</td>
<td>the jQuery plugin Tooltip</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery_utils</td>
<td>jQuery Utils</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jquery</td>
<td>jQuery</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.jqueryui</td>
<td>jQuery UI</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.knockback</td>
<td>Knockback.js</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.knockout</td>
<td>Knockout</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.lesscss</td>
<td>less.js</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.lightbox</td>
<td>jquery lightbox</td>
<td>GitHub</td>
</tr>
<tr>
<td>js.modernizr</td>
<td>Modernizr</td>
<td>?</td>
</tr>
<tr>
<td>js.raphael</td>
<td>Raphael</td>
<td>?</td>
</tr>
<tr>
<td>js.spin</td>
<td>spin.js</td>
<td>?</td>
</tr>
<tr>
<td>js.sugar</td>
<td>Sugar</td>
<td>GitHub</td>
</tr>
<tr>
<td>js.tinymce</td>
<td>TinyMCE</td>
<td>Bitbucket</td>
</tr>
<tr>
<td>js.underscore</td>
<td>underscore.js</td>
<td>?</td>
</tr>
<tr>
<td>js.yui</td>
<td>the YUI Library</td>
<td>Bitbucket</td>
</tr>
</tbody>
</table>
Follow the instructions in the development section to learn how to package your own library.
Integration
Fanstatic can be integrated with a number of web frameworks:
- Zope/Grok through `zope.fanstatic`
- Pyramid through `pyramid_fanstatic`
- Flask through `Flask-Fanstatic`
- Django through `django_fanstatic`
In order to integrate Fanstatic with your web framework, make sure the following conditions are met:
- **base_url**: if your web framework supports virtual hosting, make sure to set the `base_url` attribute on the `NeededResources` object.
- **Error pages**: if your web framework renders error pages, make sure to clear the `NeededResources` before rendering the error page, in order to prevent resources from the original page ‘leaking’ onto the error page.
- **URL calculation**: Fanstatic can also serve non-Javascript and non-CSS resources (such as images) that you link to from the views in your application. In order to support this, we advise supporting the rendering of resource URLs from the view/page templates in your web framework.
12.1 Mailing list
Please talk to us on the Fanstatic mailing list: fanstatic@googlegroups.com
You can subscribe here: https://groups.google.com/group/fanstatic
You can also participate in the discussions through the Gmane group: gmane.comp.python.wsgi.fanstatic
12.2 IRC
Come to the #fanstatic IRC channel on FreeNode.
You want to contribute to Fanstatic? Great!
Please talk to us on our mailing list about your plans!
13.1 Sources
Fanstatic’s source code is maintained on bitbucket: http://bitbucket.org/fanstatic
You can check out fanstatic using Mercurial (hg); see the bitbucket documentation for more information.
Feel free to fork Fanstatic on bitbucket if you want to hack on it, and send us a pull request when you want us to merge your improvements.
13.2 Development install of Fanstatic
Fanstatic requires Python 2.6. We believe that the Fanstatic development installation is a good example of how to install a lot of useful tools into a project’s sandbox automatically; read on.
To install Fanstatic for development, first check it out, then run the buildout:
```bash
$ python bootstrap.py -d
$ bin/buildout
```
This uses Buildout. The buildout process will download and install all dependencies for Fanstatic, including development tools.
Don’t worry, that’s all you need to know about buildout to get going – you only need to run bin/buildout again if something changes in Fanstatic’s buildout.cfg or setup.py.
The -d option instructs buildout to use Distribute instead of Setuptools; it is optional.
13.3 Tests
To run the tests:
```bash
$ bin/py.test
```
This uses py.test. We love tests, so please write some if you want to contribute. There are many examples of tests in the test_*.py modules.
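For orientation, a test in this style looks like the following sketch; `normalize_url` is a made-up stand-in for illustration, not a Fanstatic API.

```python
# A minimal py.test-style test in the spirit of the test_*.py modules.
# normalize_url is a made-up stand-in, not part of Fanstatic.

def normalize_url(base, path):
    """Join a base URL and a path with exactly one slash between them."""
    return base.rstrip("/") + "/" + path.lstrip("/")


def test_normalize_url():
    assert normalize_url("http://example.com/", "/foo.js") == \
        "http://example.com/foo.js"
```

py.test discovers `test_*` functions in `test_*.py` modules automatically, so no registration is needed.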
13.4 Test coverage
To get a test coverage report:
```bash
$ bin/py.test --cov fanstatic
```
To get a report with more details:
```bash
$ bin/py.test --cov-report html --cov fanstatic
```
The results will be stored in a subdirectory `htmlcov`. You can point a web browser to its `index.html` to get a detailed coverage report.
13.5 pyflakes
To run `pyflakes`, you can type:
```bash
$ bin/pyflakes fanstatic
```
13.6 Building the documentation
To build the documentation using `Sphinx`:
```bash
$ bin/sphinxbuilder
```
If you use this command, all the dependencies will have been set up for Sphinx so that the API documentation can be automatically extracted from the Fanstatic source code. The docs source is in `doc`, the built documentation will be available in `doc/_build/html`.
13.7 Python with Fanstatic on the sys.path
It’s often useful to have a project and its dependencies available for import on a Python prompt for experimentation:
```bash
$ bin/devpython
```
You can now import fanstatic:
```python
>>> import fanstatic
```
You can also run your own scripts with this custom interpreter if you like:
```bash
$ bin/devpython somescript.py
```
This can be useful for quick experimentation. When you want to use Fanstatic in your own projects you would normally include it in your project’s `setup.py` dependencies instead.
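For example, a minimal `setup.py` declaring Fanstatic as a dependency might look like this (the project name and version are illustrative):

```python
# Minimal setup.py sketch declaring Fanstatic as a project dependency.
# Project name and version here are illustrative placeholders.
from setuptools import setup

setup(
    name='myproject',
    version='0.1',
    install_requires=[
        'fanstatic',
    ],
)
```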
13.8 Releases
The buildout also installs `zest.releaser` which can be used to make automatic releases to PyPI (using `bin/fullrelease`).
13.9 Pre-packaged libraries
If you want to make an existing JS library into a fanstatic package, use the fanstatic paster template from the fanstatic template package.
The pre-packaged libraries live in the http://bitbucket.org/fanstatic account.
In order to add a new library, ask one of the fanstatic administrators to create a repository for you. In the new repository, run fanstatic template and push your changes.
Register the newly created package on PyPI and add the fanstatic administrators (currently faassen, jw and janjaap-driessen) as owners. After that, add your library to the list of Pre-packaged libraries.
Guarded Modules: Adaptively Extending the VMM’s Privilege Into the Guest
Kyle C. Hale and Peter A. Dinda, Northwestern University
https://www.usenix.org/conference/icac14/technical-sessions/presentation/hale
This paper is included in the Proceedings of the 11th International Conference on Autonomic Computing (ICAC ‘14).
June 18–20, 2014 • Philadelphia, PA
Open access to the Proceedings of the 11th International Conference on Autonomic Computing (ICAC ‘14) is sponsored by USENIX.
Guarded Modules: Adaptively Extending the VMM’s Privileges Into the Guest
Kyle C. Hale and Peter A. Dinda
{k-hale, pdinda}@northwestern.edu
Department of Electrical Engineering and Computer Science
Northwestern University
Abstract
When a virtual machine monitor (VMM) provides code that executes in the context of a guest operating system, allowing that code to have privileged access to specific hardware and VMM resources can enable new mechanisms to enhance functionality, performance, and adaptability. We present a software technique, guarded execution of privileged code in the guest, that allows the VMM to provide this capability, as well as an implementation for Linux guests in the Palacios VMM. Our system, which combines compile-time, link-time, and runtime techniques, provides the module developer with the following guarantees: (1) A kernel module will remain unmodified and it will acquire privilege only when untrusted code invokes it through developer-chosen, valid entry points with a valid stack. (2) Any execution path leaving the module will trigger a revocation of privilege. (3) The module has access to private memory. The system also provides the administrator with a secure method to bind a specific module with particular privileges implemented by the VMM. This lays the basis for guaranteeing that only trusted code in the guest can utilize special privileges. We give two examples of guarded Linux kernel modules: a network interface driver with direct access to the physical NIC and an idle loop that uses instructions not usually permitted in a guest, but which can be adaptively selected when no other virtual core shares the physical core. In both cases only the guarded module has these privileges.
1 Introduction
By design, a virtual machine monitor (VMM) does not trust the guest operating system and thus does not allow it access to privileged hardware or VMM state. However, such access can allow new or better services for the guest, such as the following examples.
- Direct guest access to I/O devices would allow existing guest drivers to be used, avoid the need for virtual devices, and accelerate access when the device could be dedicated to the guest. In existing systems, the VMM limits the damage that a rogue guest could inflict by only using self-virtualizing devices [14, 19] or by operating in contexts such as HPC environments, where the guest is trusted and often runs alone [10].
- Direct guest access to the Model-Specific Registers (MSRs) that control dynamic voltage and frequency scaling (DVFS) would allow the guest’s adaptive control of these features to be used instead of the VMM’s whenever possible. Because applications running on the guest enjoy access to more rich information than the VMM does, there is reason to believe that guest-based control would perform better.
- Direct guest access to instructions that can halt the processor, such as monitor and mwait, would allow more efficient idle loops and spinlocks when the VMM determines that such halts can be permitted given the current configuration.
Since we cannot trust the guest operating system, to create such services we must be able to place a component into the guest operating system that is both tightly coupled with the guest and yet protected from it. In prior work [5], we presented GEARS, a framework for allowing the implementation of a service to span the guest and the VMM, even without guest cooperation. GEARS provides the ability to inject modules into the guest, but the injected code runs with the same privilege and the same hardware access as other, untrusted guest code. In this
paper, we extend this functionality to allow for the injected code to be endowed with privileged access to hardware and the VMM that the VMM selects, but only under specific conditions that preclude the rest of the guest from taking advantage of the privilege. We refer to this privileged injected code as a guarded module, and it is effectively a piece of the VMM running in the guest context.
Our technique leverages compile-time and link-time processing which identifies valid entry and exit points in the module code, including function pointers. These points are in turn “wrapped” with automatically generated stub functions that communicate with the VMM. Our current implementation of this technique applies to Linux kernel modules. The unmodified source code of the module is the input to the implementation, while the output is a kernel object file that includes the original functionality of the module and the wrappers. Conceptually, a guarded module has a border, and the wrapper stubs (and their locations) identify the valid border crossings between the guarded module, which is trusted, and the rest of the kernel, which is not.
A wrapped module can then be injected into the guest using the existing GEARs framework, or added to the guest voluntarily. The wrapper stubs and other events detected by the VMM drive the second component of our technique, a state machine that executes in the VMM. An initialization phase determines whether the wrapped module has been corrupted and where it has been loaded, and then protects it from further change. Attempted border crossings, either via the wrapper functions or due to interrupt/exception injection, are caught by the VMM and validated. Privilege is granted or revoked on a per-virtual core basis. Components of the VMM that implement privilege changes are called back through a standard interface, allowing the mechanism for privilege granting/revoking to be decoupled from the mechanism for determining when privilege should change. The privilege policy is under the ultimate control of the administrator, who can determine the binding of specific guarded modules with specific privilege mechanisms.
Our contributions are as follows:
- We describe the design of the joint compile-time and run-time guarded module mechanism.
- We describe the implementation of the design for supporting guarded Linux modules in the context of the Palacios VMM [11, 9]. Our implementation is publicly available within the Palacios codebase.
- We evaluate the performance of our implementation, independent of the service and the privilege.
- We extend Palacios with a privilege mechanism, a PCI device passthrough capability that can dynamically acquire and release privilege, and then demonstrate passthrough NIC access using a guarded module that drives this mechanism. Only the module has access to the NIC.
- We extend Palacios with a second privilege mechanism, selectively-enabled access to the monitor and mwait instructions, and then demonstrate adaptive use of these instructions in a guarded module. Only the module has access to the instructions and can halt the physical core using them.
2 Related work
Process Isolation Protecting trusted applications from an untrusted OS has recently become an active area of research. Overshadow [3] first showed that hardware virtualization techniques can be used to ensure control-flow, data, and address space integrity for a process running in the guest. TrustVisor [17] extended this idea with a much smaller trusted computing base (TCB). Flicker [18] uses nascent hardware support to effectively protect trusted applications. XOMOS [13] achieves the same goal, albeit with a new ABI and an ISA that has not yet been implemented in real hardware. InkTag [6] and Virtual Ghost [4] both aim to further defend these trusted applications from a small subset of potential Iago attacks [2], a new class of attacks in which a malicious kernel crafts return values from system services to trick a trusted application into following a code path intended by the attacker. However, these systems not only lack support for trusted kernel components, they also leverage existing protection domains and do not consider the protection of a trusted component from attacks originating in the same address space.
Kernel-space Isolation A large portion of previous work on kernel-space isolation is intended for isolating an entire kernel from untrusted, external components. LeVasseur’s work on using virtual machines as vehicles for commodity driver reuse and fault isolation [12] shows promise, but these techniques involve using driver code residing in a completely separate virtual machine. Swift showed, with Nooks [22], that code wrappers can isolate faulty code in Linux kernel extensions, improving the reliability of the core kernel. While Nooks provides an illustrative example of defining boundaries between driver and kernel code, it requires modifications to the kernel in which the drivers reside. Our system requires no such modifications. Further, Nooks does not
consider the situation in which a trusted module/extension requires protection from an untrusted kernel—our primary area of concern.
Both LXFI [16] and SecVisor [20] explore isolation in terms of guaranteeing kernel integrity. LXFI mitigates the potential for privilege escalation attacks against kernels by requiring that programmers annotate their modules. SecVisor insulates kernels from untrusted code by only allowing VMM-authorized code to execute, preventing a broad class of code-injection attacks against the kernel. Protecting the kernel against both malicious attacks and faulty software components are important problems, but they are orthogonal to our concerns. Our system guarantees the integrity of kernel modules that enjoy both a higher level of trust and privilege than the rest of the OS.
**VM Introspection** There have been several examples of leveraging the guest-host relationship to improve VM monitoring and resource management, especially in the context of autonomic computing [26, 15, 23, 7]. However, as far as we are aware, the only existing use case for trusted, isolated components within a guest kernel is for security monitors in which the only protected state is the code and data of the monitor itself, not higher-privilege state such as that required to access the hardware such as we outlined in Section 1.
IntroVirt [8] allows a VMM to invoke code in the guest, but does not deal with enforcing separate levels of trust within the same guest.
SYRINGE [1] provides a mechanism by which secure monitoring code can leverage functions in an untrusted guest. This system employs a secure VM along with an untrusted VM. When the monitoring code in the secure VM needs to call a function in the untrusted VM, the hypervisor forwards the call, managing control-flow and data integrity such that the secure VM is not compromised. However, this system is more akin to a secure, cross-core RPC facility that does not address border crossings within the same address space—a major component of our work.
Secure in-VM monitoring, or SIM [21], addresses performance issues raised by previous VM introspection techniques by allowing monitoring code to run directly in the guest while ensuring the monitor’s integrity. While SIM touches on the border crossings that are our focus, it largely sidesteps the issue by using a completely separate address space for the trusted monitor code. We do not have this option as we seek to guard modules that reside in the same address space as the untrusted kernel.
As far as we are aware, the guarded module system we present is the first of its kind that guarantees both control-flow and data integrity for modules that share the same address space as an untrusted OS kernel. Guarded modules require no specialized hardware and no modifications to the guest OS in which they execute.
### 3 Trust and threat models; invariants
We assume a completely untrusted guest kernel. A developer will add to the VMM selective privilege mechanisms that are endowed with the same level of trust as the rest of the core VMM codebase. A module developer will assume that the relevant mechanism exists. The determination of whether a particular module is allowed access to a particular selective privilege mechanism is made at run-time by an administrator. The central relationship we are concerned with is between the untrusted guest kernel and the module. A compilation process transforms the module into a guarded module. This then interacts with run-time components to maintain specific invariants in the face of threats from the guest kernel.
**Control-flow integrity** The key invariant we provide is that the privilege on a given virtual core will be enabled if and only if that virtual core is executing within the code of the guarded module and the guarded module was entered via one of a set of specific, agreed-upon entry points. The privilege will be disabled whenever control flow leaves the module, including for interrupts and exceptions.
The guarded module can interact freely with the rest of the guest kernel. In particular, it can call other functions and access other data within the guest. A given call stack might interleave guarded module and kernel functions, but the system guards against attacks on the stack as part of maintaining the invariant.
A valid entry into the guarded module is not checked further: our system does not guard against attacks based on function arguments or return values, such as Iago attacks, so the module author must validate these himself. Note, however, that the potential damage of performing this validation incorrectly is limited to the specific privilege the module has.
**Code integrity** Disguising the module’s code is not a goal of our system. The guest kernel can read and even write the code of the guarded module. However, any modifications of the code by any virtual core will be caught and the privilege will be disabled for the remainder of the module’s lifetime in the kernel. The identity of the module is determined by its content, and module
insertion is initiated external to the guest with a second identifying factor, guarding against the kernel attempting to spoof or replay a module insertion.
**Data integrity** Data integrity, beyond the registers and the stack, is managed explicitly by the module. The module can request private memory as a privilege. On a valid entry, the memory is mapped and is usable, while on departing the module, the memory is unmapped and rendered invisible and inaccessible to the rest of the kernel.
## 4 Design and implementation
The specific implementation of guarded modules we describe in this paper applies to Linux kernel modules. Our implementation fits within the context of the Palacios VMM and takes advantage of code generation and linking features of the GCC and GNU binutils toolchains. The VMM-based elements leverage functionality commonplace in modern VMMs, and thus could be readily ported to other VMMs. The code generation and linking aspects of our implementation seem to us to be feasible in any C toolchain that supports ELF or a similar format. The technique could be applicable to other guest kernels, although we do assume that the guest kernel provides runtime extensibility via some form of load-time linking.
In our implementation, a guarded Linux kernel module can either be voluntarily inserted by the guest or involuntarily injected into the guest kernel using the GEARS framework. The developer of the module needs to target the specific kernel he wants to deploy on, exactly as in creating a Linux kernel module in general.
The guarded module is a kernel module within the guest Linux kernel that is allowed privileged access to the physical hardware or to the VMM itself. The nature of this privilege, which we will describe later, depends on the specifics of the module. We refer to the code boundary between the guarded module and the rest of the guest kernel as the border.
**Border crossings** consist of control flow paths that traverse the border. A **border-out** is a traversal from the module to the rest of the kernel, of which there are three kinds. The first, a **border-out call** occurs when a kernel function is called by the guarded module, while the second, a **border-out ret**, occurs when we return back to the rest of the kernel. The third, a **border-out interrupt** occurs when an interrupt or exception is dispatched. A **border-in** is a traversal from the rest of the kernel to the guarded module. There are similarly three forms here. The first, a **border-in call** consists of a function call from the kernel to a function within the guarded module, while the second, a **border-in ret** consists of a return from a border-out call, and the third, a **border-in rti** consists of a return from a border-out interrupt. Valid border-ins should raise privilege, while border-outs should lower privilege. Additionally, any attempt to modify the module should lower privilege.
The VMM contains a new component, the **border control state machine**, that determines whether the guest has privileged access at any point in time. The state machine also implements a registration process in which the injected guarded module identifies itself to the VMM and is matched against validation information and desired privileges. This allows the administrator to decide which modules, by content, are allowed which privileges. After registration, the border control state machine is driven by hypercalls from the guarded module, exceptions that occur during the execution of the module, and by interrupt or exception injections that the VMM is about to perform on the guest.
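The core logic of the border control state machine can be sketched as follows. This is an illustrative Python model, not the Palacios C implementation; the state names, event names, and the `valid_sites` registration set are our own inventions.

```python
class BorderControl:
    """Toy model of the border control state machine (illustrative only)."""
    UNPRIVILEGED, PRIVILEGED = "unprivileged", "privileged"

    def __init__(self, valid_sites):
        # Guest addresses of hypercall instructions, recorded at registration.
        self.valid_sites = set(valid_sites)
        self.state = self.UNPRIVILEGED

    def on_hypercall(self, kind, site):
        # A privilege change is honored only from a registered call site;
        # anything else is suspicious activity, and the default reaction
        # is simply to lower privilege and continue.
        if site not in self.valid_sites:
            self.state = self.UNPRIVILEGED
            return False
        if kind in ("border_in_call", "border_in_ret"):
            self.state = self.PRIVILEGED    # valid border-in: raise privilege
        else:
            self.state = self.UNPRIVILEGED  # border-out: lower privilege
        return True
```

The real state machine is additionally driven by VMM-side events (interrupt injections, exceptions, page faults), which this sketch omits.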
The VMM detects attempted border crossings jointly through its interrupt/exception mechanisms and through hypercalls in special code added to the guarded module as part of our compilation process. Figure 1 illustrates how the two interact.
### 4.1 Compile-time
Our compilation process, **Christoization**, automatically wraps an existing kernel module with new code needed to work with the rest of the system. Two kinds of wrappers are generated. **Exit wrappers** are functions that interpose on the calls from the guarded module to the rest of the kernel. An exit wrapper, added using link-time processing, signals the VMM by a hypercall to lower privilege just before the underlying function call is made. When the function returns, it signals the VMM to validate the stack and raise privilege. **Entry wrappers** are functions that interpose on calls from the kernel into the guarded module. Entry wrappers, which are introduced by source preprocessing, use hypercalls to signal the VMM to raise privilege when called, and then lower privilege when the call returns to the kernel. The precise positions of the hypercall instructions in the wrappers are used by the VMM to validate the requests.
We designed our compile-time tool chain so that module developer effort is minimized when generating a guarded module. The requisite knowledge and materials are the same as what would be required of a developer writing a Linux kernel module. The necessary inputs to our toolchain are the guest Linux Makefile and kernel headers, as well as the source and Makefile for the module to be Christoized. Additionally, the privilege names required by the module are passed as command-line parameters. Access to the guest Linux source tree may also be required if the developer wishes to use external functions that use non-standard calling conventions.

---

¹Named after the famed conceptual artist, Christo, who was known for wrapping large objects such as buildings and islands in fabric.
The first stage of the Christoization process is module source analysis. We scan the source files of the module, looking for functions that are assigned as callbacks. These functions represent entry points into the module, as the kernel will invoke them asynchronously. In order to effectively identify all of these functions, we must run a preprocessing pass over the module to make sure that external inlined functions and macros are accounted for. Once the entry callbacks are identified, we must search the source for the function that the module developer registers using Linux’s `module_init` macro. This function will serve as the initial gateway into the module and must be intercepted by the VMM.
In the source annotation stage, each entry callback assignment in the source is changed to a macro that will expand to an entry wrapper function particular to that callback. These wrappers are added to the source file automatically and are depicted in Figure 2. The key idea here is that a hypercall is inserted both before and after the call to the original entry point. The remaining instructions are there to preserve the environment in such a way that the original function is not aware that it has been wrapped. The `module_init` routine is then similarly wrapped with a registration hypercall that notifies the VMM when it has been inserted into the guest kernel.
The linker wrapping stage takes the output of the annotation stage (a compiled object) and identifies undefined function references. These represent exits to the kernel. They are wrapped with exit wrappers, which are assembly stubs similar to entry wrappers. Exit wrappers lower privilege before the original call and raise it on return. They are added using `ld`’s function wrapping capability. The result of this linking step is that the module’s original unresolved external references are resolved to the exit wrappers, while the exit wrappers contain references to the original unresolved symbols. As a result, any external call from the original module goes through an exit wrapper.
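The symbol-renaming rule behind `ld`'s function wrapping can be summarized in a few lines. The sketch below simulates GNU ld's `--wrap=sym` semantics in Python; it is a mental model of the linking step, not part of the toolchain.

```python
def wrap_resolve(symbol, wrapped_symbols):
    # GNU ld's --wrap=sym rule: an undefined reference to sym resolves to
    # __wrap_sym (here, the exit wrapper), while a reference to __real_sym
    # inside the wrapper resolves to the original sym in the kernel.
    for s in wrapped_symbols:
        if symbol == s:
            return "__wrap_" + s
        if symbol == "__real_" + s:
            return s
    return symbol  # symbols not being wrapped resolve normally
```

Applying this rule to every undefined external reference of the module is what reroutes all border-out calls through exit wrappers.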
```
entry_wrapped:
    popq   %r11
    pushq  %rax
    movq   $border_in_call, %rax
    vmmcall                        # (a) hypercall: request privilege raise
    popq   %rax
    callq  entry
    pushq  %rax
    movq   $border_out_ret, %rax
    vmmcall                        # (b) hypercall: request privilege lower
    popq   %rax
    pushq  %r11
    ret                            # (to rest of kernel)
```

Figure 2: An entry wrapper for a valid entry point. Exit wrappers are similar, except they invoke border-out on a call, and border-in after returning.

The final stage of the Christoization process is metadata generation. Here, information collected in the previous stages is aggregated into a formatted file with which the administrator can later register the guarded module. The essential metadata consists of the module's name, its required privileges, and the offsets in the compiled object of the identified valid entry points. This list can later be further restricted or expanded by the module developer. Additionally, to ensure module integrity at load-time, a cryptographic content hash of the code segment is performed and recorded. This metadata is later passed by the administrator to the VMM during the guarded module registration process, and it is used from then on by the border control state machine to validate the hypercalls and other events it receives.
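The metadata produced by this final stage might look roughly like the following sketch. The field names and the choice of SHA-256 are our assumptions; the paper does not specify the exact file format or hash function.

```python
import hashlib

def make_metadata(name, text_segment, entry_offsets, privileges):
    # Aggregate what the earlier stages collected: module name, requested
    # privileges, offsets of valid entry points in the compiled object,
    # and a content hash of the code for load-time integrity checking.
    return {
        "name": name,
        "privileges": list(privileges),
        "entry_offsets": sorted(entry_offsets),
        "text_length": len(text_segment),
        "text_hash": hashlib.sha256(text_segment).hexdigest(),
    }
```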
### 4.2 Run-time
The run-time element of our system is based around the border control state machine. As Figure 1 illustrates, the state machine is driven by hypercalls originating from the guarded module, and by events that are raised elsewhere in the VMM. As a side-effect of the state machine’s execution, it generates callbacks to other components of the VMM that implement specific privilege changes, notifying them when valid privilege changes occur. The state machine also handles the initialization of a guarded module and its binding with these other parts of the VMM. We now describe guarded module execution with respect to the state machine.
**Module initialization** The guarded module is injected into the guest, either voluntarily by the user, or involuntarily by the administrator using GEARS's code injection facility. The module's initialization code immediately calls the guarded module registration function that was generated by Christoization. This function makes an initialization hypercall, providing a claimed hash as its argument. In response, the state machine validates the module using the metadata associated with the claimed hash. First, the address of the initialization hypercall instruction, combined with the known offset of that instruction within the text segment (stored in the metadata), allows the state machine to determine the load address of the module's text segment. The metadata also includes the length of the text section. With this information, the state machine then marks the text segment as unwritable in the shadow or nested page tables, making it impossible for the guest to change it. The next step is to compute the hash over the text segment memory and compare it to the hash stored in the metadata. If the hashes match, the state machine notifies the selective privilege-enabled component that privilege should be raised, transitions to the privileged state, enables interception of exceptions, and returns to the guest. At this point, the guarded module can complete the remainder of its initialization. In effect, module initialization is treated as the first border-in call.
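The load-address computation and integrity check during registration amount to the following. This is a simplified model; `read_guest_mem` and the metadata field names are hypothetical.

```python
import hashlib

def validate_module(hypercall_addr, meta, read_guest_mem):
    # The guest address of the init hypercall, minus that instruction's
    # known offset within the text segment, gives the module load address.
    load_addr = hypercall_addr - meta["init_hypercall_offset"]
    # Hash the loaded text and compare against the registered metadata.
    text = read_guest_mem(load_addr, meta["text_length"])
    ok = hashlib.sha256(text).hexdigest() == meta["text_hash"]
    return load_addr, ok
```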
**Border-in call to border-out ret** A valid entry into the guarded module results in a hypercall from the entry wrapper (Figure 2(a)) that requests a privilege raise. The address of this hypercall instruction is then validated against the list of addresses where such instructions were placed, which is stored in the metadata. If it is in the list, the state machine invokes a privilege-raising callback, and transitions to the privileged state. Before returning, it also enables interception of exceptions. Before exiting from a valid entry, the entry wrapper similarly invokes another hypercall (Figure 2(b)), which requests a lowering of privilege. When privilege is lowered, exception interception is returned to its nominal state.
**Border-out call to border-in ret** A call from the guarded module to the rest of the kernel results in a hypercall from the exit wrapper that requests a lowering of privilege. As a side-effect of lowering privilege, exception interception is returned to its nominal state. When the call returns, a second hypercall requests a raising of privilege. After sanity checking the address against the metadata, privilege is raised, and exception and interrupt interception are again enabled.
**Border-out int to border-in rti** The purpose of intercepting exceptions that occur when executing with privilege is to ensure that we can lower privilege when these events trigger an interrupt handler dispatch and raise it once execution resumes in the guarded module. More generally, we must trap any switch from the guarded module code to kernel context. When the guest is not executing in the guarded module, nominal exception handling is sufficient. Our handler for exception intercepts simply causes the VMM to re-inject the exceptions alongside its normal injection of interrupt events.
Because we need to be aware of every interrupt/exception dispatch, we have modified the Palacios VM entry code so that, just before such an entry, if the guest is executing with privilege, we determine if an interrupt or exception injection will occur on the entry. If so, we lower privilege, switch back to nominal interception of exceptions, and enable interception of the `rti` instruction, which will be executed when the interrupt or exception handler completes. We also note the current `%rip` and other information related to this interrupt dispatch.

---

²A direct comparison of the text segment content is also possible.
At this point, we allow the VM entry to complete, and interrupt dispatch ensues. We emulate `rti` instructions when they occur, looking for any `rti` that will return control to the instruction at which the original interrupt/exception was injected. When we discover a match, we raise privilege, re-enable exception interception, disable `rti` interception, and resume execution with privilege in the guarded module.
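The `rti`-matching logic reduces to remembering the interrupted `%rip` and comparing it against each emulated `rti`. A minimal model (names are our own):

```python
class RtiTracker:
    def __init__(self):
        self.pending_rip = None  # %rip at which the interrupt was injected

    def on_inject(self, guest_rip):
        # Privilege is lowered here and rti interception is enabled.
        self.pending_rip = guest_rip

    def on_rti(self, return_rip):
        # Only the rti returning to the interrupted instruction restores
        # privilege; rtis from nested handlers do not match.
        if return_rip == self.pending_rip:
            self.pending_rip = None
            return True
        return False
```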
We note that one privilege that could be granted to a module is the ability to disable interrupts while it executes. In this case, this code path could be entirely avoided.
**Internal calls** The entry wrapper shown in Figure 2 and the exit wrappers are linked such that they are only invoked on border crossings. Calls internal to the guarded module do not have any additional overhead. The same applies for calls internal to the kernel.
**Nesting and stack checking** Although it is convenient to think of (and generate code for) border crossings in matched pairs, it is important to realize that an execution path may involve multiple border crossings. For example, the kernel might invoke a callback function on the module, which requires privilege, but which in turn calls a kernel function, which *should not* have privilege, and that subsequently makes another callback into the module, which *should*. The sequence of events for that example would be: border-in call, border-out call (\*), border-in call, border-out ret, border-in ret (\*\*), border-out ret. While border-ins and border-outs must eventually all be matched, they can nest. This nesting of border crossings introduces an opportunity to subvert the guarded module through the stack. Our primary concern is the protection of the `ret` in the border-out wrapper. If the border-out call (\*) had its return address modified on the stack, the border-in ret (\*\*) would return to that address with privilege raised!
To address this, the border control state machine tracks the nesting level and the stack state, and validates the stack state on any border-in. When a border-in occurs with a nesting level of zero, the state machine captures the starting point of this "first border-in" stack frame (i.e., `%rsp` and `%rbp`). When a border-out occurs, the state machine captures the ending point of this "last border-out" stack frame, and computes and stores a hash of the stack content from the first entry to this last exit. On any border-in whose nesting level is greater than zero, the actual stack is again hashed and compared with the last border-out hash. If they do not match, privilege is not granted.
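The stack validation can be modeled as follows, treating the stack as a byte array with the stack pointer growing toward lower addresses. SHA-256 and the method names are our assumptions.

```python
import hashlib

class StackGuard:
    def __init__(self):
        self.first_sp = None   # frame start captured at the first border-in
        self.out_hash = None   # hash recorded at the last border-out

    def first_border_in(self, sp):
        # Nesting level zero: remember where the guarded frames begin.
        self.first_sp = sp

    def border_out(self, stack, sp):
        # Hash the stack content between the first entry and this exit.
        self.out_hash = hashlib.sha256(stack[sp:self.first_sp]).hexdigest()

    def nested_border_in(self, stack, sp):
        # Nesting level > 0: grant privilege only if the stack is untouched.
        return hashlib.sha256(stack[sp:self.first_sp]).hexdigest() == self.out_hash
```

A tampered return address anywhere in the guarded region changes the hash, so a nested border-in over a modified stack is refused privilege.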
**Deinitialization** The Christoization processing inserts a deinitialization hypercall as the last thing the module executes. After validating the hypercall’s location, the state machine lowers privilege, removes any special interception that is active, and remaps the module with guest-specified writability. Privilege will not change again unless the initialization hypercall is executed.
**Suspicious activity** The state machine detects suspicious activity by noting privilege changing hypercalls at invalid locations, shadow or nested page faults indicating attempts to write the module code, and stack hash mismatches. Our default behavior is simply to lower privilege when these occur, and continue execution. Other reactions are, of course, possible.
## 5 Evaluation
We now consider the costs of the guarded module system, independent of any specific guarded module that might drive it, and any selective privilege-enabled VMM component it might drive. We focus on the costs of border crossings and their breakdown. The most important contributors to the costs are VM exit/entry handling and the stack validation mechanism.
All measurements were conducted on a Dell PowerEdge R415. This is a dual-socket machine, each socket comprising a quad-core, 2.2 GHz AMD Opteron 4122,
giving a total of 8 physical cores. The machine has 16 GB of memory. It runs Fedora 15 with a stock Fedora 2.6.38 kernel. Our guest environment uses a single virtual core that runs a BusyBox environment based on Linux kernel 2.6.38. The guest runs with nested paging, using 2 MB page mappings, with DVFS control disabled.
Figure 3 illustrates the overheads in cycles incurred at runtime. All cycle counts were averaged over 1000 samples. There are five major components to the overhead. The first is the cost of initiating a callback to lower or raise privilege. This cost is very small, at around 100 cycles. The second cost, labeled "hypercall handling", denotes the cycles spent inside the hypercall handler itself, not including entry validations, privilege changes, or other processing involved with a VM exit. This cost is also quite small, typically under 100 cycles. "Entry point lookup" represents the cost of a hash table lookup, which is invoked on border-ins when the instruction pointer is checked against the valid entry points that were registered during guarded module initialization. The cost for this lookup is roughly 240 cycles. "Exit handling" is the time spent in the VMM handling the exit outside of guarded module runtime processing; this is essentially the common overhead incurred by any VM exit. Finally, "stack checking" denotes the time spent ensuring control-flow integrity by validating the stack. This component raises the cost of a border crossing by 5000 cycles, mostly due to stack address translations and hash computations. Border-in calls are less affected due to the initial translation and recording of the entry stack pointer, while border-out rets are unaffected. Reducing the cost of this validation is the subject of on-going work.
The guarded module codebase consists of the compile-time tools, which comprise 223 lines of Perl and 260 lines of Ruby, and the run-time elements added to the VMM. The latter are generally concentrated in an optional extension of 1007 lines of C that could be ported to other VMMs. Some changes to the VMM core were made to facilitate interrupt and exception interception and dispatch to the GEARS guarded module system. These include 178 lines of C.
## 6 Examples
We now consider two examples of using the guarded module functionality, drawn from the list in the introduction. In the first example, selectively-privileged PCI passthrough, the guarded module, and only the guarded module, is given direct access to a specific PCI device. We illustrate the use of this capability via a guarded version of a NIC driver. In our second example, selectively-privileged mwait, the guarded module, and only the guarded module, is allowed to use the mwait instruction. We illustrate the use of this capability via a guarded module that adaptively replaces the kernel idle loop with a more efficient mwait loop when it is safe to do so.
We conducted all measurements in this section with the configuration described in Section 5.
### 6.1 Selectively privileged PCI passthrough
Like most VMMs, Palacios has hardware passthrough capabilities. Here, we use its ability to make a hardware PCI device directly accessible to the guest. This consists of a generic PCI front-end virtual device (“host PCI device”), an interface it can use to acquire and release the underlying hardware PCI device on a given host OS (“host PCI interface”), and an implementation of that interface for a Linux host.
A Palacios guest’s physical address space is contiguously allocated in the host physical address space. Because PCI device DMA operations use host physical addresses, and because the guest programs the DMA engine using guest physical addresses it believes start at zero, the DMA addresses the device will actually use must be offset appropriately. In the Linux implementation of our host PCI interface, this is accomplished using an IOMMU: acquiring the device creates an IOMMU page table that introduces the offset. As a consequence, any DMA transfer initiated on the device by the guest will be constrained to that guest’s memory. A DMA can then only be initiated by programming the device, which is restricted to the guarded module. This restriction also prevents DMA attacks on the module that might originate from the guest kernel.
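The address arithmetic that the IOMMU page table implements here is simply a bounded offset. A sketch (names are ours):

```python
def iommu_translate(guest_pa, region_base_hpa, region_len):
    # The guest programs DMA with guest-physical addresses it believes
    # start at zero; the IOMMU adds the host offset and rejects anything
    # outside the guest's contiguous region, confining DMA to that
    # guest's memory.
    if not (0 <= guest_pa < region_len):
        raise ValueError("DMA outside guest memory: blocked by IOMMU")
    return region_base_hpa + guest_pa
```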
A PCI device is programmed via control/status registers that are mapped into the physical memory and I/O port address spaces through standardized registers called BARs. Each BAR contains a type, a base address, and a size. Palacios's host PCI device virtualizes the BARs (and other parts of the standardized PCI device configuration space). This lets the guest map the device as it pleases. For a group of registers mapped by a BAR into the physical memory address space, the mapping is implemented using the shadow or nested page tables to redirect memory reads and writes. For a group of registers mapped into the I/O port space, there is no equivalent to these page tables, and thus the mappings are implemented by I/O port read/write hooks. When the guest executes an IN or OUT instruction, an exit occurs, the hook is run, and the handler simply executes an IN or OUT to the corresponding physical I/O port. If the host and guest mappings are identical, the ports are not intercepted, allowing the guest to read/write them directly.
Direct guest access to network hardware is not a new idea. However, the focus of recent work in this area is on providing protection between guests [25, 24]. We allow protection of a VMM-provided driver within a guest.
We extended our host PCI device to support selective privilege; in the terminology of Section 4.2, it is now a selective privilege-enabled VMM component. In this mode of operation, virtualization of the generic PCI configuration space of the device proceeds as normal. However, at startup, BAR virtualization ensures that the address space regions of memory and I/O BARs are initially hooked to stub handlers. The stub handlers simply ignore writes and supply zeros for reads. This is the unprivileged mode. In this mode, the guest sees the device on its PCI bus, and can even remap its BARs as desired, but any attempt to program it will simply fail because the registers are inaccessible. In selectively privileged operation, the host PCI device also responds to callbacks for raising and lowering privilege. Raising privilege switches the device to privileged mode, which is implemented by remapping the registers in the manner described earlier, resulting in successful accesses to the registers. Lowering privilege switches back to unprivileged mode, and remaps the registers back to the stubs. Privilege changes happen on a per-core basis.
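The stub-versus-real register behavior can be captured in a few lines. This is a schematic model of the privileged/unprivileged BAR mapping, not Palacios code; the register dictionary is hypothetical.

```python
class SelectivePciBar:
    def __init__(self, device_regs):
        self.device_regs = device_regs  # hypothetical device registers
        self.privileged = False         # per-core privilege state

    def read(self, reg):
        # Unprivileged mode: stub handlers supply zeros for reads.
        return self.device_regs.get(reg, 0) if self.privileged else 0

    def write(self, reg, val):
        # Unprivileged mode: stub handlers silently ignore writes.
        if self.privileged:
            self.device_regs[reg] = val
```

Raising privilege corresponds to flipping `privileged`, which in the real system is implemented by remapping the BAR's registers in the shadow or nested page tables (or I/O hooks) rather than by a flag check on every access.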
While the above description is complex, it is important to note that only about 60 lines of code were needed to add selectively privileged operation to our existing PCI passthrough functionality. Combined with the rest of the guarded module system, the selectively privileged host PCI device permits fully privileged access to the underlying device within a guarded module, but disallows it otherwise.
**Making a NIC driver into a guarded module** As an example, we used the guarded module system to generate a guarded version of an existing NIC device driver within the Linux tree, specifically the Broadcom BCM5716 Gigabit NIC. No source code modifications were made to the driver or the guest kernel. We Christoized this driver, creating a kernel module that we can later inject into the untrusted guest. The border control state machine in Palacios pairs this driver with the selectively privileged PCI passthrough capability. Recall that Christoization is almost entirely automated, so the result is an unmodified device driver, executing in the guest, having direct access to the NIC, while nothing else in the guest does.
The NIC uses exactly one BAR to define a 32 MB region of the memory address space. Raising and lowering privilege amounts to editing the shadow or nested page tables to remap these addresses. Assuming 2 MB superpages and suitable alignment, the system will adjust 16 page table entries when changing privilege.
**Overheads** Compared to simply allowing privilege for the entire guest, a system that leverages guarded modules incurs additional overheads. Some of these overheads are system-independent, and were covered in Section 5. The most consequential component of these overheads is the cost of executing a border-in or border-out, each of which consists of a hypercall or exception interception (requiring a VM exit) or interrupt/exception injection detection (done in the context of an in-progress VM exit), a lookup of the hypercall's address, a stack check or record, a lookup to find the relevant privilege callback function, and then the cost of invoking that callback.
We now consider the system-dependent overhead for the NIC. There are two elements to this overhead: the cost of changing privilege and the number of times we need to change privilege for each unit of work (packet sent or received) that the module finishes. The cost of raising privilege for the NIC is 4800 cycles (2.2 μs), while lowering it is 4307 cycles (2.0 μs).
Combining the system-independent and system-dependent costs, we expect that a typical border crossing, assuming no stack checking, will consist of about 3000 cycles for VM exit/entry, 4000 cycles to execute the border control state machine, and about 4500 cycles to enable/disable access to the NIC. These 11500 cycles comprise 5.2 μs on this machine. Stack checking would add an average of about 4500 cycles, leading to 16000 cycles (7.3 μs).
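The arithmetic behind these totals checks out as follows (cycle figures taken from the text; at 2.2 GHz there are 2200 cycles per microsecond):

```python
vm_exit_entry = 3000    # VM exit/entry handling
state_machine = 4000    # border control state machine execution
nic_remap = 4500        # enable/disable access to the NIC (page remapping)

total = vm_exit_entry + state_machine + nic_remap
assert total == 11500
assert abs(total / 2200 - 5.2) < 0.05           # ~5.2 us at 2.2 GHz

with_stack_check = total + 4500                 # average stack-checking cost
assert with_stack_check == 16000
assert abs(with_stack_check / 2200 - 7.3) < 0.05  # ~7.3 us
```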
To determine the number of these border crossings per packet send or receive, we counted them while running the guarded module with a controlled traffic source (ttcp) that allows us to also count packet sends and/or receives. Dividing the counts gives us the average. There is variance because the NIC does interrupt coalescing.
Figure 4 shows the results of this analysis for the NIC. Sending requires on the order of 2 border crossings (privilege changes) per packet, while receiving requires on the order of 9 border crossings per packet. Note that many of the functions that constitute border crossings are actually leaf functions defined in the kernel. This indicates that we could further reduce the overall number of border crossings per packet by pulling the implementations of these functions into the module itself.

|            | Packet Sends | Packet Receives |
|------------|--------------|-----------------|
| Border-in  | 1.06         | 4.64            |
| Border-out | 1.06         | 4.64            |

Figure 4: Border crossings per packet send and receive for the NIC example.
### 6.2 Selectively privileged mwait
Recent x86 machines include a pair of instructions, monitor and mwait, that can be used for efficient synchronization among processor cores. The monitor instruction indicates an address range that should be watched. A subsequent mwait instruction then places the core into a suspended sleep state, similar to a hlt. The core resumes executing when an interrupt is delivered to it (like a hlt), or when another core writes into the watched address range (unlike a hlt). The latter allows a remote core to wake up the local core without the cost of an inter-processor interrupt (IPI). One example of such use is in the Linux kernel’s idle loop.
In Palacios, and other VMMs, we cannot allow an untrusted guest to execute hlt or mwait because the guest runs with physical interrupts disabled. A physical interrupt is intended to cause a VM exit followed by subsequent dispatch of the interrupt in the VMM. If an mwait instruction were executed in the guest under uncontrolled conditions, it could halt the core indefinitely. This precludes the guest using the extremely fast inter-core wakeup capability that mwait offers.
Under controlled conditions, however, letting the guest run mwait may be permissible. When no other virtual core is mapped to the physical core (so we can tolerate a long wait) and we have a watchdog that will eventually write the memory, the guest might safely run an mwait. Achieving these controlled conditions requires that we limit the execution of these instructions to code that the VMM can trust, and that this code only execute mwait when the VMM deems it safe to do so. Otherwise, a malicious guest could use an unrestricted ability to execute mwait to launch a denial-of-service attack on other VMs and the VMM. We enforce this protection with the guarded module mechanism: only the guarded module may execute mwait, and only when the VMM deems it safe.
A language for interoperability modeling and prediction
Johan Ullberg*, Pontus Johnson, Markus Buschle
KTH Royal Institute of Technology, School of Electrical Engineering, Department of Industrial Information and Control Systems, Stockholm, Sweden
ARTICLE INFO
Article history:
Received 15 November 2011
Received in revised form 22 July 2012
Accepted 1 August 2012
Available online 30 August 2012
Keywords:
Interoperability
Interoperability prediction
Information systems architecture
Interoperability modeling
Architecture analysis
ABSTRACT
Interoperability, defined as the satisfaction of a communication need between two or more actors, is a sought-after quality for enterprises in today’s competitive environment. For a decision maker, understanding the effects of a changing market place and understanding how to adapt to the new environment is essential. Sustainable interoperability is an approach where such dynamic environments are considered, including how to adapt to the new environments. This paper presents a modeling language for describing architectures from an interoperability perspective and a formalism for inferring the degree of interoperability from the architecture models, thus supporting sustainable interoperability.
The interoperability language is expressed as a Unified Modeling Language, UML, class diagram specifying classes, attributes, and relationships relevant for interoperability modeling. The class diagram is also augmented with a set of statements in the Object Constraint Language, OCL, supporting automated interoperability prediction.
© 2012 Elsevier B.V. All rights reserved.
1. Introduction
Interoperability is a sought-after quality for enterprises in today’s competitive environment. Interoperability has been approached from many different points of view and perspectives [1]; the concept of interoperability is, however, complex, and fundamental interoperability problems are still not well understood [2]. In this paper, interoperability is defined as “the satisfaction of a communication need between two or more actors”, a definition compatible with the widely adopted definitions, e.g. IEEE’s, “the ability of two or more systems or components to exchange information and to use the information that has been exchanged” [3]. Sustainable interoperability is an approach to interoperability where the ever-changing environment of an organization is considered. The organization needs to learn about changes, analyze their impact on the current communication needs and investigate how to adapt the system and network in order to ensure satisfaction of current and new communication needs [4]. By allowing automated prediction, the work presented in this article can facilitate sustainable interoperability and thereby enable swift responses to changes in the communication needs.
Information system architecture is an approach to information systems management that relies on models of the information systems and their environment. Instead of building information systems using trial and error, a set of models is proposed to predict the behavior and effects of changes to the system. The architecture models allow reasoning about the consequences of various scenarios and thereby support decision-making. The chosen architecture models must contain relevant information for the issue at hand. Therefore, there is a need for a tailored modeling language of the various aspects of interest for the decision maker. Most current system architecture frameworks, however, lack modeling languages that support interoperability prediction [1].
This paper presents an architecture-based formalism for predicting the interoperability between information systems. The formalism is expressed in terms of a class diagram of the Unified Modeling Language, UML [5], in the remainder of the article referred to as a metamodel. This metamodel contains classes, attributes and relationships relevant for creating business and information system architecture models from an interoperability perspective. In order to enable interoperability prediction, the metamodel is coupled with a set of rules that need to be asserted in order to achieve interoperability, i.e. satisfying a communication need. This rule set is formally expressed in the Object Constraint Language, OCL [6].
To provide an example of the proposed rule set for interoperability prediction, consider a case of telephone communication between two people. A customer wants to order merchandise over telephone from a sales employee at a retail company. Fig. 1 illustrates this setup together with the languages spoken by the individuals. From an interoperability perspective, one could be interested in ensuring the following two rules. Firstly, is there sufficient infrastructure for the information exchange? Secondly, will they understand the messages being passed between them, i.e. do they share a common language for information exchange? Based on the model, it is reasonable to believe that the infrastructure is in place in terms of their connection to the telephone grid. However, the communication will still fail since they do not share a language for communication. This model, through its underlying metamodel, thus contains relevant information for the task at hand, i.e. interoperability prediction. The interoperability metamodel presented in this paper is a more elaborate version of the one implicitly considered in this example.
* Corresponding author. Tel.: +46 8 790 68 23.
E-mail addresses: johantu@ics.kth.se (J. Ullberg), pj101@ics.kth.se (P. Johnson), markusb@ics.kth.se (M. Buschle).
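The two rules from the telephone example can be sketched in executable form. The following Python sketch is illustrative only: the class and attribute names loosely follow the metamodel of Section 4 (the paper's actual rules are expressed in OCL), and the concrete languages are hypothetical stand-ins for those shown in Fig. 1.

```python
# Illustrative sketch of the two interoperability rules from the
# telephone example. All names and the concrete languages are
# hypothetical simplifications of the metamodel in Section 4.

class Actor:
    def __init__(self, name, languages):
        self.name = name
        self.languages = set(languages)  # Languages the Actor can encode/decode
        self.mps = set()                 # Message Passing Systems the Actor uses

class MessagePassingSystem:
    def __init__(self, name):
        self.name = name

def connect(actor, mps):
    actor.mps.add(mps)

def interoperable(a, b):
    """Rule 1: a shared MPS provides the infrastructure for the exchange.
    Rule 2: a common Language lets the parties understand each other."""
    has_infrastructure = bool(a.mps & b.mps)
    has_common_language = bool(a.languages & b.languages)
    return has_infrastructure and has_common_language

phone_grid = MessagePassingSystem("telephone grid")
customer = Actor("customer", {"Swedish"})     # hypothetical language
sales = Actor("sales employee", {"English"})  # hypothetical language
connect(customer, phone_grid)
connect(sales, phone_grid)

# The infrastructure is in place, but there is no shared language,
# so the communication need is not satisfied.
print(interoperable(customer, sales))  # False
```

Adding a shared language to either actor would make `interoperable` return True, mirroring how a change to the architecture model changes the prediction.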
The contribution of this paper is two-fold. Firstly, the interoperability metamodel describing the classes, attributes and relationships necessary to create architecture models from an interoperability perspective is outlined. Secondly, a rule set describing interoperability requirements is presented, thus enabling automated interoperability prediction on architecture models. The work has two main delimitations. Firstly, the prediction does not cover the dynamic aspects of interoperability, i.e. the order of the information exchange. Secondly, it focuses on enabling factors and does not cover preventive aspects. Preventive aspects are various mechanisms that actively block communication; mainly various security mechanisms e.g. access control.
1.1. Evaluation criteria for prediction frameworks
When developing tools and methods for prediction, several quality criteria should be considered. Adapting the six requirements on decision calculus posed by [7] to prediction frameworks, the following four criteria can be stated: (1) Accuracy, i.e. that the models created based on the metamodel really are capable of predicting interoperability. (2) Cost of use, i.e. the effort needed to reach the goal, in this case interoperability prediction. This includes data collection, the creation of the architecture models and performing the predictions. (3) Cognitive complexity, i.e. the learning curve for applying the metamodel and rule set should be sufficiently low. Finally, (4) error control is concerned with the trade-off between accuracy and cost of use that is often necessary; it is important to be able to make an informed decision regarding this trade-off. Using these criteria, it is possible to evaluate the metamodel and rule set presented in this article. Such an evaluation will be performed in Section 5, below.
1.2. Outline
The next section discusses related works in the field of interoperability modeling and prediction. Section 3 describes the formalism that is used for interoperability prediction. The fourth section constitutes the main contribution of the paper. The section first presents a metamodel suitable for interoperability modeling and secondly the rule set needed for interoperability prediction. After this, Section 5 discusses the benefits of the presented formalism and finally, conclusions are drawn in Section 6.
2. Related works
The related works can be divided into two main categories, although not completely mutually exclusive. The first category is concerned with interoperability frameworks. Secondly, related work on assessing interoperability using maturity models or other approaches is considered.
Recently, several initiatives on interoperability have proposed interoperability frameworks targeted at structuring issues and concerns in quite different ways. The European Interoperability Framework in the eGovernment domain [8] defines three aspects of interoperability: semantic, technical and organizational. A similar approach was also used by the ATHENA Interoperability Framework (AIF), structuring interoperability issues and solutions at the three levels: conceptual, technical and applicable [9]. The Framework for Enterprise Interoperability [10] is another interoperability framework that focuses on barriers to interoperability. All these interoperability frameworks provide means to classify the interoperability problems and solutions. At the same time they lack the ability to describe the interoperability situations where the problems occur and are solved.
The Ontology of Interoperability (OoI) [11] prescribes a set of metamodels to describe interoperability from various viewpoints, mainly aiming at classifying various problems and decision alternatives. The OoI does, however, also provide a communication metamodel, aimed at describing architectures from an interoperability perspective. The metamodel described in this article uses several of the concepts of the OoI. The work presented in this article also provides a means to predict interoperability, something not offered by the OoI.
Several methods for assessing interoperability have previously been suggested. The Levels of Information Systems Interoperability (LISI) [12] uses a maturity model for assessing interoperability. The assessment in LISI is based on an assessment process and utilizes a scorecard method and interoperability metrics. Employing LISI would require more domain knowledge in the field of interoperability than the method presented in this paper. The same is true for several other similar approaches, such as Systems of Systems Interoperability (SoSI) [13] and Levels of Conceptual Interoperability Model (LCIM) [14].
The i-Score [15] is a methodology for quantitative interoperability assessment. The assessment is based on the concept of operational threads; a sequence of activities each supported by exactly one system. Such threads are used in order to calculate an i-Score. Compared to the metamodel and assessment methods in this paper, i-Score requires deeper understanding of the interoperability field and lacks the describing capabilities of the metamodel in this article.
Summarizing, much work has been done on classifying interoperability issues in various frameworks. There are also some approaches to measuring interoperability. However, these all require more knowledge in the field of interoperability than the approach in this paper, where the prediction is formalized in terms of OCL statements and thus can be automated, something of great value for sustainable interoperability where rapid response to changes in the environment is necessary [4]. Furthermore, none of the available approaches allow for a trade-off between prediction accuracy and cost through a probabilistic approach, as the work in this paper does.
3. A formalism for architecture prediction
The dominating notation for software and systems modeling today is the Unified Modeling Language (UML) [5]. Practically all major software architecture and design tools, as well as many system architecture tools, are based on or support UML modeling [16]. UML specifies several different model notations of which the class diagram is one of the most well-known [5].
The Object Constraint Language (OCL) is a formal language used to describe constraints on UML models. OCL expressions typically specify invariant conditions that must hold for the system being modeled, or queries over objects described in a model [6]. Such queries can be seen as gathering structural information from the model. Referring to the introductory example of Section 1, given a metamodel where actors and the languages these actors understand are modeled as separate entities, one relevant query could be "given two actors with a need to communicate, do these actors have a common language?". Such queries can be expressed in OCL. More precisely, OCL is capable of expressing prediction theories based on first-order logic, arithmetics and set theory [6].
UML and OCL together constitute the formalism used to express the metamodel for interoperability prediction presented in this article. In order to enable modeling from an interoperability perspective, a UML class diagram has been developed, i.e. level M1 in the MOF hierarchy, referred to as a metamodel. This metamodel contains classes, relationships and attributes relevant for modeling interoperability related aspects on the M0 level. The metamodel is augmented with OCL statements. These statements describe rules on the M1 level that need to be asserted in order to achieve interoperability, e.g. that two actors must share a common language. These rules are then applied on models on the M0 level to predict the interoperability of that particular scenario.
OCL, however, is a deterministic language, and as information systems are rapidly growing more complex, i.e. growing in numbers, size and in the complexity of the underlying technologies, a deterministic approach becomes difficult to apply. Firstly, while it was feasible a few decades ago for one person to fully grasp the workings of any information system, this is no longer the case, simply due to the increased complexity. Secondly, the poor state of documentation that plagues many projects adds to our uncertainty. Thirdly, the use of externally operated information systems, e.g. cloud services and applications, is increasing. Finally, and perhaps unrelated to the IT industry’s maturation, in the early development phases many aspects of the information system to be developed are uncertain. For these reasons, it would be of great benefit for the modeler to be able to express his confidence in the created model, and for this confidence to be reflected in the result of the prediction [17].
Although the rule set described in this article can be used for deterministic interoperability prediction, it is shown in [17] that standard OCL statements can be evaluated probabilistically. This however, poses two additional requirements on the architecture models. Firstly, the attributes of the metamodel should be specified as random variables. The modeler should be able to specify the value of e.g. the attribute availability of a network, not as true or false, but rather as a probability distribution. Secondly, it should be possible to express structural uncertainty, i.e. uncertainty regarding the existence of objects and relations. For this purpose, all classes and relationships (and thereby also all objects and relations) must feature an additional Boolean existence attribute, indicating the modeler’s belief in the structure of the model. The modeler might for instance be uncertain of whether actor A really communicates with actor B, or whether actor B really can use the format X. Probabilistic evaluation of OCL statements for architecture prediction is further described in [17] and Section 5 evaluates OCL, including the probabilistic evaluation, with respect to the requirements on prediction formalisms as presented in Section 1.1.
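One simple way such probabilistic evaluation could be realized is Monte Carlo sampling: draw the Boolean existence and attribute values from the modeler's distributions, evaluate the deterministic rule on each sampled model, and report the fraction of samples in which the rule holds. The Python sketch below is an assumption-laden illustration of this idea, not the evaluation mechanism of [17]:

```python
import random

def sample_rule(p_exists_link, p_common_language, trials=10_000, seed=42):
    """Monte Carlo estimate of P(communication need satisfied) when the
    modeler is uncertain both about structure (does the relation to the
    MPS really exist?) and about an attribute (do the actors share a
    language?).  Each probability feeds a Bernoulli draw per trial."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        link_exists = rng.random() < p_exists_link          # structural uncertainty
        common_language = rng.random() < p_common_language   # attribute uncertainty
        if link_exists and common_language:                  # deterministic rule
            hits += 1
    return hits / trials

# With 90% belief in the connection and 50% belief in a shared language,
# the predicted probability of interoperability is roughly 0.9 * 0.5 = 0.45.
print(round(sample_rule(0.9, 0.5), 2))
```

The trade-off between accuracy and cost of use discussed in Section 1.1 appears here directly: more trials (or more carefully elicited distributions) sharpen the estimate at higher cost.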
4. A metamodel for interoperability prediction
In this section, a metamodel for interoperability prediction is presented. The metamodel is divided into two main parts, the structural aspects for interoperability and the conversation-specific aspects, represented as white and shaded classes of Fig. 3 respectively.
- Structural aspects cover the basic infrastructure for interoperability. They detail, for instance, the parties that are to interoperate, the format with which the information is encoded and similar aspects.
- The conversation-specific aspects are a more fine-grained description of a particular conversation detailing the messages being sent between parties, the content of such conversation, etc.
The classes related to the structural aspects can be used autonomously whereas the conversation-specific classes are a refinement requiring the structural aspects as a fundament. The conversation-specific classes allow for a more in-depth description and interoperability prediction.
This section is the main contribution of the paper and is outlined as follows: firstly, a brief overview of the metamodel is provided for orientation purposes. After that, the metamodel is described in more detail, presenting each class, relationship and attribute. This is followed by a subsection that illustrates how the metamodel can be used for models on different levels of granularity. In the final two subsections, the focus is shifted to interoperability prediction, describing the rule set necessary for this prediction as well as the additional rules needed to allow modeling on various levels of granularity.
4.1. Overview of the metamodel
In order to derive the metamodel presented in this section, a literature review was performed. From the literature, rules for ensuring interoperability were extracted and translated into the rule set. Based on this rule set, the concepts relevant to include in the metamodel were elicited. The metamodel was thus created in the opposite order to how it is presented, and the rationale for the modeling concepts presented in Sections 4.1–4.3 can therefore be found in the description of the rule set in Section 4.5. In the remainder of the article, boldface font will be used to refer to concepts of the metamodel.
The definition of interoperability used in the proposed metamodel can be expressed as “the satisfaction of a communication need between two or more actors”. The authors believe that this definition is compatible with that of the IEEE, “the ability of two or more systems or components to exchange information and to use the information that has been exchanged” [3], albeit with a somewhat wider definition of “systems and components”. Actors can take various forms such as systems, components, humans and whole enterprises, but they all share the ability to actively use information, i.e. to operate on it, interpreting it, transforming it, etc. Communication Need satisfaction requires information exchange, which in turn necessitates a medium for transmitting the information. Examples of such media, or Message Passing Systems, are the Internet or Ethernet in computer communication or air in spoken communication between Actors of close distance. Translating this to classes of a metamodel, see Fig. 2, the three concepts mentioned above correspond to the classes Communication Need, Actor and Message Passing System respectively.
Actors identify other Actors using an Address, e.g. the name of a person. Furthermore, the Actors encode the information in a format, or Language. Examples of such Languages could be the IEC Common Information Model for information exchange between electric utilities, or spoken English when using the air as medium. Actors can also translate between different Languages, i.e. Language Translations. One example is a message broker in an integration platform. Furthermore, Message Passing systems use Languages for transporting information (i.e. the protocol), such as SOAP in a service oriented environment.
A special Language is the Reference Language, a Language in which all the concepts relevant to the Communication Need can be unambiguously defined. Considering Fig. 2, only the class Abstract Actor remains to be explained. Abstract Actor is an abstraction of both Actor and Message Passing System describing the common attributes and relationships of these classes.
The remainder of this section will describe the metamodel in more detail, including the additional concepts needed for modeling conversation-specific aspects, and also describe the rule set for interoperability prediction.
4.2. Structural aspects
Recall that an interoperability prediction corresponds to evaluating an instance of the class Communication Need. This class relates a set consisting of at least two Actors who share a desire to exchange information on a certain topic, to each other. A Communication Need is formulated in a special Language, the Reference Language. The Reference Language is a language in which the Communication Need can be expressed and, more importantly, evaluated. It is not necessarily the Language used in the communication. For more on Reference Languages, see Section 4.2.2 below. The attribute satisfied of class Communication Need is the target of the prediction, i.e. whether interoperability is achieved, and its value is derived using the rule set described later in this section.
The rest of the classes in the interoperability metamodel will be described in four main groups, starting with the classes that ensure a communication path between the Actors sharing a Communication Need.
4.2.1. Communication path
Two classes are required to establish a communication path, Actors and Message Passing Systems. In this subsection, we detail these classes. The class Actor describes a physical or virtual entity that actively participates in an interaction. Actors encode messages according to one or more Languages and can perform Language Translations. Actors cannot be related directly to each other; there needs to be a transport medium, denoted Message Passing System (MPS), between them.
In comparison to Actors, MPSs are passive elements that can be considered as channels transmitting messages in certain Languages from speaker to listener. It is not possible to couple two or more MPSs directly to each other. There needs to be an active instance (an Actor) that takes the output of one MPS and utilizes this as the input for the second MPS. An MPS is associated to one or several Languages, indicating which formats are allowed for message passing, i.e. the protocols according to which messages are transported by the MPS.
Fig. 2. Simplified metamodel for the structural aspects of interoperability showing concepts necessary for all applications.
MPSs also have a second relationship to the Language class, addressing Language, expressing that the addressing of a certain MPS needs to be performed according to a special Language. Furthermore, MPSs can be separated into two categories: those where addressing is necessary and those that do not require addressing. The latter category contains MPSs where all Actors connected to the MPS are the intended parties of the communication, as in a private dinner conversation. Which category is relevant for a given MPS is indicated by the attribute fixed.
Actors and MPSs are specializations of the class Abstract Actor. The prefix abstract indicates that this class cannot be instantiated but rather contains the common properties of its specializations. Abstract Actors define the relationship to Language already described for the specializations. The Abstract Actor has three quality attributes that affect the information exchange. The Abstract Actor might lose information, modify data so that it becomes unusable, or otherwise be hindered from taking part in the information exchange. These three qualities are reflected in the three attributes dropMessage, distortMessage, and isAvailable.
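A minimal sketch of how these three quality attributes could enter a prediction (illustrative only; the paper expresses such rules in OCL, and in the probabilistic setting of Section 3 each attribute would be a random variable rather than a plain Boolean):

```python
def exchange_succeeds(path):
    """A message traverses a chain of Abstract Actors (Actors and MPSs);
    the exchange succeeds only if every element on the path is available
    and neither drops nor distorts the message.  Each element is modeled
    here as a dict of the three quality attributes."""
    return all(
        a["isAvailable"] and not a["dropMessage"] and not a["distortMessage"]
        for a in path
    )

path = [
    {"isAvailable": True,  "dropMessage": False, "distortMessage": False},  # speaker
    {"isAvailable": True,  "dropMessage": False, "distortMessage": False},  # MPS
    {"isAvailable": False, "dropMessage": False, "distortMessage": False},  # listener offline
]
print(exchange_succeeds(path))  # False: the listener is unavailable
```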
4.2.2. Language
Languages are used to encode and transmit information. A Language might consist of several sub-languages. A sub-language can be fully expressed by the corresponding super-language, e.g. HTML as a subset of XML with a set of specific tags. A Language can also be a carrier of other Languages. This can be compared to a protocol for transmitting Languages, e.g. TCP being able to transmit XML (and thereby also HTML).
A Language can be mapped to other Languages, by the use of Language Translations. This procedure needs to be performed by a translator, i.e. an Actor. As Language Translations might be performed incorrectly, the attribute correct of the Language Translation describes the quality of the translation. Language Translations are necessary whenever two Actors share a Communication Need, but lack a common Language to communicate it in.
When considering the semantic correctness and completeness of a Language, we must compare the Language’s (artificial) Constructs to the (real) things we wish to express. In order to represent those things, the class Reference Language is introduced. The special characteristic of this language is that it must be able to express the involved Actors’ Universe of Discourse, i.e. the complete range of objects, events, attributes, relations, ideas, etc., that are intended by any Actor in a communication. Normally, the Reference Language is not used for actual communication, but simply as a tool to unambiguously evaluate the outcome of a communication; whether a Communication Need really is satisfied is thus evaluated in this Reference Language. As an example, reality is one possible Reference Language that may be used if the Communication Need is concerned with altering the reality, e.g. an enterprise receiving physical items from a supplier.
4.2.3. Addressing
Actors also need to be identified, which is achieved using Addresses, such as an IP address on the Internet. The Actor class has two relationships to the Address class: identifier and knownAddress. The former associates an Actor with an Address for identification whereas the latter constitutes the set of such identifiers known by a specific Actor.
If direct knowledge of Addresses is lacking, this barrier can be mitigated by the use of an address broker, e.g. a UDDI. Such an address broker is realized by one or several Actors, which are linked through MPSs. Such situations can be expressed as a separate Communication Need between the Actor that is lacking information and the address broker.
4.3. Conversation-specific aspects
In addition to modeling the structural aspects described above, it is also possible to model conversation-specific aspects. Conversation-specific models detail a particular message exchange between Actors. For this purpose the classes Conversation, Conversation Communication Need (CCN), Construct and Construct Translation of Fig. 3 can be used.
A Language can be described in detail by its containing Constructs. Each possible message that could be formulated in a Language is represented as a Construct. During a message exchange, Actors exchange several Constructs. This collection of transferred Constructs is called Conversation. A Conversation might be dropped, which means the transmission of the Conversation is terminated and the message is no longer consumable. It might also be distorted, reflecting that it has been unintentionally modified so that the original meaning is lost. These two characteristics are reflected in the properties isDropped and isDistorted.
MPSs carry Conversations from one Actor to one or several others. To identify the target of a Conversation, the Conversation is coupled with Addresses, designating the receiving Actors of a Conversation.
A CCN describes the intention of a set of Actors to engage in a Conversation. This class is the conversation-specific equivalent of the class Communication Need. The class has the same attribute, satisfied, as the Communication Need, indicating a successful information exchange.
Finally, the class Construct Translation represents a mapping and transformation of a Construct of one Language into one of another Language. This mapping needs to be performed by an Actor. A Language Translation can be broken down into several Construct Translation instances.
4.4. Model abstraction
As stated in the introduction, the trade-off between prediction accuracy and modeling cost is an important aspect. The metamodel presented in this article features two types of abstractions in order to support such a trade-off. Firstly, there is a choice between either modeling only the structural aspects, or modeling both the structural and the conversation-specific aspects. The latter, more comprehensive, modeling enables higher accuracy in the predictions. Secondly, the aggregation relationship of the Abstract Actor enables the models to be described using various levels of granularity. As previously described, Actors can take various forms, such as whole enterprises, departments or information systems. A more coarse-grained Actor, such as an enterprise, is generally composed of several more fine-grained Actors such as information systems.
4.5. Rule set for interoperability prediction
With the metamodel described in the previous subsection, it is possible to describe specific scenarios of communication between Actors. Given such a scenario, i.e. an instantiated metamodel, it would be possible for an interoperability expert to determine if the requirements for interoperability are fulfilled, and point out where the potential problems exist. The use of such experts is, however, expensive, and automating the work performed by the expert is desirable. Formalizing the theory for interoperability prediction using OCL statements enables automatic assessment of the rules for ensuring interoperability.
This subsection describes the set of rules, in terms of OCL expressions, needed to predict interoperability, i.e. the satisfaction of a Communication Need, with references to literature justifying these rules. The expressions are described in natural language with references to [18], where the actual OCL code can be found. The interoperability requirements are grouped in the same way as the description of the metamodel, i.e. (1) communication path requirements, (2) language requirements, (3) addressing requirements and (4) requirements for specific conversations. The value for the attribute satisfied of class Communication Need, i.e. the target of the prediction, is evaluated by a pair-wise prediction of the Actors involved in the Communication Need, see statement 1 in [18]. For each such pair, queries to the model are performed to evaluate if information exchange is possible.
4.5.1. Communication path
The perhaps most fundamental requirement for interoperation to take place is that there is a path between the Actors that are to collaborate. For computer-based communication, a good example is the OSI reference model [19]. From the point of view of one OSI layer, the next lower layer of the OSI stack typically constitutes such a path, and the physical layer is the most basic path between Actors. Expressed in terms of the metamodel, this means that between the Actors associated to a Communication Need there must be a path of Abstract Actors. The possible paths between each pair of Actors are found and evaluated using statement 2 in [18]; this statement starts with one of the Actors of the Communication Need and traverses the architecture model to find the target Actor. Every time a new Actor is reached, the allowed message formats (i.e. Languages) and a set of necessary conditions are, as described below, evaluated for this step in the path. Statements 8 and 3 capture, for each such step of the path traversal, the evaluation of allowed Languages and the additional necessary conditions respectively. If there is no path to the target Actor, statement 2 returns false (resulting in the same value for the satisfaction of the Communication Need).
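The traversal described above can be sketched as a simple breadth-first search. In this sketch each element carries a single flag summarizing whether it is available and neither drops nor distorts messages; the graph encoding and all names are assumptions for illustration, not the actual OCL of [18].

```python
from collections import deque

def path_exists(graph, source, target):
    """Return True if target is reachable from source through elements whose
    ok flag is True. graph maps element name -> (ok, list of neighbours);
    Actors and MPSs alternate as nodes of the communication graph."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        ok, neighbours = graph[node]
        if not ok:          # an unavailable or faulty element breaks the path
            continue
        if node == target:
            return True
        for nxt in neighbours:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

graph = {
    "ActorA": (True, ["MPS1"]),
    "MPS1":   (True, ["ActorB"]),
    "ActorB": (True, []),
}
print(path_exists(graph, "ActorA", "ActorB"))  # True
```

If any intermediary MPS or Actor fails its checks, the search cannot pass through it, mirroring how a failed condition propagates to an unsatisfied Communication Need.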
Even if there is a communication path as described above, communication could still fail due to various barriers. First of all, the path may be unavailable [20,24]. For this reason, Actors and MPSs feature the attribute isAvailable. For each new Actor reached in the search for a path, this attribute is evaluated for the new Actor and the intermediary MPS, see statement 4 in [18]. In the same manner, it is also possible to evaluate if any of the Actors or MPSs syntactically distorts or drops messages transmitted over the path [21,22]. Such evaluation is based on the attributes distortsMessage and dropsMessage, cf. statements 5 and 6 respectively.
4.5.2. Languages
As has been described earlier, both Actors and MPSs use Languages. The Actors that are to communicate either need to share a Language for the messages passed between them or use a translator [20–22]. To capture these aspects, a list of the allowed Languages is kept while traversing the model. An allowed Language is a Language that can be used for message exchange between two Actors. The list of allowed Languages changes while traversing the model: it can decrease, e.g. if an Actor along the communication path is not capable of using an allowed Language, and increase through the use of a Language Translation, see statement 8, where the list of allowed Languages is updated while traversing the model.
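The bookkeeping of allowed Languages can be illustrated with a small sketch. The set encoding and the translation table (pairs of source and target Language) are hypothetical conveniences for this example, not the actual formulation of statement 8.

```python
def update_allowed(allowed, element_languages, translations):
    """One traversal step: intersect the allowed set with the Languages the
    next element supports, then add any Language reachable through a correct
    translation from an already-allowed Language.

    translations is a list of (from_language, to_language) pairs."""
    allowed = allowed & element_languages
    for src, dst in translations:
        if src in allowed and dst in element_languages:
            allowed = allowed | {dst}
    return allowed

# the next Actor speaks XML and EDI, and a correct XML -> EDI translation exists
allowed = update_allowed({"HTML", "XML"}, {"XML", "EDI"}, [("XML", "EDI")])
print(sorted(allowed))  # ['EDI', 'XML']
```

HTML drops out because the next element cannot use it, while EDI is added via the translation: the set both shrinks and grows in a single step, as described above.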
If Actors share a Language, the Language of the intermediary MPS also needs to be compatible with, i.e. a carrier of this Language [20,24]. Furthermore, the Language needs to be sufficiently expressive with respect to the information that is to be exchanged. These requirements are listed in statements 11 and 12 in [18] and are combined in statement 9 where only Languages that fulfill all the requirements are kept in the list.
Language Translations may be used to increase the set of allowed Languages [20,23]. In order to determine whether such a Language Translation is present, statement 10 in [18] is used. This statement finds all available Language Translations and for each evaluates the following criteria: the first requirement is that the Language Translation translates between Languages relevant to, i.e. used by, the collaborating Actors. Furthermore, one of the Languages in a translation needs to be an allowed Language and the translation also needs to be correct. Finally, these Languages need to be compatible with the MPS, and be able to express the information to be exchanged, i.e. express the concepts of the Reference Language.
4.5.3. Addressing
A third important requirement for interoperability is the addressing of the parties involved in a collaboration. The most basic case is when the Actors of a Communication Need use an MPS that does not require addressing [19,24], e.g. a cable between two information systems using serial communication. This case is captured by the fixed attribute of the class MPS, cf. statement 15.
If the situation requires addressing, one possibility is that each Actor knows the Addresses of the other Actors, and the Addresses are expressed in a valid addressing language for the MPS [19,20]. This is modeled using the knownAddress relationship, and the two conditions are checked using statements 16 and 17 respectively.
If an Actor (the source actor) knows the Address of another Actor (the target actor), but this Address is expressed in a Language that cannot be used on the MPS, there is also the requirement that there must be a third Actor (the translator) that can translate the known Address to a valid Address for the target Actor [20,24], see statement 18. The Domain Name System (DNS), which translates a host name to an IP address [25], is an example of this situation.
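A minimal sketch of these static addressing checks (roughly statements 15 to 18) follows. The encodings — a dictionary of known addresses and their Languages, and a set of translator capabilities — are assumptions for illustration only.

```python
def can_address(mps_fixed, known_addresses, mps_address_language, translators):
    """Check the static addressing requirements for one source Actor.

    known_addresses maps address -> the Language it is expressed in;
    translators is a set of (from_language, to_language) pairs offered by
    third-party translator Actors (e.g. DNS)."""
    if mps_fixed:
        # MPS needs no addressing: all connected Actors are intended parties
        return True
    for addr, lang in known_addresses.items():
        if lang == mps_address_language:
            return True  # a known address directly usable on the MPS
        if (lang, mps_address_language) in translators:
            return True  # e.g. DNS translating a host name into an IP address
    return False

print(can_address(False, {"www.example.org": "hostname"}, "ip", {("hostname", "ip")}))  # True
```

The DNS case is the last branch: the source knows only a host name, but a translator can map it into the addressing Language the MPS requires.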
The statements above are combined in statement 14 capturing static addressing, i.e. when addresses are explicitly, or (as in the DNS case) implicitly, known by the Actors. To capture dynamic addressing, e.g. when using a UDDI in a service oriented architecture [26], it is necessary to introduce a subordinate Communication Need for the addressing issue. This Communication Need is related to the main Communication Need through the addressingNeed slot. Such a sub-Communication Need is evaluated using the same statements as described in this section.
4.5.4. Conversation-specific aspects
When considering a specific conversation, a more detailed prediction can be performed utilizing the additional concepts of Conversations, Constructs, etc. as detailed above. The goal is, as with the structural aspects, to predict the value of the attribute satisfied, this time however, for the class Conversation Communication Need, CCN. Statement 19 assigns the value to this attribute.
4.5.4.1. Path. On the conversation-specific level, it is still necessary to ensure the existence of a path between the Actors of a CCN, see statement 20. As with the structural aspects, the other requirements (cf. statement 21) affecting the satisfaction of the CCN are evaluated for each new Actor reached during the discovery of a communication path. The first requirement is that the path between the Actors must be capable of transporting the Conversation [19,24]. Statements 29 and 30 express this, ensuring that each of the elements in the path between the Actors is a part of the Conversation.
4.5.4.2. Language and Constructs. A Conversation consists of a set of Constructs. A basic requirement of these Constructs is that they can be transmitted on the MPSs [20,24]. Statement 22 ensures that the Language of the MPS is a carrier of the Language of the Constructs in the Conversation and thus can carry the Conversation. Construct Translations can be used to map concepts of two Languages. It is however necessary to ensure that these translations are properly carried out. This is evaluated in statement 28 by evaluating the attribute correct of the involved Construct Translations.
When it comes to the conversation-specific level, a Conversation can be dropped and distorted. Such distortion and dropping is the result of the actions of the Actors that are involved in the Conversation, and the MPSs that these Actors use to communicate the Conversation. In particular the attributes dropsMessage and distortsMessage of these classes are the ones that influence the corresponding qualities of the Conversation. Since these attributes are concerned with failure, it is sufficient that any of the Actors or MPSs fail for the Conversation to fail, see statements 25 and 27 respectively. This aggregated property for the Conversations in turn affects the successfulness of the CCN, see statements 24 and 26 respectively.
4.5.4.3. Addressing. With respect to addressing, it is also of importance that the target Address of a Conversation is an Address of one of the Actors involved in the communication [20,24]. This is expressed using statement 23 in [18].
4.6. Attribute aggregation for model abstraction
When Actors and MPSs are combined into composite Actors and MPSs, the attributes distortsMessage, dropsMessage and isAvailable of the aggregate instance become dependent on the more detailed instances. These dependencies, found in statement 29, are based on the aggregation relationship of Abstract Actors. The attributes distortsMessage and dropsMessage utilize an OR aggregation function, illustrating that it is sufficient that one of the composing Actors or MPSs distorts or drops the message respectively. For the isAvailable attribute, an AND aggregation function is used, i.e. all the subcomponents must be available in order for the aggregate Actor or MPS to be available.
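These aggregation functions are simple enough to sketch directly. The triple encoding of sub-element attributes is a hypothetical convenience for this example.

```python
def aggregate(parts):
    """Aggregate quality attributes over composing Actors/MPSs.

    parts is a list of (drops, distorts, available) boolean triples for the
    fine-grained elements composing an aggregate Actor or MPS."""
    drops = any(p[0] for p in parts)       # OR: one faulty part is enough
    distorts = any(p[1] for p in parts)    # OR: one distorting part is enough
    available = all(p[2] for p in parts)   # AND: every part must be available
    return drops, distorts, available

# one subsystem drops messages and another is unavailable, so the
# aggregate enterprise both drops messages and is unavailable
print(aggregate([(False, False, True), (True, False, False)]))  # (True, False, False)
```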
This concludes the description of the metamodel and rule set, for an example application, see [28]. Employing the metamodel for prediction without additional support would be difficult. In particular the evaluation of the OCL statements would be cumbersome. To aid in this, a tool for enterprise architecture modeling and prediction has been developed [27] that is capable of such predictions.
5. Discussion
This section will evaluate the work presented in this article using the four quality criteria for prediction frameworks as described in Section 1.1: accuracy, cost, cognitive complexity and error control. The metamodel presented in this article meets these criteria by featuring the following seven properties: (1) formalized prediction theory; (2) automated prediction; (3) a metamodel and rule set tailored particularly for interoperability prediction; (4) expressiveness of the formalism; (5) reusable prediction theory; (6) probabilistic predictions; and finally (7) abstraction mechanisms of the metamodel.
The first property targets the criterion relating to prediction accuracy. Models created using the metamodel have a predefined means for interoperability prediction. Since the rules for prediction are formalized in OCL, the prediction will not feature errors due to e.g. misinterpretation, and prediction accuracy can thus be improved. Secondly, formalized rules allow for an automated prediction using a software tool [27]. This reduces the cost of prediction and improves the accuracy by eliminating human errors in the prediction. From an evolutionary perspective, it should be noted that since the prediction is based on architecture models, as the system of systems evolves, the architecture models need to be updated, with an associated cost. The actual discovery of evolution and the maintenance of the many evolutionary stages (e.g. several potential to-be scenarios) is an important aspect, which will at minimum require software support, but is beyond the scope of this work. The mere use of architecture models is however one general approach to managing such evolution [16].
Furthermore, the metamodel presented in this article is based on class diagrams of UML [5], extended with concepts relevant for interoperability. This facilitates a more efficient modeling, thereby reducing cost of use. This specialization of the metamodel is based on state of the art research in the interoperability domain. This in turn ensures that all aspects relevant to interoperability prediction have been modeled and thus increases the prediction accuracy. The compatibility with the more generic modeling language of UML is also beneficial in order to reduce the cognitive complexity.
Fourthly, to enable prediction accuracy it is also important that the chosen formalism is sufficiently expressive for the issue at hand. Many interoperability concerns are related to conformance checking, e.g. that two communicating parties have a common communication format. From a business and information system architecture perspective, this poses requirements on the ability to describe the structure of the architecture, i.e. perform queries for structural information of the created models. The use of OCL for formalizing the rule set enables such structural queries as well as logic and arithmetic operations and is thereby sufficiently expressive for interoperability prediction.
Furthermore, the rule set for interoperability prediction is coupled with the class-level concepts, i.e. with the metamodel. This enables the reuse of prediction theory in several application instances, i.e. instance models. Thus there is a separation of concerns between the prediction theory and the application of that theory. Enabling reuse of prediction theory reduces the need for costly domain experts. Allowing the end user to be agnostic regarding the rule set for prediction also reduces the cognitive complexity and allows the user to focus on the modeling aspect. The current rule set is targeted at static interoperability prediction. However, some investigation has been performed with respect to extending this to prediction of dynamic interoperability as defined in the introduction. Although this is work in progress, current findings suggest that the proposed set of classes is sufficient, but that new relationships between these classes are needed to express temporal order, and that the rule set needs to be expanded to fit such predictions.
Additionally, as described in Section 3, and more elaborately in [17], the rule set used for interoperability prediction can be evaluated probabilistically. Attribute values used in this prediction can thus be set coarsely using little data collection, or more precisely using additional resources. Furthermore, the modeler is allowed to express uncertainty regarding the structure of the architecture. The main benefit of a probabilistic approach is that it enables error control, i.e. a trade-off between the cost of use and prediction accuracy. It also makes it possible to manage situations where completely accurate data collection is infeasible and to indicate this uncertainty in the predictions; the trade-off is thus made explicit to the user.
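To give a flavor of such a probabilistic reading, the toy sketch below treats attribute values as probabilities and evaluates AND- and OR-style aggregations over them. It assumes independence between elements, which is a simplification for illustration and not something the article or [17] prescribes.

```python
def p_and(probs):
    """Probability that all independent conditions hold (AND aggregation),
    e.g. every element on a path being available."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(probs):
    """Probability that at least one independent condition holds (OR
    aggregation), e.g. at least one element dropping a message."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# chance that every element on a three-hop path is available
print(round(p_and([0.99, 0.95, 0.99]), 4))  # 0.9311
```

Coarse data collection corresponds to rough probability estimates; investing more resources tightens them, which is exactly the accuracy-versus-cost trade-off discussed above.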
Finally, the metamodel offers two abstraction mechanisms. The first allows the modeler to describe interoperability on two levels: either on an abstract level where only the structural aspects are covered, or on a detailed level covering both the structural and the conversation-specific aspects. The latter allows for better predictive accuracy but also consumes more resources in terms of modeling effort. Furthermore, the structural aspects can be modeled on different levels of granularity. For instance, an actor, such as an enterprise, participating in an information exchange can either be described as one aggregate entity or refined into various sub-actors. Once again, this enables the modeler to perform a trade-off between the predictive accuracy and the additional cost necessary to both collect the more detailed data and perform modeling of it.
6. Conclusions
This article has demonstrated how an interoperability metamodel and a rule set for interoperability theory can be used as a foundation for achieving sustainable interoperability by enabling automated interoperability predictions. Although such prediction can be performed deterministically, the rule set presented in this article can also be evaluated probabilistically, allowing the user to express uncertainty regarding the modeled concepts and perform a trade-off between prediction accuracy and modeling cost. From the perspective of sustainable interoperability [4], both automation and this trade-off are important, since it is essential to swiftly respond to changes in the environment, often with sparse knowledge.
Employing the work described in this article would particularly aid the learning capabilities of sustainable interoperability. Once a change to the environment is detected, it is possible to learn which part of the rule set is no longer fulfilled and to identify the areas in which adaptations need to be performed in order to restore interoperability. From an enterprise perspective, this would aid a decision maker in understanding the as-is scenario and, perhaps more importantly, in understanding the transient effects of a change in the environment and how to, from an interoperability perspective, adapt to such new future scenarios.
The metamodel was developed to be generic and capable of describing many different interoperability scenarios. By specializing the metamodel as described in [29], e.g. specializing the class
MPS to the Internet, a general LAN or a specific LAN setup, a more detailed prediction can be performed. This is the case since the attributes of the specialized classes and the dependencies among them can be set with greater accuracy thus enabling a more precise prediction. The metamodel presented in this article is delimited to prediction of static interoperability, although, as described above, extensions to dynamic interoperability seem feasible. The planned approach to achieve this is to start eliciting the new requirements necessary, e.g. from current research in the field, and then mapping them onto the concepts of the modeling language to identify the new rules necessary for prediction.
References
Johan Ullberg received his M.Sc. in computer science at the Royal Institute of Technology (KTH), Stockholm in 2007. He is currently a Ph.D. student at the Department of Industrial Information and Control Systems at KTH. His research is focused on interoperability in technical support systems and in particular interoperability prediction based on architecture models.
Pontus Johnson is Professor and Head of the Department of Industrial Information and Control Systems at the Royal Institute of Technology (KTH) in Stockholm, Sweden. He received his M.Sc. from the Lund Institute of Technology in 1997 and his Ph.D. and Docent titles from the Royal Institute of Technology in 2002 and 2007. He was appointed professor in 2009. He has chaired and co-chaired a number of international conferences and workshops and participated in the program committees of approximately fifty such events. He has been associate and guest editor to several journals. He is secretary of the IFIP Working Group 5.8 on Enterprise Interoperability.
Pontus holds undergraduate courses and supervises Ph.D. students (currently six) at the Royal Institute of Technology. He has authored close to 100 scientific articles, mainly on the prediction of non-functional properties in software-intensive system architectures.
Markus Buschle received his M.Sc. degree in computer science at TUB, Berlin Institute of Technology, Germany. He is currently a Ph.D. student at the Department of Industrial Information and Control Systems at KTH – the Royal Institute of Technology Stockholm Sweden. His research focuses on the development of languages for enterprise architecture analysis and how they could be supported tool based.
FLEET MANAGEMENT SYSTEM
by
ABHISHEK CHALLA
B. Tech., Jawaharlal Nehru Technological University, 2013
A REPORT
Submitted in partial fulfillment of the requirements for
MASTER OF SCIENCE
Department of Computer Science
College of Engineering
KANSAS STATE UNIVERSITY
Manhattan, Kansas
2016
Approved by:
Major Professor
Dr. Daniel Andresen
Copyright
ABHISHEK CHALLA
2016
Abstract
Web services have become quintessential in web application development. RESTful web services are one way of providing interoperability between computer systems on the internet. REST-compliant web services allow requesting systems to access and manipulate textual representations of web resources using a uniform and predefined set of stateless operations. These services, which are online APIs, can be accessed from various applications and the results can be used to offer specific functionality to users.
This project consists of an Android app, a Server application and a Client application. The Server application exposes a REST API (web services developed using the REpresentational State Transfer (REST) architectural style), using which the consuming client applications can make use of various functionalities as services across the network.
The Android app would be installed in the smartphone present in each vehicle of the fleet; this app would send live location data to the database using the REST API. The manager uses the client application to track the vehicles in real time, and can also choose to track a particular vehicle. The API could also be used to integrate the services with other systems.
This project serves a wide variety of users, from small local businesses owning tens or hundreds of vehicles to parents who would like to track the location of their children in real time. This project aims to help the managers/owners better control and track the vehicles. Also, the exposed API could be used by other developers to customize or extend this application. This project is easy to install and use, and hence friendly for users with even minimal computer skills.
# Table of Contents
FLEET MANAGEMENT SYSTEM ..... i
List of Figures ..... v
Acknowledgements ..... vi
Chapter 1 - Project Description ..... 1
1.1 Introduction ..... 1
1.2 Motivation ..... 2
Chapter 2 - Background ..... 3
Chapter 3 - Setup and Software Requirements ..... 8
Chapter 4 - Requirements Analysis ..... 9
4.1 Functional Requirements ..... 9
Chapter 5 - System Design ..... 10
5.1 Use Case Diagram ..... 10
5.2 Class Diagram ..... 12
5.3 Data Flow Diagram ..... 14
Chapter 6 - Implementation ..... 15
6.1 Output Screens ..... 15
Chapter 7 - Testing ..... 23
7.1 Unit Testing ..... 23
7.2 Integration Testing ..... 23
7.3 Performance Testing ..... 25
Chapter 8 - Conclusion ..... 30
Chapter 9 - Future Work ..... 31
References ..... 32
List of Figures
Figure 1: Spring Architecture [4]
Figure 2: Request processing workflow in SpringMVC [5]
Figure 3: Use case diagram
Figure 4.1 Android Class Diagram
Figure 4.2 Class Diagram
Figure 4.3 Data Flow Diagram
Figure 5.1 Welcome Page
Figure 5.2 Registration Page
Figure 5.3 Registration successful
Figure 5.4 Login successful
Figure 5.5 Log file showing periodic updates
Figure 5.6 Admin Home
Figure 5.7 Admin Login
Figure 5.8 Web Vehicle Registration
Figure 5.9 Map Displaying the Vehicles
Figure 5.10 Updating the vehicle using an API-key
Figure 5.11 Table depicting the API
Figure 6 Integration Testing
Figure 7.1 Table depicting the throughput for different configurations
Figure 7.2 Response Time Graph for Test Plan 1
Figure 7.3 Throughput Graph for Test Plan 1
Figure 7.4 Response Time Graph for Test Plan 2
Figure 7.5 Throughput Graph for Test Plan 2
Figure 7.6 Response Time Graph for Test Plan 3
Figure 7.7 Throughput Graph for Test Plan 3
Figure 8 Table Lines of Code
Acknowledgements
I would like to thank my major professor, Dr. Daniel Andresen, for his consistent support, guidance and constructive feedback. It has been a pleasure to be associated with him. I also thank Dr. Mitchell L. Neilsen and Dr. Torben Amtoft for taking time off their busy schedules to serve on my committee. It would have been really difficult without their support.
I would like to thank my parents Mr. Ashok Challa and Mrs. Satyavathi Challa for having the faith in me and for their love and blessings, without which I could not have achieved any of this. Special thanks to my sister Ms. Varsha Challa for being my support system. I would also like to thank all my friends who have been a part of my ups and downs.
Chapter 1 - Project Description
1.1 Introduction
This project consists of a server application, a client application, and an Android app. The server application exposes a REST API which provides all the functionality needed to manage a fleet of vehicles. The API is developed using the Spring MVC architecture and uses the Hibernate framework to perform CRUD (Create, Read, Update and Delete) operations on the underlying MySQL database. It responds to the different types of HTTP requests and communicates using the JSON format. Basic authentication for the admin is provided using the Spring Security module, and the REST requests are secured using an API-key.
The Android app is installed on the smart device available in each of the vehicles belonging to the fleet. This app is responsible for registering the vehicle using the IMEI of the smart device or a user-defined vehicle ID. Once the vehicle is registered, the app periodically sends its current location data (coordinates) to the database through the API.
The client application retrieves the current or last known location data of all the vehicles registered with the system using the API. The data is then rendered on a map using the Google Maps API, so the manager can track the real-time location of every vehicle in the fleet.
1.2 Motivation
The motivation to develop this project, “Fleet Management System”, is driven by two reasons. The first and foremost is my strong interest in learning the popular frameworks used in the industry to develop web applications, such as Spring MVC, Spring Security, Spring REST, and Hibernate. I also wanted to hone my mobile development skills by building an Android application. The reason to stick with Java and Android is their wide usage and acceptance across the globe.
The second reason is to create an application and API that can be used in multiple ways. It is a common scenario for a business manager to want to track the location of all the vehicles owned by the firm. This application could be used by many businesses, such as delivery services, taxi services and public transport. The API can also be consumed by developers who want to develop their own applications, such as one that tracks all the vehicles belonging to a group of friends on a trip.
Chapter 2 - Background
To better understand the working and implementation of this project, we first need to understand the frameworks and technologies used to develop it. In this chapter I briefly describe the choices I have made and the advantages they bring.
2.1 Spring Framework
Spring is a popular open-source framework for developing enterprise Java applications and is one of the most widely used frameworks in the industry for web and enterprise development. It was initially written by Rod Johnson. The figure below depicts the architecture of the Spring framework; the core container can be considered the heart of the framework.
Figure 1: Spring Architecture [4]
2.1.1 Spring MVC
MVC, or Model View Controller, is a popular design pattern for developing web applications. Using this pattern separates the application logic from the presentation logic, making it easier and more flexible to change and manipulate individual components with finer granularity. Spring supports this architecture through the Spring MVC module. In brief, the Model is responsible for the structure and maintenance of data, usually represented using a Plain Old Java Object (POJO); the View is responsible for displaying the model data to the user with all the required formatting; and the Controller is responsible for manipulating and processing the data, mediating between the view and the model.
The sequence of events when a request is sent to a Spring MVC application is as follows:
- The request is first received by the Dispatcher Servlet.
- The Dispatcher Servlet consults the Handler Mapping and invokes the corresponding method in the Controller associated with the request.
- The Controller handles the request by performing the defined functionality, using the service methods and manipulating the data. Once the result is achieved, the method returns a ModelAndView object containing the model data and the name of the view, which goes back to the Dispatcher Servlet.
- The Dispatcher Servlet delegates rendering by passing the view name to the View Resolver, along with the model object holding all the data required by the view.
- Finally, the View renders the resultant data with all the required formatting.
Figure 2: Request processing workflow in SpringMVC [5]
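The request flow described above can be sketched in plain Java. This is a toy simulation of the pattern, not actual Spring code: the class and method names (`DispatcherDemo`, `register`, `dispatch`) are illustrative stand-ins for the Dispatcher Servlet, handler mapping, controller and view resolver.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy simulation of the Spring MVC request flow; names are illustrative.
public class DispatcherDemo {
    // "Handler mapping": URL path -> controller method returning {viewName, modelData}
    private final Map<String, Function<String, String[]>> handlerMapping = new HashMap<>();
    // "View resolver": view name -> view that renders model data
    private final Map<String, Function<String, String>> views = new HashMap<>();

    void register(String path, Function<String, String[]> controller) {
        handlerMapping.put(path, controller);
    }

    void addView(String name, Function<String, String> view) {
        views.put(name, view);
    }

    // The "dispatcher servlet": look up the controller, receive a
    // (viewName, model) pair, then delegate rendering to the view.
    String dispatch(String path, String request) {
        String[] modelAndView = handlerMapping.get(path).apply(request);
        return views.get(modelAndView[0]).apply(modelAndView[1]);
    }

    public static void main(String[] args) {
        DispatcherDemo dispatcher = new DispatcherDemo();
        // Controller returns a view name plus model data
        dispatcher.register("/hello", req -> new String[]{"greeting", "world"});
        // View renders the model data with formatting
        dispatcher.addView("greeting", model -> "<h1>Hello, " + model + "</h1>");
        System.out.println(dispatcher.dispatch("/hello", ""));
    }
}
```

In the real framework each of these roles is a full component (DispatcherServlet, HandlerMapping, ViewResolver); the sketch only shows how responsibility is passed between them.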
2.1.2 Spring Security
The Spring Security module provides a set of security-related functionalities and standards for enterprise applications developed with this framework. It addresses the two key areas of concern for the security of web applications: authentication and authorization. Authentication is the process of identifying and validating the true users of the system; authorization controls which users can access which functionalities within the application.
2.1.3 Spring REST
“REST (REpresentational State Transfer) is an architectural style, and an approach to communications that is often used in the development of Web services” [1]. “A web service is a software system designed to support interoperable machine-to-machine interaction over a network” [2]. The REST architectural style describes six constraints: Uniform Interface, Stateless, Cacheable, Client-Server, Layered System and Code on Demand. Spring has provided support for RESTful web services since Spring 3.0; RESTful functionality can be added to any Spring application using a Controller from Spring MVC.
The main advantages of Spring REST are:
- RESTful web services are lightweight and bring better performance.
- Responses from RESTful web services can be presented in standard formats such as JSON and XML, and hence can be consumed easily by any application or website.
- RESTful web services support a uniform interface, since resources are accessed using URLs.
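To make these points concrete, here is a hedged sketch of a RESTful endpoint that returns JSON over a uniform URL interface. It uses only the JDK's built-in `com.sun.net.httpserver` so it is self-contained; the project itself uses Spring REST instead, and the URL path and JSON payload below are invented for illustration.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;

// Minimal REST-style endpoint sketch (illustrative; not the project's Spring code).
public class RestSketch {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        // Resource exposed through a uniform URL; responds with JSON
        server.createContext("/getVehicleDetails", exchange -> {
            String json = "{\"vehicleId\":\"KSU-01\",\"lat\":39.19,\"lng\":-96.58}";
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, json.getBytes().length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(json.getBytes());
            }
        });
        server.start();
        // Any client that speaks HTTP can consume the resource
        URL url = new URL("http://localhost:" + server.getAddress().getPort()
                + "/getVehicleDetails");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
            System.out.println(in.readLine());
        }
        server.stop(0);
    }
}
```

Because the response is plain JSON behind a plain URL, it can be consumed equally well by the Android app, a web page, or a tool like Postman.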
2.1.4 Dependency Injection
The main and most popular feature of Spring is Dependency Injection (DI). A dependency is injected from outside the class, freeing the class from constructing it itself; this is Spring's implementation of the general concept of Inversion of Control (IoC). When writing complex enterprise applications it is ideal to have minimal dependencies in the code, so that the project remains flexible to change and developers do not have to modify the same code in multiple places for a small feature or requirement change.
By minimizing the dependencies between classes and making them independent, we promote and foster code reuse, and the classes can be tested independently during unit testing. Dependency Injection thus enables us to get rid of tight coupling in the code.
As a minimal example of DI, consider a class A holding a reference to class B. Class A is now dependent on B; if A should use B' instead, we either have to change the code in A, or check every place in the project where a B object is used and, if there are no conflicts, change the functionality of B. Instead of going through all this hassle, we can simply give A a reference of a common supertype and leave the instantiation to the Spring IoC container.
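The A/B example above can be written out as a short plain-Java sketch. The names (`Service`, `B`, `BPrime`) are made up for illustration; the wiring that Spring's IoC container would perform is done by hand in `main`.

```java
// A depends only on the supertype Service, so B can be swapped
// for BPrime without touching A's code.
interface Service { String run(); }

class B implements Service { public String run() { return "B"; } }
class BPrime implements Service { public String run() { return "B'"; } }

class A {
    private final Service service;
    A(Service service) { this.service = service; }  // dependency injected from outside
    String doWork() { return "A uses " + service.run(); }
}

public class DiDemo {
    public static void main(String[] args) {
        // In Spring, the IoC container performs this wiring from configuration;
        // here we inject the dependency manually to show the idea.
        System.out.println(new A(new B()).doWork());
        System.out.println(new A(new BPrime()).doWork());
    }
}
```

During unit testing the same constructor lets us inject a mock `Service`, which is exactly the testability benefit described above.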
2.2 Hibernate
Hibernate is an Object Relational Mapping (ORM) framework and an implementation of the Java Persistence API. It is used as an alternative to the plain JDBC approach for storing data in a database. With Hibernate, we can save the state of an object to the database and, when needed, recreate the object from the database. We need not write explicit prepared statements to store an object's data; instead we can simply call the save method on entity-type objects. This makes the development process easy and saves the developer a lot of time. Another advantage is the seamless mapping between classes and database tables, which keeps the design consistent with the class names and their underlying structure. The framework supports most features of an RDBMS, such as joins and association mappings, and Spring has built-in support for Hibernate.
Apart from making development easier and faster, Hibernate offers additional advantages: the business logic accesses and deals with objects rather than prepared statements and database tables, and transaction management becomes easier.
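As a toy illustration of the object-relational mapping idea (this is not how Hibernate is implemented, which adds sessions, caching and transactions), the following derives an INSERT statement from an entity's fields by reflection. The `Vehicle` entity and its field names are assumptions made for the example.

```java
import java.lang.reflect.Field;

// Toy ORM sketch: class name maps to a table, fields map to columns.
public class OrmSketch {
    static class Vehicle {               // illustrative entity
        String vehicleId = "KSU-01";
        double latitude = 39.19;
        double longitude = -96.58;
    }

    // Build an INSERT statement from the entity's declared fields.
    static String toInsertSql(Object entity) throws IllegalAccessException {
        StringBuilder cols = new StringBuilder();
        StringBuilder vals = new StringBuilder();
        for (Field f : entity.getClass().getDeclaredFields()) {
            if (cols.length() > 0) { cols.append(", "); vals.append(", "); }
            cols.append(f.getName());
            vals.append("'").append(f.get(entity)).append("'");
        }
        return "INSERT INTO " + entity.getClass().getSimpleName()
                + " (" + cols + ") VALUES (" + vals + ")";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(toInsertSql(new Vehicle()));
    }
}
```

The point is only that the developer works with the object (`new Vehicle()`) while the SQL is derived from the mapping, which is what saves the explicit prepared-statement code.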
2.3 Android
Android is the most used mobile OS in the world. It is an open-source, Linux-based operating system, and since Android apps are primarily written in Java, a lot of developers find it easy to start with. Android uses an underlying Linux kernel which acts as an interface between the hardware components of the device and the upper levels of the Android operating system. This kernel basically consists of the hardware drivers for the various components such as the camera, display and touchpad. The Hardware Abstraction Layer (HAL) provides standard interfaces that expose device hardware capabilities to the higher-level Java API framework. Many core Android system components and services, such as ART and HAL, are built from native code that requires native libraries written in C and C++ [6]. The Android Runtime (ART) is written to run multiple virtual machines on low-memory devices by executing DEX files, a bytecode format designed especially for Android. The highest level consists of the system applications, which run on this OS and provide the basic functionality required by the user.
2.4 JavaScript
JavaScript is a high-level, interpreted programming language. It can be used to send HTTP requests from a web page and then gather the results, which is why I have used it in this project. Apart from this, I have also used it for form validations. The main purpose, however, was to fetch location data for simulating live updates.
2.5 JSP
Java Server Pages (JSP) is a technology for developing web pages that have both static and dynamic content. These pages can have Java code embedded in them, which makes them a more flexible choice than servlets. A major benefit of JSP is that multiple authors with different skill sets can work on a page together and achieve the required result: a UI developer need not know the Java code, and the Java developer controlling the logic in the JSP need not understand the presentation code.
2.5.1 Advantages of JSP:
- JSP pages easily combine static templates, including HTML or XML fragments, with code that generates dynamic content.
- “JSP pages are compiled dynamically into servlets when requested, so page authors can easily make updates to presentation code”. [7]
- Java Server Pages are built on top of the Java Servlets API, so like Servlets, JSP also has access to all the powerful Enterprise Java APIs, including JDBC, JNDI, EJB, JAXP etc.
Chapter 3 - Setup and Software Requirements
The following are the tools, frameworks and software I have used for the development of this project:
- Eclipse Mars IDE with support for Java EE.
- Android Studio.
- MySQL database.
- Apache Tomcat Server.
- REST client: Postman.
- Spring core.
- Spring Security.
- Spring MVC.
- Spring REST.
- Hibernate ORM framework.
Chapter 4 - Requirements Analysis
Requirements analysis is the first step in the process of developing any application. It is necessary to have a contract between the user and the developer so that both stay even on expectations. An exhaustive list is always good to start with; however, it is not really feasible in the real world, so the requirements document gets updated based on user needs, time and budget. The following is the list of functional requirements I have decided to implement for this project, and it therefore serves as the boundary of the project's scope. I came up with these requirements based on my vision for the application: an app that would allow me to track the vehicles on a map in real time, together with an API. These are the complete set of requirements fulfilling that description.
4.1 Functional Requirements
For the web application:
- The admin should be able to login.
- The admin should be able to perform Vehicle registration.
- The admin should be able to view all the vehicles on a map.
- The application should expose a service for vehicle registration.
- The application should have a service to get the details of a vehicle.
- The application should have a service to get the details of all the vehicles.
- The application should have a service to update the location of a vehicle.
- When a vehicle loses connectivity, the last known location would be displayed on the map.
For Android Application:
- The user should be able to register to the system.
- The user should be able to login.
- The user should be able to view his information on the screen after login.
- The user should be able to send live location data at a fixed interval to the server.
- The user should be able to logout.
- The app should be able to run in the background.
Chapter 5 - System Design
Once the requirements are finalized, the next phase in development is system design. A good system design forms the foundation of a sound and stable project. The design of a system is usually depicted in the form of various UML diagrams; these diagrams are standardized and follow rules so that all developers can stay on the same level when discussing abstract concepts such as design.
5.1 Use Case Diagram
Use case diagrams represent the way various actors interact with the system. Use cases provide a means to capture and depict the requirements; they are simple, descriptive diagrams and hence easily understood by both end users and domain experts, which helps smooth communication between the end users and the developer teams. The use case diagram below depicts the two roles: the Android app user and the web app user (admin). The Android app user can interact with the system to register the vehicle by entering all the required information, log in to the application, send and update the location data on a periodic basis, and view personal information.
The web app user can use the admin portal to log in as an admin, view the location of all registered vehicles on the map, and explicitly register a vehicle. The admin can also access all the services, such as retrieving the details of all the vehicles in the required format or retrieving the details of a single vehicle by its ID.
Figure 3 below depicts the use case diagram for the project. It shows the two types of actors and the different use cases through which they interact with the system.
Figure 3: Use case diagram
5.2 Class Diagram
A class diagram is a static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations (or methods), and the relationships between the classes.
The class diagram for this project contains the following classes:
- **Vehicle class** - This is a model class, a Plain Old Java Object. It has the attributes pertaining to a real-world vehicle, along with the accessor methods used to access them.
- **KeyGen class** - The KeyGen class has the method that generates an API-key for each registered vehicle; the Android app needs to save this key and include it in the header every time it sends an update request. This provides an additional layer of security, blocking false requests hitting the server.
- **RegistrationDao class** - This class holds all the logic required to retrieve data from, and store it back to, the database. Classes of this type are therefore called Data Access Objects (DAOs). I am using the Hibernate framework to deal with the database, so this class has the corresponding annotations that let the framework identify the elements. In each method of the class a session is created, a transaction is performed, and the session is closed.
- **VehicleController class** - This is the heart of the application. It is responsible for responding to all the requests and hence is called the Controller class. Each method within this class is designed to respond to one HTTP request: it is annotated with the type of HTTP request it handles, the type of data it returns and the URL it responds to, and it returns the manipulated data.
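A possible shape for the KeyGen class described above is a random hex key generator; the actual implementation may differ, and the class and method names here are assumptions for illustration.

```java
import java.security.SecureRandom;

// Hypothetical sketch of the KeyGen class: generates a random
// 32-character hex API-key for a registered vehicle.
public class KeyGenSketch {
    private static final SecureRandom RANDOM = new SecureRandom();

    static String generateApiKey() {
        byte[] bytes = new byte[16];
        RANDOM.nextBytes(bytes);                 // 128 bits of randomness
        StringBuilder key = new StringBuilder();
        for (byte b : bytes) {
            key.append(String.format("%02x", b));
        }
        // The app stores this key and sends it as a header
        // with each update request.
        return key.toString();
    }

    public static void main(String[] args) {
        System.out.println(generateApiKey());
        System.out.println(generateApiKey().length());
    }
}
```

Using `SecureRandom` rather than `Random` matters here, since the key is what blocks forged update requests.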
5.3 Data Flow Diagram
A data flow diagram shows how data is passed between the different users and modules of a system. I have come up with a high-level data flow diagram for the application.
Figure 4.3 Data Flow Diagram
The Android app sends an update-location request to the web application, which handles the request and stores the details in the database. The map page requests the details and locations of all the vehicles using the getAllVehicleDetailsAsJson request; the web app handles this request by retrieving the details from the database and sending them over to the map.
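The update request in this flow might carry a JSON body like the following; the field names and values are illustrative, not the project's exact schema:

```json
{
  "vehicleId": "KSU-01",
  "latitude": 39.1974,
  "longitude": -96.5847
}
```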
Chapter 6 - Implementation
The fleet management system consists of a web application that exposes an API to register and track vehicles. I have also implemented an Android app that uses the functionality provided by this web application, and a web page that displays the location of the vehicles on a map, updating each vehicle's position periodically. This is the high-level implementation of the project. In this chapter I describe the application with the help of some output screens.
6.1 Output Screens
Below are the output screens for the Android application.

*Figure 5.1 Welcome Page*
The first output screen, figure 5.1, shows the welcome page of the Android app. It has two buttons to navigate to the vehicle registration page and the login page, and is displayed to the user upon launch of the app.

The second output screen, figure 5.2, shows the registration page. It has all the fields required for the user to register; once the user enters valid values in all of them, a message is shown that the registration is successful and the user is redirected to the home page. This is shown in figure 5.3.

**Figure 5.2 Registration Page**
Figure 5.3 Registration successful
Figure 5.4 Login successful
The user can proceed to log in with the credentials created. On a login attempt with valid credentials, the user successfully enters the system and the web application sends back an API-key, which has to be stored and sent along as a header every time with an update-location request; this helps us avoid malicious update requests. Once logged in, the user is redirected to a page with the vehicle info shown in figure 5.4. The user can now stay on this page or move to a different app; either way our app runs in the background and keeps updating the database at regular intervals. Once the user has reached the destination he can choose to log out, at which point the updates stop.

Figure 5.5 shows a console log confirming that the alarm manager is invoking the vehicle location update service at regular intervals, which in turn sends a PUT request to the server. It can thus be observed that the Android app sends periodic updates with the current location to the web server, both when it is active and when it has been moved to the background, i.e. even while another app is in use. In the figure, a request is being sent every minute, as I have configured the Android app that way; this can easily be changed by re-configuring the Alarm Manager within the app.
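The periodic update loop that the AlarmManager drives on Android can be sketched in plain Java with a `ScheduledExecutorService`. This is a simulation, not the app's actual code: the interval is shortened to milliseconds so the demo finishes quickly, and the `println` stands in for the real PUT request to the server.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a fixed-interval location-update loop (illustrative only).
public class PeriodicUpdateSketch {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch threeUpdates = new CountDownLatch(3);
        scheduler.scheduleAtFixedRate(() -> {
            // In the real app this would be a PUT /updateVehicle request
            // carrying the current coordinates and the API-key header.
            System.out.println("update sent");
            threeUpdates.countDown();
        }, 0, 100, TimeUnit.MILLISECONDS);
        threeUpdates.await();   // let three "updates" go out, then stop
        scheduler.shutdownNow();
    }
}
```

On Android the AlarmManager plays the scheduler's role so the updates survive the app being backgrounded, which a plain executor would not guarantee.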
The web application has Spring Security enabled for the admin role, so the user has to provide the admin credentials to access any page within the system. Figure 5.7 depicts the admin login page; a user accessing any path within the project is first redirected to this page, and upon successful login is redirected to the admin home shown in figure 5.6. Here the user can choose to perform web vehicle registration, view the vehicles on the map, or log out.
If the user chooses web vehicle registration, he is redirected to a page like the one shown in figure 5.8, which has all the fields necessary for registering a vehicle. An admin who would like to register a vehicle from the back end can do so using this page.

**Figure 5.8 Web Vehicle Registration**
If the user wants to view the vehicles on the map, he can do so by clicking on displayMap; this page shows all the vehicles currently registered in the system on the map, which is refreshed every 2 seconds to reflect any changes. I have made use of the Google Maps API to display each vehicle with a marker and label, using the getAllVehicles service provided by the server. This is displayed in figure 5.9.
Once the user is done monitoring he can choose to log out. The user can also access all the services provided by the server, such as `getAllVehicleDetailsAsJson`, which returns the vehicle ID and location of every vehicle in JSON format, or get the details of a particular vehicle by specifying its ID. Figure 5.10 depicts an update request performed using the Postman REST client; the API-key is passed through the header, and the response shows a success message.
**Figure 5.9 Map Displaying the Vehicles.**
**Figure 5.10 Updating the vehicle using an API-key**
I have tried to keep the user interface as simple and clean as possible. I believe a simple, compact user interface is a key factor in the usability of an application: if an interface is simple, more people can use it, and hence it serves its purpose. I installed the Android application on my device to check the look and feel, asked my friends for feedback, and made changes based on their input.
The main motivation was to have a user interface through which all the functionality can be accessed, and also to expose a comprehensive API implementing the most needed and useful functionality. This will help other developers build systems using this API.
<table>
<thead>
<tr>
<th>URL</th>
<th>Method Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>/VehicleRegistration</td>
<td>GET</td>
<td>Displays the form for vehicle registration</td>
</tr>
<tr>
<td>/admin</td>
<td>GET</td>
<td>Displays login page for admin</td>
</tr>
<tr>
<td>/saveVehicle</td>
<td>POST</td>
<td>The passed vehicle is saved</td>
</tr>
<tr>
<td>/isVehicleExists/{VehicleID}</td>
<td>GET</td>
<td>Returns true if vehicle exists</td>
</tr>
<tr>
<td>/validateLogin/{vehicleID}/{password}</td>
<td>GET</td>
<td>Returns the API-Key if the credentials match</td>
</tr>
<tr>
<td>/getAllVehicleDetails</td>
<td>GET</td>
<td>Returns the location details of all the Vehicles</td>
</tr>
<tr>
<td>/getVehicleDetails/{vehicleID}</td>
<td>GET</td>
<td>Returns the details of the required vehicle</td>
</tr>
<tr>
<td>/updateVehicle</td>
<td>PUT</td>
<td>Updates the location of the vehicle, provided the correct API-Key is passed</td>
</tr>
</tbody>
</table>
*Figure 5.11 Table depicting the API*
Chapter 7 - Testing
Once the development phase is completed, the next important phase is testing. Any application should be thoroughly tested so that the user can be assured of a functioning and reliable product; hence testing is a non-trivial activity and should be given enough time and effort. Testing cannot establish that a product functions properly under all conditions; it can only establish that the product does not function properly under specific conditions. Our goal is to attain the expected outcomes in as many scenarios as possible. Various types of testing need to be performed: unit testing to test each class, integration testing to check how all the modules bind together, performance testing to evaluate the performance of the system, and finally user acceptance testing to verify that the user has received what was agreed upon.
7.1 Unit Testing
Testing the application at the class or method level helps us confirm that each building block functions as expected, so that there are no problems when these individual elements collaborate to perform the higher-level functionality of the system. Unit testing can be performed by sending mock requests to each of the methods in the controller class. A basic example test method would contain the following code:
```
// using Spring's MockMvc, with static imports of
// MockMvcRequestBuilders.get and MockMvcResultMatchers.status
mockMvc.perform(get("/")).andExpect(status().isOk());
```
Using mock objects and passing test data, we can test each of the methods in the controllers. All the exported services have been tested using this procedure, and the same approach has been followed to test the data access methods in the DAO classes. The KeyGen class has been tested using a JUnit test case.
7.2 Integration Testing
Once all the modules are put together, we can test the higher-level functionality of the system; this phase is called integration testing.
<table>
<thead>
<tr>
<th>Test Case</th>
<th>Expected results</th>
<th>Actual Result</th>
</tr>
</thead>
<tbody>
<tr>
<td>Admin Login with valid user id and password.</td>
<td>User redirected to admin page.</td>
<td>Pass</td>
</tr>
<tr>
<td>Admin login with invalid user id and password.</td>
<td>Error message is displayed.</td>
<td>Pass</td>
</tr>
<tr>
<td>Admin clicks on Vehicle Registration hyper link.</td>
<td>Redirected to Vehicle Registration page.</td>
<td>Pass</td>
</tr>
<tr>
<td>Admin enters all the required fields with valid data.</td>
<td>Vehicle is successfully registered.</td>
<td>Pass</td>
</tr>
<tr>
<td>Admin does not enter all the fields for vehicle registration.</td>
<td>Error message is displayed.</td>
<td>Pass</td>
</tr>
<tr>
<td>Admin clicks on View vehicles on Map hyperlink.</td>
<td>The map is displayed with markers for all the vehicles.</td>
<td>Pass</td>
</tr>
<tr>
<td>Admin should be able to update the vehicle location with valid API-key.</td>
<td>The co-ordinates are updated successfully.</td>
<td>Pass</td>
</tr>
<tr>
<td>Admin tries to update the location of the vehicle without a valid API-key.</td>
<td>The co-ordinates are not updated. Error code is returned.</td>
<td>Pass</td>
</tr>
<tr>
<td>Admin attempts to retrieve the details of all the vehicles.</td>
<td>Receives the location details of all the vehicles.</td>
<td>Pass</td>
</tr>
<tr>
<td>Admin attempts to retrieve the details of all the vehicles in json format.</td>
<td>Receives the location details of all the vehicles in json format.</td>
<td>Pass</td>
</tr>
<tr>
<td>Android user registers with valid data.</td>
<td>User registered Successfully.</td>
<td>Pass</td>
</tr>
<tr>
<td>Android user attempts login with valid data.</td>
<td>Successfully logs into the system.</td>
<td>Pass</td>
</tr>
<tr>
<td>Android user attempts login with invalid data.</td>
<td>Error message is displayed.</td>
<td>Pass</td>
</tr>
</tbody>
</table>
*Figure 6 Integration Testing*
7.3 Performance Testing
It is important to test the application for performance. Good performance ensures that the services remain available to the user even under load. I have used the Apache JMeter tool for load and performance testing. This tool simulates a group of users sending requests to the application server and returns various statistics that show the performance of the server in the form of tables and corresponding graphs.
A Test Plan in JMeter holds the configuration required for running a test: the thread group (the number of users), the loop count, the ramp-up time, the URL to hit, the path to the exact resource, the port number, the parameters to be passed with the request, any headers, the type of request, and the type of content. Once all the parameters are configured, the test plan can be executed whenever needed.
I have performed performance testing on the most important service in my project, i.e., the service that retrieves the data of all the vehicles. I chose this service because it requires the most database resources and deals with a relatively large amount of data. I used a laptop with 8 GB of RAM running an instance of the project in the Tomcat server, and used one of the campus computers to perform the load testing, sending the requests over Wi-Fi. The data transferred per request is 11.14 KB.
Test plan 1 simulates a thread group of 100 users, a ramp-up period of 1 second, and a loop count of 10; that is, it issues 1000 requests and tries to start all of them by the end of 1 second. From the response time graph for test plan 1 we can observe that the response time increases while the throughput remains roughly constant.
Test plan 2 has a thread group of 200 users, a ramp-up period of 1 second, and a loop count of 10, simulating 2000 requests — a larger load than the first test plan. From the response time graph for test plan 2 we can see that the response time spiked in the middle. The deviation in the throughput is small, so the performance is acceptable.
Test plan 3 has a thread group of 500 users, a ramp-up period of 1 second, and a loop count of 10, sending out even more requests than the previous plans: 5000 in total. The response time graph has ups and downs, which I attribute to the concurrent operations on the database. The throughput is about 3200 per minute.
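The load each plan offers follows directly from the thread-group settings: total requests = number of users × loop count, with all threads started within the ramp-up window. A small sketch of that arithmetic, using the 11.14 KB per-request figure measured above:

```java
public class LoadMath {
    // Total requests a plan issues: each thread runs the loop count times.
    public static int totalRequests(int users, int loops) {
        return users * loops;
    }

    // Approximate data returned by the server across the whole plan, in KB.
    public static double totalDataKb(int users, int loops, double kbPerRequest) {
        return totalRequests(users, loops) * kbPerRequest;
    }

    public static void main(String[] args) {
        // The three test plans described above.
        System.out.println(totalRequests(100, 10)); // 1000 requests
        System.out.println(totalRequests(200, 10)); // 2000 requests
        System.out.println(totalRequests(500, 10)); // 5000 requests
        System.out.println(totalDataKb(100, 10, 11.14)); // ~11140 KB for plan 1
    }
}
```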
<table>
<thead>
<tr>
<th>No. of Users</th>
<th>Ramp up period</th>
<th>Loop Count</th>
<th>Throughput</th>
</tr>
</thead>
<tbody>
<tr>
<td>100</td>
<td>1</td>
<td>10</td>
<td>3043.831/minute</td>
</tr>
<tr>
<td>200</td>
<td>1</td>
<td>10</td>
<td>2906.132/minute</td>
</tr>
<tr>
<td>500</td>
<td>1</td>
<td>10</td>
<td>3211.991/minute</td>
</tr>
</tbody>
</table>
*Figure 7.1 Table depicting the throughput for different configurations.*
*Figure 7.2 Response Time Graph for Test Plan 1*
*Figure 7.3 Throughput Graph for Test Plan 1*
*Figure 7.4 Response Time Graph for Test Plan 2*
*Figure 7.5 Throughput Graph for Test Plan 2*
*Figure 7.6 Response Time Graph for Test Plan 3*
*Figure 7.7 Throughput Graph for Test Plan 3*
Chapter 8 - Conclusion
The fleet management system could be used by anyone who owns a fleet of vehicles and would like to track each vehicle in real time, making it easy to keep track of progress and have up-to-date information on each vehicle. The application also exposes an API that any developer could use to build on the underlying features and develop his/her own application. For example, a developer who wants to build an application that tracks each vehicle of a group of friends travelling together to the same destination can use the API to show all the vehicles of the group on a map. The project also prevents malicious updates by using a unique key for each user, which needs to be sent in the request header. Thus, this project is a complete application that could be used by a wide variety of users for various purposes. Developing it has given me a good grip on web application development, as the project has end-to-end functionality and integration with an Android app. I have gained experience with popular frameworks used in the industry for web application development, such as Spring and Hibernate, making me confident to start my career as a developer.
<table>
<thead>
<tr>
<th>Language</th>
<th>Lines of Code</th>
</tr>
</thead>
<tbody>
<tr>
<td>Java</td>
<td>1600</td>
</tr>
<tr>
<td>XML</td>
<td>250</td>
</tr>
<tr>
<td>JavaScript</td>
<td>300</td>
</tr>
<tr>
<td><strong>Total</strong></td>
<td><strong>2150</strong></td>
</tr>
</tbody>
</table>
*Figure 8 Table Lines of Code*
The above table depicts the breakdown of the lines of code in my project with respect to the language.
Chapter 9 - Future Work
There is a lot of scope to extend this project. A dedicated desktop application could be built to track the vehicles, with filters to select different vehicles. The project could be hosted on an online server and made available to a wider user base. The UI could be improved to make the application look more elegant and rich. Functionality could be added to store the trips made by each vehicle on a particular day, and analysis could be performed on the historical data to generate reports useful for the business.
Contents
INTRODUCTION TO EMBARCADERO DBARTISAN
  Product Benefits
    Database Administrators
    Developers
  About This Guide
SESSION 1: GETTING STARTED WITH DBARTISAN
  Download and Install
  Introduction to Embarcadero DBArtisan
  Overview
  Start DBArtisan
  Registering Cross-Platform Datasources
  General Usability Features
SESSION 2: OBJECT AND SCHEMA MANAGEMENT
  Advanced Object Creation and Management
    Creating a Table Object
    Making changes to an existing table object
    Working with Object DDL
  Advanced Schema Management
SESSION 3: DATABASE SECURITY MANAGEMENT
  Adding a new database user
  Granting and Editing User Privileges
SESSION 4: SPACE MANAGEMENT
  Built-in Space Management
  Advanced Space Management (Oracle and SQL Server only)
SESSION 5: SQL MANAGEMENT
  Visual Query Builder
  ISQL Window
  SQL Debugging, Analysis and Tuning
    SQL Debugging
    SQL Profiling
SESSION 6: JOB MANAGEMENT
  Advanced Job Management
SESSION 7: DATA MANAGEMENT
  Visual Data Editing
  Working with Table Data – Create Insert Statements
  Working with Table Data – Extract Data as XML
  Advanced Data Management – Schema and Data Migration
SESSION 8: PERFORMANCE MANAGEMENT
  Monitoring Sessions
  Advanced Client-Side Performance Monitoring
SESSION 9: CAPACITY MANAGEMENT
SESSION 10: GENERAL UTILITIES AND TOOLS
  Utilities Menu
  Tools Menu
ADDITIONAL RESOURCES
  Licensing Your Embarcadero Technologies Product
  Embarcadero Technologies Product Support
  Embarcadero Technologies Technical Support
  Embarcadero Technologies on the Web
Introduction to Embarcadero DBArtisan
DBArtisan is an industry-leading database administration solution for managing Oracle, Microsoft SQL Server, Sybase Adaptive Server, IBM DB2 for Windows, Unix, and Linux, IBM DB2 for OS/390 and z/OS databases, and MySQL. Its cross-platform capability allows users to efficiently manage heterogeneous database platforms easily using a single front-end tool. Using DBArtisan, users boost their productivity by using a single tool for all their databases, regardless of vendor.
Product Benefits
DBArtisan provides benefits to the following functions:
**Database Administrators**
DBArtisan enables database administrators to accomplish more with the time they have available in their workday. It eliminates the tedious tasks associated with researching schema dependencies when making object changes. Also included is a host of utilities that condense DBA tasks taking hours or days down to minutes.
**Developers**
DBArtisan provides additional administration functionality to database developers over standard development platforms. Using the powerful schema extraction, schema migration, and publication wizards, developers can quickly extract and move schema from development to other environments, as well as create objects much quicker than using old-fashioned hand coding techniques.
About This Guide
This evaluation guide is intended to help you get started using Embarcadero’s DBArtisan, the industry-leading solution for administering enterprise databases from a single point of control. While DBArtisan supports current versions of Oracle, Microsoft SQL Server, Sybase Adaptive Server, IBM DB2 for Unix, Windows, and Linux, IBM DB2 for OS/390 and z/OS, and MySQL, the examples in this guide are Oracle-centric. Unless otherwise noted, all features and functionality highlighted in this guide are applicable to all supported platforms.
After reviewing this evaluation guide, you will have the foundation you need to explore the many features and benefits of DBArtisan. You’ll have learned how to competently manage the major database administration disciplines using DBArtisan’s standard cross-platform console. In addition, you will have a solid understanding of DBArtisan’s more advanced Space, Performance, and Capacity management capabilities.
This guide is divided into 10 sessions:
- Session 1: Getting Started with DBArtisan
- Session 2: Schema Management
- Session 3: Security Management
- Session 4: Space Management
- Session 5: SQL Management
- Session 6: Job Management
- Session 7: Data Management
- Session 8: Performance Management
- Session 9: Capacity Management
- Session 10: General Utilities and Tools
You can use this basic tutorial as a roadmap of product highlights, but also to help you find your own path to explore DBArtisan.
Once you’ve started, you can select Help from the toolbar to find many additional resources that complement and build on many of the activities shown in this brief guide.
Session 1: Getting Started with DBArtisan
Download and Install
You can obtain the latest version of the DBArtisan software from the Embarcadero website at http://www.embarcadero.com/downloads/downloaddbartisan.jsp.
Provide the requested information and follow the steps indicated to download and install the software. When you first install an evaluation copy of DBArtisan, you can use the tool for 14 days. After that time, a permanent license is needed.
Introduction to Embarcadero DBArtisan
DBArtisan is an industry-leading database administration solution for managing Oracle, Microsoft SQL Server, Sybase Adaptive Server, MySQL, IBM DB2 for Windows, Unix, and Linux, and IBM DB2 for OS/390 and z/OS databases. Its cross-platform capability allows users to efficiently manage heterogeneous database platforms easily using a single front-end tool. Using DBArtisan, users boost their productivity by utilizing a single tool for all their databases, regardless of vendor.
Overview
The graphic below illustrates all the elements of the DBArtisan User Interface:
Start DBArtisan
1. On the Start menu, point to Programs, Embarcadero DBArtisan 8.5.0, and then select DBArtisan.
The first time DBArtisan starts, it displays a message indicating that it can automatically detect and register your datasources. If you have installed and used other Embarcadero tools, DBArtisan can find any active datasources being used by those tools. In addition, DBArtisan provides a Discover Datasources feature that automatically searches the DBMS configuration files on your system for datasources that are not currently registered. This feature presents a dialog box listing all unregistered datasources found on your network or local machine, including the name of the server or instance and the type of DBMS. Once discovered, you have the
option to register datasources for DBArtisan usage.
2. For the purpose of this Guide, dismiss the dialog. You will be registering a datasource manually.
Registering Cross-Platform Datasources
The Datasource Registration Wizard walks you through the process of registering a datasource for use with DBArtisan.
1. On the Datasource menu, select Register Datasource.
DBArtisan opens the wizard, initially prompting you for the DBMS type.
2. Select Oracle as the database type and then click Next.
DBArtisan opens the next panel of the Datasource Registration Wizard.
3. Provide the Host machine name associated with an Oracle datasource.
4. Type a Port number. The default is 1521, but you can change it to the port on which the Oracle listener is configured.
5. Specify a Type of SERVICE_NAME or SID and enter the corresponding value in the SID/Service Name box.
6. In the Datasource Name text box, type SAMPLE_DATASOURCE for the purpose of this example.
7. Click Next.
DBArtisan saves your selections and opens the next panel of the Datasource Registration Wizard.
8. In **User Id**, type the user id for the database.
9. In **Password**, type the user’s password.
10. To save and encrypt your password, select **Auto-Connect**?
11. Click **Next**.
DBArtisan opens the final panel of the wizard.
12. In the Managed Datasources tree, place the datasource you are registering.
13. Click **Finish**.
DBArtisan prompts you as to whether you want to connect to the datasource.
14. Click **Yes**.
DBArtisan offers the same easy-to-use Datasource Registration Wizard for IBM DB2, Microsoft SQL Server, Oracle, MySQL, and Sybase connections. The connection information only needs to be set up one time for each datasource and can be saved locally or in a common datasource catalog for use by other Embarcadero products.
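For reference, the host, port, and SID/SERVICE_NAME values gathered by the wizard map onto the standard Oracle thin-driver JDBC URL forms. This sketch only assembles the URL strings; no driver or live connection is assumed, and the host/service names are placeholders:

```java
public class OracleUrl {
    // SERVICE_NAME form uses the '//host:port/service' syntax.
    public static String forService(String host, int port, String service) {
        return "jdbc:oracle:thin:@//" + host + ":" + port + "/" + service;
    }

    // SID form uses the 'host:port:sid' syntax.
    public static String forSid(String host, int port, String sid) {
        return "jdbc:oracle:thin:@" + host + ":" + port + ":" + sid;
    }

    public static void main(String[] args) {
        // 1521 is the default listener port mentioned in the wizard steps above.
        System.out.println(forService("dbhost", 1521, "ORCL"));
        System.out.println(forSid("dbhost", 1521, "ORCL"));
    }
}
```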
You can configure Embarcadero database applications to use a datasource catalog stored in the system registry of your machine (local) or to use a datasource catalog located in the registry of another computer (remote). This capability makes it easy to share datasource catalogs among multiple users so that maintenance can occur in one location.
All Embarcadero database administration products share the datasource catalog, which means that when you set up your datasource catalog using one product such as DBArtisan, the same list of datasources is available in other Embarcadero Technologies products. Any changes you make to the datasource catalog are reflected in all Embarcadero database management products.
**General Usability Features**
DBArtisan provides many "user in mind" features that make the product configurable to meet individual needs and preferences. These features are designed to shave time off the tasks that you perform many times on any given working day.
**Retaining Datasource Explorer View Settings**
1. At the top of the **Explorer** tree, click to expand the drop-down menu.
2. Select Retain View Settings.
The next time you open DBArtisan, the Explorer appears just as you left it. All connections that were present when you closed DBArtisan will be reestablished.
**Datasource Explorer Bookmarks**
1. In the **Explorer** tree, right-click any node.
2. Select Add Bookmark.
DBArtisan opens the Add Friendly Bookmark Name dialog box.
3. Click **OK**.
After Bookmarks are defined you can use them to easily navigate to commonly used datasource resources via the main menu Bookmarks item.
---
**Setting Keyboard Shortcuts and Hotkeys**
1. In any open space above the Datasource Explorer, right-click.
DBArtisan opens a shortcut menu.
2. From the shortcut menu, select Customize.
The Customize dialog box opens.
3. In the Customize dialog box, open the Keyboard tab.

The Keyboard tab can be used to set Keyboard shortcut hot keys for all areas of DBArtisan functionality.
4. Click **Close**.
Referencing Most Recently Used Datasources
1. From the **File** menu, select **Recent Datasources**, and then choose a datasource.
DBArtisan opens the datasource in the Datasource Explorer, ready to work with an active connection.
Session 2: Object and Schema Management
Advanced Object Creation and Management
DBArtisan provides unparalleled database object management capabilities. Its database platform- and version-specific graphical object editors and wizards enable you to easily create, drop or alter any of your accessible database objects. The following example walks you through creating and then altering a standard Oracle table object. This concept carries across all of the supported object types, across all of the supported platforms.
Creating a Table Object
1. On the Datasource Explorer, expand the Schema node of an Oracle datasource.
2. On the Oracle datasource, right-click the Tables node, and then select New.
DBArtisan opens the Table wizard and leads you through the process of creating a table object.
3. Complete the wizard panels, and ensure that you create two or more columns in the table.
4. Click Finish.
DBArtisan lets you preview any and all generated scripts before you submit them to the database. This is standard for all object related scripts.
Making changes to an existing table object
Changes to database tables, such as modifying column lengths, inserting new columns, or deleting unneeded ones, can require dropping of a table. This requires knowledge of the underlying object dependencies so that these dependent objects are rebuilt after the table has been re-created. DBArtisan provides the ability to perform “extended” table alterations by constructing a SQL script with the steps necessary to save off the original data, create the new table, and populate it with the original data. Once these steps are complete, all dependent objects are then rebuilt and permissions re-applied. Following is a sample table change:
1. From the Explorer, Tables node, select the table you created in the previous example.
2. Double-click the table.
OR
3. From the Command menu, click Open.
DBArtisan opens the Table Editor. The Table Editor provides access to basic table properties, the list of table columns as well as any constraints, storage parameters, space allocation, partitioning, table dependencies, object privileges, table DDL and other attributes of the table.
4. Click the Columns tab.
5. Select one of the columns you created in this table you want to modify.
Details for the column are shown in the Column Attributes area on the right side of the pane.
6. In the Width or Scale text box, type a new value.
7. On the Tables Editor toolbar, select the Alter button.
DBArtisan lets you preview the SQL script before you submit it to the database.
8. Close the Tables Editor pane.
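The "extended" alteration described earlier boils down to an ordered script: preserve the data, re-create the table, reload it, then rebuild dependents. A minimal sketch of that step sequence, with hypothetical object names and an elided column list; this is illustrative only, not the exact script DBArtisan generates:

```java
public class ExtendedAlter {
    // Illustrative SQL step sequence for a table change that a simple
    // ALTER cannot express; the table name and definition are hypothetical.
    public static String[] steps(String table) {
        return new String[] {
            "RENAME " + table + " TO " + table + "_OLD",
            "CREATE TABLE " + table + " (...) -- new definition goes here",
            "INSERT INTO " + table + " SELECT * FROM " + table + "_OLD",
            "DROP TABLE " + table + "_OLD",
            "-- rebuild dependent objects and re-apply privileges"
        };
    }

    public static void main(String[] args) {
        for (String s : steps("EMP")) System.out.println(s);
    }
}
```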
Working with Object DDL
DBArtisan allows you to easily extract DDL for single or multiple objects using several methods. The most straightforward is described here:
1. On the Explorer, expand an Oracle datasource.
2. On the Oracle datasource, click the Tables node.
3. In the right pane of the Explorer window, right-click any table or group of tables (SHIFT+CLICK), and then select Extract.
The DDL for all highlighted objects is extracted directly in a DDL Editor where it can be altered, executed and saved to the database, with no intermediary steps required.
4. Close the PL/SQL Editor pane.
Advanced Schema Management
In addition to standardized object support, DBArtisan provides you with advanced Schema management features. These features include full or object-level schema extraction, migration (with or without data) and publication. This example walks you through a simple cross-platform migration between Oracle and SQL Server datasources. Because DBArtisan automatically resolves differences between these disparate DBMS platforms, you can concentrate on what you want to do, rather than how to actually do it. The Schema Migration Wizard sequences the creation of objects in the proper order to eliminate dependency errors. It also has the intelligence to use the fastest method available for copying table data.
**Schema Level Migration**
While this example focuses on schema migration, the same wizard principle applies to schema extract and publication.
1. **On the Utilities menu, select Schema Migration.**
DBArtisan opens the Migration Wizard.
2. **Click Next.**
DBArtisan opens the second panel of the Schema Migration Wizard.
3. **Select Perform New Migration and Normal mode** on this panel.
4. **Click Next.**
DBArtisan opens the third panel of the Migration Wizard.
5. Under **Source Datasource**, select an Oracle datasource.
6. Under **Target datasource**, select an SQL Server datasource.
7. Click **Next**.
DBArtisan opens the next panel of the Migration Wizard.
8. Use the **Server Objects** and **Database Object Types** boxes to select the owner (All owners is the default) and associated object types you want to migrate to the target datasource.
9. Click **Next**.
DBArtisan opens the next panel of the Migration Wizard.
10. In the **Migration Options** box, specify the migration options to use for this migration job. This panel provides a comprehensive list of dependency, script, and ownership options.
11. Click **Next**.
DBArtisan opens the next panel of the Migration Wizard.
This panel provides a summary of the migration about to be performed. At this point you can either click Next to view the progress of the migration or click Cancel to quit.
Session 3: Database Security Management
DBArtisan can help you efficiently establish and maintain database security and related objects. Whether you are managing an existing production database or setting up a new environment, you’ll find consistent support across all of the supported platforms.
Adding a new database user
While this example focuses on creating a new Oracle user, the same wizard-driven principle applies to all security objects (groups, roles, etc).
1. On the Datasource Explorer, expand an Oracle datasource, and then the Security node.
2. On the Security node, right-click Users, and then click New.
DBArtisan opens the User Wizard and leads you through the process of adding a user.
3. Provide the information on each panel of the User Wizard until you reach the DDL View panel.
DBArtisan allows you to preview any and all generated scripts before they are submitted to the database. This is standard for all object related scripts.
4. Click Execute to create the new user.
DBArtisan opens the User Editor for the new user. The standard User Editor can be used to manage existing database users as shown below.
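The script previewed on the DDL View panel is ordinary Oracle DDL. A minimal sketch, with hypothetical names and options, looks like this:

```sql
CREATE USER sample_user IDENTIFIED BY sample_password
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON users
/
GRANT CREATE SESSION TO sample_user
/
```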
Granting and Editing User Privileges
Privileges can be easily granted, revoked, or viewed from within either of two editors in DBArtisan: the User Editor or the individual object editor (table, procedure, etc.). The User Editor provides a tabbed interface, which can be used to view and modify individual attributes of the user.
1. In the User Editor, open the Object Permissions tab.
2. Use the Object Type dropdown to select a set of objects such as tables or views.
3. Select a cell (corresponding to a specific object type and a specific permission, such as DELETE), and then click **Grant**.
A distinctive icon is shown in the cell, indicating that this permission is granted. You use a similar process to revoke privileges and perform other permissions-based activities.
4. On the Object Editor toolbar, click **Alter** to implement the changes.
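Behind the scenes, the grant and revoke actions generate standard SQL. For example, granting and later revoking DELETE on the HR.JOB_HISTORY table for a hypothetical user would produce statements such as:

```sql
GRANT DELETE ON HR.JOB_HISTORY TO sample_user
/
REVOKE DELETE ON HR.JOB_HISTORY FROM sample_user
/
```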
Session 4: Space Management
Managing space is vital to ensuring the availability and performance of your databases. DBArtisan incorporates many built-in space features that enable you to smartly manage and exploit all aspects of your database's storage. The following example walks you through a review of DBArtisan's built-in support for reporting Oracle tablespace storage and space data.
Built-in Space Management
While this example is specific to Oracle tablespaces, the same concept applies to all of the supported platforms.
1. On the Datasource Explorer, expand an Oracle datasource.
2. On the Oracle datasource, expand the Storage node, and then select Tablespace.
3. Right-click any tablespace listed in the right pane of the Explorer window, and then click Open.
Embarcadero DBArtisan opens the Tablespace Editor.
4. On the Tablespace Editor, click the Storage tab.
The Storage tab displays and lets you edit the tablespace extent limits.
5. On the Tablespace Editor, click the Space tab.
The Space tab displays a graphical view of the Free space and Fragmentation Index for the target tablespace.
6. Finally, on the Tablespace Editor, click the Map tab.
The Map tab displays a color-coded map of the objects contained on the tablespace.
The map segments are proportional to the actual size of the objects on the tablespace.
7. Close the Tablespaces Editor pane.
**Advanced Space Management (Oracle and SQL Server only)**
For advanced space analysis and management, DBArtisan’s optional Space Analyst component contains sophisticated diagnostics to help you pinpoint all space-related problems in your database, as well as an intelligent reorganization wizard that can reorganize all or selected parts of your database.
**Embarcadero Space Analyst**
1. On the Analyst menu, select **Space Analyst**.
The Space Analyst launches in the DBArtisan workspace.
Embarcadero’s Space Analyst provides sophisticated diagnostic capabilities to troubleshoot bottlenecks and performance inefficiencies that result in poor space management.
Please see the DBArtisan online help for a detailed walkthrough of all available features and functionality.
2. Close the Space Analyst pane.
Session 5: SQL Management
DBArtisan provides powerful visual tools for creating and analyzing complex SQL statements and server-side code objects. The following examples walk you through DBArtisan’s Visual Query Builder, feature-rich ISQL facility and some of the advanced analysis and debugging capabilities provided by the Embarcadero SQL Debugger and SQL Profiler.
Visual Query Builder
1. From the Tools menu, select Query Builder.
OR
2. In the right pane, right-click a table, and then select Build Query.
DBArtisan opens the Query Builder.
3. In the Tables/Views tab, right-click a table or view and select Add.
4. In the window that opens, select the columns to return in the result.
Query Builder generates the query text in the lower SQL window. You can build advanced queries using the options supplied in the DML tab. You choose the type of query (SELECT, INSERT, and so on) using the dropdown on the Query Builder toolbar.
5. After the query is built, click the Execute button (green arrow) on the Query Builder toolbar.
Query Builder displays results in the lower SQL window.
6. Close the QueryBuilder pane.
ISQL Window
1. On the File menu, click New, and then ISQL.
DBArtisan opens the ISQL Editor window.
2. Add SQL code via your method of choice (free-form typing, retrieve from a file, paste copied code, etc.).
The ISQL Editor window includes the following features and options:
- The ISQL window highlights all platform and general keywords and provides the options for SQL code formatting, syntax checking and analysis.
- Once code is executed you have control over whether your transaction is committed or rolled back from the database.
- For all open ISQL windows, there are also options for connection locking, scheduling, executing your code across multiple datasources, explain plan generation, and SQL Tuning.
3. Press F8 prior to SQL execution.
DBArtisan opens the Query Option dialog box that lets you set platform-specific Query Options to immediately determine if your code is optimized.
4. Either close the Query Options dialog and then the ISQL Editor window, or continue working and execute your query. When complete, ensure that only the Datasource Explorer window is open.
SQL Debugging, Analysis and Tuning
To analyze and debug your SQL code, DBArtisan provides cross-platform SQL code debuggers, and for your Oracle databases, a robust PL/SQL code profiler that helps you to pinpoint and eliminate “hot spots” within poorly running server-side code. To ensure code efficiency, the ISQL window provides tight integration with Embarcadero’s SQL Tuner, so you can perform multiple “test then tune” iterations without having to leave an open ISQL window.
**SQL Debugging**
While this example is specific to Oracle PL/SQL Debugging the same interface and functionality applies to all of the supported platforms.
1. On the **Datasource Explorer**, expand any Oracle datasource node.
2. On the Oracle datasource, expand the **Procedures** node.
3. In the right pane of the **Explorer**, right-click any stored procedure, and then select **Debug**.
```sql
CREATE OR REPLACE PROCEDURE SCOTT.TEST
IS
BEGIN
DBMS_OUTPUT.PUT_LINE ('TEST');
END;
/
```
4. If applicable, enter any input parameters in the Procedure Execution input window and then click **Continue**.
After the SQL Debugger interface is displayed you can step through code, step into dependencies, set and watch variables, and even perform basic code profiling for each line of executed code.
Please see the DBArtisan online help for a detailed listing of all available SQL features.
Session 6: Job Management
DBArtisan freely integrates with the Microsoft Windows Task Scheduler, which allows you to schedule virtually any task to run on your own computer whenever and how often you’d like. While this example is specific to an Oracle table redefinition, the same concept applies to any job or script that can be scheduled.
Advanced Job Management
To schedule a job, do the following:
1. On the Explorer, expand any Oracle datasource.
2. On the Oracle datasource, expand the Tables node, and then right-click any table.
3. Select Extract.
4. From the ISQL window toolbar, click Schedule.
The Scheduler Action dialog box opens where you can provide a name, set notifications, and specify an output directory for the new job.
5. After you have completed the dialog box, click OK.
6. To monitor and administer your new job, on the Oracle datasource, right-click the Instance node, and then select Job Scheduler. This opens the Windows Job Scheduler dialog. For the purposes of this exercise, you can either finish scheduling the task and inspect the results when it completes, or you can click Cancel to proceed to the next session.
Session 7: Data Management
DBArtisan provides comprehensive facilities to help you manage the data in all of your databases. A visual data editor helps you add, change, and delete data from your tables with all referential integrity enforced. You can create insert statements for tables using current data and also extract data as XML documents for certain databases. Rounding out its rich Schema Management capabilities, DBArtisan also allows you to migrate schema objects and associated table data from one database server to another, across the same or different platforms.
Visual Data Editing
To start the Visual Data Editor, do the following:
1. In the Datasource Explorer, right-click any table or tables, and select Edit Data.
DBArtisan opens the Data Editor Filter.
2. In Columns, select the columns to include in the Edit.
3. You can also filter the editable rows by including your own Select statement.
4. Click OK.
In Live mode, all changes are applied to the database when you move off of an updated or inserted row. Deleted rows are immediately removed from the database.
Batch mode allows you to make changes and then save them all when complete. Mode is controlled by a dropdown on the Data Editor toolbar.
5. Experiment with editing your data, and when complete, on the Data Editor toolbar, click the Execute (blue arrow) button.
DBArtisan commits your changes. Regardless of mode, all of the generated DML statements are viewable in the lower SQL window.
6. Close the Data Editor pane.
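As an illustration of the DML shown in the lower SQL window, changing one column of an existing row produces an ordinary UPDATE statement; the table and values below are hypothetical:

```sql
UPDATE HR.JOB_HISTORY
   SET DEPARTMENT_ID = 110
 WHERE EMPLOYEE_ID = 102
   AND JOB_ID = 'IT_PROG'
/
```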
Working with Table Data – Create Insert Statements
1. On the Datasource Explorer, expand an Oracle datasource.
2. On the Oracle datasource, expand the Tables node.
3. In the right pane of the Explorer window, right-click any table, and then select **Create Insert Statements**.
DBArtisan opens the Create Insert Statements dialog box.
5. In **Columns**, select the columns you want to include in the Insert statement.
6. You can also filter what rows are included by adding your own Select statement.
7. **OPTIONAL:** Select owner information and a row limit.
8. Click **OK**.
The resulting insert statements are created and presented in an active ISQL window. At this point they can be executed immediately, scheduled to run later, or saved. Note that all extracted insert statements can be run against the same or different databases containing a similar schema.
```sql
--
-- TABLE INSERT STATEMENTS
--
INSERT INTO HR.JOB_HISTORY ( JOB_HISTORY.EMPLOYEE_ID,
  JOB_HISTORY.START_DATE, JOB_HISTORY.END_DATE, JOB_HISTORY.JOB_ID,
  JOB_HISTORY.DEPARTMENT_ID )
VALUES ( 113, TO_DATE('11/04/2006 12:43:24 PM',
  'MM/DD/YYYY HH:MI:SS AM'), TO_DATE('01/15/2007 12:43:24 PM',
  'MM/DD/YYYY HH:MI:SS AM'), 'DG_MAN', 270 )
/
INSERT INTO HR.JOB_HISTORY ( JOB_HISTORY.EMPLOYEE_ID,
  JOB_HISTORY.START_DATE, JOB_HISTORY.END_DATE, JOB_HISTORY.JOB_ID,
  JOB_HISTORY.DEPARTMENT_ID )
VALUES ( 102, TO_DATE('01/13/1993 12:00:00 AM',
  'MM/DD/YYYY HH:MI:SS AM'), TO_DATE('07/24/1998 12:00:00 AM',
  'MM/DD/YYYY HH:MI:SS AM'), 'IT_PROG', 60 )
/
INSERT INTO HR.JOB_HISTORY ( JOB_HISTORY.EMPLOYEE_ID,
  JOB_HISTORY.START_DATE, JOB_HISTORY.END_DATE, JOB_HISTORY.JOB_ID,
  JOB_HISTORY.DEPARTMENT_ID )
VALUES ( 101, TO_DATE('09/21/1999 12:00:00 AM',
  'MM/DD/YYYY HH:MI:SS AM'), TO_DATE('10/27/1999 12:00:00 AM',
  'MM/DD/YYYY HH:MI:SS AM'), 'AC_ACCOUNT', 110 )
/
```
9. Close the editor pane.
Working with Table Data – Extract Data as XML
This feature is available for Oracle 9i and SQL Server 8.1. The following example is specific to Oracle 9i, but the concept applies to SQL Server 8.1 as well.
1. On the Datasource Explorer, expand an Oracle datasource.
2. On the Oracle datasource, expand the Tables node.
3. In the right pane of the Explorer window, right-click any table listed, and then select Extract Data as XML.
4. Select the columns to include in the XML document.
5. You can also filter what rows are included by adding your own Select statement.
6. Click OK.
The resulting XML document is created and presented in an active XML Editor. At this point the document can be saved in XML format.
7. Close the editor pane.
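For orientation, Oracle's canonical XML representation of query results uses a ROWSET/ROW structure, so the extracted document resembles the sketch below (element names follow the table's columns; the values shown are hypothetical):

```xml
<?xml version="1.0"?>
<ROWSET>
   <ROW num="1">
      <EMPLOYEE_ID>113</EMPLOYEE_ID>
      <JOB_ID>DG_MAN</JOB_ID>
      <DEPARTMENT_ID>270</DEPARTMENT_ID>
   </ROW>
</ROWSET>
```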
Advanced Data Management – Schema and Data Migration
DBArtisan provides advanced data management tools that help you to move schema and corresponding table data across the same or different platforms. You can copy a single database object, all objects owned by a specific user, or an entire database all guided by a wizard-driven process.
Schema and Data Migration
While this example is specific to an Oracle to SQL Server schema and data migration the same concept applies to any migration involving any combination of the supported platforms.
To open the Schema Migration Wizard:
1. On the Utilities menu, select Schema Migration.
DBArtisan opens the Migration Wizard.
2. Click Next.
DBArtisan opens the next panel of the Migration Wizard.
3. Select the **Perform new migration and Normal Mode** options.
4. Click **Next**.
DBArtisan opens the next panel of the Migration Wizard.
5. Select an Oracle server and database as the **Source Datasource** and a Microsoft SQL Server server and database as the **Target Datasource**.
6. Click **Next**.
DBArtisan opens the next panel of the Migration Wizard.
7. Select the objects to be migrated to the target datasource.
8. Select which owner to transfer objects for.
9. Specify the migration options to use for this migration job.
10. Select the Customize Object List option.
11. Click Next.
DBArtisan opens the next panel of the Migration Wizard.
12. Select the specific objects you would like to migrate.
13. Click Next.
DBArtisan opens the next panel of the Migration Wizard.
14. Select the Dependency, Script, Owner, Table, and Migration options to be used while performing the migration.
15. Click Next.
DBArtisan opens the next panel of the Migration Wizard. It provides a summary of your choices and provides additional options.
Clicking Finish executes the migration and lets you view the progress of the job.
Session 8: Performance Management
DBArtisan offers a number of different options to help you manage the performance of your databases. First, DBArtisan ships with a built-in process monitor that helps you understand who is connected to your database along with each user’s current activity and session-related data. For more robust performance details, DBArtisan’s Performance Analyst add-on is a powerful client-side database monitor that runs fully contained in the DBArtisan console.
Monitoring Sessions
While this example is specific to Oracle, the Process Monitor is available for all of the supported platforms.
To start the DBArtisan Process Monitor:
1. On the Datasource Explorer, select an Oracle datasource.
2. From the Utilities menu, select Database Monitor.
The Database Monitor includes the following options and features:
- Highlight any session, and any currently running SQL is displayed in the lower pane.
- You can drill down into a specific session to display session-level statistical details, historical and current wait events, along with a working copy of the currently running SQL that can be copied to an ISQL window for explain plan generation.
- By using the Monitor dropdown options, you can display more advanced database-level monitoring data such as locks, blocking locks, hit ratio by user, Top 20 SQL, etc.
3. Close the Database Monitor pane.
Advanced Client-Side Performance Monitoring
For advanced performance monitoring and management, DBArtisan’s optional Performance Analyst provides intelligent diagnostic information and strong drilldown details to help you get to the heart of any episode of performance degradation. Performance Analyst integrates completely with DBArtisan so you can fix any performance problems with a few clicks of the mouse.
As of DBArtisan 8.5, Performance Analyst is available for Oracle, SQL Server, Sybase and DB2 for Unix, Windows, and Linux on Open Systems.
Embarcadero Performance Analyst
1. On the Analyst menu, select Performance Analyst.
The Performance Analyst opens in the DBArtisan workspace for the target datasource.
Please see the DBArtisan online help for a detailed walkthrough of all available features and functionality.
For enterprise performance monitoring, DBArtisan integrates with the Embarcadero Performance Center Web Client. While integration requires a licensed Performance Center server, there are no upgrade requirements for the DBArtisan console.
**NOTE:** You should only work through the following exercise if you are a current Performance Center user. If you are not a Performance Center customer, please read the following for information purposes only.
Use the following to establish a quick connection to your Performance Center server:
1. On the Options Editor, select the Perf Center tab.
2. Select the Web Client radio button and enter the Performance Center server info as indicated. Perform a test to ensure the configuration is correct. After a connection is established you can use the Tools > Performance Center option to launch the Web Client within the DBArtisan console. If you are using the full Performance Center client you can use this same Options editor tab to switch back.
Note that the Performance Center web client provides read-only access to the monitored datasources. To perform edits or maintenance you must switch to the full Performance Center client.
Please see the DBArtisan online help for a detailed walkthrough of all available features and functionality.
Session 9: Capacity Management
Planning for the future of your critical databases used to be a difficult task. However, DBArtisan’s optional Capacity Analyst tool makes it easy to understand where your databases are today and where they are headed in the future. Capacity Analyst lets you track key database metadata and performance metrics over time so you can perform trend analysis on key areas like growth, object fragmentation, I/O and session load. Like all of the Analyst Series products, Capacity Analyst runs fully contained within DBArtisan, so you have access to smart, built-in forecasting mechanisms that allow you to predict when your databases will run out of space and the ability to proactively manage your storage assets, all from the same console.
As of DBArtisan 8.5, Capacity Analyst is available for DB2 for Unix, Windows, and Linux, Sybase, Oracle, and SQL Server.
Advanced Capacity Planning – Embarcadero Capacity Analyst
1. From the Analyst toolbar, click the Capacity Analyst button.
The Capacity Analyst opens in the DBArtisan workspace for the target Oracle datasource.
Please see the Embarcadero Capacity Analyst evaluation guide for a detailed walkthrough of all available features and functionality.
Session 10: General Utilities and Tools
No evaluation of DBArtisan would be complete without a mention of the general Utilities and Tools that are available across all of the supported platforms.
Utilities Menu
The main toolbar Utilities menu contains the more advanced DBArtisan features. The available menu items are context-sensitive and version specific for the selected datasource DBMS platform. This example shows Utilities menu features that are available for Oracle.
Tools Menu
The main toolbar Tools menu contains those features that are common across all DBMS platforms. This example shows the Tools menu features that are available for all supported DBMS platforms. Note that if any other Embarcadero products are installed on your client, they will be available in the Tools menu.
All DBArtisan utilities and tools provide a common interface that walks you through all input and execution requirements. All results are consistently presented so you can easily move between features without effort or confusion.
Additional Resources
Licensing Your Embarcadero Technologies Product
All Embarcadero Technologies products include a 14-day trial period. To continue using the product without interruption, we recommend that you license it as soon as possible. To license your product, use the License Request Wizard found in the Help menu of your respective product. If you have not yet purchased your Embarcadero Technologies product, contact sales@embarcadero.com, or uk.sales@embarcadero.com for sales in the EMEA region.
Embarcadero Technologies Product Support
The Embarcadero Technologies Web site is an excellent source for additional product information, including white papers, articles, FAQs, discussion groups, and the Embarcadero Knowledge Base. Go to www.embarcadero.com/resources, or click any of the links below, to find:
- Documentation
- Online Demos
- Technical Papers
- Discussion Groups
- Knowledge Base
- FAQ
Embarcadero Technologies Technical Support
If you have a valid maintenance contract with Embarcadero Technologies, the Embarcadero Technical Support team is available to assist you with any problems you have with our applications. Our maintenance contract also entitles registered users of Embarcadero Technologies products to download free software upgrades during the active contract period. Evaluators receive free technical support for the term of their evaluation (14 days).
We encourage you to open technical support cases via the Technical Support request form at the Embarcadero Technologies Web site. For additional information about Embarcadero Technologies Technical Support, go to the Support page on our Web site.
Embarcadero Technologies on the Web
To download evaluations of other Embarcadero Technologies products or to learn more about our company and our products visit us at www.embarcadero.com.
Knowledge Compilation Properties of Trees-of-BDDs, Revisited
Hélène Fargier
IRIT-CNRS, UMR 5505
Université de Toulouse, France
fargier@irit.fr
Pierre Marquis
CRIL-CNRS, UMR 8188
Université Lille-Nord de France, Artois, France
marquis@cril.univ-artois.fr
Abstract
Recent results have shown the interest of trees-of-BDDs [Subbarayan et al., 2007] as a suitable target language for propositional knowledge compilation from the practical side. In the present paper, the concept of tree-of-BDDs is extended to additional classes of data structures C thus leading to trees-of-C representations (ToC). We provide a number of generic results enabling one to determine the queries/transformations satisfied by ToC depending on those satisfied by C. We also present some results about the spatial efficiency of the ToC languages. Focusing on the ToOBDD< language (and other related languages), we address a number of issues that remained open in [Subbarayan et al., 2007]. We show that beyond CO and VA, the ToOBDD< fragment satisfies IM and ME but satisfies neither CD nor any query among CE, SE unless P = NP. Among other results, we prove that ToOBDD< is not comparable w.r.t. succinctness with any of CNF, DNF, DNNF unless the polynomial hierarchy collapses. This contributes to the explanation of some empirical results reported in [Subbarayan et al., 2007].
1 Introduction
This paper is concerned with “knowledge compilation” (KC), a family of approaches proposed so far for addressing the intractability of a number of AI problems of various kinds (reasoning, decision making, etc.). The key idea underlying KC is to pre-process parts of the available information (i.e., turning them into a compiled form) for improving on-line computational efficiency (see among others [Darwiche, 2001; Cadoli and Donini, 1998; Selman and Kautz, 1996; del Val, 1994]).
An important research line in KC [Gogic et al., 1995; Darwiche and Marquis, 2002] addresses the following issue: How to choose a target language for knowledge compilation? In [Darwiche and Marquis, 2002], the authors argue that the choice of a target language must be based both on the set of queries and transformations which can be achieved in polynomial time when the data are represented in the language, as well as the spatial efficiency of the language. They pointed out a KC map which can be viewed as a multi-criteria evaluation of a number of propositional fragments, including DNNF, d-DNNF, CNF, DNF, OBDD< (the union of all OBDD< when < varies), etc. (see [Darwiche and Marquis, 2002] for details). From there, other propositional fragments have been considered so far and put in the KC map, see for instance [Wachter and Haenni, 2006; Fargier and Marquis, 2006; Subbarayan et al., 2007; Pipatsrisawat and Darwiche, 2008; Fargier and Marquis, 2008a; 2008b].
Recent experimental results have shown the practical interest of trees-of-BDDs [Subbarayan et al., 2007] as a target language for propositional knowledge compilation: it turns out that the tree-of-BDDs language renders feasible the compilation of a number of benchmarks which cannot be compiled into d-DNNF due to space limitations.
In the present paper, we elaborate on the tree-of-BDDs language. After some formal preliminaries (Section 2), we generalize the tree-of-BDDs language to the family of ToC representations where C is any complete propositional language (Section 3). We provide a number of generic results enabling one to determine the queries/transformations satisfied by ToC depending on the queries/transformations satisfied by C. We also present some results about the spatial efficiency of the ToC languages. Focusing on ToOBDD< and some related languages, we then address a number of issues that remained open in [Subbarayan et al., 2007] (Section 4): beyond CO and VA, the ToOBDD< language satisfies IM and ME but does not satisfy any query among CE, SE unless P = NP. Under similar assumptions from complexity theory, we demonstrate that ToOBDD< does not satisfy any transformation among CD, FO, $\land BC$, $\lor C$ or $\neg C$. Among other succinctness results, we prove that the ToOBDD< language is not comparable w.r.t. succinctness with any of CNF, DNF or DNNF unless the polynomial hierarchy PH collapses. This contributes to the explanation of some empirical results reported in [Subbarayan et al., 2007].
We conclude the paper by a discussion of the results and some perspectives (Section 5). Proofs are omitted for space reasons but are available at http://www.fr/~marquis/fargier-marquis-ijcai09.pdf.
2 Representations and the KC Map
Trees-of-BDDs and their forthcoming generalization are not stricto sensu formulae. Hence we need to extend the notions of queries, transformations and succinctness at work in the KC map to such representations. Roughly speaking, a propositional representation language is a way to represent Boolean functions. Such a representation language often takes the form of a standard propositional language, but other data structures can be used as well (e.g., Karnaugh maps, truth tables, various graphs including binary decision diagrams, and of course trees-of-BDDs) for the representation purpose.
Formally, given a finite set of propositional variables $PS$, we consider Boolean functions $f$ from $\{0, 1\}^X$ to $\{0, 1\}$, where $X \subseteq PS$; $Var(f) = X$ is called the scope of $f$. The support $\Omega(f)$ of $f$ is the set of all assignments $\omega$ of $Var(f)$ to Boolean values such that $f(\omega) = 1$. For any $X \subseteq PS$, we note by $\overline{X}$ the set $PS \setminus X$. The set of Boolean functions is equipped with the three standard internal laws $\land$, $\lor$ and $\neg$. Given $X \subseteq PS$, we note by $\exists X.f$ the Boolean function with scope $Var(f) \setminus X$ that maps an assignment $\omega'$ of $Var(f) \setminus X$ to $1$ iff there exists an assignment $\omega$ of $Var(f)$ such that the restriction of $\omega$ over $Var(f) \setminus X$ and $\omega'$ coincide and $f(\omega) = 1$.
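To make the support and the forgetting operation $\exists X.f$ concrete, here is a small Python sketch on a toy extensional (truth-table) encoding of Boolean functions; the encoding and the helper names are our own illustration, not from the paper:

```python
from itertools import product

# Toy extensional encoding of Boolean functions (ours, for illustration):
# f is a pair (scope, models) where scope is a tuple of variable names
# and models is the support Omega(f), i.e. the set of satisfying
# assignments, each tuple of bits aligned with scope.

def forget(f, X):
    """exists X . f : the function on Var(f) \\ X that maps an
    assignment to 1 iff some extension of it over X satisfies f."""
    scope, models = f
    keep = [i for i, x in enumerate(scope) if x not in X]
    return (tuple(scope[i] for i in keep),
            {tuple(m[i] for i in keep) for m in models})

# f = (a and not b) or c over scope (a, b, c)
scope = ("a", "b", "c")
models = {m for m in product((0, 1), repeat=3) if (m[0] and not m[1]) or m[2]}
f = (scope, models)

g = forget(f, {"b"})               # exists b . f
print(g[0], sorted(g[1]))          # ('a', 'c') [(0, 1), (1, 0), (1, 1)]
```

Enumerating extensions instead of projecting model tuples would be exponential in $|X|$; dropping the forgotten coordinates from the support gives the same function directly.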
Definition 1 (representation language) (inspired from [Gogic et al., 1995]) A (propositional) representation language over a finite set of propositional variables $PS$ is a set $C$ of data structures $\alpha$ (also referred to as $C$-representations), together with a scope function $Var: C \rightarrow 2^{PS}$ and an interpretation function $I$ which associates to each $C$-representation $\alpha$ a Boolean function $I(\alpha)$ the scope of which is $Var(\alpha)$. $C$ is also equipped with a size function from $C$ to $\mathbb{N}$ that provides the size $|\alpha|$ of any $C$-representation $\alpha$.
Definition 2 (complete language) A propositional representation language $C$ is said to be complete iff for any Boolean function $f$ with $Var(f) \subseteq PS$, there exists a $C$-representation $\alpha$ such that $I(\alpha) = f$.
Clearly enough, formulae from a standard propositional language are representations of Boolean functions. The size of such a formula is the number of symbols in it. Slightly abusing words, when $\Sigma$ is a propositional formula representing a Boolean function $g$ one often says that a representation $\alpha$ of $g$ is a representation of $\Sigma$ instead of $\alpha$ is a representation of the semantics of $\Sigma$.
The DAG-NNF language [Darwiche and Marquis, 2002] is also a complete graph-based representation language of Boolean functions. Distinguished formulae from DAG-NNF are the literals over PS, the clauses (a clause is a finite disjunction of literals or the Boolean constant $\bot$) and the terms (a term is a finite conjunction of literals or the Boolean constant $\top$). We assume the reader to be familiar with the
\footnote{$I$ refers to the interpretation function associated to the $C$ language, so that $I_C$ would be a more correct notation for it; nevertheless, in order to keep the notations light and since no ambiguity is possible, we refrained from indexing the functions $I$ (as well as $Var$ and the size function) by the associated representation language.}
DAG-NNF fragments DNNF, d-DNNF, CNF, DNF, FBDD, OBDD<, OBDD, MODS, etc.
Obviously, all the logical notions pertaining to formulae viewed up to logical equivalence can be easily extended to any representation language $C$ of Boolean functions. For instance, an assignment $\omega$ of $Var(\alpha)$ to Boolean values is said to be a model of a $C$ representation $\alpha$ over $Var(\alpha)$ iff $I(\alpha)(\omega) = 1$. Similarly, two representations $\alpha$ and $\beta$ (possibly from different representation formalisms) are said to be equivalent, noted $\alpha \equiv \beta$, when they represent the same Boolean function. A $C$ representation $\alpha$ is consistent (resp. valid) iff $\alpha$ does not represent the Boolean function $0$ (resp. represents the Boolean function $1$). $\alpha$ is a logical consequence of $\beta$, noted $\beta \models \alpha$, iff $\Omega(I(\beta)) \subseteq \Omega(I(\alpha))$.
We are now ready to extend the notions of queries, transformations and succinctness considered in the KC map to any propositional representation language. Their importance is discussed in depth in [Darwiche and Marquis, 2002], so we refrain from recalling it here.
Definition 3 (queries) Let $C$ denote a propositional representation language.
- $C$ satisfies CO (resp. VA) iff there exists a polytime algorithm that maps every $C$ representation $\alpha$ to $1$ if $\alpha$ is consistent (resp. valid), and to $0$ otherwise.
- $C$ satisfies CE iff there exists a polytime algorithm that maps every $C$ representation $\alpha$ and every clause $\delta$ to $1$ if $\alpha \models \delta$ holds, and to $0$ otherwise.
- $C$ satisfies EQ (resp. SE) iff there exists a polytime algorithm that maps every pair of $C$ representations $\alpha, \beta$ to $1$ if $\alpha \equiv \beta$ (resp. $\alpha \models \beta$) holds, and to $0$ otherwise.
- $C$ satisfies IM iff there exists a polytime algorithm that maps every $C$ representation $\alpha$ and every term $\gamma$ to $1$ if $\gamma \models \alpha$ holds, and to $0$ otherwise.
- $C$ satisfies CT iff there exists a polytime algorithm that maps every $C$ representation $\alpha$ to a nonnegative integer that represents the number of models of $\alpha$ over $Var(\alpha)$ (in binary notation).
- $C$ satisfies ME iff there exists a polynomial $p$ and an algorithm that outputs all models of an arbitrary $C$ representation $\alpha$ in time $p(|\alpha|, m)$, where $m$ is the number of its models (over $Var(\alpha)$).
Definition 4 (transformations) Let $C$ denote a propositional representation language.
- $C$ satisfies CD iff there exists a polytime algorithm that maps every $C$ representation $\alpha$ and every consistent term $\gamma$ to a $C$ representation $\beta$ of the restriction of $I(\alpha)$ by $I(\gamma)$, i.e., $Var(\beta) = Var(\alpha) \setminus Var(\gamma)$ and $I(\beta) = \exists Var(\gamma).(I(\alpha) \land I(\gamma))$.
- $C$ satisfies FO iff there exists a polytime algorithm that maps every $C$ representation $\alpha$ and every subset $X$ of variables from $PS$ to a $C$ representation of $\exists X. I(\alpha)$. If the property holds for each singleton $X$, we say that $C$ satisfies SFO.
- $C$ satisfies $\land C$ (resp. $\lor C$) iff there exists a polytime algorithm that maps every finite set of $C$ representations $\alpha_1, \ldots, \alpha_n$ to a $C$ representation of $I(\alpha_1) \land \cdots \land I(\alpha_n)$ (resp. $I(\alpha_1) \lor \cdots \lor I(\alpha_n)$).
- $C$ satisfies $\land BC$ (resp. $\lor BC$) iff there exists a polytime algorithm that maps every pair of $C$ representations $\alpha$ and $\beta$ to a $C$ representation of $I(\alpha) \land I(\beta)$ (resp. $I(\alpha) \lor I(\beta)$).
- $C$ satisfies $\neg C$ iff there exists a polytime algorithm that maps every $C$ representation $\alpha$ to a $C$ representation of $\neg I(\alpha)$.
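As an illustration, conditioning (the CD transformation) is straightforward on the toy extensional encoding of Boolean functions (our own illustration, not the paper's): keep the models agreeing with the term, then forget the term's variables.

```python
from itertools import product

def condition(scope, models, term):
    """Restriction of f by a consistent term, given as a dict such as
    {"a": 1}: the result has scope Var(alpha) \\ Var(gamma)."""
    keep = [i for i, x in enumerate(scope) if x not in term]
    fixed = [(i, term[x]) for i, x in enumerate(scope) if x in term]
    return (tuple(scope[i] for i in keep),
            {tuple(m[i] for i in keep)
             for m in models if all(m[i] == v for i, v in fixed)})

scope = ("a", "b", "c")
# f = (a and b) or c
models = {m for m in product((0, 1), repeat=3) if (m[0] and m[1]) or m[2]}
new_scope, new_models = condition(scope, models, {"a": 1})
print(new_scope, sorted(new_models))   # ('b', 'c') [(0, 1), (1, 0), (1, 1)]
```

The result is $b \lor c$, as expected for $((a \land b) \lor c)$ conditioned on $a = 1$.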
Definition 5 (succinctness) Let C₁ and C₂ be two representation languages. C₁ is at least as succinct as C₂, noted C₁ ≤ₛ C₂, iff there exists a polynomial p such that for every C₂ representation α there exists an equivalent C₁ representation β where |β| ≤ p(|α|).
∼ₛ is the symmetric part of ≤ₛ, defined by C₁ ∼ₛ C₂ iff C₁ ≤ₛ C₂ and C₂ ≤ₛ C₁. <ₛ is the asymmetric part of ≤ₛ, defined by C₁ <ₛ C₂ iff C₁ ≤ₛ C₂ and C₂ ≰ₛ C₁. Finally, C₁ ≰ₛ* C₂ (resp. C₁ <ₛ* C₂) means that C₁ ≰ₛ C₂ (resp. C₁ <ₛ C₂) unless the polynomial hierarchy PH collapses (which is considered very unlikely in complexity theory).
We also consider the following restriction of the succinctness relation:
Definition 6 (polynomial translation) Let C₁ and C₂ be two representation languages. C₁ is polynomially translatable into C₂, noted C₁ ≥ₚ C₂, iff there exists a polytime algorithm A such that for every C₁ representation α, A(α) is a C₂ representation such that A(α) ≡ α.
Like ≤ₛ, ≥ₚ is a preorder (i.e., a reflexive and transitive relation) over propositional representation languages. It refines the spatial efficiency preorder in the sense that for any C₁ and C₂, if C₁ ≥ₚ C₂, then C₂ ≤ₛ C₁ (but the converse does not hold in general). We note by ∼ₚ the symmetric part of ≥ₚ.
3 The ToC Languages
We start with the definition of trees-of-BDDs as given in [Subbarayan et al., 2007] (modulo the notations used):
Definition 7 (tree-of-BDDs)
• A decomposition tree of a CNF formula Σ is a (finite) labelled tree T whose set of nodes is N. Each node n ∈ N is labelled with Var(n), a subset of Var(Σ). For each n ∈ N, let clauses(n) = {clause δ of Σ s.t. Var(δ) ⊆ Var(n)}; T satisfies two conditions: for every clause δ of Σ there exists n ∈ N such that δ ∈ clauses(n), and for every x ∈ Var(Σ), {n ∈ N | x ∈ Var(n)} forms a connected subtree of T.
• Let $<$ be a total strict ordering over $PS$. A tree-of-BDDs of a CNF formula $\Sigma$ given $<$ consists of a decomposition tree $T$ of $\Sigma$ equipped with a further labelling function $B$ such that for every $n \in N$, $B(n)$ is the OBDD< representation of $\exists \overline{Var(n)}.I(\Sigma)$.
We have $Var(T) = \bigcup_{n \in N} Var(n)$ and $I(T) = \bigwedge_{n \in N} I(B(n))$. ToB< denotes the set of all trees-of-BDDs given $<$.
Clearly, ToB< is a complete representation language: for every Boolean function there is a CNF formula Σ representing it, and thus a tree-of-BDDs T of Σ such that I(T) = I(Σ).
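The two structural conditions on a decomposition tree (clause coverage and the connectedness of each variable's node set) are easy to check mechanically. The sketch below is our own illustrative encoding (labels, edges and clause variable sets as plain Python containers), applied to the two-node tree for $\Sigma = (\neg a \lor b) \land (\neg b \lor c)$:

```python
def connected(nodes, edges, marked):
    """True iff the marked nodes induce a connected subtree."""
    marked = set(marked)
    if len(marked) <= 1:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in marked and v in marked:
            adj[u].add(v)
            adj[v].add(u)
    seen, todo = set(), [next(iter(marked))]
    while todo:                     # breadth-less graph traversal
        n = todo.pop()
        if n not in seen:
            seen.add(n)
            todo.extend(adj[n] - seen)
    return seen == marked

def is_decomposition_tree(labels, edges, clause_vars):
    nodes = set(labels)
    # condition 1: every clause fits inside some node label
    if not all(any(cv <= labels[n] for n in nodes) for cv in clause_vars):
        return False
    # condition 2: for each variable, the nodes mentioning it
    # induce a connected subtree (running intersection)
    return all(connected(nodes, edges, {n for n in nodes if x in labels[n]})
               for x in set().union(*labels.values()))

labels = {"n0": {"a", "b"}, "n1": {"b", "c"}}
edges = [("n0", "n1")]
clause_vars = [{"a", "b"}, {"b", "c"}]    # vars of (not a or b), (not b or c)
print(is_decomposition_tree(labels, edges, clause_vars))    # True
```

Only variable sets matter here; the signs of the literals play no role in the structural check.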
The above definition can be simplified and extended, allowing the representation of other formulae than CNF ones, and taking advantage of other target languages than OBDD< for compiling the labels B(n):
Definition 8 (ToC) Let C be any complete propositional representation language. A ToC representation is a finite, labelled tree T, whose set of nodes is N. Each node n ∈ N is labelled with Var(n), a subset of PS and with a C representation B(n).
T must satisfy:
• the running intersection property: for each x ∈ ∪n∈N Var(n), {n ∈ N | x ∈ Var(n)} forms a connected subtree of T, and
• the global consistency property: for each $n \in N$, $I(B(n)) = \exists \overline{Var(n)}. \bigwedge_{n' \in N} I(B(n'))$.
We have $Var(T) = \bigcup_{n \in N} Var(n)$ and $I(T) = \bigwedge_{n \in N} I(B(n))$. The size of a ToC representation $T$ is the size of this tree, plus the sizes of the labels of the nodes of $T$ (numbers of variables in $Var(n)$ and sizes of $B(n)$).
ToC denotes the set of all ToC representations.
Taking C = OBDD<, we get the ToOBDD< language. Clearly, this definition of ToOBDD< is close to the previous one ToB from [Subbarayan et al., 2007], except that a ToOBDD< representation T is defined per se, i.e., independently from a given CNF formula Σ. Within this language, unlike with the OBDD< one, a Boolean function may have several equivalent representations. For instance, let Σ = (¬a ∧ ¬b) ∨ (¬a ∧ c) ∨ (b ∧ c). Whatever <, I(Σ) can be represented by the ToOBDD< representation T such that T has a single node n₀, such that Var(n₀) = Var(Σ) and B(n₀) is the OBDD< representation equivalent to Σ; observing that Σ ≡ (¬a ∨ b) ∧ (¬b ∨ c), I(Σ) can also be represented by the ToOBDD< representation T such that T has two nodes n₀ and n₁, the root of T is n₀, Var(n₀) = {a, b}, Var(n₁) = {b, c}, B(n₀) is the OBDD< formula equivalent to (¬a ∨ b), and B(n₁) is the OBDD< formula equivalent to (¬b ∨ c). In short, ToOBDD< does not offer the property of canonical representation.
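The equivalence Σ ≡ (¬a ∨ b) ∧ (¬b ∨ c) used in this example is easy to verify by brute force (a sanity check of ours, not part of the paper):

```python
from itertools import product

# Sigma = (not a and not b) or (not a and c) or (b and c)
def sigma(a, b, c):
    return (not a and not b) or (not a and c) or (b and c)

# the two-clause CNF used for the two-node tree
def two_clauses(a, b, c):
    return (not a or b) and (not b or c)

assert all(bool(sigma(*m)) == bool(two_clauses(*m))
           for m in product((0, 1), repeat=3))
print("equivalent")
```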
Compiling a CNF formula $\Sigma$ into a ToC representation $T$ basically consists in first computing a decomposition tree of $\Sigma$, then taking advantage of any CNF-to-C compiler so as to turn the CNF formulae clauses(n) (for each node $n$ of the tree) into equivalent $C$ representations, and finally using the well-known message-passing propagation algorithm (see the Propagate function in [Subbarayan et al., 2007], which applies also to ToC representations) from the leaves of the tree to its root, then from the root to the leaves, so as to ensure the global consistency property. This approach can be easily extended to deal with the compilation of any conjunctive representation into a ToC representation when compilers to $C$ are available. The running intersection property enables one to replace a global computation on the resulting ToC representation $T$ by a number of possibly easier, local computations on the corresponding $B(n)$.
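The two-pass propagation that establishes the global consistency property can be sketched on an explicit truth-table encoding of the node functions; this encoding and the helper names are our own illustration, not the Propagate function of [Subbarayan et al., 2007] itself:

```python
from itertools import product

# Each node carries (scope, models); messages conjoin a neighbour's
# projection onto the shared variables.

def project(scope, models, onto):
    keep = [i for i, x in enumerate(scope) if x in onto]
    return (tuple(scope[i] for i in keep),
            {tuple(m[i] for i in keep) for m in models})

def conjoin(f, g):
    """Models over the union of the scopes satisfying both f and g."""
    fs, fm = f
    gs, gm = g
    scope = tuple(dict.fromkeys(fs + gs))
    return (scope,
            {m for m in product((0, 1), repeat=len(scope))
             if tuple(dict(zip(scope, m))[x] for x in fs) in fm
             and tuple(dict(zip(scope, m))[x] for x in gs) in gm})

def propagate(tree, funcs, root):
    """tree: node -> children; funcs: node -> (scope, models)."""
    def pass_message(src, dst):
        shared = set(funcs[src][0]) & set(funcs[dst][0])
        funcs[dst] = conjoin(funcs[dst], project(*funcs[src], shared))
    def up(n):                       # leaves to root
        for c in tree.get(n, []):
            up(c)
            pass_message(c, n)
    def down(n):                     # root to leaves
        for c in tree.get(n, []):
            pass_message(n, c)
            down(c)
    up(root)
    down(root)

# Two-node tree: n0 labelled {a, b} (initially the constant 1),
# n1 labelled {b, c} carrying b AND c.
tree = {"n0": ["n1"]}
funcs = {"n0": (("a", "b"), {(0, 0), (0, 1), (1, 0), (1, 1)}),
         "n1": (("b", "c"), {(1, 1)})}
propagate(tree, funcs, "n0")
print(sorted(funcs["n0"][1]))    # [(0, 1), (1, 1)]
```

After the two passes, each node carries the projection of the global function onto its label, which is exactly the global consistency property of Definition 8.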
Let us now present some generic properties about $\text{ToC}$ fragments; such properties are about queries, transformations and succinctness, and are related to similar properties satisfied by the corresponding $C$ languages. We first need the following definition:
**Definition 9 (TE, CL)** Let $C$ be any propositional representation language.
- $C$ satisfies $\text{TE}$ (the term condition) iff for every term $\gamma$ over $PS$, a $C$ representation equivalent to $\gamma$ can be computed in time polynomial in $|\gamma|$.
- $C$ satisfies $\text{CL}$ (the clause condition) iff for every clause $\delta$ over $PS$, a $C$ representation equivalent to $\delta$ can be computed in time polynomial in $|\delta|$.
Clearly enough, those conditions are not very demanding and are satisfied by all complete propositional languages considered in [Darwiche and Marquis, 2002], except $\text{MODS}$.
**Proposition 1** Let $C$ be any complete propositional representation language.
1. $C$ satisfies $\text{CO}$ iff $\text{ToC}$ satisfies $\text{CO}$.
2. $C$ satisfies $\text{VA}$ iff $\text{ToC}$ satisfies $\text{VA}$.
3. $C$ satisfies $\text{IM}$ iff $\text{ToC}$ satisfies $\text{IM}$.
4. If $C$ satisfies $\text{CD}$, then $C$ satisfies $\text{ME}$ iff $\text{ToC}$ satisfies $\text{ME}$.
5. If $C$ satisfies $\text{CL}$, then $\text{ToC}$ does not satisfy $\text{CE}$ unless $P = \text{NP}$.
6. If $C$ satisfies $\text{CL}$, then $\text{ToC}$ does not satisfy $\text{SE}$ unless $P = \text{NP}$.
Points 1. to 4. show that the $\text{ToC}$ languages typically satisfy all the queries $\text{CO}, \text{VA}, \text{IM}$ and $\text{ME}$ (just because the corresponding $C$ languages typically satisfy them and $\text{CD}$). Similarly, points 5. and 6. show that the $\text{ToC}$ languages typically do not satisfy any of $\text{CE}$ or $\text{SE}$ unless $P = \text{NP}$. Since every complete propositional language satisfying $\text{CO}$ and $\text{CD}$ also satisfies $\text{CE}$ (a straightforward extension of Lemma 1.4 from [Darwiche and Marquis, 2002] to any propositional representation language), we get as a corollary to points 1. and 5. that:
**Corollary 1** If $C$ satisfies $\text{CO}$ and $\text{CL}$, then $\text{ToC}$ does not satisfy $\text{CD}$ unless $P = \text{NP}$.
Considering other transformations, we obtained the following results which hold for any propositional representation language (hence specifically for the $\text{ToC}$ ones):
**Proposition 2** Let $C$ be any propositional representation language.
1. If $C$ satisfies $\text{CO}$ and $\text{TE}$ and $C$ does not satisfy $\text{CE}$ unless $P = \text{NP}$, then $C$ does not satisfy $\land BC$ unless $P = \text{NP}$.
2. If $C$ satisfies $\text{VA}$ and $\text{TE}$, then $C$ does not satisfy $\lor C$ unless $P = \text{NP}$.
3. If $C$ satisfies $\text{IM}$ and does not satisfy $\text{CE}$ unless $P = \text{NP}$, then $C$ does not satisfy $\neg C$ unless $P = \text{NP}$.
These results show that the $\text{ToC}$ languages typically satisfy only a few transformations among $\text{CD}$, $\land BC$, $\lor C$ and $\neg C$. The conditions on $C$ listed in Corollary 1 and Proposition 2 are indeed not very demanding.
It is interesting to note that the algorithms $\text{Conditioning}$, $\text{Project}$, $\text{IsCE}$, $\text{IsEQ}$ reported in [Subbarayan et al., 2007] (Figure 3), for respectively computing the conditioning of a $\text{ToOBDD}_{<}$ representation by a consistent term, computing the projection of a $\text{ToOBDD}_{<}$ representation $T$ on a given set $V$ of variables (or equivalently, forgetting all variables in $T$ except those of $V$), deciding whether a clause is entailed by a $\text{ToOBDD}_{<}$ representation, deciding whether two $\text{ToOBDD}_{<}$ representations are equivalent, apply to $\text{ToC}$ representations as well (the fact that each $B(n)$ of $T$ is an $\text{OBDD}_{<}$ representation is not mandatory for ensuring the correctness of these algorithms). While these algorithms do not run in polynomial time in the general case, imposing further restrictions on $C$ can be a way to achieve tractability. Thus, it is easy to show that if $C$ has a linear-time algorithm for $\text{FO}$ and a linear-time algorithm for $\lor C$, then $\text{Project}$ is a polytime $\text{FO}$ algorithm for the $\text{ToC}$ languages. If $C$ has a linear-time algorithm for $\text{FO}$, a linear-time algorithm for $\lor C$, and a polytime algorithm for $\text{CD}$, then $\text{Conditioning}$ is a polytime $\text{CD}$ algorithm for the $\text{ToC}$ languages.
The fact that many queries/transformations are NP-hard in the general case does not discard $\text{ToOBDD}_{<}$ (and beyond the $\text{ToC}$ languages) as interesting target languages for $\text{KC}$ from the practical side. Indeed, if the width of a $\text{ToC}$ representation $T$, i.e., $\max_{n \in N}(|\text{Var}(n)| - 1)$, is (upper) bounded by a constant, then the time complexity of the $\text{Propagate}$ function becomes linear in the tree size; consequently, many other queries and transformations may become tractable as well; for instance if $C$ satisfies $\text{CD}$, we get that both conditioning and clausal entailment can be achieved in polynomial time in the tree size.
As to succinctness, we get the following results:
**Proposition 3** Let $C$ be any complete propositional representation language.
1. $\text{ToC} \leq_{p} C$.
2. Let $C'$ be any complete propositional fragment. If $C \leq_{s} C'$, then $\text{ToC} \leq_{s} \text{ToC'}$.
3. If $C$ satisfies $\text{CL}$ and $C'$ satisfies $\text{CE}$, then $C' \not\leq_{s}^{*} \text{ToC}$.
4. If $C$ satisfies $\text{IM}$, then $\text{ToC} \not\leq_{s}^{*} \text{DNF}$.
Proposition 3 has many interesting consequences:
- From point 1., we directly get that $\text{ToC} \leq_{s} C$, and that $\text{ToC}$ is complete (since $C$ is). This result cannot be strengthened to $\text{ToC} <_{s} C$ in the general case (for every $C$ satisfying $\land C$, e.g., $C = \text{CNF}$, we can prove that $C \sim_{p} \text{ToC}$).
\[\text{See Marquis, 2008 for more details on this issue.}\]
\[\text{The price to be paid by such a restriction is a lack of expressiveness: none of the languages of $\text{ToC}$ representations of width bounded by $c$ (where $c$ is a parameter) is complete.}\]
• Point 2. allows one to take advantage of previous results describing how propositional languages C are organized w.r.t. spatial efficiency in order to achieve similar results for the corresponding ToC languages.
• Point 3. implies that the DNNF language, which satisfies CE, is typically (i.e., whenever C satisfies CL) not at least as succinct as the corresponding ToC language; hence none of the languages which are less succinct than DNNF (e.g. DNF) can be at least as succinct as such ToC languages; thus, we get for instance that DNF $\not\leq_s^*$ ToDNNF (which together with point 1. shows that ToDNNF $<_s^*$ DNF).
• Another consequence of point 3. is that if C satisfies CL then DNNF $\not\leq_s^*$ ToC (hence d-DNNF $\not\leq_s^*$ ToC). With point 1., this shows ToDNNF to be spatially (strictly) more efficient than DNNF, while keeping CO and ME.
Finally, an interesting issue is to determine whether, at the "instance level", i.e., considering a given Boolean function to be compiled, targeting ToC in a compilation process always leads to saving space w.r.t. targeting C. The answer is "not always" (even in the cases where we have ToC $<_s^*$ C). We showed it by considering the notion of decomposition set:
**Definition 10 (decomposition)** Let f be a Boolean function. Let $V_1, \ldots, V_k$ be $k$ subsets of $PS$. $D = \{V_1, \ldots, V_k\}$ is a decomposition set for f iff we have $f = \bigwedge_{i=1}^k \exists \overline{V_i}.f$.
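Definition 10 can be checked by brute force on small functions. The sketch below (our own extensional encoding again) recovers the two-node decomposition used in Section 3, and shows that a clause mentioning all the variables admits no non-trivial decomposition:

```python
from itertools import product

def is_decomposition_set(scope, models, D):
    """Does f equal the conjunction, over V in D, of exists comp(V).f ?"""
    projs = []
    for V in D:
        keep = [i for i, x in enumerate(scope) if x in V]
        projs.append((keep, {tuple(m[i] for i in keep) for m in models}))
    rebuilt = {m for m in product((0, 1), repeat=len(scope))
               if all(tuple(m[i] for i in keep) in pm for keep, pm in projs)}
    return rebuilt == models

scope = ("a", "b", "c")
# f = (not a or b) and (not b or c): decomposes over {a,b} and {b,c}
f_models = {m for m in product((0, 1), repeat=3)
            if ((not m[0]) or m[1]) and ((not m[1]) or m[2])}
print(is_decomposition_set(scope, f_models, [{"a", "b"}, {"b", "c"}]))  # True

# the clause a or b or c: an essential prime implicate over all variables
clause_models = {m for m in product((0, 1), repeat=3) if any(m)}
print(is_decomposition_set(scope, clause_models,
                           [{"a", "b"}, {"b", "c"}]))                   # False
```

The second call fails because the conjunction of the two projections accepts the all-zero assignment, which the clause rejects; this is the phenomenon captured by Lemma 1 below.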
Clearly enough, for each ToC representation T whose set of nodes is N, $\{\text{Var}(n) \mid n \in N\}$ is a decomposition set for I(T). We proved that:
**Lemma 1** Let f be a Boolean function. Let $\delta$ be an essential prime implicate of f, i.e., a prime implicate of f which is not implied by the conjunction of the other prime implicates of f. Then for every decomposition set $D$ for f, there exists $V \in D$ such that $\text{Var}(\delta) \subseteq V$.
This lemma shows that when f has an essential prime implicate containing all its variables, no ToC representation of f can be more compact than each of its C representations. This lemma also shows that when f has an essential prime implicate $\delta$ such that $\exists \text{Var}(\delta).f$ has no C representation of reasonable size, choosing ToC as the target language is not a way to save space.
Finally, Lemma 1 also explains why imposing a fixed decomposition tree $T$ for defining a ToC language is not such a good idea (despite the fact it may offer a property of canonicity in some cases): either $T$ has a node n such that $\text{Var}(n) = \{x_1, \ldots, x_p\}$ (all the variables of interest), and in this case the corresponding ToC language mainly amounts to C, or $T$ does not contain such a node, and in this case the ToC language is incomplete: the Boolean function which is the semantics of the clause $\bigvee_{i=1}^p x_i$ cannot be represented in ToC.
4 Back to ToOBDD< Representations
Let us now fix C to OBDD< in order to get some further results. Beyond ToOBDD<, we have investigated the properties of U(ToOBDD<) (the union of all ToOBDD< for the various total orders $<$ over PS) and of ToOBDD as target languages for propositional knowledge compilation, along the lines of the KC map. To make the differences between these languages clearer, observe that the OBDD representations $B(n)$, $n \in N$, where $N$ is the set of nodes of a given ToOBDD representation $T$, may rely on different variable orders, while all the OBDD< representations in a given U(ToOBDD<) representation are based on the same order. Hence, U(ToOBDD<) is a proper subset of ToOBDD.
**Proposition 4** The results in Table 1 hold.
| C | CO | VA | CE | IM | EQ | SE | CT | ME |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| ToOBDD | √ | √ | ◦ | √ | ? | ◦ | ? | √ |
| U(ToOBDD<) | √ | √ | ◦ | √ | ? | ◦ | ? | √ |
| ToOBDD< | √ | √ | ◦ | √ | ? | ◦ | ? | √ |
| OBDD | √ | √ | √ | √ | ? | ◦ | √ | √ |
| OBDD< | √ | √ | √ | √ | √ | √ | √ | √ |

| C | CD | FO | SFO | ∧C | ∧BC | ∨C | ∨BC | ¬C |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| ToOBDD | ◦ | ◦ | ? | ◦ | ◦ | ◦ | ? | ◦ |
| U(ToOBDD<) | ◦ | ◦ | ? | ◦ | ◦ | ◦ | ? | ◦ |
| ToOBDD< | ◦ | ◦ | ? | ◦ | ◦ | ◦ | ? | ◦ |
| OBDD | √ | • | √ | • | ◦ | • | ◦ | √ |
| OBDD< | √ | • | √ | • | √ | • | √ | √ |

Table 1: √ means "satisfies", • means "does not satisfy", ◦ means "does not satisfy unless P = NP", and ? means that the question is open. Results for OBDD< and OBDD are from [Darwiche and Marquis, 2002] and are given here as a baseline.
The fact that ToOBDD, U(ToOBDD<) and ToOBDD< satisfy CO, VA, IM and ME, and that none of these languages satisfies any of CE, SE, CD, $\land BC$, $\land C$, $\lor C$ or $\neg C$ unless $P = NP$, is a direct corollary of Propositions 1 and 2. Except for CO and VA, all those results concern issues left open in [Subbarayan et al., 2007]. Especially, there exist polytime algorithms for IM and ME which are not based on the message-passing propagation algorithm (those given in [Subbarayan et al., 2007] do not run in polynomial time in the general case). Furthermore, contrary to what was expected in [Subbarayan et al., 2007], $\neg C$ is not trivial: although the negation of a conjunction of OBDD< representations is equivalent to the disjunction of their negations, we actually showed that the $\neg C$ transformation on ToOBDD< cannot be achieved in polynomial time unless $P = NP$.
As to succinctness, we proved the following results:
**Proposition 5**
1. For each $<$, ToOBDD< $<_s^*$ OBDD<.
2. For each $<$, DNNF $\not\leq_s^*$ ToOBDD<.
3. ToOBDD $\not\leq_s^*$ DNF.
4. ToOBDD $\not\leq_s^*$ CNF.
Points 1. to 3. are direct consequences of Proposition 3 and results from [Darwiche and Marquis, 2002]. A direct consequence of Proposition 5 is that d-DNNF $\not\leq_s^*$ ToOBDD<. This explains in some sense the space savings which can be offered by ToOBDD< over d-DNNF and observed empirically as reported in [Subbarayan et al., 2007]. More generally, from Proposition 3 and some results given in [Darwiche and Marquis, 2002], we get that:
**Corollary 2** Unless PH collapses, ToOBDD, U(ToOBDD<) and ToOBDD< are incomparable w.r.t. succinctness with the languages CNF, DNF, and DNNF.
5 Conclusion
In this paper, the concept of tree-of-BDDs has been extended to any complete propositional representation language C, thus leading to the family of ToC languages. A number of generic results have been provided, which allow one to determine the queries/transformations satisfied by ToC depending on the ones satisfied by C, as well as results about the spatial efficiency of the ToC languages. Focusing on the ToOBDD< language, we have addressed a number of issues that remained open in [Subbarayan et al., 2007]; especially, we have shown that beyond CO and VA, ToOBDD< satisfies IM and ME but does not satisfy any query among CE, SE unless P = NP. We have also proved that ToOBDD< does not satisfy any transformation among CD, FO, ∧BC, ∨C or ¬C, and that this fragment is not comparable w.r.t. succinctness with any of CNF, DNF and DNNF unless PH collapses.
From this investigation, it turns out that the ToOBDD< language (and more generally the ToC languages) satisfies only a few queries and transformations. Subsequently, in applications where some queries/transformations not satisfied by ToOBDD< must be achieved under some guaranteed response time, considering ToOBDD< as a target language for KC is not always the best choice. From the practical side, as reported in [Subbarayan et al., 2007], there are CNF formulae which can be compiled into ToOBDD< using a reasonable amount of computational resources, while it turned out impossible to generate d-DNNF representations for them. Such empirical results cohere with our succinctness result d-DNNF $\not\leq_s^*$ ToOBDD<. Nevertheless, our result ToOBDD< $\not\leq_s^*$ DNNF shows that this empirical evidence cannot be generalized (this result implies that some DNNF representations do not have "small" equivalent ToOBDD< representations under the standard assumptions of complexity theory), so DNNF remains a very attractive language for the KC purpose.
Our results also suggest a number of ToC languages as quite promising. Consider for instance the ToFBDD language. From our results, it follows easily that ToFBDD satisfies CO, VA, IM and ME (hence the same queries as ToOBDD); since ToFBDD is at least as succinct as ToOBDD, it appears as a challenging fragment. Furthermore, a compiler to FBDD is already available (see e.g. http://www.eecg.utoronto.ca/~jzhu/fbdduser11.ps). When none of VA or IM is expected, the ToDNNF language looks also valuable; indeed, from our results we know that ToDNNF satisfies CO and ME, while being quite compact: ToDNNF $\leq_s$ ToOBDD and ToDNNF $<_s^*$ DNNF hold. Beyond the spatial dimension, targeting the ToDNNF language may also reduce the on-line computation time needed for achieving the queries/transformations based on the Propagate function (as well as the off-line CNF-to-ToC compilation time), since DNNF satisfies FO, which is one of the two key operations of the propagation algorithm. The ToDNNF_T language, based on DNNF_T [Pipatsrisawat and Darwiche, 2008], also looks interesting in this respect since DNNF_T satisfies both FO and $\land BC$, the other key operation of the propagation algorithm.
This is what the "theory" says in some sense about such languages. Going further requires implementing compilers and performing experiments in order to determine whether, from the practical side, representations from those languages can be computed using a reasonable amount of resources. This is an issue for further research. Another perspective for further work is to complete the missing results about queries, transformations and succinctness for the ToC languages and to extend the KC map accordingly. Especially, it would be interesting to characterize some families of propositional formulae on which each of DNNF and ToOBDD< is "effective".
References
# Table of Contents
**Abstract and introduction**
- Introduction
- Are you Well-Architected?
**Continuous integration**
- AWS CodeCommit
- AWS CodeBuild
- AWS CodeArtifact
**Continuous delivery**
- AWS CodeDeploy
- AWS CodePipeline
**Deployment strategies**
- In-place deployments
- Blue/green deployment
- Canary deployment
- Linear deployment
- All-at-once deployment
**Deployment strategies matrix**
- AWS Elastic Beanstalk deployment strategies
**Infrastructure as code**
- AWS CloudFormation
- AWS Serverless Application Model
- AWS Cloud Development Kit
- AWS Cloud Development Kit for Kubernetes
- AWS Cloud Development Kit for Terraform
- AWS Cloud Control API
**Automation and tooling**
- AWS OpsWorks
- AWS Elastic Beanstalk
- EC2 Image Builder
- AWS Proton
- AWS Service Catalog
- AWS Cloud9
- AWS CloudShell
- Amazon CodeGuru
**Monitoring and observability**
- Amazon CloudWatch metrics
- Amazon CloudWatch Alarms
- Amazon CloudWatch Logs
- Amazon CloudWatch Logs Insights
- Amazon CloudWatch Events
- Amazon EventBridge
- AWS CloudTrail
- Amazon DevOps Guru
- AWS X-Ray
- Amazon Managed Service for Prometheus
- Amazon Managed Grafana
**Communication and Collaboration**
- Two-Pizza Teams
- AWS CodeStar
**Security**
- AWS Shared Responsibility Model
- Identity and Access Management
**Conclusion**
**Document Revisions**
**Contributors**
**Notices**
Introduction to DevOps on AWS
Publication date: April 7, 2023 (Document Revisions)
Today more than ever, enterprises are embarking on their digital transformation journey to build deeper connections with their customers and to achieve sustainable, enduring business value. Organizations of all shapes and sizes are disrupting their competitors and entering new markets by innovating more quickly than ever before. For these organizations, it is important to focus on innovation and software disruption, making it critical to streamline their software delivery. Organizations that shorten their time from idea to production, making speed and agility a priority, could be tomorrow's disruptors.
While there are several factors to consider in becoming the next digital disruptor, this whitepaper focuses on DevOps, and the services and features in the Amazon Web Services (AWS) platform that will help increase an organization's ability to deliver applications and services at a high velocity.
Introduction
DevOps is the combination of cultural philosophies, engineering practices, and tools which increase an organization's ability to deliver applications and services at high velocity and better quality. Over time, several essential practices have emerged when adopting DevOps: continuous integration (CI), continuous delivery (CD), Infrastructure as Code (IaC), and monitoring and logging.
This paper highlights AWS capabilities that help you accelerate your DevOps journey, and how AWS services can help remove the undifferentiated heavy lifting associated with DevOps adoption. It also describes how to build a continuous integration and delivery capability without managing servers or build nodes, and how to use IaC to provision and manage your cloud resources in a consistent and repeatable manner.
- **Continuous integration**: A software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run.
- **Continuous delivery**: A software development practice where code changes are automatically built, tested, and prepared for a release to production.
- **Infrastructure as Code**: A practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control, and continuous integration.
- **Monitoring and logging**: Enables organizations to see how application and infrastructure performance impacts the experience of their product’s end user.
- **Communication and collaboration**: Practices are established to bring teams closer together by building shared workflows and distributing the responsibilities for DevOps.
- **Security**: Should be a cross-cutting concern. Your continuous integration and continuous delivery (CI/CD) pipelines and related services should be safeguarded, and proper access control permissions should be set up.
An examination of each of these principles reveals a close connection to the offerings available from AWS.
**Are you Well-Architected?**
The [AWS Well-Architected Framework](https://aws.amazon.com/architecture/framework/) helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the [AWS Well-Architected Tool](https://wellarchitected.amazonaws.com/), available at no charge in the [AWS Management Console](https://aws.amazon.com/management-console/), you can review your workloads against these best practices by answering a set of questions for each pillar.
Continuous integration
Continuous integration (CI) is a software development practice where developers regularly merge their code changes into a central code repository, after which automated builds and tests are run. CI helps find and address bugs quicker, improve software quality, and reduce the time it takes to validate and release new software updates.
AWS offers the following services for continuous integration:
Topics
- AWS CodeCommit
- AWS CodeBuild
- AWS CodeArtifact
AWS CodeCommit
AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private Git repositories. CodeCommit reduces the need for you to operate your own source control system: there is no hardware to provision and scale, and no software to install, configure, and operate. You can use CodeCommit to store anything from code to binaries, and it supports the standard functionality of Git, allowing it to work seamlessly with your existing Git-based tools. Your team can also use CodeCommit’s online code tools to browse, edit, and collaborate on projects. AWS CodeCommit has several benefits:
- **Collaboration** — AWS CodeCommit is designed for collaborative software development. You can easily commit, branch, and merge your code, which helps you easily maintain control of your team’s projects. CodeCommit also supports pull requests, which provide a mechanism to request code reviews and discuss code with collaborators.
- **Encryption** — You can transfer your files to and from AWS CodeCommit using HTTPS or SSH, as you prefer. Your repositories are also automatically encrypted at rest through AWS Key Management Service (AWS KMS) using customer-specific keys.
- **Access control** — AWS CodeCommit uses AWS Identity and Access Management (IAM) to control and monitor who can access your data in addition to how, when, and where they can access it. CodeCommit also helps you monitor your repositories through AWS CloudTrail and Amazon CloudWatch.
- **High availability and durability** — AWS CodeCommit stores your repositories in Amazon Simple Storage Service (Amazon S3) and Amazon DynamoDB. Your encrypted data is redundantly stored across multiple facilities. This architecture increases the availability and durability of your repository data.
- **Notifications and custom scripts** — You can receive notifications for events impacting your repositories. Notifications come as Amazon Simple Notification Service (Amazon SNS) notifications, and each one includes a status message as well as a link to the resources whose event generated it. Additionally, using AWS CodeCommit repository triggers, you can send notifications and create HTTP webhooks with Amazon SNS or invoke AWS Lambda functions in response to the repository events you choose.
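As an illustration of that IAM-based access control, here is a sketch of an identity-based policy granting Git push/pull access to a single repository. The account ID, Region, repository name, and statement ID are placeholders, not values from this paper:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGitPushPullOnOneRepo",
      "Effect": "Allow",
      "Action": [
        "codecommit:GitPull",
        "codecommit:GitPush"
      ],
      "Resource": "arn:aws:codecommit:us-east-1:111122223333:MyDemoRepo"
    }
  ]
}
```

Attached to a user, group, or role, a policy like this restricts Git operations to the one repository; browsing the repository in the console would require additional CodeCommit read actions.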
AWS CodeBuild
AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. You don’t need to provision, manage, and scale your own build servers. CodeBuild can use any of GitHub, GitHub Enterprise, Bitbucket, AWS CodeCommit, or Amazon S3 as a source provider.
CodeBuild scales continuously and can process multiple builds concurrently. CodeBuild offers various pre-configured environments for various versions of Microsoft Windows and Linux. Customers can also bring their customized build environments as Docker containers. CodeBuild also integrates with open source tools such as Jenkins and Spinnaker.
CodeBuild can also create reports for unit, functional, or integration tests. These reports provide a visual view of how many test cases were run and how many passed or failed. The build process can also be run inside an Amazon Virtual Private Cloud (Amazon VPC) which can be helpful if your integration services or databases are deployed inside a VPC.
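CodeBuild reads its build instructions from a buildspec file in the source root. A minimal sketch for a hypothetical Node.js project follows; the runtime version, commands, and artifact paths are illustrative, not prescriptive:

```yaml
version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18          # runtime available in the managed build image
  build:
    commands:
      - npm ci            # restore dependencies from the lockfile
      - npm test          # fail the build if unit tests fail
      - npm run build
artifacts:
  files:
    - '**/*'
  base-directory: dist    # package the compiled output for later pipeline stages
```

Each phase's commands run in order, and a non-zero exit code fails the build, which is what surfaces test failures early in the pipeline.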
AWS CodeArtifact
AWS CodeArtifact is a fully managed artifact repository service that can be used by organizations to securely store, publish, and share software packages used in their software development process. CodeArtifact can be configured to automatically fetch software packages and dependencies from public artifact repositories so developers have access to the latest versions.
Software development teams increasingly rely on open-source packages to perform common tasks in their applications. It has become critical for software development teams to maintain control over the particular versions of open-source software they use, to ensure the software is free of vulnerabilities. With CodeArtifact, you can set up controls to enforce this.
CodeArtifact works with commonly used package managers and build tools such as Maven, Gradle, npm, yarn, twine, and pip, making it easy to integrate into existing development workflows.
Continuous delivery
Continuous delivery (CD) is a software development practice where code changes are automatically prepared for a release to production. A pillar of modern application development, continuous delivery expands upon continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When properly implemented, developers will always have a deployment-ready build artifact that has passed through a standardized test process.
Continuous delivery lets developers automate testing beyond just unit tests so they can verify application updates across multiple dimensions before deploying to customers.
These tests might include UI testing, load testing, integration testing, API reliability testing, and more. This helps developers more thoroughly validate updates and preemptively discover issues. Using the cloud, it is easy and cost-effective to automate the creation and replication of multiple environments for testing, which was previously difficult to do on-premises.
AWS offers the following services for continuous delivery:
- AWS CodeBuild
- AWS CodeDeploy
- AWS CodePipeline
AWS CodeDeploy
AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute services such as Amazon Elastic Compute Cloud (Amazon EC2), AWS Fargate, AWS Lambda, and your on-premises servers. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications. You can use CodeDeploy to automate software deployments, reducing the need for error-prone manual operations. The service scales to match your deployment needs.
CodeDeploy has several benefits that align with the DevOps principle of continuous deployment:
- **Automated deployments** — CodeDeploy fully automates software deployments, allowing you to deploy reliably and rapidly.
- **Centralized control** — CodeDeploy enables you to easily launch and track the status of your application deployments through the AWS Management Console or the AWS CLI. CodeDeploy gives you a detailed report enabling you to view when and to where each application revision was deployed. You can also create push notifications to receive live updates about your deployments.
- **Minimize downtime** — CodeDeploy helps maximize your application availability during the software deployment process. It introduces changes incrementally and tracks application health according to configurable rules. Software deployments can easily be stopped and rolled back if there are errors.
- **Easy to adopt** — CodeDeploy works with any application, and provides the same experience across different platforms and languages. You can easily reuse your existing setup code. CodeDeploy can also integrate with your existing software release process or continuous delivery toolchain (for example, AWS CodePipeline, GitHub, Jenkins).
AWS CodeDeploy supports multiple deployment options. For more information, refer to the **Deployment strategies** section of this document.
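For EC2/on-premises deployments, CodeDeploy reads an AppSpec file (`appspec.yml`) from the application revision, which maps files to destinations and wires lifecycle hooks to scripts. A sketch follows, with all destination paths and script names illustrative:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp   # where the revision's files are copied
hooks:
  BeforeInstall:
    - location: scripts/stop_server.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
  ValidateService:
    - location: scripts/health_check.sh
      timeout: 120                # failing this hook triggers a rollback
```

The `ValidateService` hook is what lets CodeDeploy "track application health according to configurable rules" as described above: if the health check script exits non-zero, the deployment can be stopped and rolled back.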
**AWS CodePipeline**
**AWS CodePipeline** is a continuous delivery service that you can use to model, visualize, and automate the steps required to release your software. With AWS CodePipeline, you model the full release process for building your code, deploying to pre-production environments, testing your application, and releasing it to production. AWS CodePipeline then builds, tests, and deploys your application according to the defined workflow every time there is a code change. You can integrate partner tools and your own custom tools into any stage of the release process to form an end-to-end continuous delivery solution.
AWS CodePipeline has several benefits that align with the DevOps principle of continuous deployment:
- **Rapid delivery** — AWS CodePipeline automates your software release process, allowing you to rapidly release new features to your users. With CodePipeline, you can quickly iterate on feedback and get new features to your users faster.
- **Improved quality** — By automating your build, test, and release processes, AWS CodePipeline enables you to increase the speed and quality of your software updates by running all new changes through a consistent set of quality checks.
- **Easy to integrate** — AWS CodePipeline can easily be extended to adapt to your specific needs. You can use the pre-built plugins or your own custom plugins in any step of your release process. For example, you can pull your source code from GitHub, use your on-premises Jenkins build server, run load tests using a third-party service, or pass on deployment information to your custom operations dashboard.
- **Configurable workflow** — AWS CodePipeline enables you to model the different stages of your software release process using the console interface, the AWS CLI, [AWS CloudFormation](https://aws.amazon.com/cloudformation/), or the AWS SDKs. You can easily specify the tests to run and customize the steps to deploy your application and its dependencies.
Deployment strategies
Deployment strategies define how you want to deliver your software. Organizations follow different deployment strategies based on their business model. Some choose to deliver software that is fully tested, while others might want their users to provide feedback by evaluating features that are still under development (such as beta releases). The following section discusses various deployment strategies.
Topics
- In-place deployments
- Blue/green deployment
- Canary deployment
- Linear deployment
- All-at-once deployment
In-place deployments
In this strategy, the previous version of the application on each compute resource is stopped, the latest version is installed, and the new version is started and validated. This allows application deployments to proceed with minimal disturbance to the underlying infrastructure. With an in-place deployment, you can deploy your application without creating new infrastructure; however, the availability of your application can be affected during these deployments. This approach also minimizes the infrastructure costs and management overhead associated with creating new resources. You can use a load balancer so that each instance is deregistered during its deployment and then restored to service after the deployment is complete. In-place deployments can be performed all at once (accepting a service outage) or as a rolling update. AWS CodeDeploy and AWS Elastic Beanstalk offer deployment configurations for one-at-a-time, half-at-a-time, and all-at-once.
Blue/green deployment
Blue/green deployment, sometimes referred to as red/black deployment, is a technique for releasing applications by shifting traffic between two identical environments running differing versions of the application. Blue/green deployment helps you minimize downtime during application updates, mitigating risks surrounding downtime and rollback functionality.
Blue/green deployments enable you to launch a new version (green) of your application alongside the old version (blue), and monitor and test the new version before you reroute traffic to it, rolling back on issue detection.
**Canary deployment**
The purpose of a canary deployment is to reduce the risk of deploying a new version that impacts the workload. The method incrementally deploys the new version, exposing it to a small subset of users at first. As you gain confidence in the deployment, you roll it out further until it replaces the current version in its entirety.
**Linear deployment**
Linear deployment means traffic is shifted in equal increments with an equal number of minutes between each increment. You can choose from predefined linear options that specify the percentage of traffic shifted in each increment and the number of minutes between each increment.
**All-at-once deployment**
All-at-once deployment means all traffic is shifted from the original environment to the replacement environment all at once.
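The strategies above differ mainly in the traffic-shift schedule they produce over time. The sketch below is plain Python, not an AWS API, and the function and parameter names are illustrative; it expresses each schedule as a list of `(minute, cumulative_percent_shifted)` pairs:

```python
def linear_schedule(step_percent, interval_minutes):
    """Linear: shift traffic in equal increments at equal intervals."""
    schedule, shifted, t = [], 0, 0
    while shifted < 100:
        shifted = min(100, shifted + step_percent)
        schedule.append((t, shifted))
        t += interval_minutes
    return schedule

def canary_schedule(canary_percent, bake_minutes):
    """Canary: send a small slice first, then the rest after a bake period."""
    return [(0, canary_percent), (bake_minutes, 100)]

def all_at_once_schedule():
    """All-at-once: everything moves immediately."""
    return [(0, 100)]

if __name__ == "__main__":
    print(linear_schedule(10, 3))   # 10% every 3 minutes until 100%
    print(canary_schedule(10, 5))   # 10% first, remainder after 5 minutes
    print(all_at_once_schedule())
```

For example, `linear_schedule(10, 3)` mirrors the shape of CodeDeploy's predefined "10 percent every 3 minutes" linear option, while `canary_schedule(10, 5)` mirrors a "10 percent, then the rest after 5 minutes" canary option.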
Deployment strategies matrix
The following matrix lists the supported deployment strategies for Amazon Elastic Container Service (Amazon ECS), AWS Lambda, and Amazon EC2/on-premises.
- Amazon ECS is a fully managed orchestration service.
- AWS Lambda lets you run code without provisioning or managing servers.
- Amazon EC2 enables you to run secure, resizable compute capacity in the cloud.
| Deployment strategy | Amazon ECS | AWS Lambda | Amazon EC2/on-premises |
|---|---|---|---|
| In-place | ✓ | ✓ | ✓ |
| Blue/green | ✓ | ✓ | ✓* |
| Canary | ✓ | ✓ | X |
| Linear | ✓ | ✓ | X |
| All-at-once | ✓ | ✓ | X |
**Note:** \* Blue/green deployment with EC2/on-premises works only with EC2 instances.
AWS Elastic Beanstalk deployment strategies
AWS Elastic Beanstalk supports the following deployment strategies:
- **All-at-once** — Performs an in-place deployment on all instances simultaneously.
- **Rolling** — Splits the instances into batches and deploys to one batch at a time.
- **Rolling with additional batch** — Splits the deployment into batches, but for the first batch creates new EC2 instances instead of deploying to the existing EC2 instances.
- **Immutable** — Deploys the new version to a fresh set of instances instead of updating existing instances.
- **Traffic splitting** — Performs an immutable deployment and then forwards a percentage of traffic to the new instances for a predetermined duration. If the new instances stay healthy, all traffic is forwarded to them and the old instances are shut down.
Infrastructure as code
A fundamental principle of DevOps is to treat infrastructure the same way developers treat code. Application code has a defined format and syntax. If the code is not written according to the rules of the programming language, applications cannot be created. Code is stored in a version management or source control system that logs a history of code development, changes, and bug fixes. When code is compiled or built into applications, we expect a consistent application to be created, and the build is repeatable and reliable.
Practicing *infrastructure as code* means applying the same rigor of application code development to infrastructure provisioning. All configurations should be defined in a declarative way and stored in a source control system such as AWS CodeCommit, the same as application code. Infrastructure provisioning, orchestration, and deployment should also support the use of the infrastructure as code.
Infrastructure was traditionally provisioned using a combination of scripts and manual processes. Sometimes these scripts were stored in version control systems or documented step by step in text files or runbooks. Often the person writing the runbooks is not the same person executing the scripts or following the runbooks. If these scripts or runbooks are not updated frequently, they can become a show-stopper in deployments. As a result, the creation of new environments is not always repeatable, reliable, or consistent.
In contrast, AWS provides a DevOps-focused way of creating and maintaining infrastructure. Similar to the way software developers write application code, AWS provides services that enable the creation, deployment and maintenance of infrastructure in a programmatic, descriptive, and declarative way. These services provide rigor, clarity, and reliability. The AWS services discussed in this paper are core to a DevOps methodology and form the underpinnings of numerous higher-level AWS DevOps principles and practices.
AWS offers the following services to define infrastructure as code.
**Services**
- [AWS CloudFormation](#)
- [AWS Serverless Application Model](#)
- [AWS Cloud Development Kit (AWS CDK)](#)
- [AWS Cloud Development Kit for Kubernetes](#)
- [AWS Cloud Development Kit for Terraform](#)
- [AWS Cloud Control API](#)
AWS CloudFormation
AWS CloudFormation is a service that enables developers to create AWS resources in an orderly and predictable fashion. Resources are written in text files using JSON or YAML format. The templates require a specific syntax and structure that depends on the types of resources being created and managed. You author your resources in JSON or YAML with any code editor such as AWS Cloud9, check it into a version control system, and then CloudFormation builds the specified services in a safe, repeatable manner.
A CloudFormation template is deployed into the AWS environment as a stack. You can manage stacks through the AWS Management Console, AWS Command Line Interface, or AWS CloudFormation APIs. If you need to make changes to the running resources in a stack you update the stack. Before making changes to your resources, you can generate a change set, which is a summary of your proposed changes. Change sets enable you to see how your changes might impact your running resources, especially for critical resources, before implementing them.
*AWS CloudFormation creating an entire environment (stack) from one template*
You can use a single template to create and update an entire environment, or separate templates to manage multiple layers within an environment. This enables templates to be modularized, and also provides a layer of governance that is important to many organizations.
When you create or update a stack in the CloudFormation console, events are displayed, showing the status of the configuration. If an error occurs, by default the stack is rolled back to its previous state. Amazon SNS provides notifications on events. For example, you can use Amazon SNS to track stack creation and deletion progress using email and integrate with other processes programmatically.
AWS CloudFormation makes it easy to organize and deploy a collection of AWS resources, and lets you describe any dependencies or pass in special parameters when the stack is configured.
With CloudFormation templates, you can work with a broad set of AWS services, such as Amazon S3, Auto Scaling, Amazon CloudFront, Amazon DynamoDB, Amazon EC2, Amazon ElastiCache, AWS Elastic Beanstalk, Elastic Load Balancing, IAM, AWS OpsWorks, and Amazon VPC. For the most recent list of supported resources, refer to the AWS resource and property types reference.
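A minimal sketch of such a template, declaring a single parameterized Amazon S3 bucket; the parameter, logical IDs, and bucket naming convention are illustrative:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal illustrative stack - one S3 bucket driven by a parameter.

Parameters:
  BucketSuffix:
    Type: String
    Description: Suffix appended to the bucket name, supplied at stack creation.

Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'my-artifacts-${BucketSuffix}'

Outputs:
  BucketArn:
    Value: !GetAtt ArtifactBucket.Arn
```

Deploying this template creates a stack whose lifecycle (create, update via change sets, delete) is managed as a unit, which is what makes the environment repeatable.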
**AWS Serverless Application Model**
The AWS Serverless Application Model (AWS SAM) is an open-source framework that you can use to build serverless applications on AWS.
AWS SAM integrates with other AWS services, so creating serverless applications with AWS SAM provides the following benefits:
- **Single-deployment configuration** — AWS SAM makes it easy to organize related components and resources, and operate on a single stack. You can use AWS SAM to share configuration (such as memory and timeouts) between resources, and deploy all related resources together as a single, versioned entity.
- **Extension of AWS CloudFormation** — Because AWS SAM is an extension of AWS CloudFormation, you get the reliable deployment capabilities of AWS CloudFormation. You can define resources by using AWS CloudFormation in your AWS SAM template.
- **Built-in best practices** — You can use AWS SAM to define and deploy your IaC. This makes it possible for you to use and enforce best practices such as code reviews.
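A SAM template is an ordinary CloudFormation template with a `Transform` declaration; at deploy time, CloudFormation expands `AWS::Serverless::*` resources into plain CloudFormation resources. The sketch below builds such a template as a Python dict; the function name, handler, runtime, and `Globals` values are illustrative.

```python
import json

# Sketch of an AWS SAM template. The Transform line is what marks it as a
# SAM template; Globals shows the shared configuration (memory, timeout)
# mentioned above, applied to every function in the stack.
sam_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": "AWS::Serverless-2016-10-31",
    "Globals": {
        "Function": {"Timeout": 10, "MemorySize": 128}
    },
    "Resources": {
        "HelloFunction": {
            "Type": "AWS::Serverless::Function",
            "Properties": {
                "Handler": "app.handler",
                "Runtime": "python3.12",
                "CodeUri": "src/",
            },
        }
    },
}

print(json.dumps(sam_template, indent=2))
```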
**AWS Cloud Development Kit (AWS CDK)**
The AWS Cloud Development Kit (AWS CDK) is an open source software development framework to model and provision your cloud application resources using familiar programming languages. AWS CDK enables you to model application infrastructure using TypeScript, Python, Java, and .NET.
Developers can leverage their existing Integrated Development Environment (IDE), using tools such as autocomplete and in-line documentation to accelerate development of infrastructure.
AWS CDK utilizes AWS CloudFormation in the background to provision resources in a safe, repeatable manner. Constructs are the basic building blocks of CDK code. A construct represents a cloud component and encapsulates everything AWS CloudFormation needs to create the component. The AWS CDK includes the AWS Construct Library, containing constructs representing many AWS services. By combining constructs together, you can quickly and easily create complex architectures for deployment in AWS.
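The real CDK API lives in `aws-cdk-lib`; the following is only a toy, pure-Python illustration of the construct-tree idea described above. Each construct registers with a parent scope, and synthesizing the root walks the tree and merges every resource the constructs encapsulate into one template.

```python
import json

class Construct:
    """Toy construct (not the real AWS CDK API): a node in a tree that
    encapsulates zero or more CloudFormation resources."""
    def __init__(self, scope, construct_id):
        self.node_id = construct_id
        self.children = []
        self.resources = {}
        if scope is not None:
            scope.children.append(self)

    def synth(self):
        # Walk the construct tree, merging all encapsulated resources.
        merged = dict(self.resources)
        for child in self.children:
            merged.update(child.synth())
        return merged

class Bucket(Construct):
    def __init__(self, scope, construct_id):
        super().__init__(scope, construct_id)
        self.resources = {construct_id: {"Type": "AWS::S3::Bucket"}}

class Queue(Construct):
    def __init__(self, scope, construct_id):
        super().__init__(scope, construct_id)
        self.resources = {construct_id: {"Type": "AWS::SQS::Queue"}}

# Composing constructs yields a complete template from small parts.
app = Construct(None, "App")
Bucket(app, "Assets")
Queue(app, "Jobs")
print(json.dumps({"Resources": app.synth()}, indent=2))
```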
**AWS Cloud Development Kit for Kubernetes**
AWS Cloud Development Kit for Kubernetes (cdk8s) is an open-source software development framework for defining Kubernetes applications using general-purpose programming languages.
Once you have defined your application in a programming language (as of the date of this publication, only Python and TypeScript are supported), cdk8s will convert your application description into standard Kubernetes YAML. This YAML file can then be consumed by any Kubernetes cluster running anywhere. Because the structure is defined in a programming language, you can use the rich features the language provides. For example, you can use its abstraction features to create your own boilerplate code and reuse it across all of your deployments.
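As a hedged sketch of the kind of manifest such a tool emits, the plain-Python function below builds a minimal Kubernetes `Deployment` object (shown as JSON, which Kubernetes also accepts). The name and image are illustrative, and this does not use the cdk8s API itself.

```python
import json

def deployment(name, image, replicas=2):
    """Build a minimal Kubernetes Deployment manifest as a dict --
    the kind of object cdk8s ultimately serializes for you."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

print(json.dumps(deployment("web", "nginx:1.25"), indent=2))
```

Wrapping the boilerplate in a function like this is exactly the reuse the paragraph above describes.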
**AWS Cloud Development Kit for Terraform**
Built on top of the open source JSII library, CDK for Terraform (CDKTF) allows you to write Terraform configurations in your choice of C#, Python, TypeScript, Java, or Go and still benefit from the full ecosystem of Terraform providers and modules. You can import any existing provider or module from the Terraform Registry into your application, and CDKTF will generate resource classes for you to interact with in your target programming language.
With CDKTF, developers can set up their IaC without context switching from their familiar programming language, using the same tooling and syntax to provision infrastructure resources similar to the application business logic. Teams can collaborate in familiar syntax, while still using the power of the Terraform ecosystem and deploying their infrastructure configurations via established Terraform deployment pipelines.
AWS Cloud Control API
AWS Cloud Control API is a new AWS capability that introduces a common set of Create, Read, Update, Delete, and List (CRUDL) APIs to help developers manage their cloud infrastructure in an easy and consistent way. The Cloud Control API common APIs allow developers to uniformly manage the lifecycle of AWS and third-party services.
As a developer, you might prefer to simplify the way you manage the lifecycle of all your resources. You can use Cloud Control API's uniform resource configuration model with a pre-defined format to standardize your cloud resource configuration. In addition, you will benefit from uniform API behavior (response elements and errors) while managing your resources.
For example, you will find it simple to debug errors during CRUDL operations through uniform error codes surfaced by Cloud Control API that are independent of the resources you operate on. Using Cloud Control API, you will also find it simple to configure cross-resource dependencies. You also no longer need to author and maintain custom code across multiple vendor tools and APIs to use AWS and third-party resources together.
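The uniform call shape can be sketched as follows: every resource type is addressed by a `TypeName` plus a `DesiredState` JSON document. This pure-Python sketch only builds the request payload; actually invoking Cloud Control API (for example, through an AWS SDK) requires credentials and is omitted. The bucket and log group names are illustrative.

```python
import json

def create_resource_request(type_name, desired_state):
    """Build the uniform Cloud Control API create request: any resource
    type is addressed the same way, by TypeName plus a DesiredState
    JSON string."""
    return {
        "TypeName": type_name,
        "DesiredState": json.dumps(desired_state),
    }

# The same call shape works for an S3 bucket...
req = create_resource_request("AWS::S3::Bucket",
                              {"BucketName": "example-bucket"})
# ...and for any other supported resource type, such as a log group.
req2 = create_resource_request("AWS::Logs::LogGroup",
                               {"LogGroupName": "/app/web"})
```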
Automation and tooling
Another core philosophy and practice of DevOps is *automation*. Automation focuses on the setup, configuration, deployment, and support of infrastructure and the applications that run on it. By using automation, you can set up environments more rapidly in a standardized and repeatable manner. The removal of manual processes is key to a successful DevOps strategy. Historically, server configuration and application deployment have been predominantly a manual process. Environments become non-standard, and reproducing an environment when issues arise is difficult.
The use of automation is critical to realizing the full benefits of the cloud. Internally, AWS relies heavily on automation to provide the core features of elasticity and scalability.
Manual processes are error prone, unreliable, and inadequate to support an agile business. Frequently, an organization may tie up highly skilled resources to provide manual configuration, when time could be better spent supporting other, more critical, and higher value activities within the business.
Modern operating environments commonly rely on full automation to eliminate manual intervention or access to production environments. This includes all software releasing, machine configuration, operating system patching, troubleshooting, or bug fixing. Many levels of automation practices can be used together to provide a higher level end-to-end automated process.
Automation has the following key benefits:
- Rapid changes
- Improved productivity
- Repeatable configurations
- Reproducible environments
- Elasticity
- Automatic scaling
- Automated testing
Automation is a cornerstone of AWS services and is internally supported in all services, features, and offerings.
AWS OpsWorks takes the principles of DevOps even further than AWS Elastic Beanstalk. It can be considered an application management service rather than simply an application container. AWS OpsWorks provides even more levels of automation, with additional features such as integration with configuration management software (Chef) and application lifecycle management. You can use application lifecycle management to define when resources are set up, configured, deployed, undeployed, or terminated.
For added flexibility AWS OpsWorks has you define your application in configurable stacks. You can also select predefined application stacks. Application stacks contain all the provisioning for AWS resources that your application requires, including application servers, web servers, databases, and load balancers.
Application stacks are organized into architectural layers so that stacks can be maintained independently. Example layers could include web tier, application tier, and database tier. Out of the box, AWS OpsWorks also simplifies setting up AWS Auto Scaling groups and Elastic Load Balancing (ELB) load balancers, further illustrating the DevOps principle of automation. Just like AWS Elastic Beanstalk, AWS OpsWorks supports application versioning, continuous deployment, and infrastructure configuration management.
AWS OpsWorks showing DevOps features and architecture
AWS OpsWorks also supports the DevOps practices of monitoring and logging (covered in the next section). Monitoring support is provided by Amazon CloudWatch. All lifecycle events are logged, and a separate Chef log documents any Chef recipes that are run, along with any exceptions.
AWS Elastic Beanstalk
AWS Elastic Beanstalk is a service to rapidly deploy and scale web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, NGINX, Passenger, and IIS.
Elastic Beanstalk is an abstraction on top of Amazon EC2 and Auto Scaling that simplifies deployment by providing additional features such as cloning, blue/green deployments, the Elastic Beanstalk Command Line Interface (EB CLI), and integration with the AWS Toolkit for Visual Studio, Visual Studio Code, Eclipse, and IntelliJ to increase developer productivity.
EC2 Image Builder
EC2 Image Builder is a fully managed AWS service that helps you to automate the creation, maintenance, validation, sharing, and deployment of customized, secure, and up-to-date custom Linux or Windows AMIs. EC2 Image Builder can also be used to create container images. You can use the AWS Management Console, the AWS CLI, or APIs to create custom images in your AWS account.
EC2 Image Builder significantly reduces the effort of keeping images up-to-date and secure by providing a simple graphical interface, built-in automation, and AWS-provided security settings. With EC2 Image Builder, there are no manual steps for updating an image nor do you have to build your own automation pipeline.
**AWS Proton**
AWS Proton enables platform teams to connect and coordinate all the different tools your development teams need for infrastructure provisioning, code deployments, monitoring, and updates. AWS Proton enables automated infrastructure as code provisioning and deployment of serverless and container-based applications.
AWS Proton enables platform teams to define their infrastructure and deployment tools, while providing developers with a self-service experience to get infrastructure and deploy code. Through AWS Proton, platform teams provision shared resources and define application stacks, including CI/CD pipelines and observability tools. You can then manage which infrastructure and deployment features are available for developers.
**AWS Service Catalog**
AWS Service Catalog enables organizations to create and manage catalogs of IT services that are approved for AWS. These IT services can include everything from virtual machine images, servers, software, databases, and more to complete multi-tier application architectures. AWS Service Catalog lets you centrally manage deployed IT services, applications, resources, and metadata to achieve consistent governance of your IaC templates.
With AWS Service Catalog, you can meet your compliance requirements while making sure your customers can quickly deploy the approved IT services they need. End users can quickly deploy only the approved IT services they need, following the constraints set by your organization.
**AWS Cloud9**
AWS Cloud9 is a cloud-based IDE that lets you write, run, and debug your code with just a browser. It includes a code editor, debugger, and terminal. AWS Cloud9 comes prepackaged with essential tools for popular programming languages, including JavaScript, Python, PHP, and more, so you don’t need to install files or configure your development machine to start new projects. Because your AWS Cloud9 IDE is cloud-based, you can work on your projects from your office, home, or anywhere using an internet-connected machine.
**AWS CloudShell**
AWS CloudShell is a browser-based shell that makes it easier to securely manage, explore, and interact with your AWS resources. AWS CloudShell is pre-authenticated with your console credentials. Common development and operations tools are pre-installed, so there's no need to install or configure software on your local machine.
**Amazon CodeGuru**
Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve code quality and identify an application's most expensive lines of code. Integrate CodeGuru into your existing software development workflow to automate code reviews during application development, continuously monitor your application's performance in production, and receive recommendations and visual clues on how to improve code quality and application performance and reduce overall cost. CodeGuru has two components:
- **Amazon CodeGuru Reviewer** — Amazon CodeGuru Reviewer is an automated code review service that identifies critical defects and deviation from coding best practices for Java and Python code. It scans the lines of code within a pull request and provides intelligent recommendations based on standards learned from major open-source projects as well as Amazon codebase.
- **Amazon CodeGuru Profiler** — Amazon CodeGuru Profiler analyzes the application runtime profile and provides intelligent recommendations and visualizations that guide developers on how to improve the performance of the most relevant parts of their code.
Monitoring and observability
Communication and collaboration are fundamental in a DevOps philosophy. To facilitate this, feedback is critical. This feedback is provided by our suite of monitoring and observability services.
AWS provides the following services for monitoring and logging:
Topics
- Amazon CloudWatch metrics
- Amazon CloudWatch Alarms
- Amazon CloudWatch Logs
- Amazon CloudWatch Logs Insights
- Amazon CloudWatch Events
- Amazon EventBridge
- AWS CloudTrail
- Amazon DevOps Guru
- AWS X-Ray
- Amazon Managed Service for Prometheus
- Amazon Managed Grafana
Amazon CloudWatch metrics
Amazon CloudWatch metrics automatically collect data from AWS services such as Amazon EC2 instances, Amazon EBS volumes, and Amazon RDS database (DB) instances. These metrics can then be organized into dashboards, and alarms or events can be created to trigger notifications or perform Auto Scaling actions.
Amazon CloudWatch Alarms
You can set up alarms using Amazon CloudWatch alarms based on the metrics collected by Amazon CloudWatch metrics. The alarm can then send a notification to an Amazon SNS topic, or initiate Auto Scaling actions. An alarm requires a period (the length of time over which a metric is evaluated), an evaluation period (the number of most recent data points to evaluate), and datapoints to alarm (the number of data points within the evaluation period that must breach to trigger the alarm).
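The M-out-of-N evaluation described above can be sketched in a few lines. This simplified model ignores missing-data handling and supports only a greater-than comparison; real CloudWatch alarms offer more options.

```python
def alarm_state(datapoints, threshold, evaluation_periods, datapoints_to_alarm):
    """Return "ALARM" if at least `datapoints_to_alarm` of the most recent
    `evaluation_periods` datapoints breach the threshold, else "OK"."""
    window = datapoints[-evaluation_periods:]
    breaching = sum(1 for value in window if value > threshold)
    return "ALARM" if breaching >= datapoints_to_alarm else "OK"

# CPU at 80%+ for 3 of the last 5 periods, alarming on 3-of-5 above 70%:
print(alarm_state([50, 80, 85, 90, 60], threshold=70,
                  evaluation_periods=5, datapoints_to_alarm=3))  # ALARM
```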
**Amazon CloudWatch Logs**
Amazon CloudWatch Logs is a log aggregation and monitoring service. AWS CodeBuild, CodeCommit, CodeDeploy, and CodePipeline provide integrations with CloudWatch Logs so that all of the logs can be centrally monitored. In addition to the previously mentioned services, various other AWS services provide direct integration with CloudWatch Logs.
With CloudWatch Logs you can:
- Query your log data
- Monitor logs from Amazon EC2 instances
- Monitor AWS CloudTrail logged events
- Define log retention policy
**Amazon CloudWatch Logs Insights**
Amazon CloudWatch Logs Insights scans your logs and enables you to perform interactive queries and visualizations. It understands various log formats and auto-discovers fields from JSON logs.
**Amazon CloudWatch Events**
Amazon CloudWatch Events delivers a near real-time stream of system events that describe changes in AWS resources. Using simple rules that you can quickly set up, you can match events and route them to one or more target functions or streams.
CloudWatch Events becomes aware of operational changes as they occur. CloudWatch Events responds to these operational changes and takes corrective action as necessary, by sending messages to respond to the environment, activating functions, making changes, and capturing state information.
You can configure rules in Amazon CloudWatch Events to alert you to changes in AWS services and integrate these events with other third-party systems using Amazon EventBridge. The following are the AWS DevOps related services that have integration with CloudWatch Events.
- Application Auto Scaling Events
- CodeBuild Events
- CodeCommit Events
- CodeDeploy Events
- CodePipeline Events
Amazon EventBridge
Note
Amazon CloudWatch Events and EventBridge use the same underlying service and API; however, EventBridge provides more features.
Amazon EventBridge is a serverless event bus that enables integrations between AWS services, software as a service (SaaS) applications, and your applications. In addition to building event-driven applications, you can use EventBridge to receive notifications about events from services such as CodeBuild, CodeDeploy, CodePipeline, and CodeCommit.
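Both CloudWatch Events and EventBridge route events by matching them against event patterns: every field named in the pattern must be present in the event, and the event's value must be one of the pattern's listed values. A simplified sketch of that matching (real patterns also support prefix, numeric, and other content filters); the pipeline event shown is illustrative:

```python
def pattern_matches(pattern, event):
    """Simplified EventBridge-style matching: every key in the pattern
    must exist in the event, and the event's value must be among the
    pattern's allowed values. Nested dicts are matched recursively."""
    for key, allowed in pattern.items():
        if key not in event:
            return False
        if isinstance(allowed, dict):
            if not isinstance(event[key], dict) or not pattern_matches(allowed, event[key]):
                return False
        elif event[key] not in allowed:
            return False
    return True

# Route only failed pipeline executions:
pattern = {
    "source": ["aws.codepipeline"],
    "detail-type": ["CodePipeline Pipeline Execution State Change"],
    "detail": {"state": ["FAILED"]},
}
event = {
    "source": "aws.codepipeline",
    "detail-type": "CodePipeline Pipeline Execution State Change",
    "detail": {"state": "FAILED", "pipeline": "release"},
}
print(pattern_matches(pattern, event))  # True
```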
AWS CloudTrail
To embrace the DevOps principles of collaboration, communication, and transparency, it's important to understand who is making modifications to your infrastructure. In AWS, this transparency is provided by AWS CloudTrail. All AWS interactions are handled through AWS API calls that are monitored and logged by AWS CloudTrail. All generated log files are stored in an Amazon S3 bucket that you define. Log files are encrypted using Amazon S3 server-side encryption (SSE). All API calls are logged whether they come directly from a user or on behalf of a user by an AWS service. Numerous groups can benefit from CloudTrail logs, including operations teams for support, security teams for governance, and finance teams for billing.
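CloudTrail delivers log files as JSON documents containing a `Records` array. The sketch below extracts the "who did what" view from a trimmed, hypothetical record; real records carry many more fields (request parameters, source IP address, and so on).

```python
import json

# A trimmed, hypothetical CloudTrail record for illustration only.
log_file = json.loads("""{
  "Records": [
    {"eventTime": "2023-04-07T12:00:00Z",
     "eventSource": "s3.amazonaws.com",
     "eventName": "CreateBucket",
     "userIdentity": {"type": "IAMUser", "userName": "alice"}}
  ]
}""")

def summarize(records):
    """Answer the transparency question: who made which API call?"""
    return [f'{r["userIdentity"].get("userName", "unknown")} called '
            f'{r["eventName"]} on {r["eventSource"]}'
            for r in records]

for line in summarize(log_file["Records"]):
    print(line)
```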
Amazon DevOps Guru
Amazon DevOps Guru is a service powered by machine learning (ML) that is designed to make it easy to improve an application's operational performance and availability. DevOps Guru helps detect behaviors that deviate from normal operating patterns, so you can identify operational issues long before they impact your customers.
DevOps Guru uses ML models informed by years of Amazon.com and AWS operational excellence to help identify anomalous application behavior (for example, increased latency, error rates, resource constraints, and others) and surface critical issues that could cause potential outages or service disruptions.
When DevOps Guru identifies a critical issue, it saves debugging time by fetching relevant, specific information from a large number of data sources, automatically sends an alert, and provides a summary of related anomalies along with context for when and where the issue occurred.
**AWS X-Ray**
AWS X-Ray helps developers analyze and debug production, distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray provides an end-to-end view of requests as they travel through your application, and shows a map of your application’s underlying components. X-Ray makes it easy for you to:
- **Create a service map** – By tracking requests made to your applications, X-Ray can create a map of services used by your application. This provides you with a view of connections among services in your application, and enables you to create a dependency tree, detect latency or errors when working across AWS Availability Zones or Regions, zero in on services not operating as expected, and so on.
- **Identify errors and bugs** – X-Ray can automatically highlight bugs or errors in your application code by analyzing the response code for each request made to your application. This enables easy debugging of application code without requiring you to reproduce the bug or error.
- **Build your own analysis and visualization apps** – X-Ray provides a set of query APIs you can use to build your own analysis and visualizations apps that use the data that X-Ray records.
**Amazon Managed Service for Prometheus**
Amazon Managed Service for Prometheus is a serverless monitoring service for metrics compatible with open-source Prometheus, making it easier for you to securely monitor and alert on container environments. Amazon Managed Service for Prometheus reduces the heavy lifting required to get started with monitoring applications across Amazon Elastic Kubernetes Service, Amazon Elastic Container Service, and AWS Fargate, as well as self-managed Kubernetes clusters.
Amazon Managed Grafana
Amazon Managed Grafana is a fully managed service with rich, interactive data visualizations to help customers analyze, monitor, and alarm on metrics, logs, and traces across multiple data sources. You can create interactive dashboards and share them with anyone in your organization with an automatically scaled, highly available, and enterprise-secure service.
Communication and Collaboration
Whether you are adopting DevOps Culture in your organization or going through a DevOps cultural transformation, communication and collaboration are an important part of your approach. At Amazon, we have realized that there was a need to bring a change to the mindset of our teams and thus adopted the concept of Two-Pizza Teams.
Topics
- [Two-Pizza Teams](#)
- [AWS CodeStar](#)
Two-Pizza Teams
"We try to create teams that are no larger than can be fed by two pizzas," said Bezos. "We call that the two-pizza team rule."
The smaller the team, the better the collaboration. Collaboration is very important, as software releases are moving faster than ever, and a team's ability to deliver the software can be a differentiating factor for your organization against your competition. Imagine a situation in which a new product feature needs to be released or a bug needs to be fixed: you want this to happen as quickly as possible, so you can have a shorter go-to-market time. You don't want the transformation to be a slow-moving process; you want an agile approach where waves of changes start to make an impact.
Communication between teams is also important as you move toward the shared responsibility model and start moving out of the siloed development approach. This brings the concept of ownership to the team, and shifts their perspective to look at the process as an end-to-end venture. Your team should not think about your production environments as black boxes where they have no visibility.
Cultural transformation is also important, because you might be building a common DevOps team or have a DevOps-focused member in your team. Both of these approaches introduce shared responsibility into the team.
AWS CodeStar
AWS CodeStar provides the tools you need to quickly develop, build, and deploy applications on AWS. With AWS CodeStar, you can use a variety of project templates to start developing applications on Amazon EC2, AWS Lambda, and AWS Elastic Beanstalk. AWS CodeStar allows you to accelerate application delivery by providing a pre-configured continuous delivery toolchain for developing, building, testing, and deploying your projects on AWS.
The project dashboard in AWS CodeStar makes it easy to centrally monitor application activity and manage day-to-day development tasks such as recent code commits, builds, and deployments. Because AWS CodeStar integrates with Atlassian JIRA, a third-party issue tracking and project management tool, you can create and manage JIRA issues in the AWS CodeStar dashboard.
Security
Whether you are going through a DevOps transformation or implementing DevOps principles for the first time, you should think about security as integrated into your DevOps processes. This should be a cross-cutting concern across your build, test, and deployment stages.
Before exploring security in DevOps on AWS, this paper looks at the AWS Shared Responsibility Model.
Topics
- AWS Shared Responsibility Model
- Identity and Access Management
AWS Shared Responsibility Model
Security is a shared responsibility between AWS and the customer. The different parts of the Shared Responsibility Model are:
- **AWS responsibility “Security of the Cloud”** - AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
- **Customer responsibility “Security in the Cloud”** – Customer responsibility is determined by the AWS Cloud services that a customer selects. This determines the amount of configuration work the customer must perform as part of their security responsibilities.
This shared model can help relieve the customer’s operational burden as AWS operates, manages, and controls the components from the host operating system and virtualization layer down to the physical security of the facilities in which the service operates. This is critical in cases where customers want to understand the security of their build environments.
For DevOps, assign permissions based on the *least-privilege permissions* model. This model states that "a user (or service) should have the exact access rights necessary to complete their role’s responsibilities—no more, no less."
Permissions are maintained in IAM. You can use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources.
**Identity and Access Management**
**AWS Identity and Access Management** (IAM) defines the controls and policies that are used to manage access to AWS resources. Using IAM you can create users and groups and define permissions to various DevOps services.
In addition to the users, various services may also need access to AWS resources. For example, your CodeBuild project might need access to store Docker images in *Amazon Elastic Container Registry* (Amazon ECR) and need permissions to write to Amazon ECR. These types of permissions are defined by a special type of role known as a service role.
IAM is one component of the AWS security infrastructure. With IAM, you can centrally manage groups, users, service roles and security credentials such as passwords, access keys, and permissions policies that control which AWS services and resources users can access. **IAM Policy** lets you define the set of permissions. This policy can then be attached to either a role, user, or a service to define their permission.
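As an example of the least-privilege model described above, the following builds an identity policy document that grants read-only access to a single S3 bucket and nothing else; the `Sid` and the bucket name are illustrative.

```python
import json

# A least-privilege identity policy as a Python dict: read-only access to
# one illustrative bucket (both the bucket itself and its objects).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadArtifactsBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-artifacts",
                "arn:aws:s3:::example-artifacts/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A document like this can be attached to a user, group, or service role, matching the "no more, no less" rule above.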
You can also use IAM to create roles that are used widely within your desired DevOps strategy. In some cases, it can make perfect sense to programmatically **AssumeRole** instead of directly granting the permissions. When a service or user assumes a role, they are given temporary credentials to access a service that they normally don’t have access to.
Conclusion
To make the journey to the cloud smooth, efficient, and effective, technology companies should embrace DevOps principles and practices. These principles are embedded in AWS, and form the cornerstone of numerous AWS services, especially those in the deployment and monitoring offerings.
Begin by defining your infrastructure as code using a service such as AWS CloudFormation or the AWS CDK. Next, define the way in which your applications are going to use continuous deployment with the help of services like AWS CodeBuild, AWS CodeDeploy, AWS CodePipeline, and AWS CodeCommit. At the application level, use managed platforms and container services such as AWS Elastic Beanstalk, Amazon ECS, or Amazon Elastic Kubernetes Service (Amazon EKS). Use AWS OpsWorks to simplify the configuration of common architectures. Using these services also makes it easy to include other important services such as Auto Scaling and Elastic Load Balancing.
Finally, adopt the DevOps practice of monitoring with services such as Amazon CloudWatch, and solid security practices using IAM.
With AWS as your partner, your DevOps principles bring agility to your business and IT organization and accelerate your journey to the cloud.
Document Revisions
To be notified about updates to this whitepaper, subscribe to the RSS feed.
<table>
<thead>
<tr>
<th>Change</th>
<th>Description</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>Updated</td>
<td>Updated</td>
<td>April 7, 2023</td>
</tr>
<tr>
<td>Updated sections to include new services</td>
<td>Updated sections to include new services</td>
<td>October 16, 2020</td>
</tr>
<tr>
<td>Initial publication</td>
<td>Whitepaper first published</td>
<td>December 1, 2014</td>
</tr>
</tbody>
</table>
Contributors
Contributors to this document include:
- Abhra Sinha, Solutions Architect
- Anil Nadiminti, Solutions Architect
- Muhammad Mansoor, Solutions Architect
- Ajit Zadgaonkar, World Wide Tech Leader, Modernization
- Juan Lamadrid, Solutions Architect
- Darren Ball, Solutions Architect
- Rajeswari Malladi, Solutions Architect
- Pallavi Nargund, Solutions Architect
- Bert Zahniser, Solutions Architect
- Abdullahi Olaoye, Cloud Solutions Architect
- Mohamed Kiswani, Software Development Manager
- Tara McCann, Manager, Solutions Architect
Notices
Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents current AWS product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided “as is” without warranties, representations, or conditions of any kind, whether express or implied. The responsibilities and liabilities of AWS to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.
© 2023 Amazon Web Services, Inc. or its affiliates. All rights reserved.
null], [36883, 40010, null], [40010, 43431, null], [43431, 45094, null], [45094, 46503, null], [46503, 48634, null], [48634, 51864, null], [51864, 54663, null], [54663, 57932, null], [57932, 61013, null], [61013, 64796, null], [64796, 67737, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2555, true], [2555, 4993, null], [4993, 8283, null], [8283, 11730, null], [11730, 14103, null], [14103, 17540, null], [17540, 19703, null], [19703, 23370, null], [23370, 25081, null], [25081, 27422, null], [27422, 30833, null], [30833, 34412, null], [34412, 36883, null], [36883, 40010, null], [40010, 43431, null], [43431, 45094, null], [45094, 46503, null], [46503, 48634, null], [48634, 51864, null], [51864, 54663, null], [54663, 57932, null], [57932, 61013, null], [61013, 64796, null], [64796, 67737, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 67737, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 67737, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 67737, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 67737, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 67737, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 67737, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 67737, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 67737, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 67737, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 67737, null]], "pdf_page_numbers": [[0, 2555, 1], [2555, 4993, 2], [4993, 8283, 3], [8283, 11730, 4], [11730, 14103, 5], [14103, 17540, 6], [17540, 19703, 7], [19703, 23370, 8], [23370, 25081, 9], [25081, 27422, 10], [27422, 30833, 11], [30833, 34412, 12], [34412, 36883, 13], [36883, 40010, 14], [40010, 
43431, 15], [43431, 45094, 16], [45094, 46503, 17], [46503, 48634, 18], [48634, 51864, 19], [51864, 54663, 20], [54663, 57932, 21], [57932, 61013, 22], [61013, 64796, 23], [64796, 67737, 24]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 67737, 0.03689]]}
|
olmocr_science_pdfs
|
2024-12-08
|
2024-12-08
|
014ee9d1ac62b4af71442240e3d7e7a1485da37e
|
METHODS AND SYSTEMS FOR PROVIDING RESPONSES TO SOFTWARE COMMANDS
Inventors: David M. T. Ting, Sudbury, MA (US); Charles Kekek, Melrose, MA (US)
Assignee: Imprivata, Inc., Lexington, MA (US)
Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 1226 days.
Appl. No.: 11/392,233
Filed: Mar. 29, 2006
Prior Publication Data
Field of Classification Search None
See application file for complete search history.
References Cited
U.S. PATENT DOCUMENTS
5,719,950 A 2/1998 Osten et al. 382/115
5,802,199 A 9/1998 Pase et al. 382/115
5,857,028 A 1/1999 Frieling 382/116
FOREIGN PATENT DOCUMENTS
OTHER PUBLICATIONS
Primary Examiner — Ilyung S Sough
Assistant Examiner — Carina Yun
Attorney, Agent, or Firm — Bingham McCutchen LLP
ABSTRACT
Software processes are automated by storing predetermined responses and recognizing the screens of server and/or web-based applications that require data to continue operating.
19 Claims, 4 Drawing Sheets
METHODS AND SYSTEMS FOR PROVIDING RESPONSES TO SOFTWARE COMMANDS
TECHNICAL FIELD
This invention relates to methods and systems for providing automated responses to computer software applications and, more particularly, to methods and systems for intercepting and recognizing screen draw commands issued by the applications and providing automated responses thereto.
BACKGROUND INFORMATION
The number of computer applications used by large corporations has increased significantly over the past thirty years. For example, companies may employ separate applications for electronic mail, document control, financial applications, inventory management, manufacturing control and engineering functions, in addition to overall network access. Each application often requires a separate login procedure (including some form of personal identification such as a user ID, a password, a key sequence or biometric authentication) and other routine responses to screens, forms and messages during the initiation and/or operation of the application.
One approach to addressing the proliferation of user authentication credentials is to provide a single-sign-on application (either client-based or residing on a server) to which a user is authenticated by means of a unique credential (e.g., a biometric scan). Once the single credential is authenticated, IDs and passwords for various other applications are then provided to the client machine and used to access the individual applications. However, as programs are added to the user’s application suite or application workflows are changed, new screens and input fields are introduced and various system configurations must be changed accordingly. Furthermore, many applications require a user to provide numerous, often repetitive inputs (in the form of data, mouse clicks, or keystrokes, for example) to complete simple tasks and navigate through an application.
In addition to the repetitive nature of user authentication, many operational tasks within applications require a user to repeat the same steps for many transactions. For example, a call-center application may require a user to confirm a caller’s account number, recall recent account history, and retrieve text regarding current promotions deemed relevant to that caller. Each process may require the user to select a particular button, enter user data (e.g., an account number, a zip code, etc.) and request text messages from a server. Each of these steps requires additional time and introduces opportunities for error, thus increasing costs.
What is needed, therefore, are systems and techniques for facilitating the central management of user authentication, access, and computer system usage that can easily accommodate the introduction of new computer applications into a large computing environment and automate many of the redundant tasks associated with operating the applications.
SUMMARY OF THE INVENTION
The present invention automates responses to various software application commands that, absent the present invention, require manual user actions to complete. In response to initiation of an application from a client machine, instructions corresponding to generation of application screens are scrutinized at the client. For example, the client’s memory allocations may be rewritten such that a system call filter is assigned to memory addresses previously assigned to operating system procedures or application libraries. As data and instructions are generated by the application (which can be client-based, server-based, and/or web-based) the client-based system recognizes commands related to the generation and rendering of application screens that require data entry and/or user interaction.
Upon issuing a command directed at the operating system or library, the application directs the command to the memory address at which the system procedures were previously stored, even though the filter now occupies that address. As a result, the commands are never processed by the operating system; instead, the system filter intercepts them and scans the commands for screen-rendering requests. The present invention uses such commands to generate “virtual screen images” which are compared to stored “screen templates.” The templates specify the data that the user would be expected to enter into the rendered screen and the locations for such entry, and thus serve to provide predetermined responses to the screens (e.g., passwords, biometric authentication information, object selection messages, mouse events, text responses). These responses are then presented back to the application via the application message queue.
In effect, applications are automatically provided with data and instructions based on the low-level operating system commands used to produce screen images before the screens are actually rendered by the operating system or presented to the user. In this way, the user is not burdened by various login processes or repetitive processes, and need not maintain awareness of the different requirements of each screen she may encounter. The system can automatically complete many of the repetitive tasks associated with operating the application.
Accordingly, in a first aspect, a method for providing a response to a software program includes providing a system call filter for monitoring system commands issued by a software program. The commands are directed to a procedure (such as an operating system or application procedure) assigned to a particular memory address (or sets of memory addresses, in RAM, for example) on a client computer. The memory allocations of the client computer are redirected such that the memory addresses previously attributed to the system procedures are reassigned to the system call filter, thus allowing the system call filter to monitor and/or intercept the commands issued by the software program. Certain commands (or sets of commands) are recognized as commands relating to an application event (e.g., a screen draw event or other object event) and an appropriate response to the event is determined and provided to the software program.
Allocation of memory on the client computer may take place prior to the issuance of a command from the software program, and may be such that the procedure is allocated to a second memory address on the client computer. The response may be provided by inserting the response into a messaging queue. In some embodiments, the commands include one or more parameters, which may be modified, and may also be used to determine a response to the commands. Templates representing screen images from the software application may be provided and compared with virtual screen images based on the intercepted commands. In some cases stored responses attributed to the templates are provided and one or more of the stored responses can be presented to the software application. The stored responses can include user authentication information, transaction response information, as well as other commands directed to the application.
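The template-matching step described above can be sketched in a few lines. This is an illustrative model only, not the patented implementation: a "virtual screen" is reduced to the set of label texts recovered from intercepted draw commands, and that set is compared against stored templates to look up predetermined responses. All template contents, labels, and response values below are invented for illustration.

```python
# Hypothetical template database: a recognized set of screen labels
# maps to the stored responses the client agent would replay.
templates = {
    frozenset({"User name:", "Password:"}):
        [("type", "jdoe"), ("type", "secret"), ("click", "OK")],
    frozenset({"Account number:", "Zip code:"}):
        [("type", "12345678"), ("type", "70803"), ("click", "Lookup")],
}

def match_template(virtual_screen_labels):
    """Return the stored responses for a recognized screen, or None
    if no template matches the labels captured from draw commands."""
    return templates.get(frozenset(virtual_screen_labels))

# Label order does not matter; only the set of labels identifies a screen.
responses = match_template(["Password:", "User name:"])
print(responses)
```

An unrecognized screen simply yields no responses, in which case the commands would be released to the operating system and the user prompted normally.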
The software program can reside on a client, a server, or some combination thereof, and can communicate with a client over a network such as a local area network, a wide area
network, a virtual private network, and/or the Internet. In some embodiments, the client comprises a remote access server and acts as a client to other application or web servers while providing access to clients on the network.
In another aspect, a system for providing a response to a software program comprises a system call filter, a system memory module, a recognition engine, and a client agent. The system call filter monitors and intercepts system commands issued by a software program. The commands are typically directed to a procedure (e.g., an operating system function or application procedure) assigned to a memory address on a client machine. To allow the filter to intercept the system calls, the memory module reassigns memory allocations on the client machine such that the system call filter is assigned the memory address previously assigned to the procedure called by the system command. The recognition engine scans the intercepted system commands, identifies those commands related to particular application events, and determines an appropriate response thereto. The client agent provides the responses to the application, using, for example, an application message queue.
The system can also include a template database for storing templates representing screen images, which in some embodiments can be compared to virtual screen images generated by the client agent to determine appropriate responses to the intercepted commands.
In another aspect, a client-resident apparatus configured to provide automated responses to software applications includes a rendering module for generating virtual screen images based on system commands intercepted between a software application and an operating system such that the operating system remains unaware of the system commands, a communications module for receiving stored responses to the virtual screen images based on a comparison of the virtual screen images to screen image templates, and a messaging module for providing the stored responses to the software application. By “unaware” is meant that the commands are not processed by and do not affect the operating system.
In another aspect, the invention comprises an article of manufacture having a computer-readable medium with the computer-readable instructions embodied thereon for performing the methods described in the preceding paragraphs. In particular, the functionality of a method of the present invention may be embodied on a computer-readable medium, such as, but not limited to, a floppy disk, a hard disk, an optical disk, a magnetic tape, a PROM, an EPROM, CD-ROM, or DVD-ROM. The functionality of the techniques may be embodied on the computer-readable medium in any number of computer-readable instructions, or languages such as, for example, FORTRAN, PASCAL, C, C++, Java, C#, TCL, BASIC and assembly language. Further, the computer-readable instructions may, for example, be written in a script, macro, or functionally embodied in commercially available software (such as, e.g., EXCEL or VISUAL BASIC).
The foregoing and other objects, features and advantages of the present invention disclosed herein, as well as the invention itself, will be more fully understood from the following description of preferred embodiments and claims, when read together with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention.
FIG. 1 schematically illustrates an environment in which the application monitoring processes of various embodiments of the invention may operate.
FIG. 2 is a schematic diagram of a system adapted to practice the methods according to one embodiment of the invention.
FIG. 3 is a more detailed schematic diagram of a client agent adapted to practice the methods according to one embodiment of the invention.
FIG. 4 is a flow diagram illustrating the process for providing automated responses to software applications according to one embodiment of the present invention.
DETAILED DESCRIPTION
In broad overview, FIG. 1 illustrates an environment 100 in which the various techniques, systems and apparatuses can be implemented to automate user authentication and computer application usage. The environment 100 includes one or more client devices 102 and may also include one or more server devices, including, without limitation, an application server 104, an authentication server 106, and a remote access server 108. Each of the client devices 102 and servers 104, 106 and 108 are in communication with a computer network 110 and the Internet 112 using various communication channels.
In one embodiment, the client devices 102 can be implemented as a system including software running on a personal computer (e.g., a PC with an Intel processor or an Apple MACINTOSH) capable of running such operating systems as the MICROSOFT WINDOWS family of operating systems from Microsoft Corporation of Redmond, Wash., the MACINTOSH operating system from Apple Computer of Cupertino, Calif., and various varieties of Unix, such as SUN SOLARIS from SUN MICROSYSTEMS, and GNU/Linux from RED HAT, INC. of Durham, N.C. (and others). The client devices 102 also can be implemented on such hardware as a smart or dumb terminal, network computer, wireless device, telephone, personal digital assistant, media player, information appliance, workstation, minicomputer, mainframe computer, or some combination, or as another computing device, that is operated, for example, as a general purpose computer, or a special purpose hardware device used solely for serving as a client device 102 in the environment 100.
In one embodiment, the servers 104, 106, and 108 are implemented using one or more server-class computers capable of running such operating systems as the MICROSOFT WINDOWS family of operating systems from Microsoft Corporation of Redmond, Wash., the MACINTOSH operating system from Apple Computer of Cupertino, Calif., and various varieties of Unix, such as SUN SOLARIS from SUN MICROSYSTEMS, and GNU/Linux from RED HAT, INC. of Durham, N.C. (and others). Web service software, such as the APACHE software, provided by the Apache Software Foundation, or INTERNET INFORMATION SERVICES from Microsoft Corporation may be used to provide web-based content to the clients 102.
The communications networks that connect the client devices 102 with the servers 104, 106 and 108 can use any media or any combination of media such as standard telephone lines, LAN or WAN links (e.g., T1, T3, 56 k, X.25), broadband connections (ISDN, Frame Relay, ATM), and wireless links (cellular, 802.11, Bluetooth, etc.). Preferably, the network carries TCP/IP protocol communications, and HTTP/HTTPS requests made by the client devices 102 to the servers 104, 106 and 108. The type of network is not a limitation, however, and any suitable network(s) and protocol(s) may be used. Non-limiting examples of networks that can
serve as or be part of the communications network include a wireless or wired Ethernet-based intranet 110, a local or wide-area network (LAN or WAN), and/or the global communications network known as the Internet 112, which can accommodate many different communications media and protocols, and any variation or combination. In instances where the client device 102 communicates with the application server 104 using an untrusted connection (e.g., from outside a corporate intranet, from an unknown domain, or beyond a firewall 114) the network communications can utilize a remote access server 108 such as a RADIUS server, which provides the necessary session-level security and authentication to provide the client 102 with access to secure systems and applications hosted within the firewall 114.
In one embodiment, a user located at a client device 102 (either directly connected to the network 110 or remotely connected via the Internet 112 and the remote access server 108) attempts to gain access to and use applications residing on the application server 104. The applications residing on the application server 104 can provide various services, including, by way of example only, network access, accounting services, software development services, on-line transaction processing services, document processing services, as well as others.
To gain access to the desired application(s), the user generally is required to provide some form of user authentication credential. User authentication credentials are typically classified into one of three categories—something a user knows (e.g., a password), something a user has (e.g., a token or smartcard), and something a user is (e.g., a biometric credential such as a fingerprint, retinal scan, facial scan, voiceprint, DNA sequence, or the like). During the user authentication process, and once access is granted, the applications present to the user various screens, input fields, and buttons as screen objects that the user manipulates (by, for example, completing a text field, clicking on a button, navigating to a particular web page) to effectuate some desired action in the application. Many of these actions are repetitive in nature (e.g., they are done each time a user logs in or performs a particular function) and often use the same data for each occurrence. It is these repetitive, predictable events (which often rely on user-provided input for the application to continue operating) that the invention aims to automate and, in some cases, eliminate.
Referring to FIG. 2, the client 102 includes various components, some being embodied in software, such as an operating system 202, and some embodied in hardware, such as various input devices 204 (e.g., a keyboard, a mouse, a biometric input device, a pointer, etc.), memory, storage devices, and a display 206. In addition to the operating system 202, one or more target applications 208 (or in some cases, components of applications) reside on the client 102. For example, the software code for application 208 (which may be, for example, a word processing application, a spreadsheet application, or an Internet browser) may reside solely within the various storage devices of the client 102. In such cases, the various components of the application 208 are generally stored in non-volatile memory (e.g., on a “hard drive” of the client) and loaded into random access memory (RAM) when the application 208 is started. In other embodiments, the application 208 may reside on a server (e.g., the application server 104 of FIG. 1) and provide one or more components to the client 102 only when needed. The memory locations of the various application components (as well as operating system modules) are determined upon instantiation of the application by an application loader. Typically, components of active, running applications are stored in RAM on the client 102 until an application is terminated, at which point the components are removed from RAM. In some embodiments, the components of application 208 can reside on a number of devices, depending, for example, on processing requirements, geographical constraints, and other architectural considerations.
In a typical implementation, the application 208 interacts with the operating system 202 (and potentially other applications) through the use of system modules 214. For example, on a client using a WINDOWS-based operating system, the system modules 214 process requests from the applications 208 operating on the client 102 to the operating system 202. This allows interactive applications to operate using an “event-driven” model (e.g., mouse clicks, screen renderings, HTTP requests, keystrokes, etc.). In instances where the application needs an action to be taken by the operating system 202, the application typically determines the proper system module that can process the request (e.g., a library) and sends a command to the module to effectuate the action. To confirm that the action is complete (e.g., the screen is rendered), the application monitors the application message queue 210 for an indication, for example, that a data field has been completed and a form posted to the server, an object has been selected, or a series of characters typed.
In various embodiments, the present invention automates many of these processes by providing a system call filter 216 and a client agent 218 that, in combination with a recognition engine 220 and a template database 224, emulate operating system procedures and user input. In some cases, these components reside on the client 102 (to allow use of secure applications when not physically connected to the network, for example), whereas in other embodiments certain components, such as the recognition engine 220 and/or the template database 224, reside on the authentication server 106.
To implement the system call filter 216, memory assignments on the client 102 are rewritten such that the memory addresses associated with various system modules (e.g., DLLs) are reallocated to the system call filter 216. For example, in one embodiment, a jump table provides a listing of the memory addresses for various routines and associates a numerical value (typically an integer) or a memory address with each table entry. When an application invokes the routine, it refers to the jump table to find the memory address associated with the routine. Substituting the address of the system call filter for the addresses associated with various system modules causes an application 208 seeking to invoke a particular routine or procedure by issuing a command to that routine to “unknowingly” send the commands instead to the system call filter 216. As a result, the commands intended for the operating system are instead intercepted by the system call filter 216. Effectively, the application is suspended because the operating system does not process the system call, and thus cannot provide the expected message to the application message queue 210 indicating that the command has been completed.
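The jump-table substitution described above can be modeled compactly. The sketch below is a simplified stand-in, not the patented Windows mechanism: the table maps routine names to callables rather than memory addresses to machine code, and the names `draw_text` and `call_filter` are invented for illustration. The essential behavior is the same: once the table entry is rewritten, the application's call reaches the filter and never the operating system.

```python
# Records what "the operating system" processed vs. what the filter caught.
processed_by_os = []
intercepted = []

def draw_text(arg):
    """Stands in for an operating-system screen-draw routine."""
    processed_by_os.append(arg)

def call_filter(arg):
    """Stands in for the system call filter 216: records the command
    instead of letting the operating system process it."""
    intercepted.append(arg)

# The jump table the application consults to locate a routine.
jump_table = {"DrawText": draw_text}

jump_table["DrawText"]("login prompt")   # reaches the "OS"
jump_table["DrawText"] = call_filter     # rewrite the table entry
jump_table["DrawText"]("login prompt")   # now silently intercepted

print(processed_by_os, intercepted)
```

Because the filter does not forward the call, no completion message ever appears in the message queue, which is exactly why the application suspends as the specification describes.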
The system call filter 216 scans the commands received from the application 208 and determines which commands are related to various application and/or operating system events. Although the system call filter 216 can receive and identify any command, screen draw commands are of particular interest, as these commands generally instruct the operating system to present screens (e.g., screens 228 and 230) to the user that prompt user action. As a result, the application 208 halts processing and awaits a response in the form of a message inserted into the application message queue 210. The filter 216 identifies those commands that indicate application events (by, for example, comparing the commands to a previously-compiled list of commands corre-
sponding to such events) and provides the commands to the client agent 218. The commands can be sent to the agent 218 individually, in groups (defined, for example, by a common parameter such as a window handle or object name, or based on a particular time segment) or in bulk.
In some embodiments, an application probe 226 (also referred to as a “hook”) is inserted between the message queue 210 and the application 208. The probe 226 intercepts messages directed to the application 208 via the message queue 210 before the messages are received by the application and, if necessary, the probe 226 may act on the messages. In a manner similar to the way the system call filter 216 intercepts commands from the application 208 to the operating system 202, the application probe 226 intercepts messages to the application 208. By intercepting the messages being sent to the application 208, the probe 226 (or the system generally) can act on the messages before they reach the application 208.
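The probe's position between the queue and the application can be sketched as a simple filtering pump. This is an assumption-laden model, not Windows message internals: the message names and the `WM_FILTER_INTERNAL` marker are hypothetical, chosen only to show a probe inspecting, passing, or suppressing messages before delivery.

```python
import collections

queue = collections.deque()   # stands in for the application message queue
delivered = []                # what the application actually receives

def probe(message):
    """Inspect a message before delivery; suppress internal ones."""
    kind, payload = message
    if kind == "WM_FILTER_INTERNAL":   # hypothetical internal marker
        return None                    # suppressed: the app never sees it
    return message

def pump():
    while queue:
        msg = probe(queue.popleft())
        if msg is not None:
            delivered.append(msg)      # reaches the application

queue.append(("WM_KEYDOWN", "a"))
queue.append(("WM_FILTER_INTERNAL", "sync"))
queue.append(("WM_LBUTTONDOWN", (10, 20)))
pump()
print(delivered)
```

The same hook point also allows the system to inject synthesized responses, since anything the probe returns is indistinguishable, from the application's perspective, from a genuine queued message.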
For example, the system messages below represent the process of hooking into commands being generated by a secure shell client application that communicates via a window:
- 09:17:46 HOK SshClient Resetting GDI capture for hwnd: 0x000206d6
- 09:17:46 HOK SshClient Unable to get import table for module: 0x7c9f0000 [c:\windows\system32\ntdll.dll]
- 09:17:46 HOK SshClient Unable to get import table for module: 0x20000000 [c:\windows\system32\spap2rem.dll]
- 09:17:46 HOK SshClient Initializing GDI Signature capture for hwnd: 0x000206d6
Once the system call filter is initiated, it modifies the memory address allocations for system modules such as Kernel32, User32 and OLE32 used in the MICROSOFT WINDOWS operating system. Such modifications may be represented by the following messages, for example:
- 09:17:46 HOK SshClient Modifying: KERNEL32 Starting: 6 Count: 5
- 09:17:46 HOK SshClient Modifying: USER32 Starting: 5 Count: 23
- 09:17:46 HOK SshClient Modifying: GDI32 Starting: 28 Count: 20
- 09:17:46 HOK SshClient Modifying: USP10 Starting: 4 Count: 5
- 09:17:46 HOK SshClient Modifying: OLE32 Starting: 53 Count: 0
- 09:17:46 HOK SshClient Unable to get module handle for USP10
- 09:17:46 HOK SshClient Unable to get module handle for USP10
- 09:17:46 HOK SshClient Modifying: USER32 Starting: 4 Count: 5
- 09:17:46 HOK SshClient Modifying: USP10 Starting: 4 Count: 0
- 09:17:46 HOK SshClient Modifying: OLE32 Starting: 4 Count: 0
- 09:17:46 HOK SshClient Modifying: USER32 Starting: 0 Count: 0
- 09:17:46 HOK SshClient Modifying: USER32 Starting: 0 Count: 0
- 09:17:46 HOK SshClient Modifying: USER32 Starting: 0 Count: 0
The hooking process may redirect calls to specific modules. For example, calls to an operating system function such as LoadLibraryA within Kernel32 may be redirected to a wrapper function that is part of the system call filter. This may be done for a specific set of functions within Kernel32, User32, and GDI32, as well as other operating system and/or application functions, such that the system maintains adequate coverage of system calls. For example, functions such as DrawTextA and ExtTextOutA instruct a client to display text at a given location, either on the user’s screen or in non-displayable (off-screen) memory so that it can be painted to the display at a later time. Because of this queuing of text, the system also tracks the generation and management of memory-based bitmap images and how they are used. Other system functions such as LoadLibrary and GetProcAddress are tracked to detect when an application directly loads a system library, as opposed to when the loader dynamically loads the libraries. Examples of the redirected calls to specific modules are shown below:
- 09:17:46 HOK SshClient Hooking module: 0x00040000 [c:\program files\ssh communications security\ssh secure shell\sshclient.exe]
- 09:17:46 HOK SshClient Redirecting {0x7401d7f7} -> {0x10037a32}
- KERNEL32 : LoadLibraryA in module: KERNEL32.dll
- 09:17:46 HOK SshClient Redirecting {0x7401c2a8} -> {0x10037a73}
- KERNEL32 : GetProcAddress in module: KERNEL32.dll
- 09:17:46 HOK SshClient Hooking module: 0x74049021 [c:\windows\system32\kernel32.dll] within dll: c:\program files\ssh communications security\ssh secure shell\sshclient.exe
- 09:17:46 HOK SshClient Redirecting {0x77440056} -> {0x10038496}
- USER32 : SetWindowPos in module: USER32.dll
- 09:17:46 HOK SshClient Redirecting {0x7744257} -> {0x10037a48}
- USER32 : FillRect in module: USER32.dll
- 09:17:46 HOK SshClient Redirecting {0x7744694} -> {0x10038838}
- USER32 : ReleaseDC in module: USER32.dll
- 09:17:46 HOK SshClient Redirecting {0x7747727} -> {0x100371d1}
- USER32 : InvertRect in module: USER32.dll
- 09:17:46 HOK SshClient Hooking module: 0x77440000 [c:\windows\system32\user32.dll] within dll: c:\program files\ssh communications security\ssh secure shell\sshclient.exe
- 09:17:46 HOK SshClient Redirecting {0x77161b2} -> {0x10037a06}
- GDI32 : BitBlt in module: GDI32.dll
- 09:17:46 HOK SshClient Redirecting {0x770ec6c} -> {0x10037258}
- GDI32 : Rectangle in module: GDI32.dll
- 09:17:46 HOK SshClient Redirecting {0x77bf452} -> {0x10037a7f}
- GDI32 : ExtTextOutA in module: GDI32.dll
- 09:17:46 HOK SshClient Redirecting {0x77f620f} -> {0x10038743}
- GDI32 : CreateBitmap in module: GDI32.dll
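The jump-table redirection shown in the log above can be sketched in miniature. The following Python fragment is an illustrative analogue, not the patent's WINDOWS implementation; all names are hypothetical. It replaces a function's entry point with a wrapper that records the call before forwarding it, much as the system call filter substitutes its own wrapper addresses for the originals:

```python
import os

intercepted = []  # the "filter" log of redirected calls

_original_getcwd = os.getcwd  # save the original entry point


def hooked_getcwd():
    # Interception step: record the call, then forward it to the real
    # function, mirroring the filter's wrappers for LoadLibraryA and friends.
    intercepted.append("getcwd")
    return _original_getcwd()


os.getcwd = hooked_getcwd      # redirect the "jump table" entry
cwd = os.getcwd()              # an application call now routed via the wrapper
os.getcwd = _original_getcwd   # restore the original entry point on teardown
```

Restoring the saved entry point parallels the step in which the original memory address values are reinstated once the filter is removed.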
The following trapped commands, for example, represent the drawing of an application menu; they are captured and redirected to the client agent for virtual rendering:
```plaintext
09:17:47 HOK SshClient TextOutA(0xc9010909 4 109 11) ->&File
09:17:47 HOK SshClient TextOutW(0xc9010909 9 2 4) -> File
```
Some or all of the collected text is aggregated and marked as label text with an associated value. The set of labels is then used by the client agent 218 to determine whether a particular screen is being displayed and/or whether the application is running in a specific context. In the example below, each element is assigned to label 1 with a unique control ID (1 through 17) to identify the screen and the elements within the screen.
The client agent 218 emulates the operating system by generating “virtual” screen images based on the commands received from the filter 216. Here, a “virtual” screen refers to a command or set of commands that, when processed by an operating system, would render an actual screen for presentation to a user; because the commands are not sent to the operating system, they remain unprocessed but, owing to the particular types of commands and/or parameters, are still recognizable as a screen. The client agent 218 then transmits these commands to the recognition engine 220, which, by comparing the commands to previously stored commands associated with a particular screen or application template (or other application event), identifies the command set as relating to a particular screen.
Further, responses for the recognized screen, having been previously stored in database 224 and associated with the screen, may then be provided to the application 208. Consider, for example, a login screen comprising text boxes for a user ID and a password, along with a cancel button hbtn_cancel that, when selected by the user, clears any values from the text boxes. Because each object is associated with the same screen, they share the same window handle (typically of the form hWnd_login), indicating to the system call filter 216 that each of these objects is associated with a single screen. Further, because of the naming conventions used to identify the objects (i.e., “login,” “ID,” and “pswd”), the filter 216 can identify the common screen as a login screen. Alternatively or in addition, other characteristics of the screen and the objects thereon may also be captured, such as the physical screen location (which can be measured in absolute terms such as pixels or inches, or in relative terms in comparison with other screens and objects), screen colors, HTML tags, XML data tags, images embedded on the screen (e.g., application logos), and the like. Each element adds to the particular “fingerprint” of the screen type and function, making it possible to programmatically identify the screen without human intervention (or exclusive reliance on object naming conventions).
In some embodiments, the degree of match between a virtual screen rendered by the client agent and the template provided by the template database need not be absolute. For example, as screens evolve through application updates, a high percentage of the screen attributes may remain constant (e.g., the screen name, and various objects within the screen) while new screen objects are added. In some cases, the process of matching the virtual screens to the stored templates can include thresholds that define a minimum degree of match adequate to allow the automated responses to be forwarded to the application. Further, individual elements of a screen may be weighted such that particular objects (e.g., a login text box and screen name) must match whereas other objects (logos, locations, etc.) are allowed to vary.
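The thresholded, weighted matching described above can be sketched as follows. This is a minimal illustration; the attribute names, weights, and the 0.8 threshold are hypothetical, not values taken from the patent:

```python
def match_score(screen, template, weights):
    """Weighted fraction of template attributes that the screen matches."""
    total = sum(weights.get(key, 1) for key in template)
    matched = sum(weights.get(key, 1)
                  for key, value in template.items()
                  if screen.get(key) == value)
    return matched / total


# Required attributes (screen name, login box) carry large weights; a logo
# is allowed to vary across application updates, so it carries a small one.
template = {"name": "login", "has_user_box": True, "logo": "v1"}
weights = {"name": 5, "has_user_box": 5, "logo": 1}

# An updated release changed the logo but kept the essential elements.
updated_screen = {"name": "login", "has_user_box": True, "logo": "v2"}
score = match_score(updated_screen, template, weights)   # 10/11, about 0.91
recognized = score >= 0.8   # minimum degree of match for automated responses
```

The updated screen still clears the threshold because the heavily weighted objects match, even though the logo differs.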
In addition to storing the templates for comparison purposes, the template database 224 can also include one or more responses for each screen such that when a screen is “recognized,” the proper responses are provided to the client agent 218. Using the login screen described above as an example, the template database can associate a user name, a password, and a mouse-click event message with the login screen. In some embodiments, the username field may be populated (either automatically using a cookie or other stored data item or manually by the user) and used to select the proper password to associate with the screen. In some cases, all the necessary responses are provided automatically, based, for example, on a previously provided and authenticated biometric credential. Thus, the system can determine the proper responses that are to be provided to a particular screen generated by an application and pass the responses back to the application, all without interaction of the operating system or any user input (or actual screen rendering), greatly accelerating the application login process and use of the application.
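The association between a recognized screen and its stored responses can be sketched as a simple lookup feeding a message queue. All identifiers and response values here are hypothetical:

```python
# Hypothetical template database: recognized screen -> responses to inject.
template_db = {
    "login": [
        ("set_text", "ID", "alice"),     # username field
        ("set_text", "pswd", "secret"),  # password selected for that user
        ("click", "hbtn_ok", None),      # mouse-click event message
    ],
}

message_queue = []  # stands in for the application's message queue


def respond(screen_id):
    """Forward the stored responses for a recognized screen to the queue."""
    for response in template_db.get(screen_id, []):
        message_queue.append(response)


respond("login")   # the application then finds three responses waiting
```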
In some embodiments, the system call filter 216, client agent 218, recognition engine 220 and template database 224 implement the functionality of the present invention in hardware or software, or a combination of both on a general-purpose computer. In addition, such a program may set aside portions of a computer’s RAM to provide control logic that affects one or more of the message interception, message filtering, screen rendering, comparison, and response retrieval. In such an embodiment, the program may be written in any one of a number of high-level languages, such as FORTRAN, PASCAL, C, C++, C#, Java, TCL, or BASIC. Further, the program can be written in a script, macro, or functionally embedded in commercially available software, such as EXCEL or VISUAL BASIC. Additionally, the software could be implemented in an assembly language directed to a microprocessor resident on a computer. For example, the software can be implemented in Intel 80x86 assembly language if it is configured to run on an IBM PC or PC clone. The software may be embedded on an article of manufacture including, but not limited to, “computer-readable program means” such as a floppy disk, a hard disk, an optical disk, a magnetic tape, a PROM, an EROM, or CD-ROM.
FIG. 3 illustrates the modules of the client agent 218 in greater detail. These include a messaging module 304, a rendering module 308, and a communications module 312. The messaging module 304 interfaces with the application message queue and monitors messages that are transmitted from the operating system to the application. It also provides the responses to the application based on the recognized screens and associated responses. The rendering module 308 receives the screen draw commands intended for the operating system from the system call filter, and based on the draw commands, creates the virtual screens. Using the communication module 312, the rendering module 308 interacts with the recognition engine to identify the rendered screen and provide the appropriate responses thereto. In some embodiments, the rendering module 308 provides additional information (e.g., new or modified parameters) based on previously recognized screens or user-provided information.
The client agent 218 can reside on the computer responsible for providing and managing user sessions with the secure system. For example, this can be the user's workstation if the user is connected to a LAN or working offline. With a remote setup, for example, a VPN connection provides secure network access and the agent 218 runs on the client. In other embodiments, such as server-based computing in which the session is provided by a CITRIX server or via Remote Terminal Services, the agent runs within each virtual session on the server.
FIG. 4 illustrates an exemplary technique for providing responses to applications in accordance with the invention. A user operating a client machine selects an application, and based on the initial selection of the application, the application and its libraries are loaded into the memory of the client (STEP 404) using, for example, an application loader. The memory locations of the libraries are stored in a table (referred to as a “jump table” in a WINDOWS 2000 system) such that when the application sends commands to one of the libraries, the application refers to the jump table to determine the appropriate memory address to send the command. In accordance with the invention, however, the memory addresses of the client machine are reallocated (STEP 408) such that the memory addresses initially assigned to the application libraries (and, in some cases, operating system functions) are reassigned to the system call filter 216 (see FIG. 2). In some cases, the libraries can be temporarily allocated to new memory addresses.
While operating, the application generates system calls (STEP 412) to instruct the operating system to perform various functions, such as to draw a screen and render it on the client display for presentation to the user. However, because the system call filter is assigned the memory addresses that the application attributes to the libraries or procedures used to execute the commands, the commands never reach the operating system and are instead intercepted (STEP 416) by the filter. The filter determines which commands are directed to procedures that effectuate screen events and that may require user input for further processing. Examples of such screen events include the completion of text boxes, clicking of buttons, selections from drop-down boxes (“combo-boxes”), menu selections, selection of URLs, selection of media elements, and the like. The intercepted calls determined to be related to screen events are then provided to the client agent (STEP 420).
By emulating the operating system procedures and/or application libraries to which the application sends commands, the client agent renders virtual screens (STEP 424) based on the intercepted commands and any parameters that may be included with or in the commands. The screens can be rendered based on object-specific commands (e.g., objects that comprise screen-based images such as buttons, fields, text, etc.) based on scrolling commands for cursor-based screens. For example, a cursor-based screen may not utilize discrete screens that appear and disappear on a user display, but instead the display is rendered line-by-line as a user provides input in response to a cursor. To capture such screens, the intercepted commands can be grouped by counting of carriage returns (e.g., every twenty carriage returns
equals one screen) or time (e.g., a snapshot is taken every n seconds) to represent a screen. Once a virtual screen is compiled, the data representing that screen is sent to the recognition engine (STEP 428) where it is compared (STEP 432) to screen templates stored, for example, in the template database.
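The carriage-return grouping rule for cursor-based screens can be sketched directly, using the document's own example of twenty lines per screen (the function name and sample data are hypothetical):

```python
def group_into_screens(lines, per_screen=20):
    """Group cursor-based output into screens of per_screen lines each,
    following the rule that every twenty carriage returns equals one screen."""
    return [lines[i:i + per_screen] for i in range(0, len(lines), per_screen)]


captured = ["row %d" % n for n in range(45)]  # 45 intercepted output lines
screens = group_into_screens(captured)        # three screens: 20 + 20 + 5
```

Each resulting group is then compiled into a virtual screen and sent on to the recognition engine as a unit.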
The virtual screens can be compared to the screen templates based on one or more attributes of the screens, such as element naming conventions, geometric shapes, physical layouts, images included within the screens, the ordering of the screens within the application (e.g., if SCREEN_XZY was recognized previously, SCREEN_ABC is the next screen) as well as parameters supplied with the screens. If a match is found (DECISION STEP 436), the appropriate response (or responses) associated with that screen (and in some cases the screen/parameter combination) are identified (by querying a data storage device, for example) and provided to the client agent (STEP 440). The client agent then forwards the responses to the application message queue (STEP 444) where the application expects to find responses based on the originally issued commands. The responses are delivered to the application (STEP 448), and the application continues to operate as intended. For example, once the user authentication steps are completed, the user is presented with a screen in which they can begin to interact with the application on an ad hoc basis.
If the screen is not recognized (because, for example, the application is a new application or the screen is encountered infrequently), the recognition engine captures the various attributes of the screen (similar to those discussed above with respect to the comparison step) and stores the screen attributes as a new template (STEP 452). If user inputs are identified in response to the newly captured screen, the inputs can also be stored in the template database as potential responses (STEP 456), and thus can be automatically supplied when the screen is next encountered. The process can continue to intercept commands and automatically provide appropriate responses until the system no longer recognizes a screen and user input is required, or the application is terminated (STEP 460). Once the application is terminated, or the system is disabled because the automatic response feature is no longer needed (e.g., after the user has logged into all necessary systems and any administrative screens have been passed), the memory address values are restored (STEP 464) to their original values and the intercept filter is removed from the jump table listing. The applications then operate as they would under normal conditions.
While the invention has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.
Once the system has successfully “hooked” the various modules, the system call filter begins intercepting application calls using, for example, a wrapper function. For location-based draw commands, the XY position, the surface handle, and any displayed text are obtained. In some embodiments, this information may be checked to see whether the surface is actually visible or is an off-screen bitmap (not visible) that is painted by the application and then copied to the display, such that a user does not see the individual characters being drawn. In some cases, scrolling information and rectangular draw operations that may or may not overwrite previously painted data are also captured. In the TextOutA and TextOutW examples shown earlier, the commands are identified, ordered, and stored into indexed buffers for retrieval when the drawing stops.
What is claimed is:
1. A method for providing a response, without user interactivity, to a software program executed by a processor, the method comprising:
providing a plurality of template screen images;
monitoring system commands issued by a software program during processor execution thereof;
intercepting system commands relating to an application event and comprising one or more screen-rendering commands;
generating a virtual screen image based on the intercepted screen-rendering commands;
identifying a template screen image that matches the generated virtual screen image; and
providing, to the executing software program, responses attributed to the identified matching screen template as if the screen-rendering commands had been executed and user input obtained in response thereto.
2. The method of claim 1 wherein the software program issues system commands to a procedure assigned to a first memory address on a client computer and further comprising reassigning memory allocations on the client computer such that the procedure is allocated to a second memory address of the client computer.
3. The method of claim 2 wherein the procedure comprises an operating system procedure.
4. The method of claim 2 wherein the procedure comprises an application procedure.
5. The method of claim 2 wherein the first memory address refers to a memory address in random access memory.
6. The method of claim 2 wherein the reassigning step occurs prior to the issuance of a command from the software program.
7. The method of claim 1 wherein a response to the software program is provided by inserting the response into a message queue.
8. The method of claim 1 wherein the intercepted commands comprise one or more parameters.
9. The method of claim 8 further comprising modifying at least one of the one or more parameters.
10. The method of claim 1 wherein the system commands comprise screen draw commands.
11. The method of claim 1 wherein the responses comprise user authentication information.
12. The method of claim 1 wherein the responses comprise transaction response information.
13. The method of claim 1 wherein the software program resides on a server.
14. The method of claim 13 wherein the software program issues system commands to a procedure assigned to a memory address on a client computer and the client and the server are in communications over a network.
15. The method of claim 14 wherein the network comprises one or more of a local area network, a wide area network, the Internet, and a virtual private network.
16. The method of claim 13 wherein the client computer comprises a remote access server.
17. A non-transitory computer-readable medium comprising computer-readable instructions for:
providing a plurality of template screen images;
monitoring system commands issued by a software program during processor execution thereof;
intercepting system commands relating to an application event and comprising one or more screen-rendering commands;
generating a virtual screen image based on the intercepted screen-rendering commands;
identifying a template screen image that matches the generated virtual screen image; and
providing, to the executing software program, responses attributed to the identified matching screen template as if the screen-rendering commands had been executed and user input obtained in response thereto.
18. A system for providing a response to a software program, the system comprising:
a processor;
a template database for storing a plurality of template screen images;
a system call filter, executable by the processor, for (i) monitoring system commands issued by a software program during processor execution thereof, (ii) intercepting system commands relating to an application event, the commands comprising one or more screen-rendering commands, and (iii) generating a virtual screen image based on the intercepted screen-rendering commands;
a recognition engine, executable by the processor, for recognizing the screen-rendering commands and identifying a template screen image that matches a screen corresponding to the screen-rendering commands; and
a client agent for providing, to the executing software program, responses attributed to the identified matching screen template as if the screen-rendering commands had been executed and user input obtained in response thereto.
19. The system of claim 18 further comprising a template database for storing a plurality of templates, each template representing one or more screen images.
Disclaimer and Legal Information
You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.
The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
Copies of documents which have an order number and are referenced in this document may be obtained by calling 1-800-548-4725 or by visiting: http://www.intel.com/design/literature.htm
*Other names and brands may be claimed as the property of others.
Copyright © 2016, Intel Corporation. All rights reserved.
Benchmark and Performance Disclaimers
Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.
Table of Contents
1 Introduction ............................................................................................................. 6
1.1 About This User Guide .................................................................................. 6
1.2 Target Audience ......................................................................................... 6
1.3 Related Documents ..................................................................................... 6
1.4 Terminology and Acronyms ......................................................................... 6
2 Product Overview .................................................................................................. 7
2.1 Goals and Objectives .................................................................................... 7
2.1.1 Portability and Platform Independence .............................................. 7
2.2 Product Environment ................................................................................... 7
2.2.1 Hardware Environment ....................................................................... 7
2.2.2 Software Environment ...................................................................... 7
3 SCIF Programming Concepts ............................................................................... 9
3.1 Nodes ............................................................................................................. 9
3.2 Ports ............................................................................................................... 9
3.3 Endpoints and Connections ......................................................................... 9
3.4 Messaging Layer .......................................................................................... 12
3.5 Memory Registration .................................................................................. 13
3.5.1 Duplication of Endpoint Descriptors Across a fork() ....................... 19
3.5.1.1 Registered Memory Across a fork() ............................................ 20
3.5.2 Kernel Mode Registration-Related API ............................................ 21
3.6 Mapped Remote Memory ............................................................................ 22
3.6.1 Kernel Mode Mapping-Related API .................................................. 24
3.7 Remote Memory Access .............................................................................. 24
3.7.1.1 DMA Ordering ................................................................................ 27
3.8 RMA Synchronization .................................................................................. 27
3.9 Registered Window Deletion ....................................................................... 30
3.9.1 Connection Termination ..................................................................... 31
3.9.2 Normal Connection Termination ....................................................... 31
3.9.3 Abnormal Connection Termination .................................................... 31
3.10 Process Termination .................................................................................... 32
3.11 User Mode Utility Functions ....................................................................... 32
3.12 Kernel Mode Utility Functions .................................................................... 33
4 Programming Considerations .............................................................................. 34
4.1 Unaligned DMAs ......................................................................................... 34
4.2 Synchronization Overhead .......................................................................... 34
4.3 Large pages .................................................................................................. 34
Table of Figures
<table>
<thead>
<tr>
<th>Figure</th>
<th>Description</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Intel® ManyCore Platform Software Stack (MPSS)</td>
<td>8</td>
</tr>
<tr>
<td>2</td>
<td>Connecting two endpoints</td>
<td>11</td>
</tr>
<tr>
<td>3</td>
<td>Connected endpoints</td>
<td>12</td>
</tr>
<tr>
<td>4</td>
<td>Registration mapping to memory objects</td>
<td>15</td>
</tr>
<tr>
<td>5</td>
<td>Registered window configurations</td>
<td>15</td>
</tr>
<tr>
<td>6</td>
<td>Registered window configurations</td>
<td>16</td>
</tr>
<tr>
<td>7</td>
<td>Hard coded registered addresses</td>
<td>18</td>
</tr>
<tr>
<td>8</td>
<td>Registered addresses same as virtual addresses</td>
<td>19</td>
</tr>
<tr>
<td>9</td>
<td>Registering windows using scif_pin_pages()</td>
<td>22</td>
</tr>
<tr>
<td>10</td>
<td>Address space mapping of scif_mmap()</td>
<td>23</td>
</tr>
<tr>
<td>11</td>
<td>Virtual address space mapping that intersects multiple windows</td>
<td>24</td>
</tr>
<tr>
<td>12</td>
<td>scif_readfrom()/scif_writeto() address space mapping</td>
<td>26</td>
</tr>
<tr>
<td>13</td>
<td>scif_vreadfrom()/scif_vwriteto() address space mapping</td>
<td>26</td>
</tr>
<tr>
<td>14</td>
<td>scif_fence_mark()/scif_fence_wait()</td>
<td>28</td>
</tr>
<tr>
<td>15</td>
<td>scif_fence_signal()</td>
<td>29</td>
</tr>
<tr>
<td>16</td>
<td>Using scif_fence_signal()</td>
<td>30</td>
</tr>
</tbody>
</table>
1 Introduction
1.1 About This User Guide
This user guide describes the Symmetric Communication Interface (SCIF) for the Intel® Xeon Phi™ Product Family. SCIF is a component of the Intel® ManyCore Platform Software Stack (MPSS). The goal of this document is to present SCIF concepts and usage. Refer to the SCIF header file, scif.h, and the SCIF man pages for detailed information on the SCIF API.
1.2 Target Audience
The target audience includes tools developers and application developers. After reading this document, the reader will be able to use the SCIF interface for communication between the components of a distributed application.
1.3 Related Documents
<table>
<thead>
<tr>
<th>Document Title</th>
<th>Revision Number</th>
<th>Availability</th>
</tr>
</thead>
<tbody>
<tr>
<td>MPI overview and specification</td>
<td></td>
<td><a href="http://www.mpi-forum.org/">http://www.mpi-forum.org/</a></td>
</tr>
<tr>
<td>OFED* overview</td>
<td></td>
<td><a href="http://www.openfabrics.org/OFED-Overview.html">http://www.openfabrics.org/OFED-Overview.html</a></td>
</tr>
</tbody>
</table>
1.4 Terminology and Acronyms
<table>
<thead>
<tr>
<th>Term</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>API</td>
<td>Application Programming Interface</td>
</tr>
<tr>
<td>HCA</td>
<td>(Infiniband) Host Channel Adapter</td>
</tr>
<tr>
<td>MIC</td>
<td>Intel® Many Integrated Core</td>
</tr>
<tr>
<td>MPSS</td>
<td>ManyCore Platform Software Stack</td>
</tr>
<tr>
<td>OFED</td>
<td>Open Fabrics Enterprise Distribution</td>
</tr>
<tr>
<td>RMA</td>
<td>Remote memory access</td>
</tr>
<tr>
<td>RDMA</td>
<td>Remote direct memory access</td>
</tr>
</tbody>
</table>
2 Product Overview
2.1 Goals and Objectives
SCIF provides a mechanism for inter-node communication within a single platform, where a node is an Intel® Xeon Phi™ coprocessor or an Intel® Xeon® host processor complex. In particular, SCIF abstracts the details of communicating over the PCIe bus while providing an API that is symmetric between the host and MIC Architecture devices. An important design objective for SCIF was to deliver the maximum possible performance given the communication capabilities of the hardware.
2.1.1 Portability and Platform Independence
The Intel® MIC software architecture supports a computing model in which the workload may be distributed across both the Intel® Xeon® host processor complex and Intel® MIC Architecture coprocessors. An important property of SCIF is symmetry: SCIF drivers must present the same interface on both the host processor and the Intel® MIC Architecture coprocessor, so that software written to SCIF can execute wherever it is most appropriate.
Since the Intel® MIC Architecture coprocessor may use a different operating system than that running on the host, the SCIF architecture is designed to be operating system independent. This ensures SCIF implementations on different operating systems can inter-communicate.
2.2 Product Environment
As mentioned earlier, the Intel® MIC software architecture supports a computing model in which the workload is distributed across both Intel® host processors and Intel® MIC Architecture coprocessors.
2.2.1 Hardware Environment
SCIF supports communication between Xeon host processors and Intel® MIC Architecture coprocessors within a single platform. Communication between such components that are in separate platforms can be performed using standard communication channels such as Infiniband and TCP/IP.
2.2.2 Software Environment
A SCIF implementation on a host or Intel® MIC Architecture coprocessor includes both a user mode (Ring 3) library and a kernel mode (Ring 0) driver as shown in Chapter 3 SCIF Programming Concepts. Most of the components in the Intel® MPSS use SCIF for communication. Refer to the Intel® Xeon Phi™ coprocessor (codename: Knights Corner) Software Developers Guide for a discussion of the other components in the Intel® MPSS and their relationship to SCIF.
Figure 1: Intel® ManyCore Platform Software Stack (MPSS)
3 SCIF Programming Concepts
The SCIF driver provides a reliable connection-based messaging layer, as well as functionality which abstracts RMA operations. In the following sections we describe these architectural concepts in some detail. The SCIF API is documented in the SCIF header file, scif.h, and the SCIF man pages. A common API is exposed in both user mode (ring 3) and kernel mode (ring 0), apart from slight differences in some signatures; in addition, a few functions are available only in user mode, and a few only in kernel mode.
3.1 Nodes
A SCIF node is a physical endpoint in the SCIF network. The host and MIC Architecture devices are SCIF nodes. From the SCIF point of view, all host processors (CPUs) under a single OS are considered a single SCIF (host) node.
We generally use node instead of SCIF node where this will not cause confusion.
Each node in the SCIF network has a node identifier that is assigned when the platform is booted. Node IDs are generally based on PCIe discovery order and thus may change across a platform reboot; however, the host node is always assigned ID 0.
3.2 Ports
A SCIF port is a logical destination on a SCIF node. We generally use port rather than SCIF port. Within a node, a SCIF port on that node may be referred to by its number, a 16-bit integer. This is analogous to an IP port; for instance, SSH usually talks over TCP port 22. We sometimes use local port to refer to a port that is on the same node as a particular point of reference.
A SCIF port identifier is unique across a SCIF network, comprising both a node identifier and a local port number. A SCIF port identifier is analogous to a complete TCP/IP address (for instance 192.168.1.240:22).
Analogous to Internet sockets, some ports may be well-known, and monitored by service daemons launched with the local OS or later. Any such services are layered on SCIF and thus beyond the scope of this document.
3.3 Endpoints and Connections
The entity through which a port is accessed is called an endpoint. An endpoint can be listening (waiting for a connection request from another endpoint) or connected (able to communicate with a remote connected endpoint). A connection is an association established between two endpoints for the purpose of communication. The following functions are used during the connection process:
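The connection-related declarations, paraphrased here from scif.h, are listed below for reference. The header and man pages are authoritative; exact signatures may differ slightly between MPSS releases:

```c
scif_epd_t scif_open(void);
int scif_bind(scif_epd_t epd, uint16_t pn);
int scif_listen(scif_epd_t epd, int backlog);
int scif_connect(scif_epd_t epd, struct scif_portID* dst);
int scif_accept(scif_epd_t epd, struct scif_portID* peer, scif_epd_t* newepd, int flags);
int scif_close(scif_epd_t epd);
```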
The process for establishing a connection is similar to socket programming: A process calls `scif_open()` to create a new endpoint; `scif_open()` returns an endpoint descriptor that is used to refer to the endpoint in subsequent SCIF function calls. The endpoint is then bound to a port on the local node using `scif_bind()`. An endpoint which was opened and bound to a port is made a listening endpoint by calling `scif_listen()`. To create a connection, a process opens an endpoint and binds it to a local port, and then requests a connection by calling `scif_connect()`, specifying the port identifier of some listening endpoint, usually on a remote node. A process on the remote node may accept a pending or subsequent connection request by calling `scif_accept()`. Depending on its flags parameter, `scif_accept()` either returns immediately if there is no connection request pending, or blocks until a connection request is received.
The `select()` and `poll()` functions can be used from Linux* user mode to determine when a connection request is received on any of a set of listening endpoints. The `scif_poll()` function may be used from Linux* user and kernel modes, and from Microsoft Windows* user mode for this purpose.
When the connection request is accepted, a new connected endpoint is created, bound to the same port as the listening endpoint. The requesting endpoint and the newly created connected endpoint form the connection. The listening endpoint is unchanged by this process. Multiple connections may be established to a port bound to a listening endpoint.
The following figure illustrates the connection process. In this example, a process on node i calls `scif_open()`, which returns endpoint descriptor `epd_i`. It then calls `scif_bind()` to bind the new endpoint to local port `pm`, and then calls `scif_connect()` requesting a connection to port `pn` on node j. Meanwhile, a process on node j calls `scif_open()`, getting back endpoint descriptor `epd_j`, binds the new endpoint associated with `epd_j` to local port `pn`, and calls `scif_listen()` to mark the endpoint as a listening endpoint. Finally, it calls `scif_accept()` to accept a connection request. In servicing the connection request, `scif_accept()` creates a new endpoint, with endpoint descriptor `nepd`, which is the endpoint to which `epd_i` is connected. The endpoints associated with `epd_j` and `nepd` are now connected endpoints and may proceed to communicate with each other. The listening endpoint associated with `epd_j` remains a listening endpoint and may accept an arbitrary number of connection requests.
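The scenario in the figure can be sketched in code as follows. This is an illustrative fragment only: error handling is omitted, `pm`, `pn`, and `j` stand for the port and node numbers from the figure, and running it requires the scif.h declarations and a SCIF-capable platform:

```c
/* Process on node j: create a listening endpoint on port pn */
scif_epd_t epd_j = scif_open();
scif_bind(epd_j, pn);
scif_listen(epd_j, 5);                 /* up to 5 pending connection requests */
scif_epd_t nepd;
struct scif_portID peer;
scif_accept(epd_j, &peer, &nepd, SCIF_ACCEPT_SYNC);  /* block until a request arrives */
/* epd_j remains a listening endpoint; nepd is the new connected endpoint */

/* Process on node i: connect to port pn on node j */
scif_epd_t epd_i = scif_open();
scif_bind(epd_i, pm);
struct scif_portID dst = { .node = j, .port = pn };
scif_connect(epd_i, &dst);
```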
Normally the endpoints of a connection are on different nodes in the SCIF network. We therefore often refer to these endpoints as *local* and *remote* with respect to one end of the connection. In fact, SCIF fully supports connections in which both endpoints are on the same node, and we refer to this as a *loopback* connection.
A process may create an arbitrary number of connections, limited by system resources (memory). The following figure illustrates a SCIF network of three nodes. Two connections have been established between nodes 0 and 1, another between nodes 0 and 2. On node 2, a loopback connection was established.
The endpoint pair comprising the connection are peer endpoints or just peers. Similarly, the processes which own the peer endpoints are peer processes, the node on which a peer endpoint resides is a peer node, and so on.
### 3.4 Messaging Layer
After a connection is established, messages may be exchanged between the processes owning the connected endpoints. A message sent into one connected endpoint is received at the other connected endpoint. Such communication is bi-directional. The following functions comprise the messaging layer:
```c
int scif_send(scif_epd_t epd, void* msg, int len, int flags);
int scif_recv(scif_epd_t epd, void* msg, int len, int flags);
```
Messages are always sent through a local endpoint for delivery at a remote connected endpoint. For each connected pair of endpoints, there is a dedicated pair of message queues – one queue for each direction of communication. In this way, the forward progress of any connection is not gated by progress on another connection, as might be the case if multiple connections shared a queue pair.
A message may be up to $2^{31} - 1$ bytes long. In spite of this, the messaging layer is intended for sending short command-type messages, not for bulk data transfers. The messaging layer queues are relatively short; a long message is transmitted as multiple shorter queue-length transfers, with an interrupt exchange for each such transfer. Therefore it is strongly recommended that SCIF RMA functionality be used for sending larger units of data, for instance longer than 4KiB.
Messages on any connection are received in the order in which they are sent. There are no guarantees regarding the order in which messages sent on different connections are received. Moreover, the PCIe bus is assumed to be a reliable transport. Therefore, SCIF makes no attempt to detect or correct lost or corrupted messages.
The content of a message is not interpreted by the messaging layer, and has meaning only to the sending and receiving processes. Therefore it is the responsibility of the application to impose any required structure or protocol.
The messaging layer supports both blocking and non-blocking behaviors. A blocking call to the scif_send() function will block (not return) until the entire message is sent. A non-blocking call to the scif_send() function only sends as much data as there is room in the send queue at the time of the call. In both cases, the number of bytes sent is returned as the result of the call. The select() and poll() functions can be used from Linux* user mode to determine when it is possible to send more data on any of a set of connected endpoints. The scif_poll() function may be used from Microsoft Windows* and Linux* kernel mode, and from Microsoft Windows* user mode for this purpose.
Similarly, a blocking call to the scif_recv() function will block until all len bytes (where len is a parameter specifying the number of bytes to receive) have been received and copied to the application’s buffer. A non-blocking call to the scif_recv() function only returns data that is currently in the receive queue (up to some application-specified maximum number of bytes). In both cases, the number of bytes received is returned as the result of the call. The select() and poll() functions can be used from Linux* user mode to determine when more data is available on any of a set of connected endpoints. The scif_poll() function may be used from Microsoft Windows* and Linux* kernel modes, and from Microsoft Windows* user mode for this purpose.
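For example, a connected pair might exchange a short command message as follows. This is an illustrative sketch (not runnable without the SCIF library); SCIF_SEND_BLOCK and SCIF_RECV_BLOCK request the blocking behavior described above:

```c
/* Sender: block until the whole message has been queued */
char cmd[8] = "PING";
scif_send(epd, cmd, sizeof(cmd), SCIF_SEND_BLOCK);

/* Receiver: block until all sizeof(buf) bytes have arrived */
char buf[8];
scif_recv(peer_epd, buf, sizeof(buf), SCIF_RECV_BLOCK);

/* Non-blocking variant: pass 0 for flags and loop on the byte count */
int sent = 0;
while (sent < (int)sizeof(cmd)) {
    int n = scif_send(epd, cmd + sent, sizeof(cmd) - sent, 0);
    if (n < 0) { /* handle error */ break; }
    sent += n;   /* n may be 0 if the send queue is currently full */
}
```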
3.5 Memory Registration
Memory registration is the mechanism by which a process exposes ranges of its address space for controlled access by another process, typically a process on a remote node. Memory must be registered before it can be mapped to the address space of another process or be the source or target of an RMA transfer.
Each connected endpoint has a registered address space, a kind of address space managed by the SCIF driver, ranges of which can represent local physical memory. The registered address space is sparse in that only specific ranges which have been registered, called registered windows or just windows, can be accessed. It is an application error to attempt to access any range of a registered address space which is not within such a window.
We use the term offset to mean a location in a registered address space in analogy to the mapping from virtual address space to a shared memory object established by the Posix mmap() function. In the Posix mmap() function, an offset parameter specifies the offset, from the beginning of the memory object, of the range onto which the virtual address range is mapped. Essentially an offset is an address in some registered address space; therefore, we sometimes talk about a registered address.
The following functions support registration:
```c
off_t scif_register(scif_epd_t epd, void* addr, size_t len,
off_t offset, int prot_flags, int map_flags);
int scif_unregister(scif_epd_t epd, off_t offset, size_t len);
```
The scif_unregister() function and window deletion are discussed in a later section.
[In this and subsequent sections, we talk about ranges in virtual and registered address spaces. The reader should understand that these are specified by the (addr,len) and (offset,len) parameter pairs respectively. Registration granularity is 4KiB (a small page) so the addr, offset and len parameters to scif_register() must be multiples of 4KiB.]
The scif_register() function establishes a mapping between a range in the registered address space of some connected endpoint of the calling process and a set of physical pages. The physical pages are indirectly identified by specifying a range in the user virtual address space of the calling process. The mapping, then, is from the specified range in some registered address space to the physical pages which back the specified virtual address range. This mapping between registered address space and physical memory remains even if the specified virtual address range is unmapped or remapped to some different physical pages or object.
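Since the addr, offset, and len arguments must be 4KiB multiples, a buffer intended for registration is typically allocated page-aligned. The following is a hedged sketch (not runnable without the SCIF library; the prot and map flag values are those described in this section):

```c
size_t len = 2 * 0x1000;            /* two 4KiB pages */
void* buf;
posix_memalign(&buf, 0x1000, len);  /* page-aligned allocation */

/* Let SCIF choose the registered address (SCIF_MAP_FIXED not set);
 * the chosen offset is returned on success, -1 on failure. */
off_t offset = scif_register(epd, buf, len,
                             0,     /* offset, used only with SCIF_MAP_FIXED */
                             SCIF_PROT_READ | SCIF_PROT_WRITE,
                             0);

/* ... RMA operations to or from the window ... */

scif_unregister(epd, offset, len);  /* delete the window when done */
```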
In the following figure, the left diagram illustrates a registered window, W, at the time of its creation by scif_register(). The pages of W, a range in the registered address space of some local endpoint, represent some set, P1, of physical pages in local memory. P1 is the set of physical pages which backed a specified virtual address range, VA, at the time that scif_register() was executed. Even if the virtual address range, VA, is subsequently mapped to different physical pages P2, W continues to represent P1. Of course, the process now has no way to access the registered memory in order to read or write RMA data unless those physical pages back some other virtual address range.
For simplicity, we show P1 and P2 as contiguous ranges in physical memory, whereas they may be discontiguous.
Figure 4: Registration mapping to memory objects
Though a window is a mapping in the mathematical sense, we generally say that the registered address space range of a window represents the corresponding physical pages. This is intended to avoid confusion with mappings created by scif_mmap() or mmap() described later.
The physical pages which a window represents are pinned (locked) in memory so that they can be accessed from a remote SCIF node. Therefore it is an error to specify a virtual address range to scif_register() for which the backing pages cannot be pinned for whatever reason. The pages which a window represents remain pinned as long as the window exists. As will be explained below, a physical page may be represented by more than one window. Such a page will remain locked until all such windows are unregistered.
The scif_unregister() function is used to delete one or more windows and is discussed in more detail later.
Figure 5: Registered window configurations
This figure illustrates several registered window configurations. It shows the physical space of a node which has two connected endpoints, possibly owned by different processes. Each endpoint has an independent registered address space associated with it (for simplicity, we do not illustrate the virtual memory ranges which the physical ranges back).
- Windows W1a and W2a represent the same physical address range but have different offsets in their respective registered address spaces.
- W1b and W2b have the same offset (the light gray dashed lines help show this) but represent different physical address ranges.
- W1c and W1d are disjoint windows in the same registered address space, but represent overlapping physical address ranges.
The extra degree of freedom offered by registered address spaces may be useful for solving various communication and programming problems.
We refer to a window in the registered address space of the peer of a local endpoint as a remote window. Every window in the registered address space of a local endpoint is a remote window to the peer endpoint. Several SCIF functions (scif_readfrom(), scif_writeto(), scif_vreadfrom(), scif_vwriteto(), scif_mmap(), and scif_get_pages()) access remote windows or portions thereof, which are specified as an offset and length in the registered address space of the peer of a specified local endpoint.
The management of a registered address space can be performed by SCIF, by the application, or both, and is controlled by the map_flags parameter to scif_register(). When SCIF_MAP_FIXED is set in map_flags, SCIF attempts to allocate the window at the registered address specified in the offset parameter. Otherwise, SCIF selects a registered address at which to allocate the window.
In Figure 6 the application has created three windows at offsets 0x1000, 0x3000, and 0x5000 respectively (by passing the SCIF_MAP_FIXED flag), each 0x1000 bytes long. If these offsets are hard coded in the peer application, then it knows the offsets to use to access these windows, for example in performing an RMA.
As an alternative, an application can use the virtual address as the offset when registering a window. In this way the application need not “remember” the offset of the window corresponding to some virtual address. This is illustrated in Figure 7.
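The two conventions can be contrasted in a short sketch (illustrative only; not runnable without the SCIF library; assumes addr is a page-aligned buffer):

```c
/* Hard-coded offset: the peer must know 0x1000 by convention */
off_t off1 = scif_register(epd, addr, 0x1000, 0x1000,
                           SCIF_PROT_READ | SCIF_PROT_WRITE, SCIF_MAP_FIXED);

/* Virtual address as offset: the peer can be sent the virtual address
 * and use it directly as the registered offset */
off_t off2 = scif_register(epd, addr, 0x1000, (off_t)addr,
                           SCIF_PROT_READ | SCIF_PROT_WRITE, SCIF_MAP_FIXED);
```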
The scif_register() function also takes a prot_flags parameter which controls access to the window being registered. The SCIF_PROT_READ flag marks a window as allowing read operations; specifically the window can be the source of an RMA operation. Similarly the SCIF_PROT_WRITE flag marks a window as allowing write operations; specifically the window can be the destination of an RMA operation.
The scif_mmap() function (described more fully later) also takes a prot_flags parameter. The SCIF_PROT_READ flag indicates that the mapped region is to be readable; it is an error if the referenced window was not also registered with the SCIF_PROT_READ flag. Similarly the SCIF_PROT_WRITE flag indicates that the mapped region is to be writable; it is an error if the referenced window was not also registered with the SCIF_PROT_WRITE flag.
These flags only control access to windows; they do not control access to the physical pages which a window represents where those pages back virtual memory. Thus, referring back to Figure 7, the process which registered window W has access to the pages P1 through the virtual addresses VA regardless of the protections on window W. Similarly, once a (portion of a) window is mapped using scif_mmap(), the application may read or write to the mapped physical pages regardless of the prot_flags specified when scif_mmap() was called. Referring ahead to the scif_mmap() address space mapping figure, the process which mapped a range, RR, of remote window RW into a range of its address space at VA, can both read and write to pages P through VA, regardless of the value of prot_flags.
3.5.1 Duplication of Endpoint Descriptors Across a fork()
On Linux*, an endpoint is implemented as a file description, and an endpoint descriptor as a file descriptor. If an application opens an endpoint and then fork()'s, the parent and child will each have an endpoint descriptor (file descriptor) which refers to the same endpoint. The parent and child then share the registered address space of this endpoint. Consider the following scenario:
<table>
<thead>
<tr>
<th>Parent:</th>
<th>Child:</th>
</tr>
</thead>
<tbody>
<tr>
<td>scif_epd_t epd = scif_open();</td>
<td></td>
</tr>
<tr>
<td>scif_connect(epd, pn);</td>
<td></td>
</tr>
<tr>
<td>fork();</td>
<td></td>
</tr>
<tr>
<td>off_t po = scif_register(epd, addr1, 0x1000, 0x10000, 3, 0);</td>
<td>off_t po = scif_register(epd, addr2, 0x1000, 0x20000, 3, 0);</td>
</tr>
<tr>
<td>scif_readfrom(epd, 0x20000, len1, roff1, flags);</td>
<td>scif_readfrom(epd, 0x10000, len2, roff2, flags);</td>
</tr>
</tbody>
</table>
After the fork(), both the parent and child have an endpoint descriptor, epd, which refers to the endpoint created by the parent. The parent now registers a window at offset 0x10000 that represents the physical page backing the page at its addr1. Similarly the child registers a window at offset 0x20000 that represents the physical page backing the page at its addr2. Because both windows are in the same registered address space, the child can access the parent’s memory and vice versa. That is, any memory registered to this endpoint is shared by the two processes. For example, each can initiate an RMA which transfers data into the shared pages. This behavior, while perhaps surprising, is consistent with fork() semantics regarding duplication of file descriptors.
### 3.5.1.1 Registered Memory Across a fork()
Linux*’s copy-on-write semantics mean that, following a fork(), both the parent and child process will have page table entries pointing to the same physical pages. Because those pages are write protected, when one of the processes, either parent or child, writes to a page, the hardware will trap the write event. The kernel will respond by allocating a new page and copying the contents from the original page, breaking the linkage to the physical page for that process.
Consider the case that a process registers a window and then fork()'s. Suppose the parent now writes directly to a virtual address corresponding to a page of the window; it will be allocated a new physical page. However, subsequent RMAs to or from the window offset that corresponds to that virtual address will access the original physical page at the time of registration, not the newly allocated physical page; the physical pages that the window represents are unchanged. Thus, data which the parent process writes to the newly allocated page will not be sent when a scif_writeto() RMA is performed, and data received when a scif_readfrom() RMA is performed will not be read by the process.
To prevent this from happening, it is recommended that the parent mark the virtual address range of a registered window as MADV_DONTFORK, if the process will fork() after performing the registration. Doing this prevents the virtual address range from being seen by the child, so it is only seen by the parent, and copy-on-write semantics do not apply to that range.
A similar problem can occur if a process registers a window after a fork() in which the virtual address range was allocated before the fork(), since that virtual address range might now be subject to copy-on-write semantics. There are several possible solutions to this problem:
- Mark the virtual address range to be registered as MADV_DONTFORK before the fork(). The virtual address range will now only be available for registration by the parent.
- (After the fork...) In one or the other process, write to all the pages of the range to force new pages to be allocated.
### 3.5.2 Kernel Mode Registration-Related API
Several additional functions are available in kernel mode to solve specific programming requirements:
```c
int scif_pin_pages(void* addr, size_t len, int prot_flags, int map_flags, scif_pinned_pages_t* pages);
int scif_unpin_pages(scif_pinned_pages_t pinned_pages);
off_t scif_register_pinned_pages(scif_epd_t epd, scif_pinned_pages_t pinned_pages, off_t offset, int map_flags);
```
`scif_pin_pages()` pins the set of physical pages which back a range of virtual address space, and returns a handle which may subsequently be used in calls to `scif_register_pinned_pages()` to create windows which represent the set of pinned pages. The windows so created are otherwise identical to windows created by `scif_register()`. The handle is freed by `scif_unpin_pages()`, but the physical pages themselves remain pinned as long as there is a window which represents the pages. Unlike `scif_register()` which interprets the address passed it as a user space address, `scif_pin_pages()` interprets the address passed it as a kernel space address if the `map_flags` parameter has the SCIF_MAP_KERNEL flag.
Figure 8 illustrates this process. In the leftmost panel, `scif_pin_pages()` pins the set of physical pages, \( P_1 \), which back some range, \( VA \), of virtual address space. In the center panel, a window, \( W_1 \), is registered, using `scif_register_pinned_pages()`, at some offset in some Registered Address Space 1, and represents the physical pages \( P_1 \). In the rightmost panel, a second window, \( W_2 \), is registered, again using `scif_register_pinned_pages()`, at some offset in some Registered Address Space 2, and also represents the physical pages \( P_1 \). At the same time, the mapping of \( VA \) was changed to the set of physical pages, \( P_2 \), but windows \( W_1 \) and \( W_2 \) continue to represent \( P_1 \).
3.6 Mapped Remote Memory
The SCIF mapping functions enable mapping some physical memory on a remote node into the virtual address space of a process. Once established, a read or write access to such a mapped range of virtual address space will read or write to the corresponding mapped physical memory location. The mapping functions are:
```c
void* scif_mmap(void* addr, size_t len, int prot_flags, int map_flags, scif_epd_t epd, off_t offset);
int scif_munmap (void* addr, size_t len);
```
**Note:** These functions are only available in the user mode API.
The mapping established by a `scif_mmap()` operation is illustrated in the following figure:
The process performing the `scif_mmap()` operation specifies a range, VA, within its local virtual address space, and a corresponding range, RR, of the same length within a peer remote registered address space. The composition of the mapping from VA to RR and the mapping from RR to P, the set of physical pages represented by RR, defines a mapping (black lines) from VA to P. `scif_mmap()` modifies the page table of the process according to this mapping. Hence, reads from and writes to VA will actually read from or write to corresponding locations in the physical pages P.
Figure 10: Virtual address space mapping that intersects multiple windows
The remote registered address range may not intersect any portion of the remote registered address space which is not within a window, but may intersect multiple remote windows; in that case, those windows must be contiguous in their registered address space. In Figure 10, RR intersects windows RW1 and RW2, which represent physical memory ranges P1 and P2 respectively. Thus an access to an address in VA will be vectored to a page in P1 or P2, depending on whether the address in VA maps to RW1 or RW2.
While a remote mapping exists, the remote pages remain pinned and available for access, even if the peer endpoint referenced when the mapping was created is closed, either explicitly or because the peer process is killed. `scif_munmap()` unmaps some range of pages in the caller's address space. Subsequent access to such virtual pages results in a segmentation fault. `scif_munmap()` does not take an endpoint parameter; if a page in the specified range was not mapped using `scif_mmap()`, the effect will be as if `munmap()` was called on that page.
3.6.1 Kernel Mode Mapping-Related API
The kernel mode API provides a similar capability to `scif_mmap()` through the `scif_get_pages()` and `scif_put_pages()` functions. `scif_get_pages()` takes a range in some remote window and returns a structure listing the physical addresses of pages which are represented by the registered address space range. Those physical pages will continue to be available until the structure obtained from `scif_get_pages()` is returned in a call to `scif_put_pages()`.
3.7 Remote Memory Access
SCIF RMA operations are intended to support the one-sided communication model, which has the advantage that a read/write operation can be performed by one side of a connection when it knows both the local and remote locations of the data to be transferred. One-sided calls can often be useful for algorithms in which synchronization would be inconvenient (for instance distributed matrix multiplication), or where it is desirable for tasks to be able to balance their load while other processors are operating on data.
The following functions comprise the RMA group:
```
int scif_readfrom(scif_epd_t epd, off_t loffset, size_t len, off_t roffset, int rma_flags);
int scif_writeto(scif_epd_t epd, off_t loffset, size_t len, off_t roffset, int rma_flags);
int scif_vreadfrom(scif_epd_t epd, void* addr, size_t len, off_t offset, int rma_flags);
int scif_vwriteto(scif_epd_t epd, void* addr, size_t len, off_t offset, int rma_flags);
```
The `scif_readfrom()` and `scif_writeto()` functions perform DMA or CPU based read and write operations, respectively, between physical memory of the local and remote nodes of the specified endpoint and its peer. The physical memory is that which is represented by specified ranges in the local and remote registered address spaces of a local endpoint and its peer remote endpoint. Specifying these registered address ranges establishes a correspondence between local and remote physical pages for the duration of the RMA operation. The `rma_flags` parameter controls whether the transfer is DMA or CPU based.
Figure 11 illustrates such a mapping. The process performing the operation specifies a range, LR, within the registered address space of one of its connected endpoints, and a corresponding range, RR, of the same length within the peer endpoint's registered address space. Each specified range must be entirely within a previously registered window or contiguous windows of the corresponding registered address spaces. The solid green lines represent the correspondence between the specified ranges in the local and remote registered address spaces; the dashed green lines represent the projections into their respective physical address spaces. This defines an overall effective correspondence (black lines) between the physical address space of the local node and that of the remote node.
Hence, a DMA operation will transfer data between LP and RP (again, LP and RP are typically not contiguous).
**Figure 11: scif_readfrom()/scif_writeto() address space mapping**
scif_vreadfrom() and scif_vwriteto() are variants of scif_readfrom() and scif_writeto(). Rather than taking a local registered address space range parameter, these functions take a local user address space range, V. Transfers are then between the local physical pages, LP, which back V, and the remote physical pages, RP which are represented by RR. The resulting address space mapping is illustrated in Figure 12.
**Figure 12: scif_vreadfrom()/scif_vwriteto() address space mapping**
If it is known that a buffer will be used multiple times as the source or destination of an RMA, then it is typically beneficial to `scif_register()` the buffer and use `scif_readfrom()` and `scif_writeto()` to perform the transfers. However, if it’s known that the buffer will only be used once, or if it is unknown if the buffer will be used multiple times (this might be the case in a library on top of SCIF), then using `scif_vreadfrom()` and `scif_vwriteto()` may provide a performance advantage as compared to registering some window in the local registered address space, performing a single RMA operation to or from that window, and then unregistering the window.
As mentioned, in some cases it is not known whether a local buffer will be used in subsequent transfers. For this case, the `scif_vreadfrom()` and `scif_vwriteto()` functions have a caching option. When the `rma_flags` parameter includes the `SCIF_RMA_USECACHE` flag, physical pages that were pinned in order to perform the RMA may remain pinned after the transfer completes. This may reduce overhead if some or all of the same virtual address range is referenced in a subsequent execution of `scif_vreadfrom()` or `scif_vwriteto()` since pinning pages has relatively high overhead. A cached page is evicted from the cache in the event that it no longer backs the user space page that it backed when first cached.
### 3.7.1.1 DMA Ordering
The Intel® Xeon Phi™ coprocessor DMA engine does not maintain write ordering. That is, some written data may become visible before written data at a lower address. This can be an issue if the process to which data is being transferred polls the last byte of a buffer for some trigger value as an indication that the transfer has completed.
When the `rma_flags` parameter includes the `SCIF_RMA_ORDER` flag, the last cacheline or partial cacheline of the transfer is written after all other data in the transfer is written. There is a slight performance penalty for invoking this feature.
Similarly, the order in which any two RMA transfers complete is indeterminate. SCIF synchronization functions, described in the next section, can be used to synchronize to the completion of RMA transfers.
### 3.8 RMA Synchronization
SCIF supports the ability of a process to synchronize with the completion of RMA operations previously initiated against one of its endpoints, or against a peer of one of its endpoints. The following functions comprise the synchronization group:
```c
int scif_fence_mark(scif_epd_t epd, int flags, int* mark);
int scif_fence_wait(scif_epd_t epd, int mark);
int scif_fence_signal(scif_epd_t epd, off_t loff, uint64_t lval, off_t roff, uint64_t rval, int flags);
```
There are two synchronization methods available. The first method uses both the `scif_fence_mark()` and `scif_fence_wait()` functions. The `scif_fence_mark()` function marks the set of RMAs previously initiated against a
specified endpoint or against its peer, and which have not yet completed.
`scif_fence_mark()` returns a handle to the application which the application can later pass to `scif_fence_wait()` in order to await completion of all RMAs in the marked set. If the `flags` parameter has the `SCIF_FENCE_INIT_SELF` flag, then `scif_fence_mark()` marks RMAs initiated through the local endpoint. If the `flags` parameter has the `SCIF_FENCE_INIT_PEER` flag, then `scif_fence_mark()` marks RMAs initiated through the peer endpoint. `flags` can have only one of these flag values.
**Figure 13: scif_fence_mark()/scif_fence_wait()**
This is illustrated in Figure 13 (the triangles are meant to indicate RMA progress over time). RMA1 and RMA2 are initiated at times t1 and t2, respectively, against some endpoint descriptor `epd`. At time t3, `scif_fence_mark()` is called, marking RMA1 and RMA2 as members of some set, and returning a handle `m` to that set. At time t4, RMA3 is initiated. The application then calls `scif_fence_wait()` at time t5 to await the completion of RMAs in the set indicated by handle `m`. `scif_fence_wait()` then returns at time t6 when RMA1 completes.
The second synchronization method uses the `scif_fence_signal()` function. This function returns after conceptually marking the set of RMAs previously initiated against a specified endpoint or against its peer endpoint, and which have not yet completed. Like `scif_fence_mark()`, if the `flags` parameter has the `SCIF_FENCE_INIT_SELF` flag, then `scif_fence_signal()` marks RMAs initiated through the local endpoint. If the `flags` parameter has the `SCIF_FENCE_INIT_PEER` flag, then `scif_fence_signal()` marks RMAs initiated through the peer endpoint. `flags` can have only one of these flag values.
When all the RMAs in the marked set have completed, an application specified value, `lval`, is written to a specified offset, `loff`, in the registered address space of a local endpoint and/or another application specified value, `rval`, is written to another specified offset, `roff`, in the registered address space of the peer of the local endpoint,
as specified by the SCIF_SIGNAL_LOCAL and SCIF_SIGNAL_REMOTE flag values. Each specified offset must be within a registered window of the corresponding registered address space.
The local process and/or the peer process may poll the virtual address which maps to the specified registered address space offset waiting for the specified value(s) to be written.
`scif_fence_signal()` is illustrated in Figure 14, in which the same sequence of RMAs is initiated. The application calls `scif_fence_signal()` at time t3, passing a local offset, loff, and a value, v, to be written to loff. `scif_fence_signal()` then returns after marking RMA1 and RMA2, which were previously initiated and have not completed. At time t6, when all RMAs in the marked set have completed, the value v is written to the registered address space at offset loff. (For simplicity, we do not illustrate writing values to both the local and remote registered address spaces.)
**Figure 14: scif_fence_signal()**
Marking a set of RMAs does not impose a barrier. That is, an RMA that is submitted after a set of RMAs is marked can begin transferring, and even complete its transfer, before the marked set completes. This is the case for both synchronization methods. For example, in the figure above RMA3 is shown to access some of the same registered address range as RMA1 while RMA1 is in progress. Thus, if RMA1 is a transfer into some memory and RMA3 is a transfer out of some of the same memory, RMA3 would likely not transfer out the expected data in this case. It is the application's responsibility to order RMAs as needed by using SCIF synchronization functionality to await the completion of previous RMAs before subsequent RMAs are submitted. In this case, the application should wait until after RMA1 and RMA2 have completed, by polling for v, before initiating RMA3.
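The mark-and-wait semantics can be sketched with a toy model (pure Python for illustration, not the SCIF API): `scif_fence_mark()` behaves like taking a snapshot of the currently outstanding RMAs, and `scif_fence_wait()` completes only when everything in that snapshot has completed, regardless of RMAs submitted after the mark:

```python
# Toy model of SCIF fence semantics (not the real API). mark() snapshots
# the set of outstanding RMAs; wait only concerns that snapshot, so RMAs
# submitted after the mark are not part of the marked set.
class FenceModel:
    def __init__(self):
        self.outstanding = set()   # RMAs submitted but not yet completed
        self.marks = {}            # handle -> snapshot of outstanding RMAs
        self.next_handle = 0

    def submit(self, rma_id):
        self.outstanding.add(rma_id)

    def complete(self, rma_id):
        self.outstanding.discard(rma_id)
        for snapshot in self.marks.values():
            snapshot.discard(rma_id)

    def mark(self):
        handle = self.next_handle
        self.next_handle += 1
        self.marks[handle] = set(self.outstanding)  # a snapshot, not an alias
        return handle

    def wait_would_return(self, handle):
        # True once every RMA in the marked set has completed
        return not self.marks[handle]

f = FenceModel()
f.submit("RMA1"); f.submit("RMA2")
m = f.mark()                      # marks RMA1 and RMA2 only
f.submit("RMA3")                  # submitted after the mark: not in the set
f.complete("RMA3")                # no barrier: RMA3 may finish first
done_after_rma3 = f.wait_would_return(m)   # False: RMA1/RMA2 still pending
f.complete("RMA1"); f.complete("RMA2")
done_after_all = f.wait_would_return(m)    # True
```

Note that completing RMA3 does not satisfy the fence: the wait is bound to the snapshot taken at mark time.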
In the case that an application must wait for a DMA transfer to complete before it can do any other work, it can use either of the two fence mechanisms described above. Alternatively, if the rma_flags parameter of any RMA API includes the SCIF_RMA_SYNC flag, then control will not return to the application until the RMA has completed.
### 3.9 Registered Window Deletion
The `scif_unregister()` function is used to delete one or more registered windows, as specified by a local endpoint and a range within that endpoint’s registered address space. The range must completely encompass zero or more windows. Deleting a portion of a window is not supported.
After `scif_unregister()` is called to delete a window, the registered address space range of the window is no longer available for use in calls to `scif_mmap()`, `scif_get_pages()`, `scif_readfrom()`, `scif_writeto()`, `scif_vreadfrom()`, `scif_vwriteto()` and `scif_fence_signal()`. However, the window continues to exist until all references to the window are removed. A window is referenced if there is a mapping to it created by `scif_mmap()`, or if `scif_get_pages()` was called against the window (and the pages have not been returned via `scif_put_pages()`). A window is also referenced while an RMA, in which some range of the window is a source or destination, is in progress. Finally a window is referenced while some offset in that window was specified to `scif_fence_signal()`, and the RMAs marked by that call to `scif_fence_signal()` have not completed.
Until the window is deleted, no portion of its registered address space range can be used to create a new window, and all the physical pages represented by that window remain locked. A physical page can be represented by multiple windows; for example, see cases 1 and 3. Such a page remains locked until all the windows which represent it are deleted.
### 3.9.1 Connection Termination
We distinguish between normal connection termination that is triggered by one of the processes at each end of a connection, and abnormal termination triggered when a node becomes "lost".
### 3.9.2 Normal Connection Termination
A connection is terminated when scif_close() is called on one of its endpoints. The following steps describe the process of closing an endpoint, and apply to both the local endpoint and its peer.
- Further operations through the closing endpoint are not allowed, with the exception described below.
- All previously initiated RMAs to or from windows of the endpoint are allowed to complete.
- Blocked calls to scif_send() or scif_recv() through the closing endpoint are unblocked and return the number of bytes sent or received, or return the ECONNRESET error if no data was sent or received.
- Each window of the closing endpoint is unregistered as described for scif_unregister(). In particular, the physical pages represented by each window remain locked until all references to the window are removed. Thus mappings to its windows previously established by scif_mmap() remain until removed by scif_mmap(), scif_munmap(), or standard functions such as mmap() and munmap(), or until the process holding the mapping is killed. In kernel mode, it is an error to call scif_close() on an endpoint for which there are outstanding physical page addresses obtained from scif_get_pages().
If an endpoint was closed because its peer was closed, scif_recv() can still be called on the local endpoint while its receive buffer is non-empty; it will return data until the receive queue is empty, at which time it returns the ECONNRESET error. This allows an application to send a message and then close the local endpoint without having to wait for the message to be received by the remote endpoint.
In all other cases, a SCIF function call returns the ECONNRESET error if it references an endpoint that is no longer connected because the peer endpoint was closed.
### 3.9.3 Abnormal Connection Termination
When a node in the SCIF network is lost and must be reset for some reason, the SCIF driver on each other node will kill() any user mode process which has scif_mmap()’d pages from the lost node. This is done to prevent corruption of the memory of the lost node after it is reset.
Access to any remaining endpoint which was connected to an endpoint on the lost node now returns the ECONNRESET error. The application may scif_close() such an endpoint as part of cleaning up from the loss of the node.
Each kernel mode module that uses SCIF must register a callback routine with the SCIF driver:

```c
void scif_event_register(scif_callback_t handler);
```

The registered handler is the routine called in the event that a node is added or is lost and must be reset. Upon being called with the SCIF_NODE_REMOVED event, and before returning, the event handler must return, using scif_put_pages(), all structures obtained using scif_get_pages() against an endpoint connected to the lost node. It is recommended and expected that the handler will also scif_close() all endpoints connected to the lost node.
### 3.10 Process Termination
When a process is terminated, either normally or abnormally, the following steps are performed:
- All remote mappings previously created by scif_mmap() are removed as if scif_munmap() were called on the mapping.
- Physical page addresses obtained from scif_get_pages() are effectively returned as if scif_put_pages() were called.
- Each endpoint owned by the process is closed as if scif_close() were called on the endpoint.
### 3.11 User Mode Utility Functions
Several utility functions are defined in the SCIF user mode API:
```c
int scif_get_nodeIDs(uint16_t* nodes, int len, uint16_t* self);
static int scif_get_fd(scif_epd_t epd);
int scif_poll(struct scif_pollepd* epds, unsigned int nepds, long timeout);
```
The scif_get_nodeIDs() function may be called to obtain the IDs of the nodes currently in the SCIF network. This function also returns the ID of the node on which the calling process is executing.
scif_get_fd() returns the file descriptor which backs a specified endpoint descriptor, epd. The returned file descriptor can be used when calling poll() or select(), and should only be used in this way. This function is only available in the Linux* user mode API.
scif_poll() waits for one of a set of endpoints to become ready to perform an I/O operation; it is syntactically and semantically very similar to poll(). The SCIF functions on which scif_poll() waits are scif_accept(), scif_send(), and scif_recv(). Consult the SCIF header file, scif.h, and the SCIF man pages for details on scif_poll() usage.
### 3.12 Kernel Mode Utility Functions
The `scif_get_nodeIDs()` and `scif_poll()` functions are available in kernel mode. In addition, the `scif_pce_dev()` function:
```c
int scif_pce_dev(uint16_t node, struct pci_dev** pdev);
```
returns the `pci_dev` structure pointer associated with the specified SCIF node. This structure can then be used in standard Linux® kernel functions to refer to an Intel® Xeon Phi™ coprocessor. For example, the `pci_dev` structure can be used to obtain system bus addresses from a virtual address or page pointer in calls to Linux® PCIe mapping APIs such as `pci_map_single()` or `pci_map_page()`.
## 4 Programming Considerations
### 4.1 Unaligned DMAs
The Intel® Xeon Phi™ coprocessor DMA engine supports cacheline-aligned transfers; that is, starting and ending addresses of DMA transfers must be a multiple of 64. The SCIF RMA APIs (scif_readfrom(), scif_writeto(), scif_vreadfrom(), scif_vwriteto()) may be called with any alignment: the source and destination may have any alignment, these alignments may differ, and the length of a transfer need not be a multiple of 64.
When a request is made to use DMA for a transfer that is not cacheline aligned, SCIF uses a combination of DMA and programmed I/O to implement the transfer. Such transfers will have lower performance than cacheline-aligned transfers. Therefore, optimal DMA performance will likely be realized when both source and destination base addresses are cacheline aligned. Lower performance will likely be realized if the source and destination base addresses are not cacheline aligned but are separated by some multiple of 64. The lowest performance is likely when the source and destination base addresses are not separated by a multiple of 64.
A suggested workaround is to pad data allocations to ensure cacheline alignment of data structures that are to be DMA’d.
When the source and destination base addresses are cacheline aligned, DMA performance will be higher when the source and destination base addresses’ page offsets are the same than when the page offsets are different. One way to ensure the page offsets are the same is to page align the data structures during allocation.
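The alignment rules above can be summarised in a small sketch; `dma_tier` and `pad_to_cacheline` are invented helper names for illustration and are not part of SCIF:

```python
# Classify the expected DMA performance tier of a (src, dst) address pair
# per the alignment rules above. Cacheline = 64 bytes, page = 4096 bytes.
CACHELINE = 64
PAGE = 4096

def dma_tier(src, dst):
    if src % CACHELINE == 0 and dst % CACHELINE == 0:
        # Best case; better still when the page offsets also match
        if src % PAGE == dst % PAGE:
            return "aligned-same-page-offset"
        return "aligned"
    if (src - dst) % CACHELINE == 0:
        # Unaligned, but separated by a multiple of 64
        return "unaligned-same-offset"
    return "unaligned"  # slowest: mixed DMA and programmed I/O

def pad_to_cacheline(nbytes):
    # Suggested workaround: round allocations up to a multiple of 64
    return (nbytes + CACHELINE - 1) // CACHELINE * CACHELINE
```

For example, buffers at offsets 0 and 8192 share a page offset of zero and fall in the best tier, while a (10, 75) pair falls in the slowest tier.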
### 4.2 Synchronization Overhead
The scif_fence_mark() and scif_fence_wait() functions should be used judiciously in order to minimize overhead. For example, an application might call scif_fence_mark() after each RMA, and then later choose on which mark(s) to wait. Such a sequence can have a negative impact on bandwidth, particularly where transfers are small.
### 4.3 Large Pages
SCIF registration and DMA performance will be better if the buffers being registered are backed by huge pages. Registration is improved because the driver requires fewer data structures to store metadata about huge pages, which are contiguous in physical memory, as compared to storing the metadata for every 4 KB page. DMA performance is improved because the software overhead for programming DMA descriptors is reduced. SCIF detects and optimizes for huge pages transparently; the user does not need to specify whether a virtual address region is backed by huge pages. Maximum performance benefits will be seen if both source and destination buffers are backed by huge pages.
Mixture Density Network Training by Computation in Parameter Space
David J. Evans
evansdj@aston.ac.uk
Technical Report NCRG/98/016
August 3, 1998
Abstract
Training Mixture Density Network (MDN) configurations within the NETLAB framework takes time due to the nature of the computation of the error function and the gradient of the error function. By optimising the computation of these functions, so that gradient information is computed in parameter space, training time is decreased by at least a factor of sixty for the example given. Decreased training time increases the spectrum of problems to which MDNs can be practically applied making the MDN framework an attractive method to the applied problem solver.
1 Introduction
Mixture Density Networks (MDNs) provide a framework for modelling conditional probability densities \( p(t|x) \) (Bishop, 1995). The distribution of the outputs, \( t \), is described by a parametric model whose parameters are determined by the output of a neural network, which takes \( x \) as its inputs. The general model is described by equation 1 below:
\[
p(t|x) = \sum_{j=1}^{M} \alpha_j(x) \phi_j(t|x)
\]
(1)
Where \( \alpha_j(x) \) represent the mixing coefficients (which depend on \( x \)) and \( \phi_j(t|x) \) are the kernel distributions of the mixture model whose parameters also depend on \( x \).
Training of mixture density networks for modelling wind vectors requires data sets of at least three thousand examples, with an MDN complexity of at least two centres and fifteen hidden units. Using the Netlab toolbox for MATLAB, training MDNs of this complexity takes at least a week, but can be longer depending on the machine configuration and loading.
The majority of training time is spent computing two functions: the error function and the gradient of the error function. The bottleneck in these functions is the MATLAB for loop, which is poorly optimised. These two functions are re-engineered to take advantage of MATLAB's optimised matrix functionality.
2 Software Techniques for Computation in Parameter Space
This section describes software techniques used to facilitate computation of the error and error gradient of a MDN by matrix operations. For a complete discussion of the implementation of MDNs see (Bishop, 1994)\(^2\). The parameter space is defined as the outputs of the Multi-Layer Perceptron (MLP), after the inputs \( x \) have been forward propagated through the network. The outputs of the MLP are vectors which contain the parameters that define the coefficients of the mixture model conditional on the inputs \( x \). For spherical Gaussian mixture models the coefficients\(^3\) are, \( \alpha_{j,n} \), the mixing coefficient for the \( j^{th} \) kernel of pattern \( n \), \( \mu_{jk,n} \) the \( k^{th} \) element of the centre of the \( j^{th} \) kernel of pattern \( n \) and \( \sigma^2_{j,n} \) the width or variance of the \( j^{th} \) kernel of pattern \( n \). The order of the coefficients in the parameter vector have been changed from that in the current Netlab implementation of the MDN to clarify the notation of the problem. The parameter vector for the \( n^{th} \) pattern is now described as:
\[
\begin{bmatrix}
\alpha_{1,n}, \alpha_{2,n}, \ldots, \alpha_{j,n}, \ldots, \alpha_{M,n}, \\
\mu_{11,n}, \mu_{12,n}, \ldots, \mu_{1c,n}, \ldots, \mu_{j1,n}, \mu_{j2,n}, \ldots, \mu_{jc,n}, \ldots, \mu_{M1,n}, \mu_{M2,n}, \ldots, \mu_{Mc,n}, \ldots, \\
\sigma^2_{1,n}, \sigma^2_{2,n}, \ldots, \sigma^2_{j,n}, \ldots, \sigma^2_{M,n}
\end{bmatrix}
\]
(2)
where \( M \) is the number of kernels (mixtures) in the model and \( c \) is the dimension of the target space (when modelling wind vectors \( c = 2 \)). For all patterns we have a matrix of parameters \( P \),
\(^1\)Available from http://www.ncrg.aston.ac.uk/netlab/
\(^2\)Available from http://www.ncrg.aston.ac.uk/Papers/
\(^3\)Throughout this document the subscript identifies the model parameter and the pattern to which the model parameter refers. For example \( \alpha_{j,n} \) is the mixing coefficient of the \( j^{th} \) kernel for the \( n^{th} \) pattern.
which is split into three sub-matrices defined by \( \mathbf{P}^\alpha \) the mixing coefficients, \( \mathbf{P}^\mu \) which describes the centres of each kernel and \( \mathbf{P}^\sigma \) the parameters defining the variance of each kernel. Each row corresponds to a training pattern (total \( N \)):
\[
\mathbf{P}^\alpha = \begin{bmatrix}
\alpha_{1,1} & \alpha_{2,1} & \cdots & \alpha_{M,1} \\
\alpha_{1,2} & \alpha_{2,2} & \cdots & \alpha_{M,2} \\
\vdots & \vdots & \ddots & \vdots \\
\alpha_{1,N} & \alpha_{2,N} & \cdots & \alpha_{M,N}
\end{bmatrix}
\]
(3)
\[
\mathbf{P}^\mu = \begin{bmatrix}
\mu_{11,1} & \mu_{12,1} & \cdots & \mu_{1c,1} & \cdots & \mu_{M1,1} & \mu_{M2,1} & \cdots & \mu_{Mc,1} \\
\mu_{11,2} & \mu_{12,2} & \cdots & \mu_{1c,2} & \cdots & \mu_{M1,2} & \mu_{M2,2} & \cdots & \mu_{Mc,2} \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\mu_{11,N} & \mu_{12,N} & \cdots & \mu_{1c,N} & \cdots & \mu_{M1,N} & \mu_{M2,N} & \cdots & \mu_{Mc,N}
\end{bmatrix}
\]
(4)
\[
\mathbf{P}^\sigma = \begin{bmatrix}
\sigma_{1,1}^2 & \sigma_{2,1}^2 & \cdots & \sigma_{M,1}^2 \\
\sigma_{1,2}^2 & \sigma_{2,2}^2 & \cdots & \sigma_{M,2}^2 \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{1,n}^2 & \sigma_{2,n}^2 & \cdots & \sigma_{M,n}^2 \\
\vdots & \vdots & \ddots & \vdots \\
\sigma_{1,N}^2 & \sigma_{2,N}^2 & \cdots & \sigma_{M,N}^2
\end{bmatrix}
\]
(5)
There is the corresponding matrix \( \mathbf{t} \) which describes the target values for each pattern:
\[
\mathbf{t} = \begin{bmatrix}
t_{1,1} & t_{2,1} & \cdots & t_{c,1} \\
t_{1,2} & t_{2,2} & \cdots & t_{c,2} \\
\vdots & \vdots & \ddots & \vdots \\
t_{1,n} & t_{2,n} & \cdots & t_{c,n} \\
\vdots & \vdots & \ddots & \vdots \\
t_{1,N} & t_{2,N} & \cdots & t_{c,N}
\end{bmatrix}
\]
(6)
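The split of the parameter matrix into \( \mathbf{P}^\alpha \), \( \mathbf{P}^\mu \) and \( \mathbf{P}^\sigma \) can be sketched in pure Python (toy values for \( M = 2 \), \( c = 2 \), \( N = 3 \); the numbers here are invented for illustration, not from the report):

```python
# Split per-pattern parameter vectors, ordered as in equation (2):
# [alpha_1..alpha_M, mu_11..mu_1c, ..., mu_M1..mu_Mc, sigma2_1..sigma2_M]
M, c, N = 2, 2, 3  # kernels, target dimension, patterns

# One parameter vector per pattern (illustrative values, same for each row)
P = [[0.4, 0.6, 1.0, 2.0, 3.0, 4.0, 0.5, 0.7] for _ in range(N)]

P_alpha = [row[:M] for row in P]           # N x M mixing coefficients
P_mu    = [row[M:M + M * c] for row in P]  # N x (M*c) kernel centres
P_sigma = [row[M + M * c:] for row in P]   # N x M kernel variances
```

Each row corresponds to one training pattern, matching equations (3) through (5).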
### 2.1 Computing the Gaussian activations and probabilities
Each kernel within the MDN framework is implemented using a \( c \) dimensional Gaussian. The computation of a Gaussian requires the squared distance between the targets and the centres of the Gaussian to be computed. For each centre for each pattern we require:
\[
d_{j,n} = ||\mathbf{t}_n - \mu_j(x_n)||^2
\]
(7)
In computing the squared distance we are interested in the parameters which correspond to the centres of the Gaussian.
To compute the distance the following operation is computed for each centre of each Gaussian for each pattern:
\[
\begin{pmatrix}
t_{1,n} \\
t_{2,n} \\
\vdots \\
t_{c,n}
\end{pmatrix} -
\begin{pmatrix}
\mu_{j1,n} \\
\mu_{j2,n} \\
\vdots \\
\mu_{jc,n}
\end{pmatrix}
\]
This operation can be completed as one matrix operation as follows:
\[
D = \begin{pmatrix}
t_{1,1} & t_{2,1} & \cdots & t_{c,1} & \cdots & t_{1,1} & t_{2,1} & \cdots & t_{c,1} \\
t_{1,2} & t_{2,2} & \cdots & t_{c,2} & \cdots & t_{1,2} & t_{2,2} & \cdots & t_{c,2} \\
\vdots & \vdots & \ddots & \vdots & \cdots & \vdots & \vdots & \ddots & \vdots \\
t_{1,N} & t_{2,N} & \cdots & t_{c,N} & \cdots & t_{1,N} & t_{2,N} & \cdots & t_{c,N}
\end{pmatrix} -
\begin{pmatrix}
\mu_{11,1} & \mu_{12,1} & \cdots & \mu_{1c,1} & \cdots & \mu_{M1,1} & \mu_{M2,1} & \cdots & \mu_{Mc,1} \\
\mu_{11,2} & \mu_{12,2} & \cdots & \mu_{1c,2} & \cdots & \mu_{M1,2} & \mu_{M2,2} & \cdots & \mu_{Mc,2} \\
\vdots & \vdots & \ddots & \vdots & \cdots & \vdots & \vdots & \ddots & \vdots \\
\mu_{11,N} & \mu_{12,N} & \cdots & \mu_{1c,N} & \cdots & \mu_{M1,N} & \mu_{M2,N} & \cdots & \mu_{Mc,N}
\end{pmatrix}
\]
(8)
That is
\[
D = \begin{pmatrix}
t_{1,1} & t_{2,1} & \cdots & t_{c,1} & \cdots & t_{1,1} & t_{2,1} & \cdots & t_{c,1} \\
t_{1,2} & t_{2,2} & \cdots & t_{c,2} & \cdots & t_{1,2} & t_{2,2} & \cdots & t_{c,2} \\
\vdots & \vdots & \ddots & \vdots & \cdots & \vdots & \vdots & \ddots & \vdots \\
t_{1,N} & t_{2,N} & \cdots & t_{c,N} & \cdots & t_{1,N} & t_{2,N} & \cdots & t_{c,N}
\end{pmatrix} - \mathbf{P}^\mu
\]
(9)
Inspection of equation (9) reveals that the target data is repeated for each centre, and so by reshaping the target matrix the distances can be computed as matrix operations within MATLAB. The following MATLAB code reshapes the \( t \) vector into the form required in equation (9).
```matlab
% Build t that suits parameters,
% that is, repeat t for each centre
t = kron(ones(1,ncentres),t);
% For example, starting from
%
% t =
%      1     2     3
%      4     5     6
%      7     8     9
%
% the result is
%
% t =
%      1     2     3     1     2     3     1     2     3
%      4     5     6     4     5     6     4     5     6
%      7     8     9     7     8     9     7     8     9
```
The following code completes the squared distance operation.
```matlab
% Do subtraction
diff = t - centres;
% Square each result
diff2 = diff.^2;
% Reshape and sum each component
diff2 = reshape(diff2',dim_target,(ntarget*ncentres));
% This is the transformation after the reshape
% (centres are zero for this illustration)
%
% diff2 =
%      1     4     9     1     4     9     1     4     9
%     16    25    36    16    25    36    16    25    36
%     49    64    81    49    64    81    49    64    81
%
% diff2 =
%      1     4     9
%      1     4     9
%      1     4     9
%     16    25    36
%     16    25    36
%     16    25    36
%     49    64    81
%     49    64    81
%     49    64    81
sum2 = sum(diff2,2);
% Calculate the sum of distance, and reshape
% so that we have a distance for each centre per target
% i.e. ntarget * ncentres
dist2 = reshape(sum2,ncentres,ntarget);
% These are the transformations after the reshape
%
% sum2 =
%     14
%     14
%     14
%     77
%     77
%     77
%    194
%    194
%    194
%
% dist2 =
%     14    14    14
%     77    77    77
%    194   194   194
```
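The kron/reshape pipeline can be cross-checked with a short pure-Python equivalent (illustrative only; here each row of `dist2` corresponds to one target, matching the values 14, 77 and 194 worked out above):

```python
# Squared distances from each target to each centre, with all-zero centres
# as in the worked example, so each distance is the squared norm of a target.
t = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # N = 3 targets, c = 3 components
ncentres = 3
centres = [[0] * (3 * ncentres) for _ in range(3)]  # all-zero centres

# Repeat each target row once per centre, like kron(ones(1,ncentres), t)
t_rep = [row * ncentres for row in t]

# Squared distance of target n to centre j (one row per target)
dist2 = [
    [sum((t_rep[n][j * 3 + k] - centres[n][j * 3 + k]) ** 2 for k in range(3))
     for j in range(ncentres)]
    for n in range(3)
]
# dist2 rows: [14, 14, 14], [77, 77, 77], [194, 194, 194]
```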
Where
\[
\text{dist2} = \begin{bmatrix}
d_{1,1} & d_{1,2} & \cdots & d_{1,N} \\
d_{2,1} & d_{2,2} & \cdots & d_{2,N} \\
\vdots & \vdots & \ddots & \vdots \\
d_{M,1} & d_{M,2} & \cdots & d_{M,N}
\end{bmatrix}
\] (11)
and the equation (7) is now in matrix form. Now that the distance has been computed it is a natural progression to compute the activations of each Gaussian kernel.
\[
\text{A} = \begin{bmatrix}
a_{1,1} & a_{2,1} & \cdots & a_{M,1} \\
a_{1,2} & a_{2,2} & \cdots & a_{M,2} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1,n} & a_{2,n} & \cdots & a_{M,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1,N} & a_{2,N} & \cdots & a_{M,N}
\end{bmatrix}
\] (12)
where
\[
a_{j,n} = \phi_j(t_n | x_n) = \frac{1}{(2\pi\sigma_j^2)^{\frac{c}{2}}} \exp \left\{ \frac{-d_{j,n}^2}{2\sigma_j^2} \right\}
\] (13)
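A NumPy sketch of equation (13), with illustrative squared distances and per-centre variances (here c, the target dimension, is assumed to be 3):

```python
import numpy as np

# Gaussian kernel activations a_{j,n} from squared distances and
# per-centre variances sigma_j^2; all numbers are illustrative.
dist2 = np.array([[14.0, 77.0, 194.0]])   # (ntarget, ncentres)
sigma2 = np.array([[1.0, 2.0, 4.0]])      # sigma_j^2 per centre
c = 3                                      # dimension of the target space

normal = (2.0 * np.pi * sigma2) ** (c / 2.0)
a = np.exp(-dist2 / (2.0 * sigma2)) / normal
```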
The probabilities of each Gaussian are then computed by multiplying each activation by the respective mixing coefficient:
\[
\mathbf{Pr} = \begin{bmatrix}
pr_{1,1} & pr_{2,1} & \cdots & pr_{M,1} \\
pr_{1,2} & pr_{2,2} & \cdots & pr_{M,2} \\
\vdots & \vdots & \ddots & \vdots \\
pr_{1,N} & pr_{2,N} & \cdots & pr_{M,N}
\end{bmatrix}
\]
(14)
These principles are implemented in a function called \texttt{f_prob}, listed below, where \texttt{mixparams.vars} refers to the matrix \( \mathbf{P}^\sigma \): the call to \texttt{f_dist2} computes the squared distances, the assignment to \texttt{a} computes the matrix \( \mathbf{A} \), and the final line computes \( \mathbf{Pr} \).
```matlab
function [prob,a] = f_prob(net,mixparams,t)
ncentres = net.mix.ncentres;
dim_target = net.mix.nin;
nparams = net.mix.nparams;
ntarget = size(t, 1);
% Calculate squared norm matrix, of dimension (ndata, ncentres)
dist2 = f_dist2(net,mixparams,t);
% Calculate variance factors
variance = 2.*mixparams.vars;
% Compute the normalisation term
normal = ((2.*pi).*mixparams.vars).^(dim_target./2);
% Now compute the activations
a = exp(-dist2./variance)./normal;
% Multiply by the mixing coefficients to get the probabilities
prob = mixparams.mixcoeffs.*a;
```
### 2.2 Computing the probability of a point, \( \pi_j \)
The probability of a point is defined as:
\[ \pi_j = \frac{\alpha_j \phi_j}{\sum_{i=1}^{M} \alpha_i \phi_i} \]
(15)
The computation of equation (15) is implemented using row and column operations on the matrix described by equation (14):
\[ \pi_{j,n} = \frac{pr_{j,n}}{pr_{1,n} + pr_{2,n} + \cdots + pr_{M,n}} \]
(16)
and
\[ \Pi = \begin{bmatrix}
\pi_{1,1} & \pi_{2,1} & \cdots & \pi_{M,1} \\
\pi_{1,2} & \pi_{2,2} & \cdots & \pi_{M,2} \\
\vdots & \vdots & \ddots & \vdots \\
\pi_{1,n} & \pi_{2,n} & \cdots & \pi_{M,n} \\
\vdots & \vdots & \ddots & \vdots \\
\pi_{1,N} & \pi_{2,N} & \cdots & \pi_{M,N}
\end{bmatrix} \]
(17)
which is implemented in MATLAB as follows:
```matlab
function [post, a] = f_post(net, mixparams, t)
% Check that inputs are consistent
[prob a] = f_prob(net, mixparams, t);
s = sum(prob, 2);
% Set any zeros to one before dividing
s = s + (s==0);
post = prob./(s*ones(1, net.mix.ncentres));
```
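The same row normalisation, with the guard against all-zero rows, can be sketched in NumPy (the numbers are illustrative):

```python
import numpy as np

# Posterior of each component (equation (16)): divide each probability by
# its row sum, setting any zero sums to one first, as f_post does.
prob = np.array([[0.2, 0.3, 0.5],
                 [0.0, 0.0, 0.0]])   # second row would otherwise divide by 0
s = prob.sum(axis=1, keepdims=True)
s = s + (s == 0)                     # guard: any zero sum becomes one
post = prob / s
```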
### 2.3 Reshaping the parameter matrix
When computing the derivative \( \frac{\partial E_n}{\partial \mu_{jk,n}} \) it is necessary that each of the components of the kernel centres is operated on by its respective variance and posterior. To facilitate this as a single matrix operation one further reshape is required. This takes a matrix (say \( \mathbf{P}^\sigma \)) and rebuilds the columns so that its dimensions match those of \( \mathbf{P}^\mu \), populated such that for each \( \mu_{jk,n} \) there is a corresponding \( \sigma_{j,n}^2 \). An example follows, where each of the centre parameters is matched to its corresponding width parameter.
\[
\mathbf{P}^\mu =
\begin{bmatrix}
\mu_{11,1} & \mu_{12,1} & \cdots & \mu_{1c,1} & \cdots & \mu_{M1,1} & \mu_{M2,1} & \cdots & \mu_{Mc,1} \\
\mu_{11,2} & \mu_{12,2} & \cdots & \mu_{1c,2} & \cdots & \mu_{M1,2} & \mu_{M2,2} & \cdots & \mu_{Mc,2} \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\mu_{11,n} & \mu_{12,n} & \cdots & \mu_{1c,n} & \cdots & \mu_{M1,n} & \mu_{M2,n} & \cdots & \mu_{Mc,n} \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\mu_{11,N} & \mu_{12,N} & \cdots & \mu_{1c,N} & \cdots & \mu_{M1,N} & \mu_{M2,N} & \cdots & \mu_{Mc,N}
\end{bmatrix}
\]
(18)
The following MATLAB code shows how to reshape the parameter matrix into the desired form:
```matlab
z = [1 2 3; 4 5 6; 7 8 9]
z = kron(ones(dim_target,1),z);
z = reshape(z,ntarget,(ncentres*dim_target));
% Gives results like this
% z =
%     1 2 3
%     4 5 6
%     7 8 9
%
% z =
%     1 1 1 2 2 2 3 3 3
%     4 4 4 5 5 5 6 6 6
%     7 7 7 8 8 8 9 9 9
```
### 3 Computing the error function in parameter space
The negative log likelihood error function for a MDN is defined as (Bishop, 1995; Bishop, 1994):
\[
E = \sum_{n=1}^{N} - \ln \left( \sum_{j=1}^{m} \alpha_j(x_n) \phi_j(t_n|x_n) \right)
\]
(20)
Then each element in equation (14) is defined as follows:
$$Pr_{j,n} = \alpha_j(x_n) \phi_j(t_n|x_n)$$ \hspace{1cm} (21)
and the implementation becomes row and column operations in MATLAB. The following code shows the function `f_mdner`, which implements equation (20).
```matlab
function err = f_mdner(net, x, t)
%F_MDNERR Evaluate error function for Mixture Density Network.
% Check arguments for consistency
errstring = consist(net, 'f_mdn', x, t);
if ~isempty(errstring)
error(errstring);
end
% Get the output mixture models
mixparams = f_mdnfwd(net, x);
probs = f_prob(net, mixparams, t);
err = sum(-log(max(eps,sum(probs,2))));
```
The call to `f_prob` returns a matrix of probabilities, so the error for each pattern is a summation along the rows of `probs`, and the total error is the sum of the vector returned by `sum(probs, 2)`.
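The error computation reduces to two reductions over the probability matrix; here is a NumPy sketch with made-up probabilities:

```python
import numpy as np

# Negative log likelihood (equation (20)): clamp row sums at machine
# epsilon before taking logs, as f_mdner does with max(eps, .).
probs = np.array([[0.2, 0.3, 0.5],
                  [0.1, 0.1, 0.3]])
eps = np.finfo(float).eps
err = np.sum(-np.log(np.maximum(eps, probs.sum(axis=1))))
```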
### 4 Computing the gradient of the error function in parameter space
First forward propagate the inputs `x` through the MLP, which returns a matrix containing the parameters for each pattern (see Appendix A for source code listing)
```matlab
[mixparams, z] = f_mdnfwd(net, x);
```
`mixparams` is a structure containing three matrices $\mathbf{P}^\alpha$, $\mathbf{P}^\mu$ and $\mathbf{P}^\sigma$ of the form described in equations (3), (4) and (5) respectively. Using techniques similar to those described in Section 2, all the derivatives are then computed with matrix operations.
### 4.1 Computing the error gradient with respect to the mixing coefficients, $\frac{\partial E_n}{\partial z_j^\alpha}$
The standard result for each centre:
$$\frac{\partial E_n}{\partial z_j^\alpha} = \alpha_j - \pi_j$$ \hspace{1cm} (22)
is simply computed as
\[
\frac{\partial E_n}{\partial \mathbf{z}^\alpha} = \Delta^\alpha = \mathbf{P}^\alpha - \Pi
\]
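The whole-matrix form is just an element-wise subtraction; a NumPy sketch with illustrative values:

```python
import numpy as np

# Gradient w.r.t. the mixing-coefficient outputs (equation (22)) for every
# pattern and centre at once: Delta_alpha = P_alpha - Pi.
P_alpha = np.array([[0.3, 0.3, 0.4],
                    [0.2, 0.5, 0.3]])   # mixing coefficients alpha_{j,n}
Pi      = np.array([[0.1, 0.2, 0.7],
                    [0.2, 0.5, 0.3]])   # posteriors pi_{j,n}
delta_alpha = P_alpha - Pi
```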
### 4.2 Computing the error gradient with respect to the kernel centres, \( \frac{\partial E_n}{\partial z_{jk}^\mu} \)
The general result is
\[
\frac{\partial E_n}{\partial z_{jk}^\mu} = \pi_j \left\{ \frac{\mu_{jk} - t_k}{\sigma_j^2} \right\}
\]
Using techniques described in Section 2.3, the matrices \( \mathbf{P}^\sigma \) and \( \Pi \) can be reshaped to match \( \mathbf{P}^\mu \), and the following operation is computed within MATLAB:
\[
\frac{\partial E_n}{\partial \mathbf{z}^\mu} = \Delta^\mu =
\begin{bmatrix}
\pi_{1,1} \left( \frac{\mu_{11,1} - t_{1,1}}{\sigma_{1,1}^2} \right) & \cdots & \pi_{1,1} \left( \frac{\mu_{1c,1} - t_{c,1}}{\sigma_{1,1}^2} \right) & \cdots & \pi_{M,1} \left( \frac{\mu_{M1,1} - t_{1,1}}{\sigma_{M,1}^2} \right) & \cdots & \pi_{M,1} \left( \frac{\mu_{Mc,1} - t_{c,1}}{\sigma_{M,1}^2} \right) \\
\vdots & \ddots & \vdots & \ddots & \vdots & \ddots & \vdots \\
\pi_{1,N} \left( \frac{\mu_{11,N} - t_{1,N}}{\sigma_{1,N}^2} \right) & \cdots & \pi_{1,N} \left( \frac{\mu_{1c,N} - t_{c,N}}{\sigma_{1,N}^2} \right) & \cdots & \pi_{M,N} \left( \frac{\mu_{M1,N} - t_{1,N}}{\sigma_{M,N}^2} \right) & \cdots & \pi_{M,N} \left( \frac{\mu_{Mc,N} - t_{c,N}}{\sigma_{M,N}^2} \right)
\end{bmatrix}
\]
of dimension \( (N, M \times c) \).
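A NumPy sketch of the same centre-gradient computation; `np.repeat` plays the role of the kron-and-reshape trick, and all array contents are random illustrative values:

```python
import numpy as np

# Centre gradient: pi_{j,n} * (mu_{jk,n} - t_{k,n}) / sigma_{j,n}^2, computed
# by repeating posteriors and variances so each centre component lines up
# with its own pi_j and sigma_j^2.
ntarget, ncentres, dim_target = 2, 3, 2
rng = np.random.default_rng(0)
t       = rng.normal(size=(ntarget, dim_target))
centres = rng.normal(size=(ntarget, ncentres * dim_target))
post    = rng.random(size=(ntarget, ncentres))
var     = rng.random(size=(ntarget, ncentres)) + 0.5   # keep away from zero

centre_err = centres - np.tile(t, (1, ncentres))
long_post  = np.repeat(post, dim_target, axis=1)
long_var   = np.repeat(var,  dim_target, axis=1)
delta_mu   = centre_err * long_post / long_var   # (ntarget, ncentres*dim_target)
```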
### 4.3 Computing the error gradient with respect to the kernel widths, \( \frac{\partial E_n}{\partial z_j^\sigma} \)
The general result:
\[
\frac{\partial E_n}{\partial z_j^\sigma} = -\frac{\pi_{j,n}}{2} \left\{ \frac{\|t_n - \mu_j(x_n)\|^2}{\sigma_j^2} - c \right\}
\]
is computed using the functions and matrices defined previously. Using the MATLAB operators ./ and .* for element-wise division and multiplication respectively, the computation becomes:
\[
\frac{\partial E_n}{\partial \mathbf{z}^\sigma} = \Delta^\sigma = -\frac{\Pi}{2} \left\{ \frac{\text{dist2}}{\mathbf{P}^\sigma} - \mathbf{C} \right\}
\]
where \( \mathbf{C} \) is a matrix of dimension \((n_{\text{patterns}}, n_{\text{centres}})\) with each element taking the value \( c \), the dimension of the target space.
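The width gradient in NumPy, mirroring the element-wise form above (illustrative values; c is the target dimension):

```python
import numpy as np

# Width gradient: Delta_sigma = -(Pi/2) * (dist2/P_sigma - C), where C is a
# constant matrix filled with the target dimension c.
dist2 = np.array([[14.0, 77.0, 194.0]])   # squared distances
post  = np.array([[0.1, 0.2, 0.7]])       # posteriors Pi
var   = np.array([[1.0, 2.0, 4.0]])       # variances P_sigma
c     = 3.0                                # dimension of the target space

delta_sigma = -(post / 2.0) * (dist2 / var - c)
```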
A full listing of the MATLAB function to compute the gradient of the error function is given in Appendix B.
### 5 Testing
### 5.1 Training Accuracy
Tests using `gradcheck` from the NETLAB toolbox show that, for the configurations tested, the implementation of the gradient function performs to specification.
Comparison of demmdn1 and f_demmdn1 produces interesting results. Initially the training errors appear identical (to the 6\textsuperscript{th} decimal place). After the 36\textsuperscript{th} iteration (demmdn1 trains for 200) the errors diverge in the 6\textsuperscript{th} decimal place. Comparing the scale parameter, the two are identical (to the 6\textsuperscript{th} decimal place) until the 105\textsuperscript{th} iteration, where f_demmdn1 remains static for one iteration; thereafter the scale of f_demmdn1 lags that of demmdn1 by exactly one step. An explanation of these differences is offered by inspecting the average delta\textsuperscript{4} and the average of the modulus of delta returned by gradcheck, as shown in Table 1.
<table>
<thead>
<tr>
<th>MDN type</th>
<th>mean(delta)</th>
<th>mean(abs(delta))</th>
</tr>
</thead>
<tbody>
<tr>
<td>f_demmdn</td>
<td>-3.4169e-009</td>
<td>4.0190e-008</td>
</tr>
<tr>
<td>demmdn</td>
<td>-1.6406e-009</td>
<td>4.1471e-008</td>
</tr>
</tbody>
</table>
Table 1: Results of running gradcheck
The mean delta for f_demmdn1 is roughly twice that of demmdn1, whilst the mean(abs(delta)) values are of the same magnitude but differ in the 9\textsuperscript{th} decimal place. The scaled conjugate gradients optimisation algorithm (Bishop, 1995) uses information on the gradient of the error function to minimise the error function. It is suggested that these differences in the computed gradient accumulate during training and account for the divergence of training errors between demmdn1 and f_demmdn1.
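What gradcheck reports can be reproduced in miniature: delta is the gap between an analytic gradient and a central finite-difference estimate. The function below is a toy stand-in, not the MDN error:

```python
import numpy as np

# Compare an analytic gradient against central finite differences; for a
# correct gradient the deltas sit near floating-point noise.
def f(w):
    return float(np.sum(w ** 2))

def grad_f(w):
    return 2.0 * w

w = np.array([0.3, -1.2, 2.0])
eps = 1e-6
fd = np.array([(f(w + eps * e) - f(w - eps * e)) / (2 * eps)
               for e in np.eye(len(w))])
delta = grad_f(w) - fd
```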
### 5.2 Training Speed
The programme demmdn1 was also used to illustrate the improvement in training time by comparing the results of the MATLAB profile function for each implementation. Two examples of profile reports are shown in Appendix C. Ten profile reports of each method were collected by running batch jobs (on a Silicon Graphics Challenge L with 4 x 200MHz R10000 CPUs and 512 Mb RAM, running IRIX 6.2). The summaries of these reports are tabulated in Table 2. Note that although the standard deviation of demmdn1 seems large, both standard deviations relative to their means are of the same order. The difference in mean execution times illustrates the improvement in training time gained by computing the error and error-gradient functions in parameter space.
\textsuperscript{4} delta is the difference between the computation of the error derivatives obtained from the analytic expressions and those calculated using finite differences (Bishop, 1994).
Table 2: Summary results of running each implementation of `demmdn1` ten times.
<table>
<thead>
<tr>
<th>MDN type</th>
<th>mean(execution time) s</th>
<th>std(execution time) s</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>f_demmdn1</code></td>
<td>10.99</td>
<td>0.23</td>
</tr>
<tr>
<td><code>demmdn1</code></td>
<td>723.18</td>
<td>51.52</td>
</tr>
</tbody>
</table>
### 6 Conclusions
The techniques presented here for training Mixture Density Networks show that training in parameter space leads to substantial gains in training time without loss of accuracy. Examination of the gradient information shows that differences in training errors are due to small differences in the computation of the gradient information. The example presented in this report shows an improvement in mean training time of at least a factor of sixty. The decreased training time allows us to tackle more complicated problems, which previously took too long to train to be of any practical use. Such an example, modelling wind vectors conditional on satellite information, discussed briefly in Section 1, shows training times improved from several days to a few hours.
### Acknowledgements
I thank Dan Cornford for his patient reading and constructive comments on the draft versions of this report, and Ian Nabney for his constructive comments on the second draft of this report, for suggesting the change of `mixparams` from a matrix to a MATLAB data structure, and for suggesting the use of the MATLAB function `kron` in place of the complex matrix reshapes implemented in the first version of the software.
### Appendices
#### A Listing of MDN forward propagation function
```matlab
function [mixparams, z, a] = f_mdnfwd(net, x)
%F_MDNFWD Forward propagation through Mixture Density Network.
%
% Description
% MIXPARAMS = F_MDNFWD(NET, X) takes a mixture density network data
% structure NET and a matrix X of input vectors, and forward propagates
% the inputs through the network to generate a structure MIXPARAMS which
% describes the parameters of a mixture model. Each row of X represents
% one input vector and the corresponding row of MIXPARAMS represents the
% data structure vector of the corresponding mixture model parameters
% for the conditional probability of target vectors.
%
% [MIXPARAMS, Z] = F_MDNFWD(NET, X) also generates a matrix Z of the
% hidden unit activations where each row corresponds to one pattern.
%
% [MIXPARAMS, Z, A] = F_MDNFWD(NET, X) also returns a matrix A giving the
% summed inputs to each output unit, where each row corresponds to one
% pattern.
%
% See also
% GMM, MDN, F_MDNERR, F_MDNGRAD, MLPFWD, MDNMIX
%
% Copyright (c) Christopher M Bishop, Ian T Nabney (1996, 1997)
% Copyright (c) David J Evans (1998)

% Check arguments for consistency
errstring = consist(net, 'f_mdn', x);
if ~isempty(errstring)
  error(errstring);
end

% Extract mlp and mixture model descriptors
mlpnet = net.mlp;
mix = net.mix;
ncentres = mix.ncentres;   % Number of components in mixture model
dim_target = mix.nin;      % Dimension of targets
nparams = mix.nparams;     % Number of parameters in mixture model

% Propagate forwards through MLP
[y, z, a] = mlpfwd(mlpnet, x);

% Compute the position of each parameter in the whole
% matrix. Used to define the mixparams structure
mixcoeff = [1:1:ncentres];
centres = [ncentres+1:1:(ncentres*(1+dim_target))];
variances = [(ncentres*(1+dim_target)+1):1:nparams];

% Convert output values into mixture model parameters

% Use softmax to calculate priors
% Prevent overflow and underflow: use same bounds as glmfwd
% Ensure that sum(exp(y), 2) does not overflow
maxcut = log(realmax) - log(ncentres);
% Ensure that exp(y) > 0
mincut = log(realmin);
temp = min(y(:,1:ncentres), maxcut);
temp = max(temp, mincut);
temp = exp(temp);
mixpriors = temp./(sum(temp, 2)*ones(1,ncentres));

% This is the dimension of the centres (1, ncentres*dim_target)
mixcentres = y(:,(ncentres+1):ncentres*(1+dim_target));

% Variances are exp of network outputs
mixwidths = exp(y(:,(ncentres*(1+dim_target)+1):nparams));

% Now build up all the mixture model parameter vectors
ndata = size(x, 1);

% Return parameters
mixparams.mixcoeffs = mixpriors;
mixparams.centres = mixcentres;
mixparams.vars = mixwidths;
```
#### B Listing of the MDN error gradient implementation
```matlab
function g = f_mdngrad(net, x, t)
%F_MDNGRAD Evaluate gradient of error function for Mixture Density Network.
%
% Description
% G = F_MDNGRAD(NET, X, T) takes a mixture density network data
% structure NET, a matrix X of input vectors and a matrix T of target
% vectors, and evaluates the gradient G of the error function with
% respect to the network weights. The error function is negative log
% likelihood of the target data. Each row of X corresponds to one
% input vector and each row of T corresponds to one target vector.
%
% See also
% F_MDN, F_MDNFWD, F_MDNERR, MLPBKP, MDNMIX
%
% Copyright (c) Christopher M Bishop, Ian T Nabney (1996, 1997)
% Copyright (c) David J Evans (1998)

% Check arguments for consistency
errstring = consist(net, 'f_mdn', x, t);
if ~isempty(errstring)
  error(errstring);
end

[mixparams, z] = f_mdnfwd(net, x);

% Compute gradients at MLP outputs: put the answer in deltas
ncentres = net.mix.ncentres;   % Number of components in mixture model
dim_target = net.mix.nin;      % Dimension of targets
nmixparams = net.mix.nparams;  % Number of parameters in mixture model
ntarget = size(t,1);
deltas = zeros(ntarget, net.mlp.nout);

post = f_post(net, mixparams, t);

% Calculate prior derivatives
deltas(:,1:ncentres) = mixparams.mixcoeffs - post;

% Calculate centre derivatives
long_t = kron(ones(1,ncentres),t);
centre_err = mixparams.centres - long_t;

% Get the post to match each mu_jk
% this array will be (ntarget, (ncentres*dim_target))
long_post = kron(ones(dim_target,1),post);
long_post = reshape(long_post,ntarget,(ncentres*dim_target));

% Get the variance to match each mu_jk
% this array will be (ntarget, (ncentres*dim_target))
var = mixparams.vars;
var = kron(ones(dim_target,1),var);
var = reshape(var,ntarget,(ncentres*dim_target));

% Compute delta
deltas(:,(ncentres+1):(ncentres*(1+dim_target))) = ...
  (centre_err.*long_post)./var;

% Compute variance derivatives
dist2 = f_dist2(net,mixparams,t);
c = dim_target*ones(ntarget,ncentres);
deltas(:,((ncentres*(1+dim_target))+1):nmixparams) = ...
  post.*((dist2./mixparams.vars)-c)./(-2);

g = mlpbkp(net.mlp, x, z, deltas);
```
#### C Timing comparisons
Example results from running the `profile` function in MATLAB:
```
Results for f_demmdn1

Total time in "~/Netlab/netopt.m": 10.89 seconds
100% of the total time was spent on lines:
[38 35]

               34: % Extract weights from network as single vector
 0.01s,   0%   35: w = feval(pakstr, net);
               36:
               37: % Carry out optimisation
10.88s, 100%   38: [s{1:nargout}] = eval(optstring);
               39: w = s{1};
```
```
Results for demmdn1

Total time in "~/Netlab/netopt.m": 700.48 seconds
100% of the total time was spent on lines:
[38 35]

                34: % Extract weights from network as single vector
  0.02s,   0%   35: w = feval(pakstr, net);
                36:
                37: % Carry out optimisation
700.46s, 100%   38: [s{1:nargout}] = eval(optstring);
                39: w = s{1};
```
### References
METHOD FOR TRAVERSING QUADTREES, OCTREES, AND N-DIMENSIONAL BI-TREES
Inventors: Ronald N. Perry, Cambridge, MA (US); Sarah F. Frisken, Cambridge, MA (US)
Assignee: Mitsubishi Electric Research Laboratories, Inc., Cambridge, MA (US)
Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 433 days.
Appl. No.: 10/209,302
Filed: Jul. 31, 2002
Prior Publication Data
Int. Cl.: G06F 7/00
U.S. Cl.: 707/100; 707/2
Field of Search: 707/1-8, 10, 100-102, 707/104.1, 200, 203-205, 711/100, 117-119, 200-207; 715/848, 855
References Cited
U.S. PATENT DOCUMENTS
OTHER PUBLICATIONS
Primary Examiner—Greta Robinson
Assistant Examiner—Harold E. Dodds, Jr.
Attorney, Agent, or Firm—Dirk Brinkman; Andrew J. Curan
ABSTRACT
A method traverses a bi-tree stored in a memory to locate application specific data stored in the memory and associated with the bi-tree. The bi-tree comprises a spatial partitioning of an N-dimensional space into a hierarchy of cells. Starting from a root cell enclosing the N-dimensional space, each cell is successively and conditionally partitioned into 2^N child cells along the cell’s N mid-planes. Each cell of the bi-tree has associated characteristics comprising the application specific data and child cells are indexed directly from a parent cell. First, a set of locational codes, a cell of the bi-tree, and a termination condition are specified. Next, the characteristics of the cell are tested to see if they satisfy the termination condition. If the termination condition is not satisfied, an arithmetic operation on the set of locational codes is performed to directly index a next cell to be tested. Otherwise, the cell identifies a target cell. Finally, the application specific data of the target cell is retrieved from the memory.
24 Claims, 7 Drawing Sheets
FIELD OF THE INVENTION
This invention relates generally to tree-structured data representations, and more particularly to locating spatial data stored in quadtrees, octrees, and their N-dimensional counterparts.
BACKGROUND OF THE INVENTION
Tree-structured data representations are pervasive. Because of their long history and many different forms and uses, there are a large variety of "trees" that appear superficially alike, or that have similar names, even though they are quite different in detail and use.
Therefore, a "bi-tree," as defined herein, is a spatial partitioning of an N-dimensional space into a hierarchy of cells. A root cell, enclosing the N-dimensional space, is conditionally partitioned into \( 2^N \) equally sized child cells along its mid-planes. Each child cell is then successively and conditionally partitioned in a similar manner.
Each cell of the bi-tree has associated characteristics comprising application specific data such as graphical elements, e.g., triangles, of a graphical object, e.g., a three-dimensional triangle mesh. Child cells in the bi-tree are indexed directly from their parent cell. Bi-trees can be fully populated or sparse. In fully populated bi-trees, each cell is partitioned down to a deepest common level; in sparse bi-trees only selected cells are partitioned to reduce storage requirements.
FIG. 1 shows an example bi-tree 100 as defined herein. Although the example bi-tree 100 is a quadtree, i.e., a two-dimensional bi-tree, the method according to the invention can be extended to octrees, i.e., three-dimensional bi-trees, as well as lower and higher dimensional bi-trees because our method treats each dimension independently.
Cells branch from a root cell 101, through intermediate cells 102, to leaf cells 103. Typically, the cells are associated with application specific data and characteristics, e.g., a cell type for region quadtrees, or object indices for point quadtrees. The child cells are indexed 110 directly from their parent cell. Direct indexing can be done by ordering the child cells or pointers to the child cells in a memory.
A depth of the bi-tree 100 is \( N_{LEVELS} \). The level of the root cell 101 is \( LEVEL_{ROOT}=N_{LEVELS} \). The level of the smallest possible cell is zero. The bi-tree 100 is defined over a normalized space \([0, 1]^2\). Similarly, an N-dimensional bi-tree is defined over \([0, 1]^N\). Although this may seem restrictive, in practice most spatial data can be represented in this normalized space by applying transformations to the coordinates of the data.
As shown in FIG. 2, quadtrees successively partition a region of space into four equally sized quadrants, i.e., cells. Starting from a root cell, cells are successively subdivided into smaller cells under certain conditions, such as when the cell contains an object boundary (region quadtree), or when the cell contains more than a specified number of objects (point quadtree). Compared to methods that do not partition space or that partition space uniformly, quadtrees and octrees can reduce the amount of memory required to store the data and improve execution times for querying and processing the data, e.g., collision detection and rendering.
Managing information stored in a bi-tree generally requires three basic operations: point location, region location, and neighbor searches.
Point location finds a leaf cell 201 containing a given point 200. For example, a quadtree that stores geographical data, such as city locations, is partitioned according to geographical coordinates, i.e., longitude and latitude. Point location can be used to find cities near a given geographical coordinate, i.e., the point 200.
Region location finds a smallest cell or set of cells that encloses a specified rectangular region 210 represented by a minimum vertex \( v_1 \) and a maximum vertex \( v_2 \). With the geographical quadtree example, region location can be used to determine all the cities that are within a given range of specified geographical coordinates.
A neighbor search finds a cell, in a specified direction, that is adjacent to a given cell. In the geographical quadtree, point location can be combined with neighbor searching to first locate a cell containing a given city and then to find nearby cities in a given direction. In all of these operations, the bi-tree is traversed by following pointers connecting the cells.
A fourth operation, called ray tracing, is used by graphics applications to render three-dimensional models on a display, see Foley et al., "Computer Graphics Principles and Practice," Addison-Wesley, 1992. In these applications, graphical elements comprising a scene are placed in leaf cells of an octree. Ray tracing requires a sequential identification of leaf cells along a ray. One method for identifying these leaf cells combines point location and neighbor searching.
Traditional point location operations in a bi-tree require a downward branching through the bi-tree beginning at the root node. Branching decisions are made by comparing each coordinate of a point’s position to a mid-plane position of a current enclosing cell.
Traditional neighbor searching in a bi-tree requires a recursive upward branching from a given cell to a smallest common ancestor of the given cell and a neighboring cell, and then a recursive downward branching to locate the neighbor. Each branch in the recursion relies on comparing values that depend on the current cell and its parent. Typically, the values are stored in tables.
Prior art point location, region location, and neighbor searching are time consuming because Boolean operations, i.e., comparisons, are used. Boolean operations are typically implemented by predictive branching logic in modern CPUs. Predictive branching will stall the instruction pipeline on incorrectly predicted branch instructions, see Knuth, The Art of Computer Programming, Volume 1, Addison-Wesley, 1998, and Knuth, MMIXware: A RISC Computer for the Third Millennium, Springer-Verlag, 1999.
Mispredictions occur frequently for traditional tree traversal operations because previous branch decisions generally have no relevance to future branch decisions, see Pritchard, "Direct Access Quadtree Lookup," Game Programming Gems 2, ed. DeLoura, Charles River Media, Hingham, Mass., 2001.
In addition, traditional neighbor searching methods are recursive. Recursion increases overhead as a result of maintaining stack frames and making function calls. Also, prior art neighbor searching methods use table lookups which require costly memory accesses in typical applications. Finally, prior art neighbor searching methods are limited only to quadtrees and octrees and it is exceedingly complex to extend these methods to higher dimensional bi-trees.
FIG. 3 shows a typical prior art point location operation 300. The operation begins with a given point 301 and a starting cell 302. First, characteristics (C) 303 associated with the cell 302 are tested 310. If true (T), then the cell 302 is a target cell 309 containing the point 301. If false (F), then each coordinate of the position of the point 301 is compared 320 to a corresponding mid-plane position of the cell 302. The comparisons 320 allow one to compute 330 an index to a next (child) cell 304 to be tested.
As stated above, the comparisons 320 require Boolean operations. For an N-dimensional bi-tree, at least N such Boolean operations are required for each cell visited during the traversal of the bi-tree. As stated above, these Boolean operations are likely to stall the instruction pipeline thereby degrading performance.
Pritchard, in “Direct Access Quadtree Lookup,” describes a region location operation for quadtrees that uses locational codes of the x and y boundaries of the bounding box of a region. Pritchard’s quadtree is not a bi-tree under the above definition, because his child cells cannot be indexed directly from a parent cell.
That method operates on a hierarchy of regular arrays of cells, where each level is fully subdivided and contains four times as many cells as the previous level. His two-dimensional representation of spatial data requires a significant amount of memory, and would require even more memory for three- and higher-dimensional spatial data. Hence, that method is impractical for many applications.
Pritchard’s method has two steps. First, that method uses locational codes of the left and right x boundaries and the top and bottom y boundaries of a region bounding box to determine a level of an enclosing cell. Then a scaled version of a position of a bottom-left vertex of the region bounding box is used to index into a regular array at this level.
Traditionally, locational codes have been used with “linear quadtrees” and “linear octrees”, see H. Samet, “Applications of Spatial Data Structures: Computer Graphics, Image Processing, GIS,” Addison-Wesley, Reading, Mass., 1990. Linear quadtrees and linear octrees are not bi-trees under our definition. Rather, linear quadtrees and linear octrees are comprised of a list of leaf cells where each leaf cell contains its interleaved locational code and other cell specific data. In general, linear quadtrees and linear octrees are more compact than bi-trees, e.g., they do not represent intermediate cells and they do not provide explicit links for direct indexing, at the expense of more costly and complicated processing methods.
Locational codes for linear quadtrees and linear octrees interleave bits that comprise coordinate values of a cell’s minimum vertex such that linear quadtrees use locational codes of base 4 (or 5 if a “don’t care” directional code is used) and linear octrees use locational codes of base 8 (or 9), see H. Samet, “Applications of Spatial Data Structures: Computer Graphics, Image Processing, GIS,” Addison-Wesley, Reading, Mass., 1990.
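The prior art bit interleaving can be sketched as follows. This is an illustrative helper for exposition only (the function name and two-dimensional case are assumptions), not part of the present invention:

```python
def interleave2(x, y, nbits):
    """Interleave the bits of integer coordinates x and y into a base-4
    (Morton) locational code, as used by prior art linear quadtrees."""
    code = 0
    for i in range(nbits):
        code |= ((x >> i) & 1) << (2 * i)      # x bits occupy even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # y bits occupy odd positions
    return code
```

Decoding such a code requires de-interleaving, which is part of the more costly processing the interleaved representation trades for compactness.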
In computer graphics and volume rendering, ray tracing methods often make use of octrees to accelerate tracing rays through large empty regions of space. Those methods determine non-empty leaf cells along a ray passing through the octree and then process ray-surface intersections within these cells.
There are two basic approaches for tracing a ray through an octree: bottom-up and top-down. Bottom-up methods start at the first leaf cell encountered by the ray and then use neighbor finding techniques to find each subsequent leaf cell along the ray. Top-down methods start from the root cell and use a recursive procedure to find offspring leaf cells that intersect the ray. An extensive summary of methods for traversing octrees during ray-tracing is described by Havran, “A Summary of Octree Ray Traversal Algorithms,” Ray Tracing News, 12(2), pp. 11–23, 1999.
Stolte and Caubet, in “Discrete Ray-Tracing of Huge Voxel Spaces,” Computer Graphics Forum, 14(3), pp. 383–394, 1995, describe a top-down ray tracing approach that uses locational codes for voxel data sets stored in an octree. They first locate a leaf cell containing a point where a ray enters the octree. Then, for each leaf cell without a ray-surface intersection, a 3D DDA is used to incrementally step along the ray, in increments proportional to a size of a smallest possible leaf cell, until a boundary between the leaf cell and a neighboring next cell is encountered. The neighboring next cell is then found by popping cells from a recursion stack to locate a common ancestor of the leaf cell and the neighboring next cell and then traversing down the octree using their point location method. However, their method requires Boolean comparisons and thus suffers from the misprediction problems described above.
Therefore, it is desired to provide a traversal method for N-dimensional bi-trees that improves performance over the prior art by avoiding Boolean operations and eliminating recursion and memory accesses for table lookup, without increasing memory requirements.
SUMMARY OF THE INVENTION
The invention provides an efficient traversal method for bi-trees, e.g., quadtrees, octrees, and their N-dimensional counterparts. The method uses locational codes, is inherently non-recursive, and does not require memory accesses for table lookup. The method also reduces the number of mispredicted comparisons. The method includes procedures for point location, region location, neighbor searching, and ray tracing.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram illustrating a data structure for a two-dimensional bi-tree in accordance with the present invention;
FIG. 2 is a diagram illustrating a spatial partitioning for a two-dimensional bi-tree in accordance with the present invention;
FIG. 3 is a diagram of a flow chart for a typical prior art point location method;
FIG. 4 is a diagram illustrating a hierarchical tree structure and associated locational codes for a one-dimensional bi-tree in accordance with the present invention;
FIG. 5 is a diagram illustrating a spatial partitioning and associated locational codes for a one-dimensional bi-tree in accordance with the present invention;
FIG. 6 is a diagram of a flow chart for point location according to the present invention; and
FIG. 7 is a diagram illustrating a ray intersecting a two-dimensional bi-tree in accordance with the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
FIG. 4 shows a hierarchical tree structure 400 and associated locational codes for a one-dimensional bi-tree. Locational codes 401 are used by a bi-tree traversal method according to the present invention. Each locational code 401 is represented in binary form in a data field with a bit size that is greater than or equal to the maximum number of levels in the tree, NLEVELS. For example, each locational code for a bi-tree with up to eight levels can be represented by eight bits.
The bits in each locational code 401 are numbered from right (LSB) to left (MSB) starting from zero. Each bit in the locational code indicates a branching pattern at a corresponding level of the bi-tree, i.e., bit k represents the branching pattern at level k in the bi-tree. Unlike the prior art where locational codes are interleaved, we use separate locational codes for each dimension of the cell, e.g., a set of locational codes for each cell of a two-dimensional bi-tree, i.e., a quadtree, comprises both an x locational code and a y locational code.
The locational codes for a cell can be determined in two ways. A first method multiplies the value of each coordinate of the cell’s minimum vertex by 2^LEVELROOT, e.g., 2^5 = 32, and then represents the product in binary form. FIG. 5 illustrates a spatial partitioning and associated locational codes 500 for the one-dimensional bi-tree 400. For example, the cell 501, [0.25, 0.5), has locational code 502, binary(0.25*32) = binary(8) = 001000.
A second method follows a branching pattern from the root cell to a given cell, setting each bit according to the branching pattern of a corresponding level. Starting by setting bit LEVELROOT to zero, the second method then sets each subsequent bit k to zero if a branching decision from level k+1 to k branches to the left, and to one if it branches to the right. For sparse bi-trees, lower order bits are set to zero if leaf cells are larger than a smallest possible cell.
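The first method above amounts to a single scale-and-truncate step. A minimal sketch in Python, assuming the six-bit locational codes of FIG. 5 (i.e., LEVELROOT = 5):

```python
ROOT_LEVEL = 5  # assumption: six-bit locational codes, matching FIG. 5

def locational_code(coord):
    """First method: multiply a coordinate in [0, 1) by 2^ROOT_LEVEL
    and truncate the product to an integer."""
    return int(coord * (1 << ROOT_LEVEL))
```

For the cell 501, [0.25, 0.5), formatting `locational_code(0.25)` as six binary digits yields 001000, matching the locational code 502.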
In quadtrees, octrees, and higher dimensional bi-trees, locational codes for each dimension are determined separately from the value of the corresponding coordinate of the cell’s minimum vertex (the first method) or from the left-right, bottom-top, (back-front, etc.) branching pattern used to reach the given cell from the root cell (the second method).
Several properties of these locational codes can be used to provide bi-tree traversal according to the present invention. First, just as locational codes can be determined from branching patterns, branching patterns can be determined from locational codes. That is, a cell’s locational code can be used to traverse the bi-tree from the root cell to a target cell by using the appropriate bit in each of the locational codes to index a corresponding child of each intermediate cell. As an advantage, our method avoids the costly Boolean comparisons of the prior art.
Second, the position of any point in [0,1]^N can be converted into a set of locational codes by using the first method. These properties enable point and region location according to the present invention as described below in greater detail. In addition, the locational codes of a cell’s neighbors can be determined by adding and subtracting bit patterns to the cell’s locational codes. This property is used to eliminate recursion and memory accesses for table lookup during neighbor searches.
Point Location
As shown in FIG. 6, a point location operation 600, according to the invention, locates a leaf cell that contains a given point location [x, y] in [0,1]^2 in a bi-tree defined over the region [0,1]^2.
A first step converts the values of the coordinates of the point’s position to a set of locational codes 601 by multiplying each value by 2^LEVELROOT and truncating the resultant products to integers. The integers are represented in binary form.
A second step selects a starting cell 602, e.g., the root cell. The characteristics 603 of the cell 602 are tested 610, e.g., "is the cell 602 a leaf cell?" If true, the cell 602 is a target cell 609 containing the point.
While false, at each level k in the bi-tree, the (k-1)^th bits from each of the locational codes 601 are used to determine 630 an index to an appropriate next (child) cell 604 to be tested 610.
Note that all children of a cell are consecutively ordered to enable this indexing. The ordering can be done by storing the child cells or pointers to the child cells consecutively in a memory. When the indexed child cell has no children, the desired leaf cell has been reached and the point location operation is complete.
Unlike the prior art point location operation 300, our point location operation 600 does not require comparisons between the point position and mid-plane positions of each cell at each branching point. This eliminates N comparisons at each level during a traversal of an N-dimensional bi-tree.
For example, to locate a point in a level 0 cell of an eight-level octree, the prior art operation requires an additional 24 (= 3×8) comparisons to branch to the appropriate children of intermediate cells. These additional comparisons in the prior art operation exhibit mispredictions as described above.
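The descent described above can be sketched as follows for a small bi-tree. The Cell class and six-bit codes are illustrative assumptions for exposition, not the patented data layout:

```python
class Cell:
    def __init__(self, children=None, data=None):
        # 2^N consecutively ordered child cells, or None for a leaf
        self.children = children
        self.data = data

ROOT_LEVEL = 5  # assumption: six-bit locational codes, as in FIG. 5

def point_locate(root, codes):
    """Descend from the root cell, using bit (k-1) of each dimension's
    locational code to index the child at each level k; no Boolean
    comparisons against mid-plane positions are needed."""
    cell, level = root, ROOT_LEVEL
    while cell.children is not None:
        level -= 1
        index = 0
        for d, code in enumerate(codes):
            index |= ((code >> level) & 1) << d
        cell = cell.children[index]
    return cell
```

For a one-dimensional bi-tree whose left half [0, 0.5) is subdivided once, locating the point 0.3 (locational code int(0.3*32) = 9 = 001001) branches left at the root (bit 4 is 0) and then right (bit 3 is 1), reaching the leaf [0.25, 0.5).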
Region Location
Region location finds a smallest cell or set of cells that encloses a given region. Our method finds a single smallest cell entirely enclosing a rectangular, axis-aligned bounding box.
Our method provides for region location in N-dimensional bi-trees. Our method first determines a size of a smallest enclosing cell. Then, a variation of the point location method described above is used to traverse the bi-tree from a root cell to the smallest enclosing cell.
We determine the size, i.e., level, of the smallest enclosing cell by XOR’ing each corresponding pair of locational codes (lc) of a minimum vertex v0 and a maximum vertex v1 defining the region to generate a binary code (bc), i.e., bc=(lc0 XOR lc1).
Each binary code is then searched from the left (MSB) to find the first “one” bit of the set of binary codes, indicating a first level below a root level where at least one of the pairs of locational codes differ. The level of the smallest enclosing cell is then equal to a bit number of the “zero” bit immediately preceding this “one” bit.
Given this level, our method then traverses the bi-tree downward from the root cell following the bit pattern of the locational codes of any of the region vertices, e.g., the minimum vertex, until a leaf cell is encountered OR a cell of the determined size is reached. This yields the desired enclosing (target) cell. We use the logical OR operator here to indicate either one or both conditions will terminate the traversal of the bi-tree.
Note that there are several methods for identifying the highest order “one” bit in the binary codes ranging from a simple shift loop to processor specific single instructions, which bit-scan a value, thereby eliminating the loop and subsequent comparisons.
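The XOR step and the highest-order "one" bit scan reduce to a few lines. A sketch for one dimension, using Python's built-in bit_length as the bit-scan (an implementation choice, not mandated by the method):

```python
def smallest_enclosing_level(lc0, lc1):
    """Level of the smallest cell enclosing a region whose minimum and
    maximum vertices have locational codes lc0 and lc1: the bit number of
    the zero bit immediately preceding the first one bit of the XOR."""
    bc = lc0 ^ lc1
    return bc.bit_length()
```

In N dimensions, the enclosing level is the maximum of this value over the N pairs of locational codes.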
As a first one-dimensional example, a region [0.31, 0.65) of the bi-tree 400 has left and right locational codes 001001 and 010101 respectively. By XOR’ing these locational codes, a binary code 011100 is obtained, with a first “one” bit from the left (MSB) encountered at bit position four (recall that bit positions are numbered from zero starting at the right-most, LSB, bit), so that the level of a smallest enclosing cell is five, i.e., the smallest enclosing cell of the region [0.31, 0.65) is the root cell.

As a second one-dimensional example, a region [0.31, 0.36] of the bi-tree 400 has locational codes 001001 and 001010. The XOR step yields 000011, with a first “one” bit from the left encountered at bit position one, so that the level of a smallest enclosing cell is two. The smallest enclosing
As a second two-dimensional example, a given cell’s bottom-left leaf cell vertex neighbor is located by traversing the two-dimensional bi-tree, i.e., the quadtree, downward from the root cell using the x locational code of the given cell’s smallest possible left neighbor and the y locational code of the given cell’s smallest possible bottom neighbor until a leaf cell is encountered.
After the locational codes of a desired neighbor have been determined, the desired neighbor can be found by traversing the bi-tree downward from the root cell. However, it can be more efficient to first traverse the bi-tree upward from the given cell to a smallest common ancestor of the given cell and the neighbor, and then traverse the bi-tree downward from the smallest common ancestor to the neighbor.

In N dimensions, the N locational codes of a cell are XOR’ed with N corresponding locational codes of its neighbor generating N difference codes. The highest level cell reached by the upward traversal using the N difference codes is the smallest common ancestor.
As a first example, a difference code for a level 3 cell 501, [0.25, 0.5), in the one-dimensional bi-tree 400 and its right neighbor is 011000 (001000 XOR 010000). Traversing the bi-tree upward from level 3 considers bits in this difference code to the left of bit 3. A first 0 bit is reached at LEVEL_ROOT, so a smallest common ancestor of cell 501 and its right neighbor is the root cell.
As a second example, a difference code for a level 3 cell 505, [0.75, 1], in the one-dimensional bi-tree 400 and its left neighbor is 001111 (011000 XOR 010111). Examining bits to the left of bit 3 yields a first 0 at bit 4, corresponding to a level 4 cell. Hence, a smallest common ancestor of the cell 505 and its left neighbor is the cell’s parent cell 506, which has a locational code 507, 010000.
Depending on the application, several different variations of neighbor searches might be required, e.g., finding a smallest left neighbor of size at least as large as the given cell and finding all of the leaf cell neighbors touching a specified vertex of the given cell.
There are several advantages of the neighbor finding method according to the present invention over traditional methods. First, because we treat each dimension independently, our method works in any number of dimensions. In contrast, prior art methods use table lookups that work only for two- and three-dimensional bi-trees. Construction of these tables has relied on being able to visualize spatial relationships in two and three dimensions; extending these tables to higher dimensions is thus exceedingly difficult, error prone, and tedious to verify. In fact, although higher-dimensional bi-trees are of great utility in fields such as computer vision, scientific visualization, and color science, tables for neighbor searching in these higher dimensional bi-trees are not known.
Second, our method trades off traditional table lookups, which require memory accesses, for simple register-based computations in the form of bit manipulations. This is advantageous in modern system architectures where processor speeds exceed memory speeds. Even in modern systems with fast cache memory, the application data and the table data compete for the cache in many practical applications, forcing frequent reloading of the table data from memory, thus degrading the performance of table-based prior art methods.
In addition, prior art neighbor searching methods and tables have been devised for a limited variety of neighbor search spaces. Traditional neighbor searches require different methods for face, edge, and vertex neighbors and “vertex neighbors are considerably more complex,” see H. Samet, “Applications of Spatial Data Structures: Computer Graphics, Image Processing, GIS,” Addison-Wesley, Reading, Mass., 1990. In contrast, our method uses a single approach for all varieties of neighbor searching. Furthermore, prior art tables are specialized for a given cell enumeration and must be re-determined for different cell labeling conventions. Generating tables for different conventions and different types of neighbor searches is difficult, error prone, and tedious to verify.
Finally, our neighbor searching method is inherently non-recursive and requires fewer Boolean operations than traditional methods. In contrast, traditional methods for neighbor searching are recursive, and unrolling the recursion is non-trivial. A non-recursive neighbor searching method for quadtrees and octrees is described by Bhatacharyya in “Efficient Neighbor Finding Algorithms in Quadtree and Octree,” M. T. Thesis, Dept. Comp. Science and Eng., Indian Inst. Technology, Kanpur, 2001. However, that method is limited to finding neighbors of the same size or larger than a given cell. In addition, like Samet’s, that method requires table-based traversal to determine the appropriate neighbor. Hence, that method suffers from the same limitations of traditional neighbor searching methods as described above.
Ray Tracing
Ray tracing a three-dimensional graphical object stored in a three-dimensional bi-tree, i.e., an octree, requires determination of an ordered sequence of leaf cells along a ray passing through the bi-tree, testing each non-empty leaf cell for ray-surface intersections, and processing the ray-surface intersections.
Three-dimensional ray tracing is used extensively in computer graphics. In addition, there are numerous applications for the determination of an ordered sequence of leaf cells along a ray passing through an N-dimensional bi-tree in fields such as telecommunications, robotics, and computer vision.
As illustrated in FIG. 7, according to the present invention, a first step determines a point 702 where a ray 701 first enters a two-dimensional bi-tree. A second step determines a leaf cell 703 and its locational codes using our point location method (described above) for the point 702. A third step tests the cell 703 for a ray stopping condition, e.g., “is there a ray-surface intersection in the cell?”
If the test fails, locational codes of a next cell 706 along the ray 701 are determined in two steps from the locational codes of the cell 703, a direction of the ray 701, and a size of the cell 703.
The first step determines a subset of coordinates of an exit point 705 whose values are equal to the values of corresponding coordinates in the maximum or minimum vertices of the cell 703. This subset depends on where the ray 701 exits the cell 703, e.g., the subset consists of the x coordinate of the exit point 705 because the ray 701 exits the cell 703 on its right edge 704, where x = x_max of the cell 703. This subset of coordinates determines a corresponding subset of locational codes of the next cell 706 that are then determined from the locational codes and size of the cell 703 according to neighbor searching methods of the present invention described above.
The second step determines the remaining locational codes to the next cell 706 from the locational codes determined in the first step and an equation of the ray 701. Finally, the locational codes to the next cell 706 are used to traverse up the bi-tree to a common ancestor of the cells 703 and 706 and back down to the neighbor 706 according to neighbor searching methods of the present invention described above.
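The choice of exit coordinates in the first step can be sketched as a standard axis-aligned slab computation. This helper is an illustrative assumption (the patent does not prescribe a particular formula), covering a parametric ray origin + t*direction and a cell given by its minimum and maximum vertices:

```python
def exit_axis(origin, direction, cell_min, cell_max):
    """Return the axis whose bounding plane the ray exits first, and the
    ray parameter t at that exit point."""
    best_axis, best_t = 0, float('inf')
    for axis, (o, d, lo, hi) in enumerate(zip(origin, direction,
                                              cell_min, cell_max)):
        if d > 0:
            t = (hi - o) / d      # exits through the maximum-vertex plane
        elif d < 0:
            t = (lo - o) / d      # exits through the minimum-vertex plane
        else:
            continue              # ray is parallel to this pair of planes
        if t < best_t:
            best_axis, best_t = axis, t
    return best_axis, best_t
```

For a ray like that of FIG. 7 leaving a cell through its right edge, the returned axis is x, so the x locational code of the next cell is obtained by the neighbor arithmetic while the remaining codes follow from the ray equation.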
This process of determining next cells along the ray 701 is repeated to determine an ordered sequence of leaf cells along the ray 701 until the ray stopping condition is satisfied.
Our method can be applied to both top-down and bottom-up tree traversal approaches for ray tracing while avoiding the Boolean operations, recursion, and incremental stepping along the ray in increments proportional to a smallest possible leaf cell, used in the prior art.
Effect of the Invention
The invention provides a method for point location, region location, neighbor searching, and ray-tracing for bi-trees which is simple, efficient, works in any number of dimensions, and is inherently non-recursive. The method according to the invention significantly reduces the number of Boolean operations with poor predictive behavior and does not require accessing memory as necessitated by table lookups.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
We claim:
1. A method for traversing a bi-tree stored in a memory to locate application specific data stored in the memory and associated with the bi-tree, wherein the bi-tree comprises a spatial partitioning of an N-dimensional space into a hierarchy of cells, wherein each cell, starting from a root cell enclosing the N-dimensional space, is successively and conditionally partitioned into 2^N child cells along N mid-planes of the cell, and wherein each cell has associated characteristics comprising the application specific data and child cells are indexed directly from a parent cell, comprising:
specifying a set of locational codes;
specifying a cell;
testing whether the characteristics of the cell satisfy a termination condition and applying only an arithmetic operation on the set of locational codes to directly index a next cell to be tested while false; and otherwise
identifying the cell as a target cell if true; and
retrieving the application specific data of the target cell from the memory.
2. The method of claim 1 wherein the step specifying the locational codes further comprises:
determining N coordinates of an N-dimensional point in the N-dimensional space;
multiplying, for each coordinate, a value of the coordinate by \(2^k\) where \(k\) is a level of the root cell of the bi-tree to produce a result; and
converting the result to a binary form to specify the corresponding locational code.
3. The method of claim 1 wherein each of the locational codes in the set is determined from a corresponding coordinate of a point in the N-dimensional space, the cell is the root cell of the bi-tree, and the termination condition is satisfied when the cell is a leaf cell.
4. The method of claim 3 further comprising:
representing an N-dimensional rectangular region containing the point by a minimum vertex \(v_0\) and a maximum vertex \(v_1\);
satisfying the termination condition when a leaf cell is reached OR when a termination level of the bi-tree is reached, wherein the termination level is determined from the locational codes of the minimum vertex \(v_0\) and the maximum vertex \(v_1\) to find a smallest cell enclosing the N-dimensional rectangular region.
5. The method of claim 4 wherein the determining of the termination level comprises:
determining, for each coordinate of the N-dimensional space, a candidate level, the determining of the candidate levels further comprising:
determining a corresponding locational code \(lc_{c0}\) from the coordinate of the minimum vertex;
determining a corresponding locational code \(lc_{c1}\) from the coordinate of the maximum vertex;
determining a binary code \(bc=(lc_{c0} \text{ XOR } lc_{c1})\);
determining a first one-bit in the binary code \(bc\) from the left MSB;
setting the candidate level to a bit number of a 0-bit immediately to the left of the first one-bit;
setting the termination level to a maximum of the candidate levels.
6. The method of claim 1 wherein the locational codes are determined by an arithmetic operation on a given cell and a direction from the given cell and wherein the termination condition is satisfied when a leaf cell is reached OR when a termination level is reached in the bi-tree to find a neighboring cell of the given cell in the direction.
7. The method of claim 6 wherein the cell is the root cell of the bi-tree.
8. The method of claim 6 wherein the cell is a common ancestor of the given cell and the neighboring cell.
9. The method of claim 6 wherein the arithmetic operation further comprises:
determining, for each coordinate of the N-dimensional space, a corresponding locational code \(lc_{cell}\) of the given cell;
adding, for each coordinate of the N-dimensional space, a size of the given cell to the corresponding locational code \(lc_{cell}\) if a corresponding coordinate value of a minimum vertex of the neighboring cell is equal to a corresponding coordinate value of a maximum vertex of the cell; and
subtracting, for each coordinate of the N-dimensional space, a size from the corresponding locational code \(lc_{cell}\) if a corresponding coordinate value of a maximum vertex of the neighboring cell is equal to a corresponding coordinate value of a minimum vertex of the cell.
10. The method of claim 9 wherein the size is a size of a smallest cell in the bi-tree.
11. The method of claim 8 wherein the determining of the common ancestor comprises:
determining a candidate level for each coordinate of the N-dimensional space, the determining of the candidate level comprising:
determining a locational code \(lc_{cell}\) of the given cell for the coordinate;
determining a locational code \(lc\) of the neighboring cell for the coordinate;
determining a binary code, \(bc=(lc \text{ XOR } lc_{cell})\);
determining a bit \(b_{c0}\) in \(bc\) whose bit number is a level of the given cell;
determining a first zero-bit \(b_{candidate}\) in \(bc\) that is to the left of \(b_{c0}\);
determining the candidate level as the bit number of the \(b_{candidate}\);
setting a stopping level to a maximum of the candidate levels; and
determining the common ancestor by following parent pointers upward from the given cell until the stopping level is reached.
12. The method of claim 2 wherein the N-dimensional space is a minimum vertex of an arbitrary cell in the bi-tree to specify the N locational codes of the arbitrary cell.
13. The method of claim 2 further comprising:
tracing a ray through the bi-tree to specify a plurality of points where the ray intersects selected cells in the bi-tree;
determining, for each point, a set of locational codes; and
identifying, for each set of locational codes, the corresponding target cell that is a leaf cell.
14. The method of claim 13 wherein the selected cells are leaf cells for a bottom-up traversal of the bi-tree.
15. The method of claim 13 wherein the identifying of the target cells terminates when a ray stopping condition is satisfied.
16. The method of claim 13 wherein the target cells are identified in an order that the ray is traced.
17. The method of claim 13 wherein the cell is the root cell of the bi-tree, and a first set of locational codes is determined for an entry point where the ray enters the bi-tree, and each next set of locational codes is determined from an exit point where the ray exits the target cell.
18. The method of claim 17 wherein, for each next set of locational codes, the cell is a common ancestor of the target cell and a neighboring cell adjacent to the exit point where the ray exits the target cell.
19. The method of claim 17 wherein the next set of location codes is determined by a second arithmetic operation on locational codes of the selected cells of the bi-tree.
20. The method of claim 19 wherein the second arithmetic operation for determining the next set of locational codes comprises:
determining a subset of coordinates of the exit point whose values are equal to the corresponding coordinate values in the maximum and minimum vertices of the target cell;
determining, for each coordinate in the subset, a corresponding locational code of the next set of locational codes, further comprising:
determining a corresponding locational code \( l_{cell} \) of the target cell;
performing an arithmetic operation on \( l_{cell} \) and a direction of the ray to determine the corresponding locational code; and
determining the locational codes of the next set of locational codes that are not in the subset by performing an arithmetic operation on the ray and the subset of locational codes.
21. The method of claim 15 wherein the ray stopping condition is satisfied when the ray intersects a graphical element stored in the target cell.
22. The method of claim 15 wherein the ray stopping condition is satisfied when the ray exits the bi-tree.
23. The method of claim 1 where \( N \) is an integer greater than zero.
24. A system for traversing a bi-tree stored in a memory to locate application specific data stored in the memory and associated with the bi-tree, wherein the bi-tree comprises a spatial partitioning of an \( N \)-dimensional space into a hierarchy of cells, wherein each cell, starting from a root cell enclosing the \( N \)-dimensional space, is successively and conditionally partitioned into \( 2^N \) child cells along \( N \) midplanes of the cell, and wherein each cell has associated characteristics comprising the application specific data and child cells are indexed directly from a parent cell, comprising:
means for specifying a set of locational codes;
means for specifying a cell;
means for testing whether the characteristics of the cell satisfy a termination condition and applying only an arithmetic operation on the set of locational codes to directly index a next cell to be tested while false; and otherwise
means for identifying the cell as a target cell if true; and
means for retrieving the application specific data of the target cell from the memory.
* * * * *
University of Virginia
Charlottesville, VA 22903
USA
Abstract
A real-time database system has timing constraints associated with transactions and the database. To ensure that such a system completes as many transactions as possible without violating their timing constraints, its scheduling strategy should be dynamic and use information about the timing constraints associated with transactions and the database. Ideally, to enhance the predictability of the system, such a scheduling strategy should be used in all situations where there is resource contention. This paper describes an intelligent dynamic scheduling strategy for scheduling transactions in real-time database systems. The scheduling strategy uses timing information about transactions and the database to enhance the system’s ability to meet transaction deadlines. The performance of the scheduling strategy is tested by using it in a simulated pulse detection system.
Key words: real-time database, concurrency control, time-critical scheduling, priority, locking
1. Introduction
1.1. What are Real-time Database Systems?
Real-time database systems are database systems that support real-time computing. *Real-time computing* is that type of computing where the correctness of the system’s response depends not only on the logical result of the computation, but also on the time at which the results are produced [Stan88A]. The timing constraint on the system’s response is called *deadline*. Traditional real-time systems have concentrated on systems which have hard deadlines. If a system misses a hard deadline, the consequences can be disastrous. On the other hand, if the system misses a soft deadline, there may still be some value for computing the response of the system. Real-time systems are assuming an increasingly important role in our society. Examples of current real-time computing systems are command and control systems, aircraft avionics, robotics, network management, and program trading.
Most of the complex real-time computing applications need to access large amounts of data. Thus, we need database systems which are cognizant of the requirements of real-time computing, i.e., *real-time database systems*. Transactions in a real-time database system are required to perform operations on the database, like read, write, insert, and delete, subject to timing constraints. An example of a real-time database system is a pulse detection system. A pulse detection system is used to track objects using radars. The information about objects in reality is maintained in a database of emitter files. Typically, a pulse detection system consists of simultaneously active transactions, with different timing constraints and resource requirements, which read and update the database of emitter files.
1.2. Comparison with Conventional Real-time Systems and Database Systems
A real-time database system has similarities as well as differences with conventional real-time systems and database systems.
The following are the similarities between real-time database systems and conventional database systems. First, both systems process transactions which access data items according to the consistency constraints of the database. Second, both systems have transactions with complex and unpredictable data requirements.
The following are the differences between real-time database systems and conventional database systems. First, transactions in conventional database systems have no timing constraints. The goal of conventional database systems is to reduce the average response time of the transactions being processed rather than trying to satisfy the timing constraint of individual transactions. Second, the consistency constraints that exist in conventional database systems are strict serializability constraints which are not always needed in real-time database systems.
The similarity between real-time database systems and conventional real-time systems is that both systems process entities (tasks and transactions) which have timing constraints.
The following are the differences between real-time database systems and conventional real-time systems. First, tasks in conventional real-time systems have hard deadlines whereas transactions in real-time database systems can have soft deadlines. Second, data in conventional real-time systems normally do not have consistency constraints. Third, tasks in a conventional real-time system have simple and predictable data or resource requirements.
1.3. Validity Constraints
Deadlines are timing constraints associated with transactions. There exists another kind of timing constraint, associated with transactions and the data objects in the database. In a database, there may be some data objects which become old or out-of-date if they are not updated within a certain period of time. To quantify this notion of age we associate with each data object a degree of validity which decreases with time. The validity curve associated with each data object is a plot of the degree of validity of the data object with respect to the time elapsed after the object was last modified. Fig. 1 shows an example validity curve for data objects.
If \( w \) is the time of last modification of a data object and \( t \) is the current time, we can calculate the validity of the data object at time \( t \) from its validity curve. Now, a transaction may require all the data objects it reads to have a minimum degree of validity. This constraint could be either hard or soft, like deadlines. Scheduling decisions could be made more intelligent by incorporating this validity information about transactions and data objects they read.
1.4. Scheduling Problem in Real-Time Database Systems
Scheduling theory is used in widely different areas like general computer systems, operations research, real-time systems, database systems, and finally, real-time database systems. The common aspects about scheduling in all the above disciplines are:
- there is a scarce resource,
- there is more than one entity wishing to use the resource,
- the scheduling decision is choosing the entity to which the resource should be granted next.
This is the most general and abstract description of the scheduling problem. The scheduling problem is made more specific to the application depending on the characteristics of the resource, characteristics of the entities using the resource and the way the scheduling decision is made. The resource could be preemptible or non-preemptible. The entity to be scheduled could be a task or a transaction. The scheduling decision could be made with the aim of optimizing some performance metric with regard to certain resource and/or timing constraints.
In operations research scheduling problems, there is a fixed system having completely specified and static service characteristics. The goal is to find optimal static schedules which minimize the response time for a given task set [Stan88A].
In database systems, a scheduler accepts database operations from transactions and schedules them appropriately for the data manager [Bern87]. In conventional database systems, the scheduler is entrusted with the task of enforcing serializability constraints of the database. In this case, the resource is a data item, the entity to be scheduled is a database operation and the scheduling decision is made according to the consistency constraints of the database.
In normal operating systems, there are tasks waiting for resources like the CPU or the I/O processor. The scheduling decision can be made according to well known scheduling algorithms like priority based scheduling or round robin scheduling.
In real-time systems, there exist tasks contending for scarce resources. But, unlike the above disciplines, the entities to be scheduled (tasks) have timing constraints (deadlines). There is generally no incentive to minimize the response time other than meeting deadlines.
In real-time database systems, transactions which have timing constraints contend for scarce resources. But, the scheduling strategies devised for conventional database systems or real-time systems cannot be applied to real-time database systems because of the differences that exist between them (see section 1.2). In real-time database systems, it is necessary to take into
account the timing constraints associated with the transactions as well as the consistency constraints associated with the database while making scheduling decisions.
1.5. Static versus Dynamic Scheduling
It is possible to statically guarantee real-time constraints by pre-calculating all possible schedules of transactions off-line. There are two reasons why this approach is infeasible [Stan88B]. First, the task of finding all possible schedules of transactions is NP-hard. Therefore, the task becomes computationally intractable when there are a large number of simultaneously active transactions. Second, the demands on a real-time database system can change frequently. For example, aperiodic transactions, by their very nature, can be activated at unpredictable times. Therefore, a dynamic scheduling strategy is needed to make the system more flexible. Also, to make "intelligent" scheduling decisions, the scheduling strategy should use as much timing information as possible about transactions and the data objects they access.
A scheduler in database systems accepts database operations from transactions and schedules them appropriately for the data manager [Bern87]. In a distributed system, each site has its own scheduler which can receive database operations from transaction managers at different sites. In conventional database systems, the scheduler is entrusted with the task of enforcing the serializability constraints. In real-time database systems, it is also necessary to take into account the timing constraints associated with the transactions and the database while making scheduling decisions.
However, to guarantee real-time constraints, it may be insufficient to use the extra information about transactions only while scheduling database operations. This is because transactions interact with the operating system and the I/O subsystem in extremely unpredictable ways. For example, we have no control over the way the scheduling decisions are made for scarce resources at the operating system level. Therefore, to improve the predictability of real-time database systems, i.e., to enhance the guarantee of meeting real-time constraints, we should use the additional
information about transactions to make scheduling decisions at all places where more than one transaction tries to use (or access) a scarce resource. This scarce resource could be the CPU, a data object, or the communications subsystem.
2. The Scheduling Algorithm
In this section, we describe a dynamic scheduling strategy for transactions in real-time database systems. The scheduling strategy uses timing and validity information about transactions and data objects to calculate dynamic priorities of transactions. These priorities are then used to make scheduling decisions at all places where transactions contend for scarce resources.
2.1. Information required for intelligent scheduling
This section discusses the nature of information about transactions required by the scheduling strategy and a way to represent it.
A transaction can be represented as a tuple (SP, RS, WS, A, D, E, MV). The elements of the tuple are described below.
(1) System priority (SP):
This is the static component of the dynamic priority associated with a transaction. It is a measure of the value to the system of completing the transaction within its timing constraints. For example, transactions dealing with emergency situations should have a higher priority than routine transactions.
(2) Read set (RS):
This is the set of data objects which the transaction reads.
(3) Write set (WS)
This is the set of data objects which the transaction writes.
(4) Arrival time (A):
This is the time at which the transaction arrives in the system.
(5) Deadline (D):
This is the time before which the transaction has to finish its execution. The transaction specifies whether the deadline is hard or soft.
(6) Runtime estimate (E):
This is the estimate of the processing time required by a transaction. This includes the time required for CPU as well as I/O operations.
(7) Minimum Validity ($V_{\text{min}}$):
This is the minimum degree of validity required of all objects read by the transaction. The transaction specifies whether this validity constraint is hard or soft.
The above information about the transaction is available to the system before the transaction is started and remains constant throughout the transaction execution. Since the scheduling strategy is dynamic, it needs information about the transaction which varies with time. The information which varies with time is described below.
(8) Read set validity (RSV):
This is the degree of validity of the data objects in the transaction's read set. The degree of validity of a data object can be calculated from its validity curve, which defines the degree of validity as a function of the time elapsed since the data object was last modified. Therefore, if we know the time the object was last modified, we can calculate the degree of validity of the data object at the current time.
(9) Processing time (P):
This is the processing time already received by the transaction. This includes the time required for CPU as well as I/O operations.
(10) Current time (T):
This is the time at which the scheduling decision is made.
2.2. Scheduling design issues
Before implementing any scheduling strategy, it is important to consider the overhead it requires. Obviously, a complicated scheduling strategy requires more time. This factor can be crucial in deciding whether it is of any practical benefit to use the extra information about transactions and the database in the scheduling strategy.
For instance, if the database is disk-resident and the transactions are I/O intensive, the time required for I/O operations would be large compared to the time required for doing CPU operations. In that case, it would not make a big difference whether or not we use a complicated scheduling policy at the CPU level. The bottleneck in this case would be the data objects and it would be imperative to schedule the database operations in an intelligent way. But if the database is memory resident and the transactions are CPU intensive then it would become necessary to use the extra information about transactions in the scheduling decision at the CPU level. Given below is a scenario which illustrates a situation where an intelligent scheduling strategy at the CPU level would be helpful.
Assume that transactions execute CPU and I/O instructions alternately. Let the time required for one session of CPU computation be 10 time units and the time required for one I/O operation be 2 time units (if there is no blocking). Let the transactions to be scheduled ($T_1$ and $T_2$) have the characteristics given below. This situation can arise if both $T_1$ and $T_2$ wait for some other transaction to release a data object. The transaction releases the data object at time 5. Thus, the scheduling decision has to be made at time 5.
<table>
<thead>
<tr>
<th>Transaction</th>
<th>Arrival time</th>
<th>Estimate</th>
<th>Deadline (Hard)</th>
<th>Operations</th>
</tr>
</thead>
<tbody>
<tr>
<td>$T_1$</td>
<td>0</td>
<td>12</td>
<td>30</td>
<td>read(1)</td>
</tr>
<tr>
<td>$T_2$</td>
<td>5</td>
<td>12</td>
<td>20</td>
<td>read(1)</td>
</tr>
</tbody>
</table>
According to an elementary FCFS scheduling strategy, $T_1$ is scheduled first and it completes at time 12. $T_2$ starts at time 10, but since it requires 12 time units to complete, it misses its deadline at time 20. (As shown in Fig. 2.1.)
If the system is intelligent enough to follow the elaborate scheduling strategy to be discussed in Section 4, $T_2$ would be scheduled first. (According to the least slack method of assigning priorities, $T_2$ has a higher priority than $T_1$, because the slack of $T_2$ is less than the slack of $T_1$.) In that case both transactions would meet their deadlines as shown in Fig. 2.2.
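The timings in this example can be checked with a small sketch (Python used for illustration; the 10-unit CPU burst, the 2-unit I/O operation, and the overlap of a job's I/O with the next job's CPU burst follow the scenario above, but all names and the representation are ours, not the paper's):

```python
# Illustrative sketch: least-slack versus FCFS for the T1/T2 scenario above.

def slack(job, now):
    # Slack = Deadline - current time - remaining processing time
    return job["deadline"] - now - job["estimate"]

def run(order, start):
    """Run jobs back-to-back on the CPU (10 units each); each job's final
    2-unit I/O overlaps the next job's CPU burst. Returns {name: finish}."""
    finish, cpu_free = {}, start
    for job in order:
        cpu_end = cpu_free + 10            # one session of CPU computation
        finish[job["name"]] = cpu_end + 2  # one I/O operation
        cpu_free = cpu_end                 # next job's CPU overlaps this I/O
    return finish

T1 = {"name": "T1", "estimate": 12, "deadline": 30}
T2 = {"name": "T2", "estimate": 12, "deadline": 20}
now = 5  # the data object is released at time 5

# FCFS: T1 runs first, so T2 finishes at 27 and misses its deadline of 20.
fcfs = run([T1, T2], now)
# Least slack: T2 (slack 3) runs before T1 (slack 13); both meet deadlines.
order = sorted([T1, T2], key=lambda j: slack(j, now))
ls = run(order, now)
```

With least-slack ordering the finish times are 17 for $T_2$ and 27 for $T_1$, matching the timeline of Fig. 2.2.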
An issue involved in designing a scheduling strategy is whether or not to allow preemption. The scheduling decision at the CPU level normally allows preemption. However, if we allow preemption at the data object level, we may have to abort the preempted transaction for maintaining consistency of the database. The general problem descriptions for the two cases without having a particular resource type in mind, are as the following:
Case 1. No preemption.
More than one transaction is requesting a resource, and we have to decide which transaction should be granted the resource. Once a transaction gets the resource, it runs until it finishes using the resource.
Case 2. Allow preemption.
There is a transaction currently holding a resource and there is a transaction requesting the same resource. We have to decide whether or not to preempt the transaction holding the resource and grant the resource to the transaction requesting it.
When preemption is not allowed, the scheduling decision has to be made whenever a transaction relinquishes a resource or when a transaction requests a resource which is not being used. When preemption is allowed the scheduling decision has to be made whenever a transaction either requests or relinquishes a resource.
2.3. Real-time Database Scheduler
The scheduling strategy for transactions in real-time database systems can be decomposed into three sub-parts [Abbo88], [Abbo89]:
Fig. 2.2 Intelligent Scheduling ($T_2$ completes at time 17, $T_1$ completes at time 27)
(1) Determining eligibility
(2) Assigning dynamic priorities
(3) Making the final scheduling decision of granting the resource.
In this section we discuss each of these sub-parts in detail.
2.3.1. Determining eligibility
Before making a scheduling decision we have to decide whether the transactions involved are eligible for scheduling i.e. whether it is of any use to the system to start processing those transactions. If a transaction is ineligible for scheduling we abort it immediately.
We assume that, if a transaction misses a hard deadline, it is ineligible for scheduling and should be aborted. If a transaction misses a soft deadline, it is still eligible for scheduling. We also check whether it is possible for the transaction to finish before its deadline:
\[
(\text{Deadline} - \text{Current time}) \geq (\text{Estimate} - \text{Processing time received}), \quad \text{i.e.,} \quad (D - T) \geq (E - P)
\]
If it is not possible, and the deadline in question is hard, we consider the transaction ineligible for scheduling. However, if the deadline is soft, the transaction remains eligible for scheduling.
The steps taken in incorporating validity constraints are similar to those taken for deadlines. If a transaction misses a hard validity constraint then it is ineligible for scheduling and should be aborted. If the validity constraint missed is soft, then we continue executing the transaction at a different priority. We also check, for each data item read by the transaction, whether its degree of validity is greater than the minimum validity level expected by the transaction:
For all data objects d read by the transaction, \(V_d(T) \geq V_{\text{min}}\)
where, \(V_d(T)\) is the degree of validity of object d at time \(T\)
If that is not the case, and the validity constraint of the transaction is hard, we consider the transaction ineligible for scheduling. However, if the validity constraint is soft, the transaction
remains eligible for scheduling.
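As a rough sketch, the eligibility test described above might look as follows (the transaction representation and all field names are ours for illustration; the paper does not prescribe an implementation):

```python
# Hypothetical sketch of the eligibility test. A transaction is a dict with
# D (deadline), E (runtime estimate), P (processing time received),
# hard_deadline, V_min, and hard_validity; `validities` holds V_d(T) for
# each data object d the transaction reads.

def eligible(txn, now, validities):
    # Missed or infeasible deadline: fatal only if the deadline is hard.
    if now > txn["D"] or (txn["D"] - now) < (txn["E"] - txn["P"]):
        if txn["hard_deadline"]:
            return False      # abort: a hard deadline cannot be met
    # Validity: every object read must have at least V_min validity.
    if any(v < txn["V_min"] for v in validities):
        if txn["hard_validity"]:
            return False      # abort: hard validity constraint violated
    return True               # soft misses stay eligible (at lower priority)
```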
2.3.2. Assigning dynamic priorities
The dynamic priority of a transaction is a number calculated by the scheduler while making the scheduling decision. It is a measure of the importance, to the over-all goals of the system, of scheduling that transaction before others at that point in time [Stra89]. Since this measure may change with time, it has to be calculated dynamically every time two transactions are compared during the scheduling decision making process.
Dynamic priority (DP) is a weighted sum of the following factors:
1. System priority (SP): It is the static component of dynamic priority.
2. Slack with respect to deadline (SDL): It is the amount of time the transaction can be delayed and still meet its deadline. It is calculated as follows:
\[
\text{SDL} = \text{Deadline} - \text{Current time} - (\text{Estimate} - \text{Processing time}) = D - T - (E - P)
\]
3. Slack with respect to minimum validity constraints (SV): It is the amount of time the transaction can be delayed and still be completed without violating its validity constraints.
\[
\text{SV} = \max \{ t \mid \text{For all data objects } d \text{ read by the transaction, } V_d(T + t) \geq V_{\text{min}} \}
\]
where, \(V_d(T + t)\) is the degree of validity of object \(d\) at time \((T + t)\), assuming no updates between time \(T\) and \((T + t)\).
Dynamic Priority (DP) is calculated as follows:
\[
DP := DP_1 + DP_2 + DP_3
\]
where,
\[
\begin{align*}
DP_1 &= W_1 \times SP \\
DP_2 &= W_2 \times SDL \\
DP_3 &= W_3 \times SV
\end{align*}
\]
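A minimal sketch of this priority calculation (the default weights $W_2 = W_3 = -1$ match the later example; everything else is illustrative):

```python
# Dynamic priority DP = W1*SP + W2*SDL + W3*SV, with W2, W3 negative so
# that a smaller slack yields a higher priority.

def dynamic_priority(SP, SDL, SV, W1=1.0, W2=-1.0, W3=-1.0):
    return W1 * SP + W2 * SDL + W3 * SV

def deadline_slack(D, T, E, P):
    # SDL = D - T - (E - P)
    return D - T - (E - P)
```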
The factors involved in determining the dynamic priority of a transaction have constraints closely related to the characteristics of real-time transactions. First, $W_1 > 0$, since if SP increases, DP should increase. Also, if SDL > 0 then $W_2 < 0$, since if SDL decreases then DP should increase. If SDL < 0, then the transaction has already missed its deadline. Note that since the transaction is still eligible for scheduling, the deadline missed must have been soft. At this point, there are two options available to us. We could reason as follows: Since the transaction has missed its deadline (soft), it should be finished as soon as possible, and hence its priority must be increased. In that case, $W_2 < 0$. However, we might reason that since the transaction has already missed its deadline, its priority should be reduced so that it does not interfere with other transactions in the system which are nearing their deadlines. In that case, $W_2 > 0$. Similar discussion applies to $W_3$ and SV.
The relative values of $W_1$, $W_2$, $W_3$ depend on the high level goals of the system. For example, some systems may aim at minimizing the number of transactions that miss their deadline, in which case $W_1$ would not be very high. Some systems might require that absolutely none of the higher priority transactions be aborted, in which case $W_1$ would be very high.
Given below is a scenario which illustrates that a scheduling strategy at the CPU level taking validity constraints into account does prevent unnecessary aborts of transactions. Assume that transactions use the CPU and do I/O operations alternately. Let the time required for one session of CPU computation be 10 time units and the time required for one I/O operation be 2 time units (if there is no blocking). Let the transactions to be scheduled ($T_1$ and $T_2$) have the characteristics given below.
<table>
<thead>
<tr>
<th>Transaction</th>
<th>Arrival time</th>
<th>Estimate</th>
<th>Deadline (Hard)</th>
<th>Minimum Validity (Hard)</th>
<th>Operations</th>
</tr>
</thead>
<tbody>
<tr>
<td>$T_1$</td>
<td>0</td>
<td>12</td>
<td>30</td>
<td>100%</td>
<td>read(1)</td>
</tr>
<tr>
<td>$T_2$</td>
<td>0</td>
<td>12</td>
<td>25</td>
<td>50%</td>
<td>read(1)</td>
</tr>
</tbody>
</table>
Let the validity curve for object 1 be as shown in Fig. 2.3, and the time it was last modified be 0. Let the weights $W_2$ and $W_3$ for calculating dynamic priorities be -1. This implies that, in the formula for calculating dynamic priorities, the slacks with respect to deadline and validity constraints have the same weight.
If validity constraints are not considered:
In this case, $DP := DP_1 + DP_2$. The slack of $T_1$ with respect to deadline is 18. The slack of $T_2$ with respect to deadline is 13. Therefore,
$$DP_2(T_1) = -18 \text{ and } DP_2(T_2) = -13.$$
i.e. $DP_2(T_2) > DP_2(T_1)$.
Assuming equal system priorities, $DP(T_2) > DP(T_1)$, implying that $T_2$ would be scheduled first. The execution would proceed as shown in Fig. 2.4. $T_2$ would finish its execution at time 12. Then $T_1$ would start. But, at time 20 the validity of object 1 would be 50%. This would violate the validity constraint of $T_1$, which would have to be aborted.

Fig. 2.3 Validity Curve
If validity constraints are considered:
In this case, \( DP := DP_1 + DP_2 + DP_3 \). The slack of \( T_1 \) with respect to validity constraints is 10. The slack of \( T_2 \) with respect to validity constraints is 20. Therefore,
\[
\begin{align*}
DP_3(T_1) &= -10 \text{ and } DP_3(T_2) = -20. \\
i.e. \quad DP_2(T_1) + DP_3(T_1) &= -28 \text{ and } DP_2(T_2) + DP_3(T_2) = -33 \\
i.e. \quad DP_2(T_1) + DP_3(T_1) &> DP_2(T_2) + DP_3(T_2).
\end{align*}
\]
Assuming equal system priorities, \( DP(T_1) > DP(T_2) \), implying that \( T_1 \) would be scheduled first. The execution would proceed as shown in Fig. 2.5. At time 10 the validity of object 1 would be 100%, satisfying \( T_1 \)'s validity constraints. Thus \( T_1 \) would finish its execution at time 12. Then \( T_2 \) would start. At time 20, the validity of object 1 would be 50%, satisfying \( T_2 \)'s validity constraints. Thus \( T_2 \) would finish its execution at time 22.
Thus, incorporating validity constraints in the scheduling strategy does prevent transactions from being aborted unnecessarily.
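The numbers in this scenario can be reproduced with a short sketch (the step function models the validity curve of Fig. 2.3; names and representation are illustrative):

```python
# Reproducing the validity-aware priority comparison above, with
# W2 = W3 = -1 and equal system priorities.

def validity(t):
    # Degree of validity of object 1, t time units after its last update
    # (Fig. 2.3 as a step function: 100% until 10, 50% until 20, then 0).
    return 100 if t < 10 else (50 if t < 20 else 0)

def validity_slack(v_min):
    # SV: how long the transaction can be delayed while the object it reads
    # keeps at least v_min validity (object last modified at time 0).
    t = 0
    while validity(t) >= v_min:
        t += 1
    return t

SDL = {"T1": 18, "T2": 13}            # deadline slacks at time 0
SV = {"T1": validity_slack(100),      # 10: needs 100% validity
      "T2": validity_slack(50)}       # 20: needs only 50% validity
DP = {t: -SDL[t] - SV[t] for t in ("T1", "T2")}  # DP2 + DP3 per transaction
```

Here `DP["T1"]` is -28 and `DP["T2"]` is -33, so $T_1$ is scheduled first and neither transaction is aborted.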
2.3.3. Making the final scheduling decision
The way the final scheduling decision is made depends on whether preemption is allowed or not. In the following discussion we assume that the transactions considered have already passed the eligibility test. Let us consider the scheduling algorithms for the two cases:
Case 1. No preemption.
More than one transaction is requesting a resource, and we have to decide which transaction should be granted the resource. In this case we grant the resource to the transaction with the highest dynamic priority.
Case 2. Allow preemption.
There is a transaction currently holding a resource and there is a transaction requesting the same resource. We have to decide whether to preempt the transaction holding the resource and grant the resource to the transaction requesting it.
Let $T_h$ be the transaction holding the resource and $T_r$ the transaction requesting it. Let $P(T_h)$ and $P(T_r)$ be the dynamic priorities of the two transactions. Let $P(T_h$ if preempted) be the priority of $T_h$ were it to be preempted by $T_r$. The algorithm is as follows:
IF $P(T_r) > \text{MAX}(P(T_h), P(T_h \text{ if preempted}))$ THEN
    IF RemainingTime($T_h$) > Slack($T_r$) THEN
        Preempt $T_h$;
    END;
END;
where RemainingTime($T_h$) = Runtime estimate $-$ Processing time received by $T_h$.
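The preemption rule can be rendered as a small Python function (a sketch with names of our choosing; priorities, remaining time, and slack are assumed to be computed elsewhere and passed in):

```python
# Sketch of the preemption decision: T_h holds the resource, T_r requests it.

def should_preempt(p_r, p_h, p_h_if_preempted, remaining_h, slack_r):
    """Preempt T_h only if T_r strictly dominates both the current and the
    would-be-preempted priority of T_h, AND waiting for T_h to finish
    (remaining_h) would exhaust T_r's slack."""
    if p_r > max(p_h, p_h_if_preempted):
        if remaining_h > slack_r:
            return True
    return False
```

The second test avoids needless aborts: if $T_r$ can afford to wait for $T_h$ to finish, no preemption takes place.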
2.3.4. Handling Periodic Transactions
There are many applications in real-time database systems which have periodic transactions. For example, a pulse detection system used in radar tracking needs to periodically read pulse data from antennas, process them, and then display them on an operator console [Hale89]. Periodic transactions are restarted after an interval of time equal to their period. If an execution of a periodic transaction does not complete before the end of its period, it is aborted and a new instance of the same transaction is restarted. From the scheduler's viewpoint, periodic transactions can be modelled as transactions having hard deadlines equal to their periods.
If a data object is updated by a periodic transaction with period ($T$), its validity curve can be similar to the one shown in Fig. 2.6. The form of the validity curve implies that the validity of the data object remains 100% during an interval $T$ after the object has been updated. Henceforth, it reduces by a fixed amount $v$ every $T$ time units. This makes the task of calculating the degree of validity of a data object easy. If $t$ is the time elapsed since the data object was last modified,
Degree of validity = 100 - ($t / T$) * $v$
where, "$/$" signifies integer division.
This behavior of the degree of validity of a data object is similar to the concept of normalized age of data objects [Song89]. For periodic transactions, the basic scheduling strategy for determining eligibility, assigning priorities and making the final decision remains the same as for aperiodic transactions.
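The degree-of-validity formula above translates directly into code (a one-line sketch; Python's `//` is the integer division the formula calls for):

```python
# Validity stays at 100% for one period after an update, then drops by a
# fixed amount v every further `period` time units (integer division).

def degree_of_validity(t, period, v):
    # t: time elapsed since the data object was last modified
    return 100 - (t // period) * v
```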
3. Simulation Study
3.1. Need for a Real-Life Application
The research on real-time transaction scheduling is still in its infancy. There exists no formal theoretical framework to analyze the performance of the existing scheduling algorithms. For this reason, experimentation is a necessity to compare the performance of different scheduling algorithms.
Until now, none of the algorithms proposed in previous studies have been evaluated in real systems. [Abbo88] and [Abbo89] present experimental results based on simulation, whereas [Huan89] presents an integrated approach to study real-time transaction processing on a testbed system. In these studies semantically meaningless transactions are randomly generated with random system priorities, resource requirements, and timing constraints. The disadvantage of this approach is that it does not give the researcher a true feel for real-life problems. Also, for any scheduling strategy to be used in industry, it has to be supported by an extensive round of
experimentation with a *real-life* application.
We feel that in the area of real-time systems, there is a pressing need for a canonical problem which can be used to test different strategies for solving problems like scheduling or fault tolerance. An analogy can be drawn to the dining philosophers problem in the area of interprocess communication. For these reasons, we chose to simulate a pulse detection system, a real-life, real-time database system application, to evaluate the proposed scheduling algorithm.
### 3.2. What is a Pulse Detection System?
A pulse detection system is an example of a real-time database system [Hale89]. It is used to detect and track external objects by means of pulses (radar or sonar) received from them. The pulse detection system maintains information about each object in reality in a database of emitter files. It contains a number of simultaneously active transactions with different system priorities, timing constraints, and resource requirements.
Examples of periodic transactions are:
1. Transactions which collect pulse data from the radar.
2. Transactions which evaluate the pulse data received and perform the necessary operations in the database of emitter files.
3. Transactions which remove emitter files which have not been updated for a certain interval of time.
4. Transactions which monitor the operator's console for operator commands.
Examples of aperiodic transactions are:
1. Transactions which shoot missiles at the enemy objects.
2. Transactions which display information about the enemy objects.
Now, it is possible for some of these transactions to require the same resource at the same time. This is when the question of intelligent scheduling of transactions becomes extremely
important.
The simulation system we have implemented runs on a SUN Workstation (preferably SUN 3/75 with a color monitor). It is based on the scenario of a battleship surrounded by airborne enemy objects like aircraft or missiles. It consists of two windows: the *reality* window, and the *operator's console* window (see Fig. 3.1).

Fig. 3.1. Simulation Screen
The reality window consists of a stationary battleship at its center and the surrounding enemy objects. Each object has a position and velocity associated with it. An object is implemented as a process which calculates the new position of the object and displays it in the reality window. The reality window is managed by two modules: Object and Reality. The module Object is responsible for creating objects in reality, continuously updating their positions and detecting collisions. The module Reality is responsible for creating the reality window. It has a procedure called GetPulseData which simulates the operation of a radar by getting new pulse data of an object in reality.
The operator's console window displays the operator's view of reality as maintained by the pulse detection system. It is supposed to display the most current positions of enemy objects in reality. The operator's console window is managed by the modules: Detect and EmitterFile. The module EmitterFile maintains an emitter file to store information corresponding to each enemy object in reality.
The Detect module contains three periodic and two aperiodic transactions. Each transaction is implemented as a process. The following are the periodic transactions with a brief description of what they do.
1. **Track**: It calls Reality:GetPulseData to get new pulse data of an object in reality. It scans all the emitter files to find an emitter file which correlates with the pulse data received. If it finds such an emitter file, it updates it; else it creates a new emitter file with that pulse data.
2. **Clean**: It periodically scans the emitter files and deletes emitter files which have not been updated for a predetermined amount of time, assuming that the objects they represent have been destroyed.
3. **Operator Interaction**: This transaction accepts operator commands. For example, an operator may query the database to find more information about an emitter file, or he
may start a transaction to shoot an enemy object.
The operator interaction transaction, in turn, can start two aperiodic transactions, which are:
(1) **Display Information**: This transaction displays information about the object chosen by the operator.
(2) **Shoot Object**: This transaction shoots a missile at the object chosen by the operator.
### 3.3. The *Simulation Module*
Since this is a simulation of the original system, it is very important that the experimenter has control over the relative speeds of the transactions being executed and the amount of time a transaction needs to use a resource. This is done by using the *Simulation module*. The Simulation module is a general purpose module which contains the following procedures:
**PROCEDURE Hold** (delay : LONGINT);
(* This procedure is executed by processes whose execution is to be suspended for "delay" units of time. *)
**PROCEDURE Open** (VAR r : Resource; attr : ARRAY OF CHAR);
(* This procedure is used to create a resource with a name stored in attr *)
**PROCEDURE Close** (VAR r : Resource);
(* This procedure is used to delete a resource. A resource should not be deleted until its statistics are printed. *)
**PROCEDURE HoldR** (VAR r : Resource; delay : LONGINT);
(* This procedure is executed by processes which desire to use resource "r" for "delay" units of time. If more than one process desires to use the same resource at the same time, their requests are serialized according to a certain scheduling strategy. *)
**PROCEDURE SetSchedStrat** (schedStrat : CARDINAL);
(* This procedure can be called any time during the simulation to set the scheduling strategy to be followed. *)
**PROCEDURE SetPreemption** (preemption : BOOLEAN);
(* This procedure can be called any time during the simulation to decide whether preemption is allowed. *)
A process calls **Hold** to simulate the passage of time when it executes some actions. It calls **HoldR** when it uses some shared resource. If two or more processes want to use a resource at the same time, a decision has to be made in the Simulation module as to which process should be granted the resource. This decision is made by considering the attributes associated with the different processes according to some scheduling strategy.
Currently each process contending for a shared resource has the following attributes: (1) System priority; (2) Arrival time; (3) Deadline; (4) Run-time estimate; (5) Processing time it has received; and (6) Minimum validity of the data it reads.
The simulation system allows the researcher to choose the scheduling strategy followed, with or without preemption, and examine its effects on the pulse detection system. Currently the following strategies, with or without preemption, are supported: (1) First Come First Served; (2) System Priority; (3) Earliest Deadline First; (4) Least Slack First; and (5) a variant of the scheduling strategy presented in the previous chapter, which will henceforth be referred to as the *Combination* strategy. The *Combination* strategy uses the system priority (SP) and the slack with respect to the deadline (SDL) while making its scheduling decisions.
Our intention is to show that the performance of the pulse detection system can be enhanced by the use of intelligent scheduling algorithms. The performance of a scheduling strategy can be judged in two ways: (1) by the visual behavior of the simulated pulse detection system; or (2) by the information about successful completion of transactions displayed each time the scheduling strategy is changed.
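The supported strategies differ only in how they order the processes contending for a shared resource. A minimal Python sketch of such a selection function, using the process attributes listed above (the attribute names and the Combination weighting are our own illustrative assumptions, not the simulator's actual code):

```python
def pick_next(waiting, strategy, now):
    """Choose which waiting process is granted a contended resource.

    Each process is a dict with attributes from the text:
    'arrival', 'deadline', 'run_estimate', 'received', 'sys_priority'.
    """
    def slack(p):
        # Slack with respect to the deadline: time to deadline minus
        # the remaining processing time.
        return p["deadline"] - now - (p["run_estimate"] - p["received"])

    if strategy == "FCFS":
        return min(waiting, key=lambda p: p["arrival"])
    if strategy == "SystemPriority":
        return max(waiting, key=lambda p: p["sys_priority"])
    if strategy == "EDF":
        return min(waiting, key=lambda p: p["deadline"])
    if strategy == "LeastSlack":
        return min(waiting, key=slack)
    if strategy == "Combination":
        # Illustrative weighting: system priority (SP) first,
        # ties broken by smaller slack (SDL).
        return max(waiting, key=lambda p: (p["sys_priority"], -slack(p)))
    raise ValueError("unknown strategy: " + strategy)
```

A process with a distant deadline but ample remaining work can still have little slack, which is why Least Slack and Earliest Deadline First can rank the same queue differently.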
### 3.4. Simulation Assumptions
The following are the assumptions made about the simulations.
(1) The timing parameters of transactions like the run-time estimate or deadline are arbitrary and do not correspond to any realistic system. This is done because data about real systems is of a highly classified nature.
(2) The scheduling overhead is ignored. This assumption is supported by [Huan89].
(3) The consistency of the database is maintained using exclusive locks which are non-preemptible. A more efficient concurrency protocol would be the priority ceiling protocol using shared locks [Sha88].
(4) All transactions have hard timing and validity constraints. When a periodic transaction or an instance of a periodic transaction is started, the run-time estimate and the deadline parameters of the transaction are set.
(5) A transaction cannot use more than one resource at the same time.
3.5. Simulation Results
To make the differences in the performance of the different scheduling strategies obvious, two periodic dummy transactions were added to the system. This is justified since real-time systems do have certain background tasks which are not directly connected to the real-time application. The following are the dummy transactions and their characteristics:
(1) Dummy1: Low system priority, Tight deadline.
(2) Dummy2: High system priority, Loose deadline.
The simulation results can be grouped into three cases:
(1) Case 1: Dummy1, but not Dummy2, is activated.
(2) Case 2: Dummy2, but not Dummy1, is activated.
(3) Case 3: Both Dummy1 and Dummy2 are activated.
To quantitatively evaluate the results of a particular scheduling strategy, we calculate its figure of merit as follows:
\[
\text{figure of merit} = \sum_{\text{Transaction types}} (\% \text{ success})(\text{System Priority})
\]
where
\[
\% \text{ success} = \frac{(\text{No. of successful completions})}{(\text{No. of instances started})}
\]
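As a minimal sketch (the data layout is an assumption for illustration), the figure of merit defined above can be computed as:

```python
def figure_of_merit(stats):
    """Figure of merit of a scheduling strategy.

    stats maps each transaction type to a tuple
    (successful_completions, instances_started, system_priority).
    %success is on a 0-100 scale, matching the tables in the text.
    """
    total = 0.0
    for successes, started, priority in stats.values():
        pct_success = 100.0 * successes / started
        total += pct_success * priority
    return total
```

For instance, with the priorities Track = 2 and Shoot = 3, a run where half of the Track instances and all Shoot instances succeed yields 100 + 300 = 400.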
The system priorities of the different transaction types are shown in the following table.
<table>
<thead>
<tr>
<th>Transaction Type</th>
<th>System Priority</th>
</tr>
</thead>
<tbody>
<tr>
<td>Track</td>
<td>2</td>
</tr>
<tr>
<td>Clean</td>
<td>1</td>
</tr>
<tr>
<td>Operator Interaction</td>
<td>3</td>
</tr>
<tr>
<td>Shoot</td>
<td>3</td>
</tr>
<tr>
<td>Display Information</td>
<td>3</td>
</tr>
<tr>
<td>Dummy1</td>
<td>0</td>
</tr>
<tr>
<td>Dummy2</td>
<td>2</td>
</tr>
</tbody>
</table>
The simulation results based on the above performance metric are summarized in the following tables. The entries in the table are either quantitative (figures of merit) or qualitative (good or bad). The qualitative assessment is done by taking into account the visual behavior of the system.
3.5.1. When Preemption is Allowed
**Quantitative Assessment:**
<table>
<thead>
<tr>
<th>Scheduling Strategy</th>
<th>Case 1</th>
<th>Case 2</th>
<th>Case 3</th>
</tr>
</thead>
<tbody>
<tr>
<td>FCFS</td>
<td>300</td>
<td>500</td>
<td>511</td>
</tr>
<tr>
<td>System Priority</td>
<td>1159</td>
<td>504</td>
<td>500</td>
</tr>
<tr>
<td>Earliest Deadline First</td>
<td>1056</td>
<td>1220</td>
<td>828</td>
</tr>
<tr>
<td>Least Slack</td>
<td>305</td>
<td>1036</td>
<td>306</td>
</tr>
<tr>
<td>Combination</td>
<td>1114</td>
<td>1350</td>
<td>1194</td>
</tr>
</tbody>
</table>
**Qualitative Assessment:**
<table>
<thead>
<tr>
<th>Scheduling Strategy</th>
<th>Case 1</th>
<th>Case 2</th>
<th>Case 3</th>
</tr>
</thead>
<tbody>
<tr>
<td>FCFS</td>
<td>bad</td>
<td>bad</td>
<td>bad</td>
</tr>
<tr>
<td>System Priority</td>
<td>good</td>
<td>bad</td>
<td>bad</td>
</tr>
<tr>
<td>Earliest Deadline First</td>
<td>bad</td>
<td>good</td>
<td>bad</td>
</tr>
<tr>
<td>Least Slack</td>
<td>bad</td>
<td>good</td>
<td>bad</td>
</tr>
<tr>
<td>Combination</td>
<td>good</td>
<td>good</td>
<td>good</td>
</tr>
</tbody>
</table>
We observe that the FCFS strategy performs poorly in all three cases. This is because the FCFS strategy does not possess the requisite intelligence to prevent the dummy transactions from using the resources. This causes the more important transactions to miss their deadlines.
In Case 1, the dummy transaction activated has low priority but a tight deadline. The scheduling strategy based on system priority can filter out the dummy transaction. But the earliest deadline first and least slack first strategies do process the dummy transaction, thus causing the system to behave poorly.
In Case 2, the dummy transaction activated has high priority but a loose deadline. The earliest deadline first and least slack first strategies can filter out the dummy transaction. But, the scheduling strategy based on system priority does process the dummy transaction, thus causing the system to behave poorly.
In Case 3, dummy transactions of both kinds are activated. The Combination strategy works well since it uses information about system priority as well as information about the timing constraints while making its scheduling decision.
Thus, we observe that adding intelligence to the scheduling strategy does improve the system performance.
3.5.2. When Preemption is Not Allowed
Quantitative Assessment:
<table>
<thead>
<tr>
<th>Scheduling Strategy</th>
<th>Case 1</th>
<th>Case 2</th>
<th>Case 3</th>
</tr>
</thead>
<tbody>
<tr>
<td>FCFS</td>
<td>400</td>
<td>1200</td>
<td>608</td>
</tr>
<tr>
<td>System Priority</td>
<td>400</td>
<td>600</td>
<td>600</td>
</tr>
<tr>
<td>Earliest Deadline First</td>
<td>400</td>
<td>600</td>
<td>600</td>
</tr>
<tr>
<td>Least Slack</td>
<td>400</td>
<td>600</td>
<td>600</td>
</tr>
<tr>
<td>Combination</td>
<td>400</td>
<td>600</td>
<td>600</td>
</tr>
</tbody>
</table>
Qualitative Assessment:
<table>
<thead>
<tr>
<th>Scheduling Strategy</th>
<th>Case 1</th>
<th>Case 2</th>
<th>Case 3</th>
</tr>
</thead>
<tbody>
<tr>
<td>FCFS</td>
<td>bad</td>
<td>bad</td>
<td>bad</td>
</tr>
<tr>
<td>System Priority</td>
<td>bad</td>
<td>bad</td>
<td>bad</td>
</tr>
<tr>
<td>Earliest Deadline First</td>
<td>bad</td>
<td>bad</td>
<td>bad</td>
</tr>
<tr>
<td>Least Slack</td>
<td>bad</td>
<td>bad</td>
<td>bad</td>
</tr>
<tr>
<td>Combination</td>
<td>bad</td>
<td>bad</td>
<td>bad</td>
</tr>
</tbody>
</table>
As seen above, in general, scheduling strategies perform poorly when preemption is not allowed. From the output of the simulation runs it is observed that almost all of the track transactions miss their deadlines, implying that the operator’s console is empty most of the time. Due to this, the clean transactions trivially complete, since they have no emitter files to clean. But, it is almost impossible to start any transactions to shoot or display information about objects. Thus, the entire purpose of the pulse detection system is defeated.
4. Conclusion
Real-time database systems have timing and validity constraints associated with transactions. To ensure that such a system completes as many transactions as possible without violating their timing and validity constraints, its scheduling strategy should have the following characteristics. First and foremost, the scheduling strategy should be dynamic. Second, it should use the timing and validity information associated with transactions and the database. Third, the scheduling strategy should be used at all places where there is resource contention. Fourth, preemption should be allowed wherever possible.
In this project, we described a dynamic scheduling strategy for transactions in real-time database systems. The scheduling strategy uses timing and validity information about transactions and data objects to calculate dynamic priorities of transactions. These priorities are then used to make scheduling decisions at all places where transactions contend for scarce resources. The extra information used by the scheduler enables it to schedule transactions intelligently so that the system completes as many critical transactions as possible.
For any scheduling strategy to be used in industry, it has to be supported by an extensive round of experimentation with a real-life application. The simulation study conducted in this project used a pulse detection system as a real-life, real-time database application. The simulation results obtained showed that scheduling strategies for real-time database transactions can be made more intelligent by making use of extra information about transactions such as their system priority, resource requirements and timing constraints.
REFERENCES
Deliverable D4.2.3
Techniques for Compositional Risk-Based Security Testing v.3
Project title: RASEN
Project number: 316853
Call identifier: FP7-ICT-2011-8
Objective: ICT-8-1.4 Trustworthy ICT
Funding scheme: STREP – Small or medium scale focused research project
<table>
<thead>
<tr>
<th>Work package:</th>
<th>WP4</th>
</tr>
</thead>
<tbody>
<tr>
<td>Deliverable number:</td>
<td>D4.2.3</td>
</tr>
<tr>
<td>Nature of deliverable:</td>
<td>Report</td>
</tr>
<tr>
<td>Dissemination level:</td>
<td>PU</td>
</tr>
<tr>
<td>Internal version number:</td>
<td>1.0</td>
</tr>
<tr>
<td>Contractual delivery date:</td>
<td>2015-09-30</td>
</tr>
<tr>
<td>Actual delivery date:</td>
<td>2015-09-30</td>
</tr>
<tr>
<td>Responsible partner:</td>
<td>Fraunhofer</td>
</tr>
</tbody>
</table>
Abstract
Work package 4 has developed a framework for security testing guided by risk assessment. This framework, starting from security test patterns and test generation models, allows for a compositional security testing approach that is able to deal with large-scale networked systems. This deliverable is the final part of a series of three deliverables (D4.2.1, D4.2.2, D4.2.3) that document how the RASEN approach for risk-based security testing has been evolved through continuous and iterative updates. It provides the final update for the RASEN approach of formalizing test patterns using the Test Purpose Language, and it introduces the RASEN Testing Dashboard for Test Result Aggregation.
Keywords
Security testing, risk-based security testing, Test Purpose Language, fuzzing on security models, security testing metrics, large-scale networked systems, test selection, test prioritization
Executive Summary
The overall objective of RASEN WP4 is to develop techniques for the use of risk assessment as guidance and basis for security testing, and to develop an approach that supports a systematic aggregation of security testing results by means of security testing metrics. This comprises the development of a tool-based integrated process for guiding security testing by means of reasonable risk coverage and probability metrics. This deliverable is the third and final part of a series of three deliverables that define the overall RASEN approach for risk-based security testing. The earlier deliverables have introduced approaches for risk-based test identification and selection, the notion of test pattern, new fuzz testing techniques and the RASEN approach for pattern-driven and model-based vulnerability testing (PMVT). This deliverable updates the PMVT approach by showing the formalization and operationalization of test patterns using the Test Purpose Language. Moreover, it introduces metrics that classify test results at the testing level and shows their implementation by the RASEN Testing Dashboard. The RASEN Testing Dashboard allows for a concise visualization of metric results.
# Table of contents
<table>
<thead>
<tr>
<th>Section</th>
<th>Page</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 Introduction</td>
<td>6</td>
</tr>
<tr>
<td>2 FORMALIZING TEST PATTERNS WITH TEST PURPOSE LANGUAGE</td>
<td>7</td>
</tr>
<tr>
<td>2.1 Extension of the Test Purpose Language</td>
<td>7</td>
</tr>
<tr>
<td>2.1.1 Keyword Lists</td>
<td>9</td>
</tr>
<tr>
<td>2.1.2 Iterating the Result of an OCL Expression</td>
<td>9</td>
</tr>
<tr>
<td>2.1.3 Variable Usage in Nested “for each” Loops</td>
<td>10</td>
</tr>
<tr>
<td>2.1.4 Variable Usage in OCL Expressions</td>
<td>10</td>
</tr>
<tr>
<td>2.1.5 Stage Loops</td>
<td>10</td>
</tr>
<tr>
<td>2.1.6 Test Purpose Catalog</td>
<td>11</td>
</tr>
<tr>
<td>2.2 Vulnerability Test Purposes</td>
<td>11</td>
</tr>
<tr>
<td>2.2.1 Cross-Site Scripting</td>
<td>11</td>
</tr>
<tr>
<td>2.2.2 SQL Injections</td>
<td>13</td>
</tr>
<tr>
<td>2.2.2.1 Error-Based SQL Injections</td>
<td>13</td>
</tr>
<tr>
<td>2.2.2.2 Time Delay SQL Injections</td>
<td>14</td>
</tr>
<tr>
<td>2.2.2.3 Boolean-Based SQL Injections</td>
<td>15</td>
</tr>
<tr>
<td>2.2.3 Cross-Site Request Forgeries</td>
<td>16</td>
</tr>
<tr>
<td>2.2.4 Privilege Escalation</td>
<td>18</td>
</tr>
<tr>
<td>2.2.4.1 Privilege Escalation of Pages</td>
<td>18</td>
</tr>
<tr>
<td>2.2.4.2 Privilege Escalation of Action</td>
<td>19</td>
</tr>
<tr>
<td>2.3 Synthesis</td>
<td>19</td>
</tr>
<tr>
<td>3 SECURITY TEST RESULT AGGREGATION</td>
<td>21</td>
</tr>
<tr>
<td>3.1 List Up Metrics</td>
<td>21</td>
</tr>
<tr>
<td>3.2 Coverage Metrics</td>
<td>22</td>
</tr>
<tr>
<td>3.3 Efficiency Metrics</td>
<td>24</td>
</tr>
<tr>
<td>3.4 Process/Progress Related Metrics</td>
<td>25</td>
</tr>
<tr>
<td>3.5 The RASEN Testing Dashboard</td>
<td>25</td>
</tr>
<tr>
<td>3.5.1 Principles</td>
<td>25</td>
</tr>
<tr>
<td>3.5.2 Architecture</td>
<td>25</td>
</tr>
<tr>
<td>3.5.3 GUI</td>
<td>27</td>
</tr>
<tr>
<td>3.5.4 API and Implementation Guidelines</td>
<td>29</td>
</tr>
<tr>
<td>4 SUMMARY</td>
<td>31</td>
</tr>
<tr>
<td>REFERENCES</td>
<td>32</td>
</tr>
</tbody>
</table>
1 Introduction
The overall objective of RASEN WP4 is to develop techniques for risk-based security testing. Risk assessment is used to provide guidance and a basis for security testing, and to develop an approach that supports a systematic aggregation of security testing results. The objective includes the development of a tool-based integrated process for guiding security testing by means of reasonable risk coverage and probability metrics.
This deliverable is the third and final deliverable in a series of three deliverables that present techniques to address compositional security testing guided by risk assessment. Risk assessment is used to provide systematic guidance for planning, structuring and organizing the security testing process. The overall RASEN approach for risk-based security testing, which defines Innovation 3 of the project, has been described and detailed in the previous deliverable of this series [2]. The process is recalled in Figure 1.
Figure 1 – The RASEN process for risk-based security testing
This process starts with a risk model, a result obtained from the risk assessment that is created by using the CORAS method from SINTEF. This risk model allows identifying potential threat scenarios and vulnerabilities, and is used for the identification and prioritization of appropriate security test patterns. Based on the selected security test patterns, test cases are generated by combining information from the risk model, a test model and test generation techniques. The latter are composed of test purposes (formalizing the security test patterns) developed by UFC for Smartesting CertifyIt and fuzzing techniques implemented by the Fraunhofer FOKUS’s fuzzing library Fuzzino. Finally, test scripts are generated, compiled and executed against the application under test, and related test results are gathered and displayed in a Dashboard that provides various security testing metrics.
This deliverable focuses on results from the task dealing with automating test execution based on risk assessment in a compositional way, and from the task of developing metrics and a dashboard for security testing results based on risk assessment. Accordingly, Section 2 provides an update of the RASEN approach of formalizing test patterns using the Test Purpose Language. Section 3 presents metrics that classify test results at the testing level and their implementation in the RASEN Testing Dashboard, which allows for a concise visualization of test results and test metric results.
2 Formalizing Test Patterns with Test Purpose Language
Security test patterns, based on prioritized vulnerabilities from the CORAS model, provide a starting point for security test case generation by giving information on how appropriate security test cases can be created from risk analysis results. The security test patterns express in a textual manner the testing procedures to detect Web application threats. This way, they propose solutions to improve testing automation (data vector libraries, test metrics to complete test coverage criteria, etc.). Therefore, they are imported from the risk model elements and then formalized to drive and automate the test generation process. To enable automation, the test generation tool CertifyIt proposes a catalogue of generic test purposes that formalize test patterns. To summarize, a test purpose formalizes the intention of a given security test pattern and thus allows the expected test cases to be generated automatically with model-based testing techniques.
A test purpose is a high-level expression that formalizes a testing objective to drive the automated test generation on the test model. As introduced in deliverable D4.2.1 [1], such a test purpose can be seen as a partial algorithm (some steps may not be explicit) defining a sequence of significant steps that has to be executed by the test case scenario. Each step takes the form of a set of operations or behaviors to be covered, or a specific state to be reached on the test model, in order to assess the robustness of the application under test with respect to the related vulnerability to be tested.
A typical test purpose is composed of two main entities: iterators and stages.
- Stages define execution steps (in terms of states to be reached and operations to be executed) that the test generation engine must activate.
- Iterators specify the various contexts within which stages must be activated.
Thus, a typical test purpose has the construction introduced in Listing 1.
```
for_each Contexts
activate stage1
activate stage2
activate stage3
...
```
Listing 1 – Test purpose construction
A first version of the syntax, together with examples of practical use of the test purpose language, is described in Deliverable D4.2.1 [1], and its grammar is recalled in Figure 2. However, to make test purposes generic and to formalize complex and sophisticated attacks (required to conduct the RASEN case studies and thus to validate the proposed approach), this initial version has been extended. The next subsections respectively detail these additions and introduce the generic test purposes that formalize the four vulnerabilities identified during risk assessment of the RASEN case studies and targeted by test generation (namely Cross-Site Scripting, SQL Injections, CSRF, and Privilege Escalation).
2.1 Extension of the Test Purpose Language
Within the RASEN vulnerability testing approach, a test purpose formalizes the expression of the essence of a well-understood solution to a recurring software vulnerability testing problem and how it can be solved. To reach this goal, a test purpose captures in a generic way the test pattern part that concerns the test intention with one or several operational test purposes, in order to automatically produce the corresponding test cases with model-based testing.
Such a test purpose aims to be generic (meaning that it can be applied, without update, whatever the test model is) in order to be applied on several models to generate test sequences. However, current test purposes contain information coming directly from the current test model, which makes them reliant on it. To avoid any such dependence, several additions were made to the test purpose language to enable and improve their genericity.
Namely, these contributions to the language are the following:
- Creation of the lists of keywords, referring to model entities, to externalize the use of data;
- Improvement of “for_each” statements to iterate the results of an OCL expression;
- Addition of variable usage for nested iterators on a set of instances, to use the instance obtained from the outer iterator as context for the OCL expression of the inner iterator;
- Addition of variable usage in OCL expressions throughout a test purpose;
- Introduction of stage loops so that one or several stages can be activated more than once;
- Creation of a test purpose catalogue that allows automatic import/export of existing test purposes from one testing project to another.
Figure 2 – Grammar of the test purpose language
The next sections introduce each of these additions that enable a sufficient expressiveness to formalize generic test purposes targeting the four vulnerability types handled during the RASEN case studies.
2.1.1 Keyword Lists
The keyword mechanism, initially introduced in [1], consists of using specific arguments, called keywords, in test purposes to represent generic artifacts of a test model. They can represent behaviors, calls, instances, integers, literals, operations, or a state regarding a specific instance of the model. Test engineers only have to link keywords with the specific elements of the current test model.
Keywords are contained in lists, and a list may only contain keywords that point to elements of the same nature (behaviors, instances, literals, etc.). Keyword lists can be used in both the iteration and stage phases to replace any such model information, preceded by the character “#”.
For instance, considering an enumeration, a keyword list makes it possible to apply test purposes only to the literals of the enumeration that share the same properties or restrictions (e.g., selecting only keywords that point to user actions and excluding unnecessary actions that represent, for instance, search forms).
In Listing 2, the for_each iterator goes through all the keywords from the #KEYWORD_LIST, each keyword pointing to a certain enumeration literal.
```
for_each literal $lit from #KEYWORD_LIST
```
**Listing 2 – Literal iteration construction**
As introduced in Listing 3, a test purpose stage can require the test generation engine to call an operation from a restricted set, or prohibit the call to a given set of operations. This is done as follows:
```
use any_operation #RELEVANT_OPS to_reach OCL_EXPR1 on_instance $inst1
use any_operation but #UNWANTED_OPS to_reach OCL_EXPR2 on_instance $inst2
```
**Listing 3 – Operation call construction**
The first stage expresses that only operations that have a corresponding keyword in #RELEVANT_OPS may be used. Conversely, the second stage expresses that any operation may be used, except the ones that have a corresponding keyword in #UNWANTED_OPS.
2.1.2 Iterating the Result of an OCL Expression
Keyword lists provide a first level of genericity to test purposes. The use of such lists is necessary when the objects they contain must be selected manually. However, when the keywords of a list can be deduced from the information in the model, it is possible to extract their corresponding elements automatically. Hence, the language has been extended to iterate over the results of an OCL expression. It is constructed as shown in Listing 4.
```
for_each instance $inst from "self.all_users->select(u:User|u.att1= 2)" on_instance User1
```
**Listing 4 – OCL result iteration construction**
First the OCL expression is evaluated, in the context of the User1 instance. The expression returns all User instances such that att1 is equal to 2. Then, the results are transmitted to the iterator to be used in the stage phase. This construction preserves the generic features of test purposes and automates the test data selection to be used for test generation.
2.1.3 Variable Usage in Nested “for_each” Loops
Certain types of attack require considering several data types as well as the relationships between them (e.g., testing for multi-step XSS implies, for a given page, retrieving all the user inputs that are rendered back on this page). To meet this need, variable usage between for_each loops has been implemented. In cases where the outer loop iterates over instances and the inner loop iterates over the results of an OCL expression, it is possible to use the instance from the first loop as the OCL context for the second loop, as described in Listing 5.
```
for_each instance $inst1 from #INST_LIST
for_each instance $inst2 from "self.all_items" on_instance $inst1
```
Listing 5 – Variable usage in nested iteration construction
In this example, the outer for_each iterates over a list of instances. The inner for_each depends on the value coming from its parent, as it uses it to define the context of its OCL expression. Thereby, the self variable of the OCL expression corresponds to $inst1.
Usage of data-dependent nested loops is for instance necessary to compute abstract test cases for multi-step XSS, as it avoids the production of unreachable targets.
2.1.4 Variable Usage in OCL Expressions
In more sophisticated attacks, data dependency goes beyond data selection and must be carried throughout the test purpose. For instance, Privilege Escalation attacks involve session types, pages, and their relations, in order to test that access control policies are not flawed. In these cases, the value from the iterator must be used to configure OCL expressions, in order to make test purposes more precise and avoid the submission of irrelevant or unreachable test targets to the test generation engine. As introduced in Listing 6, variables can be used in the iteration phase in the case of nested for_each statements, thus:
```
for_each literal $lit from #LITERAL_LIST
for_each instance $inst from "self.all_users->select(u:User|u.att1= 2)" on_instance User1
```
Listing 6 – Variable usage in OCL expressions within nested iteration
Moreover, variables can also be used in OCL expressions in the restriction part of stages, as shown in Listing 7.
```
use any_operation to_reach "self.status = STATUS::$lit" on_instance SUT
```
Listing 7 – Variable usage in OCL expressions within stages
This stage expresses that any operation from the model can be used, with the goal that the status attribute from the system under test is valuated with the content of $lit, which contains an enumeration literal from the enumeration STATUS.
2.1.5 Stage Loops
In some cases, it is necessary to reproduce the exact same set of steps several times in order to conduct an attack. This is especially the case for time-based and Boolean-based SQL injections, which require the injection of several vectors in the same user input and a comparison of the results.
To make the design of such test purposes simpler while reducing test generation time, the notion of stage loops has been introduced in the test purpose language. As introduced in Listing 8, stage loops are defined using the declaration keyword repeat, followed by an integer and the keyword times, expressing the number of loops to accomplish:
```
repeat 3 times
use ...
then use ...
end_repeat
```
Listing 8 – Stage loop construction
In this sequence, the stages “use...” enclosed in the loop must be repeated three times.
2.1.6 Test Purpose Catalog
Test purposes are stored in a test purpose catalogue (in XML format), with a reference to the pattern each of them belongs to. Within the RASEN project, test purpose selection is directly conducted based on risk assessment: regarding the information present in the CORAS model, the corresponding test purposes are chosen for test generation. It should be noted that test engineers can also manually select relevant test purposes to be applied, depending on the test objective or motivated by a test selection criterion.
2.2 Vulnerability Test Purposes
This section introduces the generic test purposes designed during the RASEN project to tackle the four vulnerabilities targeted during the conducted case studies (Cross-Site Scripting, SQL Injections, CSRF, and Privilege Escalation). Some vulnerabilities required the design of several test purposes, when the implementation of multiple attack subcategories was necessary for efficient testing (e.g., for SQL injections and Privilege Escalation). For each vulnerability, we first present the testing strategy used to design the test purpose, and then describe its functionality by going through each of its steps.
2.2.1 Cross-Site Scripting
A Cross-Site Scripting vulnerability (XSS for short) consists of an attacker injecting hostile browser-executable code (e.g., JavaScript, VBScript) into Web pages through user inputs, typically Web forms, or through parameters whose value can be modified by clients, such as cookie values. This vulnerability type results from the lack of proper analysis of user-supplied input data by the Web application under test.
As stated in the XSS test pattern, it is possible to perform an XSS attack by applying the following testing strategy, composed of three steps: (i) locate a user-supplied input, (ii) inject an XSS vector, and (iii) analyze the server response. However, to tackle all XSS types at once, the XSS test purpose makes use of the structural information specified in the model: the links between user-supplied inputs and the pages of the Web application under test that use them to compute an output. Thereby, for the testing of a particular user input, the test purpose for XSS proceeds as follows:
1. Locate the user input: Following proper user interactions, the Web application is put in a state where the current page is the page where the user input can be provided. It can be a form field, a parameter in the “href” attribute of an anchor, a cookie value, etc.
2. Fill nominal values: Often, the user input under test is part of a form or URL, which contains multiple parameters. These parameters need to be assigned relevant values to prevent data validation functions (e.g., some parameter must not be left empty, or must only be assigned a specific data type) from blocking the submission of the form/request.
3. Replace input with attack vector: Here, the initial nominal content of the user input under test is erased and an attack vector is injected instead.
4. Submit the crafted request: Once the attack vector has been inserted, the crafted request is submitted. Depending on the type of user input, it means submitting the form, or clicking on the link.
5. Locate an output point: Instead of simply waiting for the next server response, the test model is used to determine which page uses the user input under test to compute its output, and the Web application state is changed such that it displays the page.
6. **Analyze the result**: The content of the page is then analyzed to assess whether the attack vector has been inserted in the page. If it has not undergone any modification, it can be concluded that the Web application is vulnerable to XSS, from this particular user input and on this particular page.
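The analysis of step 6 boils down to searching the collected page for the unmodified attack vector. The following Python sketch is our own illustrative analog of this check, not the tooling's actual code; it flags the page as vulnerable only when the raw vector survived any sanitization:

```python
import html

def check_xss(page_content: str, vector: str) -> bool:
    # The page is vulnerable only if the raw vector is reflected as-is;
    # an HTML-escaped occurrence means the input was sanitized.
    return vector in page_content

vector = "<script>alert('tp')</script>"
vulnerable = "<body>Results for " + vector + "</body>"
sanitized = "<body>Results for " + html.escape(vector) + "</body>"
print(check_xss(vulnerable, vector))  # True  -> vector reflected unmodified
print(check_xss(sanitized, vector))   # False -> vector was encoded
```

In practice the comparison is run on the page located in step 5, not on the immediate server response.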
This test procedure has been translated into a test purpose in order to give each instruction to the test generation engine from Smartesting. The test purpose for multi-step XSS is shown in Listing 9.
```
for_each instance $page from
  "self.all_pages->select(p:Page|not(p.all_outputs->isEmpty()))" on_instance SUT,
for_each instance $param from "self.all_outputs" on_instance $page,
use any_operation but #UNWANTED_OPS any_number_of_times to_reach
  "WebAppStructure.allInstances()->any(true)
   .ongoingAction.all_inputs->exists(d:Data|d=self)" on_instance $param
then use threat.injectXSS($param)
then use was.finalizeAction()
then use any_operation but #UNWANTED_OPS any_number_of_times to_reach
  "self.was.current_page = self and
   self.was.ongoingAction.oclIsUndefined()" on_instance $page
then use threat.checkXSS()
```
Listing 9 – Test purpose for cross-site scripting
The first three lines of the test purpose for XSS compose the first phase. Because this is about XSS, the first for_each statement selects all the pages that use at least one user input to compute their output. The selection is done using the OCL expression “self.all_pages->select(p:Page|not(p.all_outputs->isEmpty()))”, executed from the context of the SUT instance that defines the Web application under test: from all the pages (“self.all_pages”), those that are linked to one or more data instances (“not(p.all_outputs->isEmpty())”) are selected (“->select(...)”). The result of the OCL expression is a set of page instances.
Afterwards, the second for_each statement selects all the data instances linked to the page instance contained in $page, i.e. all the user inputs that $page uses to compute its output. Here, the selection is done using the OCL expression “self.all_outputs” from the context of $page. Therefore, the stage phase of the test purpose handles two elements: a user input and one of the pages that output it.
The second phase starts on lines 4 to 6 by putting the Web application in a state where the page displayed to the user is the injection page, and where the action containing $param is ongoing, meaning all other fields (in the case of a form) or parameters (in the case of a link) have been filled with nominal values, ready to be submitted. In the context of the selected user input (“on_instance $param”), the test purpose tells the test generation engine to satisfy the OCL expression "WebAppStructure.allInstances()->any(true).ongoingAction.all_inputs->exists(d:Data|d=self)". First, the instance of the Web application under test is retrieved ("WebAppStructure.allInstances()->any(true)"). Second, we navigate in the model until the ongoing action (“ongoingAction”) is reached. Third, one checks that the user input $param is contained in the action (“all_inputs->exists(d:Data|d=self)”). To satisfy this expression, the test generation engine must animate the model by executing the instruction "use any_operation but #UNWANTED_OPS any_number_of_times”, which means that any behavioral or navigational operation of the Web application can be called, as many times as needed, in order to find the right state. Indeed, each designed test purpose possesses a keyword list, named #UNWANTED_OPS, which contains all the operations of the WebAppStructure classes except those that exercise an attack. Those operations are not meant to be called during the computation of navigational and behavioral steps, and are therefore excluded when finding the right state, but they are used to complete the XSS injection in line 7 by calling the operation "threat.injectXSS($param)", which targets the user input $param.
The lines thereafter (8 to 12) handle verdict assignment. The goal is to put the Web application in a state such that it displays the page ($page) that outputs the user input, and to analyze its content. This is done by satisfying the OCL expression in lines 10 and 11, defined in the context of $page. The first part of the expression verifies that no action is pending, and the second part specifies that the current page is equal to self, i.e. $page. Again, the test generation engine may use any behavioral or navigational operation of the model, as much as necessary. The last line of the test purpose is a call to the operation "threat.checkXSS()", which scans the page content to look for the injected vector.
2.2.2 SQL Injections
Like XSS, SQL Injections are another consequence of poor input data validation. This class of vulnerability exploits the trust a Web application has in its users by triggering unwanted interactions between the application and its database. This is done by injecting SQL fragments through user inputs, such as form fields or cookie variables, to alter the semantics of hardcoded SQL queries. Of course, SQL injections are only possible when the value contained in the user input is not sanitized before the application uses it to configure a SQL query.
However, the discovery of SQL injections is much more complex than XSS. Indeed, XSS targets Web browsers and therefore happens on the client side, where it is easily possible to assess the existence of a vulnerability. On the contrary, SQL injections affect the database of the Web application, to which users (and test engineers) do not have direct access. Moreover, in many cases the database is installed on another server. For these reasons, probing the database is out of the question.
The test purpose approach follows the same verdict assignment process as penetration testers, which consists of “taking what the Web application gives you”. The amount of information about the database that is leaked by the application varies a lot. Hence, SQL injections cannot be tackled with only one test purpose, but with a set of three test purposes, each one implementing a dedicated SQL injection technique.
2.2.2.1 Error-Based SQL Injections
This is the best-case scenario for a hacker / test engineer. Error-based SQL injection means that syntax error messages from the database (e.g., “You have an error in your SQL syntax” for MySQL) are displayed to end-users. These can be default error messages from the database, but also custom ones designed for development purposes. Consequently, the main objective of error-based SQL injections, when limited to vulnerability discovery only\(^1\), is to break the syntactic correctness of the initial query to generate an error message. The reception of an error message is a strong indicator of the presence of a vulnerability, because it means we were able to tamper with the query.
```
1 for_each literal $param from #SQLI_VULN_PARAMETERS
2 use any_operation but #UNWANTED_OPS any_number_of_times to_reach
3 "not(self.ongoingAction.oclIsUndefined()) and
4  self.ongoingAction.all_inputs->exists(d:Data|d.id=DATA_IDS::$param)"
5 on_instance was
6 then use threat.injectSQLi($param)
7 then use was.finalizeAction()
8 then use threat.checkErrorBasedSQLi()
```
Listing 10 – Test purpose for error-based SQL injection
Listing 10 shows the test purpose for error-based SQL injections. There is only one iterator in the first phase (line 1), which receives user input identifiers, since all user inputs must be tested regardless of whether they resurface in an output. To achieve such testing coverage, the for_each iterates over a keyword list, called #SQLI_VULN_PARAMETERS, that lists all the parameters referenced in the test model as potentially vulnerable to SQL injections.
The second phase spans lines 2 to 5: the test purpose instructs the test generation engine to satisfy, in the context of the Web application instance, an OCL expression composed of two sub-expressions. The first sub-expression imposes that an action is ongoing, to ensure that all other fields that are part of the same request have been properly set. The second sub-expression imposes that the ongoing action involves the data instance whose identifier is contained in $param. This makes it possible to reach the right state to inject the user input.
The injection is performed on line 6, with the dedicated operation “threat.injectSQLi()”. Then, the data is submitted by calling the finalize operation in line 7, and the verdict is assigned in line 8 with the operation “threat.checkErrorBasedSQLi()”.
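The verdict step can be illustrated with a scan of the server response for leaked database error messages. The sketch below is our own analog of the error-based check, and the signature list is a small, non-exhaustive sample chosen for illustration:

```python
# Illustrative analog of the error-based verdict: scan the server
# response for leaked database error messages (sample signatures only).
DB_ERROR_SIGNATURES = (
    "you have an error in your sql syntax",                # MySQL
    "unclosed quotation mark after the character string",  # SQL Server
    "ora-01756",                                           # Oracle
    "unterminated quoted string",                          # PostgreSQL
)

def check_error_based_sqli(response_body: str) -> bool:
    body = response_body.lower()
    return any(signature in body for signature in DB_ERROR_SIGNATURES)

print(check_error_based_sqli(
    "Warning: You have an error in your SQL syntax near ''' at line 1"))  # True
print(check_error_based_sqli("0 products found"))                         # False
```

A positive match is only a strong indicator, not a proof: custom error pages may mimic these messages.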
2.2.2.2 Time Delay SQL Injections
When error messages from the database are not passed on to end-users, another solution for the detection of SQL injection vulnerabilities is to conduct a temporal differential analysis between several injections. This is performed with the injection of two vectors.
The role of the first vector is to disrupt the syntax of the SQL query in order to cause an immediate response from the database, i.e. with little latency. Injecting a single quote, for instance, yields
```
SELECT * FROM products WHERE name LIKE ''';
```
where the injected quote disrupts the syntactic correctness of the query.
The role of the second vector is to alter the initial query so that it generates a delay while being processed by the database. It can be done by modifying the query to make the database return as much data as possible, or by injecting built-in functions such as `sleep(10)`, which stalls the database for 10 seconds:
```
SELECT * FROM products WHERE name LIKE '1' or sleep(10)#';
```
The objective is to observe a variation in response time from the Web application between the two injections. The test purpose for time delay SQL injections is depicted in Listing 11.
```
for_each literal $param from #DATA
use any_operation but #UNWANTED_OPS any_number_of_times to_reach
  "not(self.ongoingAction.oclIsUndefined()) and
   self.ongoingAction.all_inputs->exists(d:Data|d.id=DATA_IDS::$param)"
  on_instance was
then use was.finalizeAction()
repeat 2 times
then use was.reset()
use any_operation but #UNWANTED_OPS any_number_of_times to_reach
  "not(self.ongoingAction.oclIsUndefined()) and
   self.ongoingAction.all_inputs->exists(d:Data|d.id=DATA_IDS::$param)"
  on_instance was
then use threat.injectSQLi($param)
then use was.finalizeAction()
end_repeat
then use threat.checkTimeDelaySQLi()
```
**Listing 11 – Test purpose for time delay SQL injection**
This test purpose has a similar logic to the one for error-based injections. The iterator in the first phase collects all user input identifiers from a keyword list, which contains only the identifiers that are intended to be tested for SQL injections.
The second phase is composed of a stage sequence meant to be executed two times under the same conditions, one execution dedicated to each injection. During this repeated sequence, the test purpose instructs the test generation engine to drive the model in a state where the current page is the page displaying the user input, then a call to “threat.injectSQLi()” performs the attack by replacing the nominal value with an attack vector, to finally submit the data by calling the “was.finalizeAction()”.
Note that the sequence starts with a call to “was.reset()”, whose goal is to reset the Web application in order to perform the next injection under the same conditions. Once the sequence has been executed two times, the injection results are assessed by calling “threat.checkTimeDelaySQLi()”.
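The temporal differential analysis behind the final check can be sketched as follows. Here `send_request` and the threshold value are our own illustrative stand-ins for the real HTTP client and its calibration, not part of the deliverable's tooling:

```python
import time

def check_time_delay_sqli(send_request, threshold: float = 5.0) -> bool:
    """Compare response times of the syntax-breaking and the stalling vector.

    `send_request(vector)` must block until the server answers; a gap
    larger than `threshold` seconds indicates that sleep() reached the
    database, i.e. the input is likely injectable.
    """
    start = time.monotonic()
    send_request("'")                   # vector 1: immediate syntax error
    fast = time.monotonic() - start

    start = time.monotonic()
    send_request("1' or sleep(10)#")    # vector 2: stalls the database
    slow = time.monotonic() - start

    return (slow - fast) > threshold
```

With a real client, `send_request` would inject the vector into the user input under test and submit the surrounding form, mirroring the inject/finalize stages of the test purpose.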
2.2.2.3 Boolean-Based SQL Injections
Another technique for Blind SQL injections is to perform several attacks and conduct a differential analysis between the server responses. The test pattern we rely on to create this test purpose has been designed following the testing strategy proposed by IBM and implemented in its scanning tool, AppScan. Indeed, by injecting SQL fragments that will cause singular changes to the initial SQL query, the objective is to observe a difference of behavior from the Web application under test.
Consider a Web application with a search page containing a text field. The content of this field $inputvalue is sent to the database in order to configure the following SQL query:
```
SELECT * FROM products WHERE name LIKE '$inputvalue';
```
The response contains the product entries whose name is close to the content of the search field. The result is sent to the user, in the form of a Web page that lists the content.
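Assuming the application naively concatenates $inputvalue into the query (a hypothetical implementation, shown only to make the injection point explicit), the query construction can be modeled as:

```python
def build_query(inputvalue: str) -> str:
    # Naive concatenation: whatever the user types lands inside the SQL text.
    return "SELECT * FROM products WHERE name LIKE '" + inputvalue + "';"

print(build_query("NOM"))
# SELECT * FROM products WHERE name LIKE 'NOM';
print(build_query("NOM' AND 1=1 -- "))
# SELECT * FROM products WHERE name LIKE 'NOM' AND 1=1 -- ';
```

Each of the four injections below simply substitutes a different $inputvalue into this template.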
A Boolean-Based SQL injection is therefore composed of four injections, as follows:
1. **Nominal Injection**: This is the intended interaction with the Web application. The server response is used as “control group” and its objective is to compare the nominal behavior of the application with its behavior when receiving SQL fragments as input.
2. **AND TRUE**: The objective is to inject an SQL fragment that is always evaluated to true and does not change the overall value of the query, such as:
```
SELECT * FROM products WHERE name LIKE 'NOM' AND 1=1;
```
Based on the monotone law of identity for AND, since the Boolean sub-expression $1=1$ is always true and because it is tied to a conjunction, the result of the expression depends on the other sub-expression of the conjunction.
3. **AND FALSE**: The objective is to inject an SQL fragment that is always evaluated to false and changes the overall value of the query, such as:
```
SELECT * FROM products WHERE name LIKE 'NOM' AND 1=2;
```
Based on the monotone law of identity for AND, since the Boolean sub-expression $1=2$ is always false and because it is tied to a conjunction, then the result of the expression is always false.
4. **OR FALSE**: This injection is similar to the AND TRUE injection, and is mainly used to rule out the possibility of SQL injections by reinforcing the verdict:
```
SELECT * FROM products WHERE name LIKE 'NOM' OR 1=2;
```
Based on the monotone law of identity for OR, since the Boolean sub-expression $1=2$ is always false and because it is tied to a disjunction, then the result of the expression depends on the other sub-expression of the disjunction.
---
The verdict is assigned by comparing the responses from the server. If all responses are equivalent, it can be assumed that SQL injections are not possible. However, if the results of the nominal and AND TRUE injections are equivalent, but the responses of the AND TRUE and AND FALSE injections differ, there is a strong possibility that the injected user input is vulnerable to SQL injections.
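This verdict logic can be sketched as a comparison over the four collected responses (function and parameter names are ours, for illustration):

```python
def check_boolean_based_sqli(nominal: str, and_true: str,
                             and_false: str, or_false: str) -> bool:
    # All four responses identical: the SQL fragments had no effect.
    if nominal == and_true == and_false == or_false:
        return False
    # AND TRUE and OR FALSE must mimic the nominal response, while
    # AND FALSE must differ: the fragments reached the SQL engine.
    return nominal == and_true == or_false and and_true != and_false

print(check_boolean_based_sqli("3 products", "3 products",
                               "0 products", "3 products"))  # True
print(check_boolean_based_sqli("3 products", "3 products",
                               "3 products", "3 products"))  # False
```

In practice, "equivalent" is usually a fuzzy comparison (ignoring timestamps, session tokens, etc.) rather than strict string equality.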
This attack has been translated into a test purpose, as shown in Listing 12.
The first phase consists of collecting all the user input identifiers that are intended to be tested for SQL injections, and assigning them one after another to $param to compute attack traces.
The second phase is composed of two main sequences. The first one consists of sending nominal values and collecting the resulting page. First, the test purpose in line 2 proposes to use any behavioral or navigational operation, as many times as necessary, to satisfy the OCL expression defined in lines 3-4. This expression requires, on the one hand, that an action must be ongoing, and on the other hand, that this action involves the user input whose identifier is $param. Satisfying this expression takes the hypothetical user to the injection page, with all fields filled (in the case of a Web form).
Then, the test purpose instructs to use any behavioral or navigational operation, as many times as necessary, to satisfy the OCL expression defined in line 8. Satisfying this expression means finalizing the ongoing action, which implies the submission of the form, or a click on the link.
The second sequence is responsible for the completion of the three SQL injections. Since the protocol is the same for each injection, the repeat keyword is used to simplify the test purpose and save test generation time.
Therefore, each attack starts by calling the “was.reset()” operation, in order to put the test model back to its initial state. Then, similarly to the nominal sequence, the second step is to put the model in a state where the current ongoing action is the action involving the user input under test ($param). Then, the injection is performed in line 13, and the crafted request is submitted to the server in line 14.
Once the attack sequence has been executed three times, a call to the “threat.checkBlindSQLi()” operation in line 16 compares the responses to assign a verdict.
2.2.3 Cross-Site Request Forgeries
A Cross-Site Request Forgery attack (CSRF for short) consists of tricking a victim into making a specific request through his browser that will ultimately lead to unwanted consequences on a trusted Web application. It is qualified as malicious because it indirectly impersonates a user to perform actions that only this user, or a restricted group of users, is allowed to do, without the user knowing. It is made possible by the fact that browsers automatically append user credentials (session data) to each request made towards a Web application where a user session has been started. These attacks are possible when the targeted Web application does not check whether an incoming request really originates from the user owning the active session.
The test pattern strategy in use consists of conducting an actual CSRF attack by cloning the action being tested on an external server, to assess whether this action can be triggered from outside the application. The logic is similar to BURP’s CSRF PoC, and goes as follows:
1. **Nominal Action:** The objective is to follow the intended behavior of the application and perform the action from inside, using the GUI.
2. **Information collect:** The link / form responsible for the triggering of the action is retrieved; the output page is also collected for later comparison.
3. **Reset:** The application is reinitialized and the current user session is closed.
4. **Login:** The user authenticates to the application to open a new session.
5. **External Action:** The action is submitted from an external Web server, using the same browser. To do this, a dedicated Java program starts a local Web server, which takes as input the data gathered during information collect. The server recreates the form or link based on the received data, and sends the result to the user in the form of an interactive Web page.
6. **Result Comparison:** The results from the nominal and the external actions are compared. If both results are similar, it can be concluded that CSRF attacks are possible.
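The "External Action" step can be illustrated by recreating the collected form as a self-submitting HTML page served from an external origin. The deliverable's tooling does this with a dedicated Java program; the Python sketch below (names and structure are ours) only shows the idea:

```python
def build_csrf_poc(action_url: str, method: str, fields: dict) -> str:
    """Recreate the collected form as a standalone, self-submitting page."""
    inputs = "\n".join(
        '  <input type="hidden" name="{}" value="{}">'.format(name, value)
        for name, value in fields.items()
    )
    return (
        '<html><body>\n'
        '<form id="csrf" action="{}" method="{}">\n{}\n</form>\n'
        '<script>document.getElementById("csrf").submit();</script>\n'
        '</body></html>'
    ).format(action_url, method, inputs)

poc = build_csrf_poc("http://target.example/deleteUser", "POST",
                     {"userId": "42"})
```

Serving `poc` locally (e.g. with Python's `http.server`) and opening it in the browser that holds the active session submits the action from outside the application, which is exactly what step 5 requires.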
This strategy has been translated into the test purpose described in Listing 13.
```
for_each literal $action from #CSRF_(SESSION_TYPE)_ACTIONS
use any_operation but #UNWANTED_OPS any_number_of_times to_reach
  "not(self.ongoingAction.oclIsUndefined()) and
   self.ongoingAction.id = ACTION_IDS::$action and
   self.session_type = (SESSION_TYPE)" on_instance was
then use threat.gatherCSRFInfo()
then use was.finalizeAction()
then use was.reset()
then use any_operation but #UNWANTED_OPS any_number_of_times to_reach
  "not(self.ongoingAction.oclIsUndefined()) and
   self.session_type = (SESSION_TYPE)" on_instance was
then use was.finalizeAction()
then use threat.performCSRFAttack()
then use threat.checkCSRF()
```
**Listing 13 – Test purpose for cross-site request forgery**
In the first phase, all the actions that are part of the test objective regarding CSRF are collected. Each action is assigned to the $action variable to configure the second phase of the test purpose. The second phase starts by triggering the action as intended by the application. This is performed by satisfying the OCL expression in lines 3-4, which requires putting the model in a state where the action is ongoing. Then, in line 5, the "threat.gatherCSRFInfo()" operation is called to retrieve the Web form or link that is used to submit the action. In line 6, the action is finalized, and the application is then reset in line 7. The attack sequence starts in lines 8-10 by instructing the test generation engine to satisfy an OCL expression stating that a new user session should be started, with the same privileges as during the nominal sequence, and that no action should be ongoing (i.e., the login form has been submitted). This expression can be satisfied using any behavioral or navigational operation of the model, as many times as necessary. Finally, the CSRF attack is performed in line 12, and the two results are compared in line 13, by calling the "threat.checkCSRF()" operation.
---
https://support.portswigger.net/customer/portal/articles/1965674-using-burp-to-test-for-cross-site-request-forgery-csrf [Last visited: September 2015]
2.2.4 Privilege Escalation
Applications do not always protect application functions properly. As anyone with network access to a Web application can send requests to it, such applications should verify action-level access rights for all incoming requests. When designing a Web application front-end, developers must build restrictions that define which users can see various links, buttons, forms, and pages. Although developers usually manage to restrict the Web interface, they often forget to put access controls in the business logic that actually performs the business actions: sensitive actions are hidden, but the application fails to enforce sufficient authorization for them. If such checks are not performed and enforced, malicious users may be able to penetrate critical areas without the proper authorization.
The strategy implemented to test Privilege Escalation is called Forced Browsing. The objective is to obtain a direct URL to trigger an action or access a page of the Web application that is supposed to be available only to users with sufficient rights. The underlying idea is that developers may have hidden the access to such actions or pages in the GUI but forgot to enforce the restriction in the actions’ code. Thus, the test pattern strategy for Privilege Escalation consists of the following steps:
1. Access the page / Trigger the action as intended, from the GUI, with a session that has the sufficient rights.
2. Save the direct URL that points to that page / action.
3. Save the output result for later comparison.
4. Logout from the Web application, or change the session state (from admin to regular user, for instance).
5. Access the URL directly, and save the output result.
6. Compare the two outputs.
If the output results are equivalent, it constitutes an indicator that the restricted page or action can be accessed. This strategy has been formalized in two test purposes: the first one for pages and the second one for actions.
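As a sketch, the six steps above could be scripted as follows; the `app` facade and all of its methods are hypothetical stand-ins for the concrete test harness, not a real RASEN API.

```python
# Illustrative sketch of the Forced Browsing strategy (steps 1-6 above).
# The `app` object is a hypothetical test-harness facade; its methods
# stand in for the GUI navigation and HTTP access of the real harness.

def forced_browsing_check(app, page_id, privileged_session, low_session):
    app.login(privileged_session)        # step 1: access with sufficient rights
    url = app.navigate_to(page_id)       # step 2: save the direct URL
    expected = app.get(url)              # step 3: save output for comparison
    app.logout()                         # step 4: change the session state
    app.login(low_session)
    actual = app.get(url)                # step 5: access the URL directly
    return expected == actual            # step 6: compare the two outputs

class FakeApp:  # minimal stub so the sketch is runnable
    def __init__(self, protected): self.protected, self.session = protected, None
    def login(self, s): self.session = s
    def logout(self): self.session = None
    def navigate_to(self, page_id): return f"/pages/{page_id}"
    def get(self, url):
        if self.protected and self.session != "admin":
            return "403 Forbidden"
        return f"content of {url}"

# An unprotected page is reported as a privilege escalation indicator (True):
print(forced_browsing_check(FakeApp(protected=False), "audit", "admin", "user"))  # True
print(forced_browsing_check(FakeApp(protected=True), "audit", "admin", "user"))   # False
```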
2.2.4.1 Privilege Escalation of Pages
As shown in Listing 14, the first phase of the test purpose for Privilege Escalation of pages is composed of two nested for_each. The first iterator retrieves all the possible session types, and the second iterator retrieves all the pages that are not accessible to the currently iterated session type. To do this, a dedicated private operation "isAccessible()" of the test model is used to define whether a given session type can access a given page.
In the second phase, the test purpose first instructs the test generation engine to satisfy an OCL expression that requires putting the model in a state where the current page is $page, and no action is ongoing. Then, the relevant information is collected using the "collectPage()" operation. The next step is to reset the system and start the attack part. This is done by instructing the test generation engine to satisfy an OCL expression, which is evaluated to true when the current session type of the Web application under test is the session type from the iterator. Once the system is in the right state, the restricted page is accessed using the "accessPage()" operation. The last step is verdict assignment, thanks to the "checkPrivilegeEscalation()" operation.
```plaintext
for_each literal $session from #SESSION_TYPES,
for_each instance $page from
"self.all_pages->select(p:Page|not(self.isAccessible(SESSION_TYPES::$role,p.id)))"
on_instance was,
use any_operation_but #UNWANTED_OPS any_number_of_times to_reach
"self.was_p.current_page=self and self.was_p.ongoingAction.oclIsUndefined()"
on_instance $page
then use threat.collectPage()
then use was.reset()
then use any_operation_but #UNWANTED_OPS any_number_of_times to_reach
"self.ongoingAction.oclIsUndefined() and
self.session_type=SESSION_TYPES::$role"
on_instance was
then use threat.accessPage()
then use threat.checkPrivilegeEscalation()
```
Listing 14 – Test purpose for privilege escalation of pages
2.2.4.2 Privilege Escalation of Actions
The test purpose for privilege escalation of restricted actions, introduced in Listing 15, shares a similar structure with the one for pages.
```plaintext
for_each literal $session from #SESSION_TYPES,
for_each instance $action from
"self.all_pages->select(p:Page|not(self.isAccessible(SESSION_TYPES::$role,p.id)))
->collect(p:Page|p.all_actions)" on_instance was,
use any_operation_but #UNWANTED_OPS any_number_of_times to_reach
"self.ongoingAction.oclIsUndefined() and
self.session_type=SESSION_TYPES::$role" on_instance was
then use threat.activateCapture()
then use threat.accessPage()
then use threat.collectPage()
then use threat.reset()
then use any_operation_but #UNWANTED_OPS any_number_of_times to_reach
"self.ongoingAction.oclIsUndefined() and
self.session_type=SESSION_TYPES::$role" on_instance was
then use threat.triggerAction()
then use threat.checkPrivilegeEscalation()
```
Listing 15 – Test purpose for privilege escalation of actions
In the first phase, the outer loop retrieves all possible session types and, for each session type, the inner loop retrieves all the actions that cannot be triggered by users under this session type.
The second phase starts by requesting to put the model in a state where the iterated action is ongoing, which means the current page is the page owning this action. In line 7, the "activateCapture()" operation is used for concretization purposes: it tells the test harness to start capturing the outgoing request made by the test script, in order to collect relevant information (targeted URL, parameters, etc.). Then, the action is submitted, the page result collected, and the application reset in lines 8-10.
The attack sequence first requests to put the model in a state where the current session type of the Web application corresponds to the one from the iterator, and where no action is ongoing (meaning the authentication credentials have been submitted). Line 14 tries to trigger the action by calling the "triggerAction()" operation, using the information collected by the "activateCapture()" operation. Finally, the two outputs from the server are compared by calling the "checkPrivilegeEscalation()" operation.
2.3 Synthesis
This section has described the updates regarding test purpose language expressiveness. These updates allow, on the one hand, making the vulnerability test purposes generic and, on the other hand, making them more efficient in detecting vulnerabilities. More precisely, the section has detailed the test purposes of the four vulnerabilities that have been most targeted during the RASEN case studies: Cross-Site Scripting, SQL Injections (error-based, time-based and Boolean-based), Cross-Site Request Forgeries and Privilege Escalation (page-based and action-based).
Each of these test purposes allows producing one or several abstract test cases verifying the test purpose specification and the behavioral test model constraints. Such a test case takes the form of a sequence of steps, where a step corresponds to an operation call representing either an action or an observation of the system under test. It also embeds the security test strategies (from security test patterns) that are next used to apply data fuzzing strategies on attack vectors during test script generation and execution, as described in the testing process illustration depicted in Figure 1.
The next and last phase of the testing process consists of exporting and executing the test cases in the execution environment in order to provide test results. In the present case, it consists of creating a JUnit test suite, where each abstract fuzzed test case is exported as a JUnit test case, and creating an interface. This interface defines the prototype of each operation of the application and links the abstract structures / data of the test cases to the concrete ones. Since this process ensures the traceability between the verdict of the test case execution and the targeted vulnerabilities identified during risk assessment, the test results can be gathered and processed to provide testing metrics that help engineers to complement the risk picture of the system under test. The next section introduces the test result aggregation that makes it possible to deliver such relevant and useful testing metrics.
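The export and concretization step can be pictured as follows. This hedged Python sketch mirrors the idea of binding an abstract test case (a sequence of operation calls) to concrete implementations through an interface; the operation names and the mapping structure are illustrative assumptions (the actual export targets JUnit and a Java interface).

```python
# Hedged sketch of the export step: an abstract test case produced by the
# generator is resolved, step by step, against an interface that links
# abstract operations to concrete calls. Names are illustrative only.

ABSTRACT_TEST_CASE = [            # sequence of steps from the generator
    ("login", {"user": "USER1"}),
    ("triggerAction", {"action": "DELETE_ACCOUNT"}),
    ("checkPrivilegeEscalation", {}),
]

def make_concrete_suite(abstract_case, interface):
    """Resolve each abstract operation name to a concrete callable."""
    return [(interface[op], params) for op, params in abstract_case]

def run_suite(suite):
    return [fn(**params) for fn, params in suite]

# A concretization layer (stubbed here) links abstract names to real calls.
interface = {
    "login": lambda user: f"logged in as {user}",
    "triggerAction": lambda action: f"triggered {action}",
    "checkPrivilegeEscalation": lambda: "verdict: pass",
}

log = run_suite(make_concrete_suite(ABSTRACT_TEST_CASE, interface))
print(log[-1])  # -> verdict: pass
```

Keeping the interface separate from the abstract steps is what preserves traceability: the same abstract case can be re-exported against a different concrete binding.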
3 Security Test Result Aggregation
Security test result aggregation is the process of summarizing test results in a meaningful way. Within the RASEN context, testing metrics are the concept used to transfer information from security testing to risk assessment. Test metrics in general can serve as an important indicator of the efficiency and effectiveness of a software testing process. In RASEN, test result aggregation is specified on the basis of metrics that use the information contained in the RASEN Test Result Exchange Format (see Deliverables D5.4.2 and D5.4.3). The aggregation is processed by the RASEN Testing Dashboard that is described in Section 3.5. The aggregation results are propagated via the RASEN Aggregated Test Result Exchange Format so that they can be processed in Security Risk Assessment Tools according to the integration scenarios defined in Deliverable D5.4.3. Sections 3.1, 3.2, and 3.3 specify a set of testing metrics that could be used for test aggregation. The definitions show the ID, Name and Description of each metric. The Metric Description uses references to items from the RASEN conceptual model and the RASEN Exchange Format. These references are denoted with a starting backslash (e.g. \testItem). Section 3.5 shows the implementation of the test metrics and the process of test aggregation by means of the RASEN Testing Dashboard.
3.1 List Up Metrics
List up metrics are the most basic kind of testing metric. Applying their functions does nothing but list up a summary of the most important test results in a format specified by the metric. The results are used as documentation in the risk graphs. Additionally, list up metrics can be used to identify any unexpected incidents. These can be suggested as potential new unwanted incidents to the risk analysts. Table 1 specifies a set of simple list up metrics.
<table>
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>LU1</td>
<td># of specified test cases</td>
<td>counts up all specified test cases for a certain \testItem, \testCoverageItem. The # of specified test cases is usually an indicator for the intended coverage of the \testItem or \testCoverageItem with test cases.</td>
</tr>
<tr>
<td>LU2</td>
<td># executed test cases</td>
<td>counts up all executed test cases for a certain \testItem, \testCoverageItem. The # of executed test cases is usually an indicator for the actual coverage of the \testItem or \testCoverageItem with test cases.</td>
</tr>
<tr>
<td>LU3</td>
<td># of passed test cases</td>
<td>counts up all test cases for a certain \testItem, \testCoverageItem that have been executed and passed. The # of passed test cases is usually an indicator for a lower probability of the existence of errors or vulnerabilities in covered functions of the \testItem.</td>
</tr>
<tr>
<td>LU4</td>
<td># failed test cases</td>
<td>counts up all test cases for a certain \testItem, \testCoverageItem that have been executed and failed. The # of failed test cases is usually an indicator for the existence of vulnerabilities in the \testItem.</td>
</tr>
<tr>
<td>LU5</td>
<td># inconc test cases</td>
<td>counts up all test cases for a certain \testItem, \testCoverageItem that have been executed and shows an inconclusive result. The # of inconclusive test cases is usually an indicator for open issues that need to be resolved manually.</td>
</tr>
<tr>
<td>LU6</td>
<td># of error test cases</td>
<td>counts up all test cases for a certain \testItem, \testCoverageItem that have been executed and shows an erroneous result. Erroneous results are caused by errors in the test system and not by errors in the \testItem. The # of erroneous test cases is usually an indicator for the quality of the test system or its connection with the \testItem.</td>
</tr>
<tr>
<td>LU7</td>
<td># incidents</td>
<td>counts up all incidents that occur during the execution of tests for a \testItem, \testCoverageItem (an incident is indicated through a test case that results in fail or error).</td>
</tr>
<tr>
<td>LU8</td>
<td># errors</td>
<td>counts up all errors that occur during the execution of tests for a \testItem, \testCoverageItem.</td>
</tr>
<tr>
<td>LU9</td>
<td>Fail/pass ratio</td>
<td>ratio of # of failed test cases to # of passed test cases. The ratio is usually an indicator for the effectiveness of the test cases and the stability of the software.</td>
</tr>
<tr>
<td>LU10</td>
<td>Test execution stats: executed/specified ratio</td>
<td>ratio of # of executed test cases to # of specified test cases. The ratio is usually an indicator for the status of the test execution and the status of the test implementation.</td>
</tr>
<tr>
<td>LU13</td>
<td>Vulnerability discovery rate</td>
<td>ratio of the total # of vulnerabilities discovered for a \testItem, \testCoverageItem to the # of test cases.</td>
</tr>
<tr>
<td>LU14</td>
<td>Vulnerability density</td>
<td># of vulnerabilities / total size of the system (e.g. LOC, MByte of binary, MByte of source code).</td>
</tr>
</tbody>
</table>
Table 1 – List up metrics
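As an illustration, most of the list up metrics above reduce to simple counts and ratios over a set of test-case records. The following sketch assumes a minimal record format that is not the RASEN exchange format; it is only meant to show how LU1-LU10 relate to each other.

```python
# Minimal sketch of the list up metrics over a flat list of test-case
# records. The record format is an assumption made for illustration.

results = [
    {"id": "TC1", "executed": True,  "verdict": "pass"},
    {"id": "TC2", "executed": True,  "verdict": "fail"},
    {"id": "TC3", "executed": True,  "verdict": "inconclusive"},
    {"id": "TC4", "executed": True,  "verdict": "error"},
    {"id": "TC5", "executed": False, "verdict": None},   # specified only
]

def count(verdict):
    return sum(1 for r in results if r["verdict"] == verdict)

lu1 = len(results)                                 # LU1: specified test cases
lu2 = sum(1 for r in results if r["executed"])     # LU2: executed test cases
lu3, lu4 = count("pass"), count("fail")            # LU3 / LU4: passed, failed
lu5, lu6 = count("inconclusive"), count("error")   # LU5 / LU6: inconc, error
lu7 = lu4 + lu6                                    # LU7: incidents (fail + error)
lu9 = lu4 / lu3 if lu3 else float("inf")           # LU9: fail/pass ratio
lu10 = lu2 / lu1                                   # LU10: executed/specified ratio

print(lu1, lu2, lu3, lu4, lu9, lu10)  # -> 5 4 1 1 1.0 0.8
```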
3.2 Coverage Metrics
This kind of metric tries to calculate how complete the testing was. Such metrics measure, for example, how much of the potential input value space has actually been created as test data or how much of the code of the system under test has in fact been executed during the testing process. Coverage metrics are widely used for all kinds of testing and there is a large amount of literature on the subject [11][12][13].
Coverage metrics are typically used as an indicator for the overall test quality. Results can be used for documentation purposes within the risk analysis. Eventually, the coverage of negative tests might be an indicator for the likelihood that some vulnerability exists at all. Table 2 specifies a set of coverage metrics by denoting the ID, a name and a description of each metric, and by showing references to the list up metrics from Table 1 that could be used to detail the respective coverage statements.
<table>
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Description</th>
<th>Combinations</th>
</tr>
</thead>
<tbody>
<tr>
<td>C1</td>
<td>Requirements or specification coverage</td>
<td>percentage of requirements/features/specification elements that are addressed by test cases/test procedures. Can be used as an indicator for the completeness of testing.</td>
<td>LU1-LU10, E1, E2</td>
</tr>
<tr>
<td>C2</td>
<td>Attack surface coverage</td>
<td>percentage of the attack surface elements that are addressed by test cases/test procedures. Can be used as an indicator for the completeness of testing. Can be extended by counting up # of test cases/resources used for each interface item of the attack surface or by differentiating the vulnerabilities (C3) and the respective attack vector (see C4).</td>
<td>LU1-LU10, E1, E2</td>
</tr>
<tr>
<td>C3</td>
<td>Known/expected vulnerability coverage</td>
<td>percentage of the known/expected vulnerabilities that are addressed by test cases/test procedures. Can be extended by counting up # of test cases/resources used for each known/expected vulnerability. Can be weighted with factors estimating severity, probability and detectability.</td>
<td>LU1-LU10, E1, E2</td>
</tr>
</tbody>
</table>
Table 2 – Coverage metrics
The metric C4 (Attack Vector Coverage) is a variant of a metric that measures the coverage of the input space equivalence partitioning. Equivalence partitioning is a software testing technique that divides the input data or test scenarios into partitions. The data or scenarios within one partition are considered to be equivalent with respect to the given testing problem, thus it is expected that the results do not differ for any of the data or scenarios in one partition. In theory, equivalence partitioning requires only one test case for each partition to evaluate the software properties for the related partition.
In the case of security testing, we propose to define partitions on the basis of the attack vector for a given vulnerability. The attack vector itself comprises all possible attacks to exploit a given vulnerability. By means of decomposition, we have tried to identify attack vector classifications to distinguish attack vector partitions with equivalent attack vectors. Examples of vulnerabilities, the related attack vectors and the proposed attack vector classifications can be found in Table 3.
<table>
<thead>
<tr>
<th>Vulnerability</th>
<th>Attack vector</th>
<th>Attack vector classification</th>
</tr>
</thead>
<tbody>
<tr>
<td>CWE-89: Improper Neutralization of Special Elements used in an SQL Command</td>
<td>SQL injection</td>
<td>1) Union exploitation<br/>
2) Boolean exploitation<br/>
&nbsp;&nbsp;a) Force usage of logical operations for invalidating values<br/>
&nbsp;&nbsp;b) Force usage of big numbers for invalidating values<br/>
3) Error-based exploitation<br/>
4) Out of band exploitation<br/>
5) Time delay exploitation<br/>
&nbsp;&nbsp;a) Use of escaping mechanism<br/>
&nbsp;&nbsp;b) Random case<br/>
6) Stored procedure exploitation<br/>
7) Obfuscation of the payload<br/>
8) Stacked queries exploitation</td>
</tr>
</tbody>
</table>
Table 3 – Attack vector classification examples
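The use of such a classification for equivalence partitioning can be sketched as follows: payloads are tagged with their partition, and in theory one representative per partition suffices to evaluate the related property. The payload-to-partition mapping below is purely illustrative.

```python
# Sketch of attack-vector equivalence partitioning for SQL injection:
# payloads are grouped by classification partition, and one
# representative per partition is selected for testing. The tagging of
# each payload is an illustrative assumption.

payloads = [
    ("' UNION SELECT name FROM users--", "union"),
    ("' OR '1'='1",                      "boolean"),
    ("' OR 1=1--",                       "boolean"),
    ("' AND 1=CONVERT(int,'x')--",       "error-based"),
    ("'; WAITFOR DELAY '0:0:5'--",       "time-delay"),
]

def partitions(tagged_payloads):
    parts = {}
    for payload, cls in tagged_payloads:
        parts.setdefault(cls, []).append(payload)
    return parts

def representatives(tagged_payloads):
    # in theory, one test case per partition is enough
    return {cls: ps[0] for cls, ps in partitions(tagged_payloads).items()}

reps = representatives(payloads)
print(len(reps))  # -> 4 partitions, so 4 representative payloads
```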
3.3 Efficiency Metrics
Efficiency metrics are used to calculate how much effort has been spent for testing. These metrics are especially interesting in the case that, with the testing effort spent so far, no fault or unwanted incident has been triggered. The idea is that, using the same attack strategy that was used for testing, an attacker will probably have to spend even more resources in order to trigger an unwanted incident.
The result of an efficiency metric for security testing is an indicator for the costs of the related threat scenario. Putting the resources and the calculation power potential attackers have in relation to these costs might be a good indicator for the likelihood that the threat scenario will be exploited successfully within a given time period. Table 4 shows a set of efficiency metrics for testing.
<table>
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>E1</td>
<td>Test case/procedure preparation complexity: Effort per test case/procedure</td>
<td>sums up the efforts spent for specifying and implementing a test case or a test procedure. The efforts spent could be used as an indicator for the complexity of the testing problem and thus of the detectability of the addressed vulnerability.</td>
</tr>
</tbody>
</table>
Table 4 – Efficiency metrics
3.4 Process/Progress Related Metrics
Process/progress related metrics are used to measure the progress of the test process and the respective quality improvements of the test item over time. Table 5 shows two process/progress related testing metrics.
<table>
<thead>
<tr>
<th>ID</th>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>P1</td>
<td>Vulnerability discovery rate increase/decrease</td>
<td>Compares vulnerability discovery rates over time. Can be used as an indicator if additional test effort leads to the identification of more vulnerabilities/failures.</td>
</tr>
<tr>
<td>P2</td>
<td>Test case/procedure preparation complexity increase/decrease</td>
<td>Compares test case/procedure preparation complexity over time. Can be used as an indicator if additional test effort would lead to a number of reasonable new test cases.</td>
</tr>
</tbody>
</table>
Table 5 – Process/progress related metrics
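For instance, metric P1 can be sketched as a comparison of two LU13 values (vulnerability discovery rates) over time; the campaign figures below are invented for illustration.

```python
# Sketch of metric P1: comparing vulnerability discovery rates (LU13)
# across two test campaigns to judge whether additional test effort is
# likely to find more issues. The campaign data is invented.

def discovery_rate(vulns_found, test_cases_run):
    return vulns_found / test_cases_run

week1 = discovery_rate(vulns_found=6, test_cases_run=100)   # 0.06
week2 = discovery_rate(vulns_found=1, test_cases_run=100)   # 0.01

def trend(earlier, later):
    return "decreasing" if later < earlier else "increasing or stable"

# A falling rate suggests diminishing returns from additional testing.
print(trend(week1, week2))  # -> decreasing
```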
3.5 The RASEN Testing Dashboard
The Testing Dashboard is designed to realize and visualize the security testing metrics defined in the previous sections and to provide an exporter for aggregated test reports [8].
3.5.1 Principles
A RASEN security testing metric refers to multiple parts of security models like risk model and test report. The RASEN Testing Dashboard manages the referenced models for such metrics, generates metrics and measurements for the security elements of interest, visualizes them in different views and provides an exporter for aggregated results.
3.5.2 Architecture
The Testing Dashboard is designed as a Java plugin for the Eclipse environment. It consists of a risk test model analyzer, a metric generator and two different views embedded in the Eclipse workbench (see Figure 3): a dashboard metric table view and a metric chart view. The risk test model analyzer processes registered models for metric generation and visualization. The GUI part of the Testing Dashboard provides different user interaction options, including selecting elements and metrics for the analysis, and setting different parameters for their visualization. Selected elements and their metrics can be exported as aggregated test reports.
Figure 3 – RASEN testing dashboard embedded in Eclipse
The dashboard architecture consists of the models to be analyzed and the aggregated report as the analysis result in the bottom level, the analyzer and the test metric generator in the middle, and the GUI part on the top level (see Figure 4).
The central part of the RASEN Testing Dashboard is the analyzer component, which composes and works on the associated risk test model, together with the test metric generator, which processes the security metrics with the help of the analyzer. The analyzer collects all data from the model needed for single analysis requests from the GUI or the test metric generator. The risk test model is a unified model of all registered models and is used by the analyzer. Such single models can be dynamically added to and removed from the dashboard. Each model part, namely a risk model, a test report model and also a test pattern catalogue, must be defined in the exchange format [7], an XML Ecore format, and is loaded by an EMF loader. The risk test analyzer links such loaded EMF models together into the risk test model. To identify a relation between a risk model and a test report model, for instance, these models contain reference elements with only an identifier attribute for the related element in the related model. These reference elements are required for tracking relationships between test results, metrics and the risk test model.
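The reference-element mechanism can be sketched as a simple join over identifiers. The model and field names below are assumptions made for illustration, not the actual exchange format.

```python
# Sketch of how an analyzer could link a risk model and a test report
# through reference elements that carry only an identifier. Model
# structure and field names are illustrative assumptions.

risk_model = {
    "threat-1": {"name": "SQL injection on login", "testRef": "proc-42"},
}
test_report = {
    "proc-42": {"name": "SQLI test procedure", "verdict": "fail"},
}

def resolve(risk_model, test_report):
    """Join risk elements to test results via the reference identifier."""
    linked = {}
    for rid, risk in risk_model.items():
        result = test_report.get(risk["testRef"])   # follow the reference
        linked[rid] = (risk["name"], result["verdict"] if result else None)
    return linked

print(resolve(risk_model, test_report))
# -> {'threat-1': ('SQL injection on login', 'fail')}
```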
The metric generator realizes the security testing metrics and enables the export into the aggregated test report format. The test metric format and the aggregated test report format are described in detail in [8]. The risk test analyzer is used to resolve inquiries on the risk test model to create the metric and measurement values. Such values are used for visualization in the GUI as well as for the export of aggregated test reports.
The dashboard GUI, as the user interaction part, is integrated in the Eclipse workbench and its views are realized as Eclipse SWT view elements. The Testing Dashboard uses model parts, which are dynamically registered, as described before. To register, the user can select files of such models in the project explorer and also in the Java package explorer. By right-clicking on one or more selected files, the user can select the option "Register Model for Testing Dashboard" in the popup menu dialog (see Figure 5) for registration. The analyzer will try to load and register all selected files. If the loading fails, Eclipse opens an information dialog and continues with the next file. If the loading succeeds, the risk test model is immediately updated, along with all dashboard views and the shown metric values.
Only models in the exchange format defined in [7] are valid for the registration. Additionally, risk models in the CORAS risk format [16] can also be directly registered. This is realized by an internal transformation into the risk model exchange format. The registered models are saved as property values in the Eclipse storage so that all models are loaded with the start of a new Eclipse dashboard session.
The first view of the dashboard GUI is the dashboard metric table (see Figure 6). The view can be found in Eclipse in Window -> Show View -> Other in the "Other" folder. The view lists categorized elements and their metric values in a table format. Each line shows an element followed by the metric values of all registered metric types. By selecting the option "select displaying object(s)..." (see Figure 7) the user can choose one or more categories for displaying elements in the dashboard. A category can be "unwanted incidents", which displays all unwanted incidents of registered risk models and all metrics related to each incident, or "test procedure", which displays the test procedures of registered test reports and all metrics related to them.
The user can also remove registered models or clear the whole dashboard by selecting the appropriate option in the option menu. Finally, all shown elements and metrics can be exported into an XML file. The exported format conforms to the aggregated test report format.
The second view is the metric chart view (see Figure 8) and can be found in Eclipse in Window -> Show View -> Other in the “Other” folder.
The chart view visualizes elements and their metric results in a chart diagram. The elements are selected by the user in the dashboard table view or in the CORAS diagram editor. The metric type for the charts is also selectable by users in the "select metric" dialog (see Figure 9). The user can choose any of the metrics registered in the dashboard, and can also switch between different chart types, namely: a bar chart, a stacked bar chart, a line chart and an area chart.
Figure 9 – Select metric for dashboard charts
Finally, all visualized elements in chart diagram can also be exported in the same way as in the dashboard table view.
3.5.4 API and Implementation Guidelines
The RASEN security dashboard is designed to allow creating new metrics and measurements with customizable visualizations. The risk test model can be used to evaluate extensive risk inquiries, such as determining a risk value for a risk element or estimating the test effectiveness and test effort level by considering related test patterns and test reports for the risk element. This becomes essential when complex metrics such as efficiency metrics are added to the dashboard.
The dashboard metric generator currently realizes most of the list up metrics. Other metrics, particularly single coverage metrics and efficiency metrics, need complex analytical methods on the risk test model for their realization.
The underlying metric and measurement model handling is intended to be extended with new metric and measurement types in further development (see Figure 10). The test metric and measurement model is generated as an EMF Ecore model. A test metric model handler is available to create such models in a comfortable way. Customized metric and measurement classes use this handler and have a small interface for specific calculations and for inquiry handling on the risk test model. These customized classes can be used to develop new metric types. To add the new metrics to the dashboard, the plug-in has to realize a `TestMetricFactory` class that provides all such new metrics, and register this factory to the provided extension point. The testing dashboard dynamically loads all registered metric factories and provides the metrics for further handling by the metric generator and the dashboard GUI. To show a metric in the chart diagram, a specific visualization has to be specified, realized by a customized visualization class with different customization options like naming, coloring and displaying input values. A metric can only be displayed in the chart diagram with an according visualization.
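The factory registration described above can be sketched in a language-neutral way as follows; the Python registry stands in for the Eclipse extension point, and all class and method names apart from the role of `TestMetricFactory` are illustrative assumptions.

```python
# Sketch of the metric-factory extension mechanism: factories register
# themselves, and the dashboard collects all metrics they provide. The
# registry replaces the Eclipse extension point for illustration.

class MetricRegistry:
    def __init__(self):
        self.factories = []
    def register(self, factory):          # plug-in registration point
        self.factories.append(factory)
    def all_metrics(self):
        metrics = []
        for factory in self.factories:
            metrics.extend(factory.create_metrics())
        return metrics

class ListUpMetricFactory:
    """Illustrative factory providing two list up metrics."""
    def create_metrics(self):
        # each metric: (id, function over a list of verdicts)
        return [
            ("LU3", lambda vs: sum(v == "pass" for v in vs)),
            ("LU4", lambda vs: sum(v == "fail" for v in vs)),
        ]

registry = MetricRegistry()
registry.register(ListUpMetricFactory())

verdicts = ["pass", "fail", "pass"]
values = {mid: fn(verdicts) for mid, fn in registry.all_metrics()}
print(values)  # -> {'LU3': 2, 'LU4': 1}
```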
Figure 10 – Overview of the test measurement and metric API
4 Summary
The overall objective of RASEN WP4 is to develop techniques to use risk assessment as guidance and basis for security testing, and to develop an approach that supports a systematic aggregation of security testing results by means of security testing metrics. The objective includes the development of a tool-based integrated process for guiding security testing by means of reasonable risk coverage and probability metrics. This deliverable is the last in a series of three deliverables that describe the RASEN tools and techniques for risk-based security testing.
Deliverable D4.2.1 has introduced a technique for risk-based test identification and prioritization and an approach for test identification and generation based on the notion of security test pattern and their formal representation by means of a test purpose language.
Deliverable D4.2.2 has updated the technique for risk-based test identification and prioritization with an approach for test identification, prioritization, selection and test case derivation based on CAPEC attack patterns. The approach describes how a test procedure can be derived in three steps by (1) generating generic risk models from CAPEC attack patterns, (2) adapting them to the target of the risk assessment and (3) deriving from the target-specific risk model a test procedure consisting of selected and prioritized test scenarios. Based on these test scenarios, test patterns may be selected as a starting point for test case derivation. They provide refined techniques for test generation (stimulation strategies) and test verdict arbitration (observation strategies). The actual test case generation starts by instantiating a security test pattern, employing security test purposes for test sequence generation and fuzzing techniques for actual security test case generation.
First, this deliverable has described the updates developed to improve and extend the test purpose language expressiveness. These updates make it possible, on the one hand, to provide vulnerability test purposes formalizing complex and sophisticated attacks and, on the other hand, to make them more efficient in detecting vulnerabilities. More precisely, we have detailed the test purposes of the four types of vulnerability that have been targeted within the RASEN case studies: Cross-Site Scripting, SQL Injections (error-based, time-based and Boolean-based), Cross-Site Request Forgeries and Privilege Escalation (page-based and action-based).
Second, this deliverable has introduced the notion of test result aggregation and a testing dashboard in order to provide an overview of test progress and results. The RASEN Testing Dashboard is the implementation of a set of testing metrics that provide a problem-specific view on the results of security testing. On the one hand, the RASEN Testing Dashboard allows the visualization of test results and their problem-specific aggregation by testing metrics, both in the context of testing and in the context of the initial risk assessment. On the other hand, the RASEN Testing Dashboard allows exporting aggregated test results (i.e. test results and test metric results) for further processing by other tools.
All of the above-mentioned achievements provide a set of techniques that completely support the risk-based security testing process, beginning with risk assessment, followed by test identification, test prioritization, test selection, test generation, test execution and finally test result aggregation, test reporting and visualization (see Figure 1 on page 6).
References
Permission is hereby granted, free of charge, to any person obtaining a copy of this specification (the "Specification"), to make use of the Specification without restriction, including without limitation the rights to implement the Specification in a software program, deploy the Specification in a network service, and copy, modify, merge, publish, translate, distribute, sublicense, or sell copies of the Specification, and to permit persons to whom the Specification is furnished to do so, subject to the condition that the foregoing copyright notice and this permission notice shall be included in all copies or substantial portions of the Specification. Unless separate permission is granted, modified works that are redistributed shall not contain misleading information regarding the authors, title, number, or publisher of the Specification, and shall not claim endorsement of the modified works by the authors, any organization or project to which the authors belong, or the XMPP Standards Foundation.
Warranty
## NOTE WELL: This Specification is provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. ##
Liability
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall the XMPP Standards Foundation or any author of this Specification be liable for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising from, out of, or in connection with the Specification or the implementation, deployment, or other use of the Specification (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if the XMPP Standards Foundation or such author has been advised of the possibility of such damages.
Conformance
This XMPP Extension Protocol has been contributed in full conformance with the XSF’s Intellectual Property Rights Policy (a copy of which can be found at <https://xmpp.org/about/xsf/ipr-policy> or obtained by writing to XMPP Standards Foundation, P.O. Box 787, Parker, CO 80134 USA).
## Contents
1 Introduction
  1.1 Motivation
  1.2 How It Works
2 Assumptions
3 Requirements
4 Protocol
5 Verification String
  5.1 Generation Method
  5.2 Simple Generation Example
  5.3 Complex Generation Example
  5.4 Processing Method
6 Use Cases
  6.1 Advertising Capabilities
  6.2 Discovering Capabilities
  6.3 Stream Feature
7 Determining Support
8 Implementation Notes
  8.1 Hashing Algorithm Support
  8.2 Caching
  8.3 Directed Presence
  8.4 Caps Optimization
9 Security Considerations
  9.1 Mandatory-to-Implement Technologies
  9.2 Preimage Attacks
  9.3 Caps Poisoning
  9.4 Information Exposure
10 IANA Considerations
11 XMPP Registrar Considerations
  11.1 Protocol Namespaces
  11.2 Service Discovery Features
  11.3 Stream Features
12 XML Schema
13 Legacy Format
14 Acknowledgements
1 Introduction
1.1 Motivation
It is often desirable for an XMPP application (commonly but not necessarily a client) to take different actions depending on the capabilities of another application from which it receives presence information. Examples include:
- Showing a different set of icons depending on the capabilities of other entities.
- Not sending XHTML-IM (XEP-0071) or other rich content to plaintext clients such as cell phones.
- Allowing the initiation of a Voice over IP (VoIP) session only to clients that support Jingle (XEP-0166) and Jingle RTP Sessions (XEP-0167).
- Not showing a "Send a File" button if another user’s client does not support SI File Transfer (XEP-0096).
- Filtering Publish-Subscribe (XEP-0060) notifications based on advertised subscriber interests.
In the past, after logging in some Jabber clients sent one Service Discovery (XEP-0030) and one Software Version (XEP-0092) request to each entity from which they received presence. That "disco/version flood" resulted in an excessive use of bandwidth and was impractical on a larger scale, particularly for users with large rosters. Therefore this document defines a more robust and scalable solution: namely, a presence-based mechanism for exchanging information about entity capabilities. Clients should not engage in the older "disco/version flood" behavior and instead should use Entity Capabilities as specified herein.
1.2 How It Works
This section provides a friendly introduction to entity capabilities ("caps"). Imagine that you are a Shakespearean character named Juliet and one of your contacts, a handsome fellow named Romeo, becomes available. His client wants to publish its capabilities, and does this by adding to its presence packets a `<c/>` element with special attributes. As a result, your client receives the following presence packet:
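Such a caps-annotated presence packet looks like the following (matching Listing 1 later in this document; the 'from' address is assumed for this scenario):

```xml
<presence from='romeo@montague.lit/orchard'>
  <c xmlns='http://jabber.org/protocol/caps'
     hash='sha-1'
     node='http://code.google.com/p/exodus'
     ver='QgayPKawpkPSDYmwT/WM94uAlu0='/>
</presence>
```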
8 Entity capabilities are not limited to clients; they can be used by any entity that exchanges presence with another entity, e.g., a gateway. However, this specification mainly uses the example of clients.
The 'node' attribute represents the client software Romeo is using. The 'ver' attribute is a specially-constructed string (called a "verification string") that represents the entity's service discovery identity (category and type as registered at <https://xmpp.org/registrar/disco-categories.html>, as well as, optionally, xml:lang and name) and supported features (as registered at <https://xmpp.org/registrar/disco-features.html> as well as, optionally, extended service discovery information data registered at <https://xmpp.org/registrar/formtypes.html>).
At this point, your client has no idea what the capabilities are of someone with a verification string 'QgayPKawpkPSDYmwT/WM94uAlu0='. Your client therefore sends a service discovery query to Romeo, asking what his client can do:
```xml
<iq from='juliet@capulet.lit/chamber'
    id='disco1'
    to='romeo@montague.lit/orchard'
    type='get'>
  <query xmlns='http://jabber.org/protocol/disco#info'
         node='http://code.google.com/p/exodus#QgayPKawpkPSDYmwT/WM94uAlu0='/>
</iq>
```
Romeo's client then responds with its service discovery identity and the features it supports.
At this point, your client knows that a contact who advertises a verification string of 'QgayPKawpkPSDYmwT/WM94uAlu0=' supports Multi-User Chat (XEP-0045) and the other features returned by Romeo, because that contact uses the same version of the same client software as Romeo, with the same enabled features, plugins, presented client name(s), and the like (i.e., the same input to the verification string generation method). Your client remembers this information, so that it does not need to explicitly query the capabilities of a contact with the same verification string. For example, your Nurse may use the same client that Romeo does:
```xml
<presence from='nurse@capulet.lit/chamber'>
<c xmlns='http://jabber.org/protocol/caps' hash='sha-1'
node='http://code.google.com/p/exodus'
ver='QgayPKawpkPSDYmwT/WM94uAlu0='/>
</presence>
```
Therefore you know that she also supports the same features that Romeo does.
On the other hand, for a person with the following presence ...
```xml
<presence from='benvolio@capulet.lit/230193'>
<c xmlns='http://jabber.org/protocol/caps' hash='sha-1'
node='http://psi-im.org'
ver='q07IKJEyjvHSyhy//CH0CxmKi8w='/>
</presence>
```
... or the following presence ...
```xml
<presence from='bard@shakespeare.lit/globe'>
<c xmlns='http://jabber.org/protocol/caps' hash='sha-1'
node='http://www.chatopus.com'
ver='zHyE0gxTrkpSdGcQKH8EFPLsriY='/>
</presence>
```
... you have no information about what this contact’s client is capable of unless you have cached previous entity capabilities information; therefore you need to query for capabilities explicitly again via service discovery.
---
10 The string can be relied upon because of how it is generated and checked, as explained later in this document.
2 Assumptions
This document makes several assumptions:
- The identity of the client I am using is of interest to the people in my roster.
- Clients for the people on my roster might want to make user interface decisions based on my capabilities.
- Members of a community tend to cluster around a small set of clients with a small set of capabilities. More specifically, multiple people in my roster use the same client, and they upgrade versions relatively slowly (commonly a few times a year, perhaps once a week at most, certainly not once a minute).
- Some clients are running on networks without server-to-server connectivity enabled and without access to the Internet via HTTP.
- Conversations are possible between users who are not on each other’s rosters.
- Client capabilities may change over the course of a presence session, as features are enabled or disabled.
3 Requirements
The protocol defined herein addresses the following requirements:
1. Clients must be able to participate even if they support only XMPP Core\(^{11}\), XMPP IM\(^{12}\), and XEP-0030.
2. Clients must be able to participate even if they are on networks without connectivity to other XMPP servers, services offering specialized XMPP extensions, or HTTP servers.\(^{13}\)
3. Clients must be able to retrieve information without querying every entity with which they communicate.
4. Since presence is normally broadcast to many contacts, the byte size of the proposed extension must be as small as possible.
5. It must be possible to write an XEP-0045 server implementation that passes the given information along.
6. It must be possible to publish a change in capabilities within a single presence session.
---
\(^{13}\)These first two requirements effectively eliminated XEP-0060 as a possible implementation of entity capabilities.
7. Server infrastructure above and beyond that defined in XMPP Core and XMPP IM must not be required for this approach to work, although additional server infrastructure may be used for optimization purposes.
8. The defined mechanism must not be limited to clients but must be usable by servers, components, and other network entities.
4 Protocol
Entity capabilities are encapsulated in a `<c/>` element qualified by the 'http://jabber.org/protocol/caps' namespace. The attributes of the `<c/>` element are as follows.
<table>
<thead>
<tr>
<th>Name</th>
<th>Definition</th>
<th>Inclusion</th>
</tr>
</thead>
<tbody>
<tr>
<td>ext</td>
<td>A set of nametokens specifying additional feature bundles; this attribute is deprecated (see the Legacy Format section of this document).</td>
<td>DEPRECATED</td>
</tr>
<tr>
<td>hash</td>
<td>The hashing algorithm used to generate the verification string; see Mandatory-to-Implement Technologies regarding supported hashing algorithms.</td>
<td>REQUIRED</td>
</tr>
<tr>
<td>node</td>
<td>A URI that uniquely identifies a software application, typically a URL at the website of the project or company that produces the software. *</td>
<td>REQUIRED</td>
</tr>
<tr>
<td>ver</td>
<td>A string that is used to verify the identity and supported features of the entity. **</td>
<td>REQUIRED</td>
</tr>
</tbody>
</table>
* Note: It is RECOMMENDED for the value of the 'node' attribute to be an HTTP URL at which a user could find further information about the software product, such as “http://psi-im.org” for the Psi client; this enables a processing application to also determine a unique string for the generating application, which it could maintain in a list of known software implementations (e.g., associating the name received via the disco#info reply with the URL found in the caps data).
** Note: Before version 1.4 of this specification, the 'ver' attribute was used to specify the released version of the software; while the values of the 'ver' attribute that result from use of the algorithm specified herein are backwards-compatible, applications SHOULD appropriately handle the Legacy Format.
5 Verification String
5.1 Generation Method
In order to help prevent poisoning of entity capabilities information, the value of the verification string MUST be generated according to the following method.
Note: All sorting operations MUST be performed using "i;octet" collation as specified in Section 9.3 of RFC 4790 14.
1. Initialize an empty string S.
2. Sort the service discovery identities 15 by category and then by type and then by xml:lang (if it exists), formatted as CATEGORY '/' [TYPE] '/' [LANG] '/' [NAME]. 16 Note that each slash is included even if the LANG or NAME is not included (in accordance with XEP-0030, the category and type MUST be included).
3. For each identity, append the 'category/type/lang/name' to S, followed by the '<' character.
4. Sort the supported service discovery features. 17
5. For each feature, append the feature to S, followed by the '<' character.
6. If the service discovery information response includes XEP-0128 data forms, sort the forms by the FORM_TYPE (i.e., by the XML character data of the <value/> element).
7. For each extended service discovery information form:
a) Append the XML character data of the FORM_TYPE field's <value/> element, followed by the '<' character.
b) Sort the fields by the value of the "var" attribute.
c) For each field other than FORM_TYPE:
i. Append the value of the "var" attribute, followed by the '<' character.
ii. Sort values by the XML character data of the <value/> element.
iii. For each <value/> element, append the XML character data, followed by the '<' character.
8. Ensure that S is encoded according to the UTF-8 encoding (RFC 3629 18).
16 The combination of category, type, and xml:lang forms a unique combination, so it is not necessary to also sort by name (the name merely provides some human-readable text associated with a category/type/lang).
17 A registry of service discovery features is located at <https://xmpp.org/registrar/disco-features.html>.
9. Compute the verification string by hashing S using the algorithm specified in the 'hash' attribute (e.g., SHA-1 as defined in RFC 3174 19). The hashed data MUST be generated with binary output and encoded using Base64 as specified in Section 4 of RFC 4648 20 (note: the Base64 output MUST NOT include whitespace and MUST set padding bits to zero). 21
Note: If the four characters '&', 'l', 't', ';' appear consecutively in any of the factors of the verification string S (e.g., a service discovery identity of 'Some-Client&lt;http://jabber.org/protocol/muc'), then that string of characters MUST be treated as literally '&lt;' and MUST NOT be converted to the character '<', because completing such a conversion would open the protocol to trivial attacks.
5.2 Simple Generation Example
Consider an entity whose category is "client", whose service discovery type is "pc", whose service discovery name is "Exodus 0.9.1", and whose supported features are "http://jabber.org/protocol/caps", "http://jabber.org/protocol/disco#info", "http://jabber.org/protocol/disco#items", and "http://jabber.org/protocol/muc". Using the SHA-1 algorithm, the verification string would be generated as follows (note: line breaks in the verification string are included only for the purposes of readability):
1. S = ''
2. Only one identity: "client/pc"
3. S = 'client/pc//Exodus 0.9.1<'
4. Sort the supported features.
5. S = 'client/pc//Exodus 0.9.1<http://jabber.org/protocol/caps<http://jabber.org/protocol/disco#info<http://jabber.org/protocol/disco#items<http://jabber.org/protocol/muc<'
6. ver = QgayPKawpkPSDYmwT/WM94uAlu0=
21 The OpenSSL command for producing such output with SHA-1 is "echo -n 'S'| openssl dgst -binary -sha1 | openssl enc -nopad -base64".
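The generation method above can be sketched in Python (a minimal sketch; the function and parameter names are my own). Python's default string ordering approximates the required "i;octet" collation for the ASCII factors used in these examples, though a strict implementation would sort the UTF-8 byte sequences:

```python
import base64
import hashlib

def caps_ver(identities, features, forms=()):
    """Sketch of the verification string generation method (SHA-1).

    identities: iterable of (category, type, lang, name) tuples
    features:   iterable of service discovery feature strings
    forms:      iterable of (FORM_TYPE, {var: [values]}) pairs
    """
    s = ""
    # Steps 2-3: sort identities by category, then type, then xml:lang,
    # and append each as 'category/type/lang/name<'.
    for cat, typ, lang, name in sorted(identities):
        s += "/".join((cat, typ, lang, name)) + "<"
    # Steps 4-5: sort and append the supported features.
    for feat in sorted(features):
        s += feat + "<"
    # Steps 6-7: sort forms by FORM_TYPE, then fields by var, then values.
    for form_type, fields in sorted(forms):
        s += form_type + "<"
        for var in sorted(fields):
            s += var + "<"
            for value in sorted(fields[var]):
                s += value + "<"
    # Steps 8-9: UTF-8 encode, hash with SHA-1, Base64-encode the binary digest.
    digest = hashlib.sha1(s.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")
```

Applied to the identity "client/pc//Exodus 0.9.1" together with the features "http://jabber.org/protocol/caps", "http://jabber.org/protocol/disco#info", "http://jabber.org/protocol/disco#items", and "http://jabber.org/protocol/muc", this should reproduce the 'ver' value QgayPKawpkPSDYmwT/WM94uAlu0= used throughout this document.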
5.3 Complex Generation Example
Consider a more complex example, where the entity includes several identities (with the service discovery name in different languages) as well as extended information formatted according to XEP-0128.
```xml
<iq from='benvolio@capulet.lit/230193'
id='disco1'
to='juliet@capulet.lit/chamber'
type='result'>
<query xmlns='http://jabber.org/protocol/disco#info'
node='http://psi-im.org#q07IKJEyjvHSyhy//CH0CxmKi8w='>
<identity xml:lang='en' category='client' name='Psi 0.11' type='pc'/>
<identity xml:lang='el' category='client' name='Ψ 0.11' type='pc'/>
<feature var='http://jabber.org/protocol/caps'/>
<feature var='http://jabber.org/protocol/disco#info'/>
<feature var='http://jabber.org/protocol/disco#items'/>
<feature var='http://jabber.org/protocol/muc'/>
<x xmlns='jabber:x:data' type='result'>
<field var='FORM_TYPE' type='hidden'>
<value>urn:xmpp:dataforms:softwareinfo</value>
</field>
<field var='ip_version' type='text-multi'>
<value>ipv4</value>
<value>ipv6</value>
</field>
<field var='os'>
<value>Mac</value>
</field>
<field var='os_version'>
<value>10.5.1</value>
</field>
<field var='software'>
<value>Psi</value>
</field>
<field var='software_version'>
<value>0.11</value>
</field>
</x>
</query>
</iq>
```
Using the SHA-1 algorithm, the verification string would be generated as follows (note: line breaks in the verification string are included only for the purposes of readability):
1. S = ''
2. Two identities: "client/pc/el/Ψ 0.11" and "client/pc/en/Psi 0.11"
3. S = 'client/pc/el/Ψ 0.11<client/pc/en/Psi 0.11<'
4. Sort the supported features.
5. S = 'client/pc/el/Ψ 0.11<client/pc/en/Psi 0.11<http://jabber.org/protocol/caps<http://jabber.org/protocol/disco#info<http://jabber.org/protocol/disco#items<http://jabber.org/protocol/muc<'
6. Sort the extended service discovery forms by FORM_TYPE (there is only one: "urn:xmpp:dataforms:softwareinfo").
7. Append the XML character data of the FORM_TYPE field's <value/> element, followed by the '<' character.
8. Sort the fields by var and append the value(s): "ip_version<ipv4<ipv6", "os<Mac", "os_version<10.5.1", "software<Psi", "software_version<0.11".
9. S = 'client/pc/el/Ψ 0.11<client/pc/en/Psi 0.11<http://jabber.org/protocol/caps<http://jabber.org/protocol/disco#info<http://jabber.org/protocol/disco#items<http://jabber.org/protocol/muc<urn:xmpp:dataforms:softwareinfo<ip_version<ipv4<ipv6<os<Mac<os_version<10.5.1<software<Psi<software_version<0.11<'
10. ver = q07IKJEyjvHSyhy//CH0CxmKi8w=
5.4 Processing Method
When an entity receives a value of the 'ver' attribute that appears to be a verification string generated in accordance with the generation method defined in this specification, it MUST process the 'ver' according to the following method.
1. Verify that the <c/> element includes a 'hash' attribute. If it does not, ignore the 'ver' or treat it as generated in accordance with the Legacy Format (if supported).
2. If the value of the 'hash' attribute does not match one of the processing application’s supported hash functions, do the following:
a) Send a service discovery information request to the generating entity.
b) Receive a service discovery information response from the generating entity.
c) Do not validate or globally cache the verification string as described below; instead, the processing application SHOULD associate the discovered identity+features only with the JabberID of the generating entity.
3. If the value of the 'hash' attribute matches one of the processing application’s supported hash functions, validate the verification string by doing the following:
a) Send a service discovery information request to the generating entity.
b) Receive a service discovery information response from the generating entity.
c) If the response includes more than one service discovery identity with the same category/type/lang/name, consider the entire response to be ill-formed.
d) If the response includes more than one service discovery feature with the same XML character data, consider the entire response to be ill-formed.
e) If the response includes more than one extended service discovery information form with the same FORM_TYPE or the FORM_TYPE field contains more than one <value/> element with different XML character data, consider the entire response to be ill-formed.
f) If the response includes an extended service discovery information form where the FORM_TYPE field is not of type "hidden" or the form does not include a FORM_TYPE field, ignore the form but continue processing.
g) If the response is considered well-formed, reconstruct the hash by using the service discovery information response to generate a local hash in accordance with the Generation Method above.
h) If the values of the received and reconstructed hashes match, the processing application MUST consider the result to be valid and SHOULD globally cache the result for all JabberIDs with which it communicates.
i) If the values of the received and reconstructed hashes do not match, the processing application MUST consider the result to be invalid and MUST NOT globally cache the verification string; however, it SHOULD check the service discovery identity and supported features of another generating entity who advertises that value.
Note: If the four characters '&', 'l', 't', ';' appear consecutively in any of the factors of the verification string S (e.g., a service discovery identity of 'Some-Client&lt;http://jabber.org/protocol/muc'), then that string of characters MUST be treated as literally '&lt;' and MUST NOT be converted to the character '<', because completing such a conversion would open the protocol to trivial attacks.
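The branching of the processing method above can be sketched as follows. This is a simplified sketch: `send_disco_info`, `recompute_ver`, and the two cache dictionaries are assumed callback/parameter names, and the well-formedness checks of steps 3c-f are elided.

```python
SUPPORTED_HASHES = {"sha-1"}

def process_caps(c_element, jid, send_disco_info, recompute_ver,
                 global_cache, per_jid_cache):
    """Simplified sketch of the processing method (section 5.4)."""
    hash_algo = c_element.get("hash")
    ver = c_element.get("ver")
    if hash_algo is None:
        # Step 1: ignore the 'ver' (or fall back to the Legacy Format).
        return None
    # Steps 2a-b / 3a-b: query the generating entity and receive the response.
    response = send_disco_info(jid)
    if hash_algo not in SUPPORTED_HASHES:
        # Step 2c: associate the result only with this JID; never cache globally.
        per_jid_cache[jid] = response
        return response
    # Steps 3c-g (well-formedness checks elided): reconstruct the hash locally.
    if recompute_ver(response, hash_algo) == ver:
        # Step 3h: valid -- cache globally, keyed by the verification string.
        global_cache[ver] = response
        return response
    # Step 3i: invalid -- do not cache the verification string.
    return None
```

Note how a validated result is cached under the verification string itself (usable for any JID advertising the same 'ver'), whereas a result obtained with an unsupported hash is tied to the single JID that produced it.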
6 Use Cases
6.1 Advertising Capabilities
Each time a generating entity sends presence, it annotates that presence with an entity identifier ('node' attribute) and identity and feature identifier ('ver' attribute). So that servers can remember the last presence for use in responding to probes, a client SHOULD include entity capabilities with every presence notification it sends.
Listing 1: Presence with caps
```xml
<presence>
<c xmlns='http://jabber.org/protocol/caps'
hash='sha-1'
node='http://code.google.com/p/exodus'
ver='QgayPKawpkPSDYmwT/WM94uAlu0='/>
</presence>
```
If the supported features change during a generating entity's presence session (e.g., a user installs an updated version of a client plugin), the application MUST recompute the verification string and SHOULD send a new presence broadcast.
Listing 2: Presence with recomputed ver attribute
```xml
<presence>
<c xmlns='http://jabber.org/protocol/caps'
hash='sha-1'
node='http://code.google.com/p/exodus'
ver='66/0NaeaBKkwk85efJTGmU47vXI='/>
</presence>
```
6.2 Discovering Capabilities
An application (the "requesting entity") can learn what features another entity supports by sending a disco#info request (see XEP-0030) to the entity that generated the caps information (the "generating entity").
Listing 3: Disco#info request
```xml
<iq from='juliet@capulet.lit/balcony'
id='disco1'
to='romeo@montague.lit/orchard'
type='get'>
<query xmlns='http://jabber.org/protocol/disco#info'
node='http://code.google.com/p/exodus#QgayPKawpkPSDYmwT/WM94uAlu0='/>
</iq>
```
The disco#info request is sent by the requesting entity to the generating entity. The value of the 'to' attribute MUST be the exact JID of the generating entity, which in the case of a client will be the full JID <localpart@domain.tld/resource>.
Note: The generating entity SHOULD NOT include the "caps node" in the list of entities it returns in its disco#items responses; i.e., the caps node is a kind of virtual or phantom node, not a true items node that is associated with the generating entity for service discovery purposes.
The disco 'node' attribute MUST be included for backwards-compatibility. The value of the 'node' attribute SHOULD be generated by concatenating the value of the caps 'node' attribute (e.g., "http://code.google.com/p/exodus") as provided by the generating entity, the "#" character, and the value of the caps 'ver' attribute (e.g., "QgayPKawpkPSDYmwT/WM94uAlu0=") as provided by the generating entity.
The generating entity then returns all of the capabilities it supports.
Listing 4: Disco#info response
```xml
<iq from='romeo@montague.lit/orchard'
    id='disco1'
    to='juliet@capulet.lit/balcony'
    type='result'>
  <query xmlns='http://jabber.org/protocol/disco#info'
         node='http://code.google.com/p/exodus#QgayPKawpkPSDYmwT/WM94uAlu0='>
<identity category='client' type='pc'/>
<feature var='http://jabber.org/protocol/disco#info'/>
<feature var='http://jabber.org/protocol/disco#items'/>
<feature var='http://jabber.org/protocol/muc'/>
</query>
</iq>
```
Note: If the generating entity incorporated multiple identities with different xml:lang values in its verification string, it MUST return all of the identities even if the request specified a particular xml:lang.
6.3 Stream Feature
A server MAY include its entity capabilities in a stream feature element so that connecting clients and peer servers do not need to send service discovery requests each time they connect.
Listing 5: Stream feature element including capabilities
```xml
<stream:features>
<c xmlns='http://jabber.org/protocol/caps'
   hash='sha-1'
   node='http://jabberd.org'
   ver='ItBTI0XLDfVvZ72NQElAzKS9sU='/>
</stream:features>
```
When a connected client or peer server sends a service discovery information request to determine the entity capabilities of a server that advertises capabilities via the stream feature, the requesting entity MUST send the disco#info request to the server’s JID as provided in the 'from' attribute of the response stream header (the 'from' attribute was recommended by RFC 3920 22 and is required by RFC 6120 23). To enable this functionality, a server that advertises support for entity capabilities MUST provide a 'from' address in its response stream headers, in accordance with RFC 6120.
7 Determining Support
If an entity supports the entity capabilities protocol, it MUST advertise that fact by returning a feature of 'http://jabber.org/protocol/caps' in response to a service discovery information request.
Listing 6: Service discovery information request
```xml
<iq from='romeo@montague.lit/orchard'
id='disco2'
to='juliet@capulet.lit/balcony'
type='get'>
<query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>
```
Listing 7: Service discovery information response
```xml
<iq from='juliet@capulet.lit/balcony'
id='disco2'
to='romeo@montague.lit/orchard'
type='result'>
<query xmlns='http://jabber.org/protocol/disco#info'>
...
<feature var='http://jabber.org/protocol/caps'/>
...
</query>
</iq>
```
If a server supports the Caps Optimization functionality, it MUST also return a feature of 'http://jabber.org/protocol/caps#optimize' in response to service discovery information requests.
8 Implementation Notes
8.1 Hashing Algorithm Support
An application SHOULD maintain a list of hashing algorithms it supports, which MUST include the algorithm or algorithms listed in the Mandatory-to-Implement Technologies section of this document.
8.2 Caching
It is RECOMMENDED for an application that processes entity capabilities information to cache associations between the verification string and discovered identity+features within the scope of one presence session. This obviates the need for extensive service discovery requests within a session.
It is RECOMMENDED for an application to cache associations across presence sessions, since this obviates the need for extensive service discovery requests at the beginning of a session (this is especially helpful in bandwidth-constrained environments).
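A minimal in-memory cache along these lines might look like the following sketch (class and method names are my own; a real implementation would also persist the validated entries across presence sessions):

```python
class CapsCache:
    """Sketch of a caps cache: validated entries are keyed globally by the
    verification string, unvalidated ones only by the generating JID."""

    def __init__(self):
        self._by_ver = {}   # ver -> frozenset of features (validated, global)
        self._by_jid = {}   # jid -> frozenset of features (per-JID fallback)

    def store_validated(self, ver, features):
        self._by_ver[ver] = frozenset(features)

    def store_for_jid(self, jid, features):
        self._by_jid[jid] = frozenset(features)

    def lookup(self, jid, ver):
        # Prefer the globally cached, validated entry; fall back to
        # per-JID data; return None if nothing is known.
        if ver in self._by_ver:
            return self._by_ver[ver]
        return self._by_jid.get(jid)
```

With such a cache, a presence packet carrying an already-known 'ver' requires no service discovery request at all, which is the bandwidth saving this section describes.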
8.3 Directed Presence
If two entities exchange messages but they do not normally exchange presence (i.e., via presence subscription), the entities MAY choose to send directed presence to each other,
where the presence information SHOULD be annotated with the same capabilities information as each entity sends in presence broadcasts. Until and unless capabilities information has been received from another entity, an application MUST assume that the other entity does not support capabilities.
8.4 Caps Optimization
A server that is managing a connected client's presence session MAY optimize presence notification traffic sent through the server by stripping off redundant capabilities annotations (i.e., the <c/> element). Because of this, receivers of presence notifications MUST NOT expect an annotation on every presence notification they receive. If the server performs caps optimization, it MUST ensure that the first presence notification each subscriber receives contains the annotation. The server MUST also ensure that any changes in the caps information (e.g., an updated 'ver' attribute) are sent to all subscribers.
If a connected client determines that its server supports caps optimization, it MAY choose to send the capabilities annotation only on the first presence packet, as well as whenever its capabilities change.
9 Security Considerations
9.1 Mandatory-to-Implement Technologies
The SHA-1 hashing algorithm is mandatory to implement. All implementations MUST support SHA-1.
An implementation MAY support other algorithms. Any such algorithm SHOULD be registered in the [IANA Hash Function Textual Names Registry](http://www.iana.org/assignments/hash-function-text-names).
In the future, the [XMPP Council](https://xmpp.org/about/xmpp-standards-foundation#council) may, at its discretion, modify the mandatory-to-implement hashing algorithm if it determines that SHA-1 has become practically vulnerable to Preimage Attacks.
9.2 Preimage Attacks
As described in [RFC 4270](http://tools.ietf.org/html/rfc4270), protocols that use the output of hash functions such as MD5 or SHA-1 can be vulnerable to collision attacks or preimage attacks or both. Because of how the hash output is used in entity capabilities, the protocol will not be subject to collision attacks even if the hash function used is found to be vulnerable to collision attacks. However, it is
---
25 The XMPP Council is a technical steering committee, authorized by the XSF Board of Directors and elected by XSF members, that approves of new XMPP Extensions Protocols and oversees the XSF's standards process. For further information, see <https://xmpp.org/about/xmpp-standards-foundation#council>.
possible that the protocol might become subject to preimage attacks if the hash function used is found to be vulnerable to preimage attacks.
In theory, such a preimage attack would take one of the following forms:
- Given knowledge of a particular value V of the 'ver' attribute, an attacker can find an input message X such that hash(X) yields V (this is known as a "first preimage attack").
- Given knowledge of a particular value S used as the input message to the hash function, an attacker can find a different input message S' such that hash(S') yields the same output as hash(S) (this is known as a "second preimage attack").
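The two attack forms can be restated as predicates on the hash used for the 'ver' value. This sketch (function names are illustrative) only expresses what a successful attack would have to satisfy; neither predicate is feasible to satisfy adversarially against SHA-1 today.

```python
import base64
import hashlib

def hash_ver(s: bytes) -> str:
    """Base64 of the SHA-1 digest, i.e., a 'ver' attribute value V."""
    return base64.b64encode(hashlib.sha1(s).digest()).decode("ascii")

def is_first_preimage(x: bytes, v: str) -> bool:
    """First preimage attack: given only V, find any X with hash(X) == V."""
    return hash_ver(x) == v

def is_second_preimage(s: bytes, s_prime: bytes) -> bool:
    """Second preimage attack: given S, find S' != S hashing to the same value."""
    return s_prime != s and hash_ver(s_prime) == hash_ver(s)
```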
In practice, a preimage attack would need to meet all of the following criteria in order to be effective against the entity capabilities protocol:
1. The hashing algorithm used would need to be found not only theoretically but practically vulnerable to first or second preimage attacks (e.g., this is not yet true of the MD5 or SHA-1 algorithms, but may become true in the future).
2. An attacker would need to find an input message X or S' that matches the hash V for a particular value of V or S, which may not be practical given that (a) the values of S used as input to the hash function in entity capabilities are relatively short and (b) cryptanalysis to date indicates that existing hash functions may not be vulnerable to preimage attacks except in the case of relatively long input messages (on the order of $2^{55}$ blocks).
3. The input message X or S' would need to conform to the structure of S as specified under Verification String, including the order of service discovery identity or identities followed by service discovery features, delimited by the '<' character and sorted using "i;octet" collation.
4. The input message X or S' would need to make it seem as if a desirable feature (e.g., end-to-end encryption) is not supported by other entities that advertise the same hash V even though the feature is indeed supported (i.e., the attacker would need to return a set of service discovery identities and features that match X or S', and have that set be plausible for an entity that communicates via XMPP), or make it seem as if an undesirable feature is supported even though the feature is not supported.
5. The attacker would need to propagate the hash V before some other entity with the true input message S could broadcast presence with the relevant entity capabilities data and provide the true service discovery response (thus the attacker might need to subvert the development process of a particular software project or subvert the namespace issuance process of the XMPP Registrar, or both).
---
27 The XMPP Registrar maintains a list of reserved protocol namespaces as well as registries of parameters used in the context of XMPP extension protocols approved by the XMPP Standards Foundation. For further information, see <https://xmpp.org/registrar/>.
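Criterion 3 above references the structure of the verification string S. As a non-normative sketch of that basic structure (extended XEP-0128 data-form factors are omitted, and function names are illustrative): each identity is rendered as category/type/lang/name followed by '<', then each feature followed by '<'. Python's default string ordering is byte-wise for ASCII, which coincides with "i;octet" collation for these values.

```python
import base64
import hashlib

def verification_string(identities, features) -> str:
    """Basic verification string S: sorted identities, then sorted
    features, each factor terminated by the '<' delimiter."""
    s = ""
    for category, typ, lang, name in sorted(identities):
        s += f"{category}/{typ}/{lang}/{name}<"
    for feature in sorted(features):
        s += f"{feature}<"
    return s

def ver_attribute(identities, features) -> str:
    """Base64 of the SHA-1 digest of S, i.e., the 'ver' attribute value V."""
    digest = hashlib.sha1(
        verification_string(identities, features).encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")
```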
It currently seems extremely unlikely that an attacker could meet all of the foregoing conditions in the foreseeable future. However, the XMPP Council shall continue to monitor the state of cryptanalysis regarding the mandatory-to-implement hash function as well as the possibility that any vulnerabilities in that function might lead to practical threats against the entity capabilities protocol. If and when it becomes practical (or even possible) to launch effective preimage attacks against the entity capabilities protocol, the XMPP Council shall consider updating this specification to change the mandatory-to-implement hashing algorithm to a safer technology.
Note: If the four characters '&', 'l', 't', ';' appear consecutively in any of the factors of the verification string S (e.g., a service discovery identity of 'Some-Client&lt;http://jabber.org/protocol/muc') then that string of characters MUST be treated as literally '&lt;' and MUST NOT be converted to the character '<', because completing such a conversion would open the protocol to trivial attacks.
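Because '<' is the factor delimiter in S, the note above amounts to a simple invariant: a raw '<' may never appear inside a factor, and the escaped form '&lt;' must pass through verbatim. A minimal illustrative check (the function name is an assumption):

```python
def check_factor(factor: str) -> str:
    """Reject a raw '<' inside a factor of S (it would shift the factor
    delimiters); leave the escaped '&lt;' sequence untouched, never
    converting it to '<'."""
    if "<" in factor:
        raise ValueError("factor must not contain a raw '<' delimiter")
    return factor
```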
9.3 Caps Poisoning
Adherence to the method defined in the Verification String section of this document for processing of the 'ver' attribute is known to be vulnerable to certain cache poisoning attacks that cannot be fixed in a backwards-compatible manner. If the value of the 'ver' attribute is a verification string as defined herein (i.e., if the 'ver' attribute is not generated according to the Legacy Format), inclusion of the 'hash' attribute is REQUIRED. Knowing explicitly that the value of the 'ver' attribute is a verification string enables the recipient to avoid spurious notification of invalid or poisoned hashes.
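One poisoning-resistant caching discipline implied by the above is to admit a 'ver' value into the cache only after the locally rebuilt verification string re-hashes to exactly that value. The class below is a hypothetical sketch of that discipline, not an API from any XMPP library:

```python
import base64
import hashlib

class CapsCache:
    """Hypothetical poisoning-resistant cache: an advertised 'ver' value
    is stored only after the disco#info result, rebuilt locally into a
    verification string, hashes to exactly that value."""

    def __init__(self):
        self._by_ver = {}

    def store(self, ver: str, verification_string: bytes) -> bool:
        digest = base64.b64encode(
            hashlib.sha1(verification_string).digest()).decode("ascii")
        if digest != ver:
            return False  # mismatch: invalid or poisoned, do not cache
        self._by_ver[ver] = verification_string
        return True

    def lookup(self, ver: str):
        return self._by_ver.get(ver)
```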
9.4 Information Exposure
Use of entity capabilities might make it easier for an attacker to launch certain application-specific attacks, since the attacker could more easily determine the type of client being used as well as its capabilities. However, since most clients respond to Service Discovery and Software Version requests without performing access control checks, there is no new vulnerability. Entities that wish to restrict access to capabilities information SHOULD use Privacy Lists (XEP-0016) to define appropriate communications blocking (e.g., an entity MAY choose to allow IQ requests only from “trusted” entities, such as those with whom it has a presence subscription of “both”); note, however, that such restrictions may be incompatible with the recommendation regarding Directed Presence.
---
28 [Security] Trivial preimage attack against the entity capabilities protocol.
10 IANA Considerations
This document requires no interaction with the Internet Assigned Numbers Authority (IANA).
11 XMPP Registrar Considerations
11.1 Protocol Namespaces
11.2 Service Discovery Features
11.3 Stream Features
12 XML Schema
```xml
<?xml version='1.0' encoding='UTF-8'?>
<xs:schema
xmlns:xs='http://www.w3.org/2001/XMLSchema'
targetNamespace='http://jabber.org/protocol/caps'
xmlns='http://jabber.org/protocol/caps'
elementFormDefault='qualified'>
  <xs:annotation>
    <xs:documentation>
      The protocol documented by this schema is defined in
      XEP-0115: http://www.xmpp.org/extensions/xep-0115.html
    </xs:documentation>
  </xs:annotation>

  <!-- ... -->

</xs:schema>
```
13 Legacy Format
Before Version 1.4 of this specification, the 'ver' attribute was generated differently, the 'ext' attribute was used more extensively, and the 'hash' attribute was absent. For historical purposes, Version 1.3 of this specification is archived at <http://www.xmpp.org/extensions/attic/xep-0115-1.3.html>. For backwards-compatibility with the legacy format, the 'node' attribute is REQUIRED and the 'ext' attribute MAY be included.
An application can determine if the legacy format is in use by checking for the presence of the 'hash' attribute, which is REQUIRED in the current format.
If a caps-processing application supports the legacy format, it SHOULD check the 'node', 'ver', and 'ext' combinations as specified in the archived version 1.3 of this specification, and MAY cache the results.
If a caps-processing application does not support the legacy format, it SHOULD ignore the 'ver' value entirely (since the value cannot be verified) and SHOULD NOT cache it, since the application cannot validate the identity and features by checking the hash.
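The legacy-detection rule in the three paragraphs above reduces to a single attribute test. A minimal sketch (function names are illustrative):

```python
def is_legacy_caps(c_attrs: dict) -> bool:
    """The pre-1.4 legacy format is signalled by the absence of the
    'hash' attribute, which is REQUIRED in the current format."""
    return "hash" not in c_attrs

def cacheable_ver(c_attrs: dict):
    """Return a verifiable 'ver' value, or None for legacy caps whose
    'ver' cannot be validated and SHOULD NOT be cached by applications
    that do not support the legacy format."""
    return None if is_legacy_caps(c_attrs) else c_attrs.get("ver")
```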
14 Acknowledgements
Thanks to Rachel Blackman, Dave Cridland, Richard Dobson, Olivier Goffart, Sergei Golovan, Justin Karneges, Ralph Meijer, Ian Paterson, Kevin Smith, Tomasz Sterna, Michal Vaner, and Matt Yacobucci for comments and suggestions.